
Why AR is Worth a Thousand Words to Frontline Workers

Due to Covid-19, travel restrictions and social distancing requirements have accelerated the adoption of AR-based remote assistance, which has become a necessity for industrial operations. Unlike video-calling applications such as Zoom or Skype, Augmented Reality is a software technology that allows users to overlay graphics onto live video images over a mobile network. For instance, an expert and an on-site technician could both view a panel of switches on an annotatable screen, connected via their own PCs, phones, or tablets. The expert could then circle, or drop a virtual arrow on, the part that needs attention for the technician to see on screen.

Key advantages of AR-based remote assistance noted in the article include:

  • Instantaneous feedback – users interact with the elements of their work as well as with one another
  • Highly mobile form of communication – includes annotations, on-site images, and graphic augmentation as well as a two-way voice connection
  • Increases safety – e.g. reduces number of healthcare workers needed in a hospital room, limiting exposure to Covid-19
  • Helps maintain workflow continuity
  • A more intermittent, special-purpose vehicle of communication than video conferencing

Other potential features of AR, dependent on the specific software, include:

  • Retaining images
  • Issuing push notifications
  • Looping in multiple users
  • Recording a session for future training
  • Optical character recognition
  • Transferring files

The article concludes by stating that, even post-pandemic, AR is likely to become a prominent tool for operating personnel and frontline technicians to rely on.




Equipping the AR workforce of tomorrow

As part of the AREA’s mission to help accelerate the adoption of Enterprise Augmented Reality (AR) by supporting the growth of a comprehensive ecosystem, we are further engaging with academic institutions to provide feedback on how they can help equip the graduates of tomorrow with the AR skills needed to positively contribute to the workforce.

The AREA, together with our academic partners, has created a very short survey to capture your perspectives on educational needs for future graduates.

We would greatly appreciate you completing this survey – it should take no longer than 10 minutes.

The survey is available HERE and runs until July 31st 2020. All contributors will receive a report summarising the findings. If you do have any questions, please contact [email protected].

Thank you for helping shape the educational future of our workforce.

The AREA Team




Progress Report on AREA 3D Asset Reuse Research Project

Researcher Eric Lyman of 3XR has provided an update on the progress of the AREA’s 7th research project. Eric and his team are tasked with examining barriers to, and recommending approaches for, using existing enterprise 3D assets in AR experiences. The project will also test the ingestion and use of enterprise 3D assets in a set of limited but representative environments. 

Research began in April, when all enterprise AREA members were contacted to provide sample 3D models for testing and to participate in an interview with Eric. The interviews, designed to help ascertain the most popular tools, the most compelling 3D AR use cases, and the most important 3D optimization criteria, have been conducted with representatives of Boeing, Newport News Shipbuilding, Merck, and the AEC Hackathon. Eric has also interviewed AR providers Theorem, ARVizio, InstaLOD, and Hexagon, as well as NIST and MakeSEA/Catapult.

Five organizations have generously contributed 3D CAD files to the project: Boeing, DIME Lab, Medtronic, Newport News Shipbuilding, and NIST.

The following AR tools will be used to test reuse of the 3D CAD files: Rapid Compact DGG, InstaLOD, Simplygon, PiXYZ, Meshlab, and possibly ARVizio. 

When completed, the research project is intended to reveal: 

  • The most popular AR execution and rendering engines and frameworks that support dynamic 3D asset ingestion 
  • The key toolchains being used to generate 3D assets for AR applications 
  • Which formats (inputs and outputs) the toolchains and frameworks support 
  • Which standards are supported by the 3D and AR toolchains and frameworks
  • Any failures or incompatibilities that arise when using a subset of toolchains and delivering the final models to a few specific AR devices used in enterprise. 

The final research project report, for AREA members only, will deliver an overview of the optimal conversion processes for bringing 3D assets into AR platforms. This will include:

  • A full overview of the steps required, illustrating the degree of success with each process and format tested;
  • A table that clearly illustrates the advantages and disadvantages of these processes from the perspectives of both conversion ease and final usability;
  • An analysis of pre-existing commercial platforms, with a table illustrating the pros and cons of each.

Eric expects the work to be completed by the end of July. 




COVID-19: How Augmented Reality is helping mitigate business impact

This editorial has been developed as part of the AREA Thought Leaders Network content, in collaboration with selected AREA members.


Short of time? Listen to the accompanying podcast (~10 minutes) available here.

An imperative to overcome limitations

The COVID-19 pandemic has unleashed an unprecedented impact across the global business landscape. Over recent months, many countries have implemented various forms of lockdown, severely limiting the ways that companies can do business and, in many cases, causing operations to cease. This crisis is likely to have an ongoing impact in the months ahead as we transition to a “new normal” and beyond.

This editorial discusses ways in which Augmented Reality (AR) can help mitigate the societal and business impact while supporting business continuity through the pandemic.

The restrictions placed upon both individuals and organizations have resulted in an upsurge in the use of digital technologies to support a variety of activities, including online shopping, digital and contactless payments, remote working, online education, telehealth, and entertainment. The ability to support these activities is heavily reliant upon the availability of “digital-ready” infrastructure and services.

Enterprise AR builds upon this digital infrastructure by offering the ability to superimpose digital content over a live view of the physical world to support business processes. So how can AR help?

First, let’s examine the impacts that COVID-19 and subsequent responses have had upon business and society:

  1. Social distancing measures hinder our ability to have traditional face-to-face interactions in addition to often limiting the size of groups able to gather.
  2. The inability to travel and prevalence of key staff working from home are viewed as impacting the ability to conduct business, manage effective team operations, and provide local expertise where it is needed, amongst others.
  3. Fewer on-site staff due to illness, self-isolation and financial restrictions impedes an organization’s ability to continue operations “as before.”
  4. A lack of classroom and hands-on training makes it difficult to quickly upskill new staff or train existing staff on products and processes.
  5. Disrupted supply chains are requiring manufacturing and sourcing processes to become more flexible to help ensure continuity of production.
  6. The potential for virus transmission has caused a reluctance among workers to touch surfaces and objects that may have been touched by others.

Clearly, new or enhanced tools and ways of working are required to address these challenges. At the AREA, we believe that AR can play an effective role in mitigating a number of these obstacles while, at the same time, opening new opportunities for long-term business improvements.

AR can help address COVID-19 restrictions with remote assistance

A key use case of Enterprise AR is in the realm of remote assistance.  AR-enhanced remote assistance provides a live video-sharing experience between two or more people. This differs from traditional videoconferencing in that such tools use computer vision technology to “track” the movements of the device’s camera across the scene. This enables the participants to add annotations (such as redlining or other simple graphics) that “stick” onto elements in the scene and therefore remain in the same place in the physical world as viewed by the users. Such applications support highly effective collaboration between, for example, a person attending a faulty machine and a remote expert, who may be working from home. This use case helps mitigate impacts of travel reduction, reduced staffing, and, of course, social distancing.
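For readers curious why such annotations “stay put”, here is a minimal sketch of the underlying idea: the annotation is stored as a point in world coordinates and re-projected into the image every frame using the tracked camera pose. This toy example assumes a pinhole camera with identity rotation; real AR engines track full six-degree-of-freedom poses, and the function name is illustrative only:

```python
def project_annotation(world_pt, cam_pos, focal_px, img_w, img_h):
    """Project a world-space annotation into pixel coordinates for a
    camera at cam_pos looking down the +Z axis (a simplifying
    assumption: identity rotation)."""
    # Transform the annotation into camera coordinates.
    x = world_pt[0] - cam_pos[0]
    y = world_pt[1] - cam_pos[1]
    z = world_pt[2] - cam_pos[2]
    if z <= 0:
        return None  # behind the camera: don't draw
    # Pinhole projection: scale by focal length / depth.
    u = img_w / 2 + focal_px * x / z
    v = img_h / 2 + focal_px * y / z
    return (u, v)

# The same world point re-projects to a new pixel as the camera moves,
# so the circle the expert drew stays attached to the faulty switch.
anchor = (0.2, 0.0, 2.0)  # annotation fixed in the world (metres)
print(project_annotation(anchor, (0.0, 0.0, 0.0), 800, 1280, 720))  # (720.0, 360.0)
print(project_annotation(anchor, (0.1, 0.0, 0.5), 800, 1280, 720))
```

Because the anchor lives in world space rather than screen space, every participant’s device can re-project it from its own viewpoint, which is what makes the annotation appear fixed to the physical machine.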

 

AR-enhanced remote assistance for medical equipment procedures (YouTube movie). Image and movie courtesy of RE’FLEKT.

 

Sarah Reynolds, Vice President of Marketing, PTC comments, “As organizations look to maintain business continuity in this new normal, they are embracing AR to address travel restrictions, social distancing measures, and other challenges impacting their front-line workers’ ability to go on-site and operate, maintain, and repair machines of all kinds. Even when equipment or product experts can’t address technical issues in person, AR-enhanced remote assistance enables them to connect with on-site employees and even end customers to offer them contextualized information and expert guidance, helping them resolve these issues quickly and ultimately reduce downtime. AR-enabled remote assistance marries the physical and the digital worlds – allowing experts and front-line workers to digitally annotate the physical world around them to improve the clarity, precision, and accuracy of their communication and collaboration.”

AR-enhanced remote assistance enables business continuity for machine operations, servicing and repair. Image courtesy of PTC.

AR enables no-touch product interaction via virtual interfaces

A key capability of AR is the ability to superimpose a digital virtual user interface on physical equipment that may have a limited or non-existent user interface of its own. Depending upon the technology used, the user can select actions by tapping on the screen of the device or, alternatively, use hand gestures or verbal commands to interact with the equipment via the AR-rendered “proxy” user interface. Such abstracted interactions are key to reducing the need to touch physical objects that may be handled by numerous people.
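One way to picture this “proxy” interface is as a thin layer that normalises every input modality to the same virtual-button event, so no physical control is touched. The command names and gesture/voice mappings below are hypothetical, invented for illustration:

```python
COMMANDS = {"start", "stop", "reset"}

# Each modality maps its raw input onto the same virtual button id.
GESTURES = {"air_tap_left": "start", "air_tap_right": "stop", "fist": "reset"}
VOICE = {"start pump": "start", "stop pump": "stop", "reset alarm": "reset"}

def interpret(modality, raw_input):
    """Translate a raw interaction into an equipment command, or None."""
    if modality == "tap":        # tapping the rendered button directly
        cmd = raw_input
    elif modality == "gesture":
        cmd = GESTURES.get(raw_input)
    elif modality == "voice":
        cmd = VOICE.get(raw_input.lower())
    else:
        cmd = None
    return cmd if cmd in COMMANDS else None

print(interpret("voice", "Start pump"))  # start
print(interpret("gesture", "fist"))      # reset
print(interpret("tap", "unknown"))       # None
```

Because all modalities converge on one command vocabulary, the same equipment integration serves touch, gesture, and voice users alike.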

There are many ways in which such AR capabilities can help medical professionals carry out their duties during the current pandemic and beyond. The BBC has reported on one such application that helps reduce the amount of physical contact between doctor and patient, while still enabling them to communicate with colleagues outside of the COVID-19 treatment area. Here, a doctor wearing a Mixed Reality headset is able to interact with medical content such as x-rays, scans or test results using hand gestures while others are able to participate in the consultation from a safe location. The article points out that this way of working also reduces the need for Personal Protective Equipment (PPE) as colleagues are able to participate from a safe distance.

Example of a virtual user interface projected into the physical world. Image courtesy of Augumenta.

 

Eve Lindroth, Marketing and Communications at Augumenta, comments, “Today, the devices and applications can be controlled hands-free. This also addresses the problem of being able to work hygienically. You do not need to touch anything to get data in front of your eyes, control processes, or to document things. You can simply use gestures or voice to tell the device what to do. Tap air, not a keyboard.”

AR can help medical equipment training

AR can also be used to help assist medical professionals by providing highly efficient and interactive training methods that can streamline the process of learning new equipment and other necessary procedures. This is critical when experienced staff are unwell and replacements need to be trained as quickly as possible.

Harry Hulme, Marketing and Communications Manager at RE’FLEKT, comments, “We’re seeing that AR is a key tool for healthcare workers during these testing times. For medical training and equipment changeovers, AR solutions substantially reduce the risk of human error while significantly reducing training and onboarding times. Moreover, the time-critical process of equipment changeover is accelerated with AR-enhanced methods.”

 

AR-based training with REFLEKT ONE and Microsoft HoloLens in medical and healthcare. Image courtesy of RE’FLEKT.

 

AR supports remote collaboration

The remote assistance use case can be generalized further to include remote collaboration. AR enables physically separated users to “inhabit” a shared virtual space, distributed by the AR application. This supports numerous use cases, including shared design reviews, in which multiple users see 3D product models and supporting information projected onto their view (and from their relative position) of the physical world via their AR-enabled devices.

Different design alternatives can be presented and viewed in real time by all participants, each of whom can position themselves in their physical space to obtain a particular aspect of the digital rendition. Further, users can annotate and redline the shared environment, providing immediate visual feedback to all others. Such capabilities are key factors in mitigating the restrictions imposed upon travel, the forming of groups, and close-proximity human-to-human interaction.

 

Immersive collaboration: A design review of a motorbike in 1:1 scale with a remote team. Image courtesy of Masters of Pie.

 

Karl Maddix, CEO of Masters of Pie, comments: “Video conferencing solves the immediate need to bring people together whereas collaboration, as enabled by Masters of Pie, is built for industry to bring both people and 3D data together. Real-time access to the underlying 3D data is imperative for effective collaboration and business continuity purposes.”

AR supports remote sales activities

AR is also proving an effective sales tool, enabling the all-important sales process to continue during the pandemic. Home shoppers can examine digital renditions of home appliances, furniture, etc. presented within their own physical space, for example. Moreover, the use of rich and interactive sales demonstrations facilitated by AR allow the potential buyer to understand the form, fit and function of a product without the need for travel, touch or close interaction with a salesperson.

AR enriches the remote shopping experience, allowing buyers to place and interact with products in their own physical environment. Image courtesy of PTC.

 

Sarah Reynolds of PTC comments, “AR experiences improve the end-to-end customer experience, improve purchase confidence, and ultimately streamline sales cycles, especially when customers are not able to shop in person.”

Take the next steps

In this editorial we’ve discussed a number of ways in which AR technology can help ensure business and healthcare continuity by mitigating the impacts of the various restrictions placed on the way we work. Recognizing this, many AREA member companies have introduced special offers and services to help industry during the pandemic and we applaud their support. Learn more about them here.

We invite you to discover more about how Enterprise AR is helping industry improve its business processes at The AREA.




SAS Institute is Bringing “Intelligent Realities” to the Enterprise

The SAS Institute has been a world leader in analytics software for more than four decades. Today, the privately-held North Carolina-based company is expanding its reach into Augmented Reality (AR). We recently spoke with Michael Thomas, SAS Systems Architect, to learn more about his company’s approach to AR and what he refers to as “Intelligent Realities.”

AREA: What is driving the SAS Institute’s interest in Augmented Reality?

MICHAEL THOMAS: We’ve always sought to deliver our data and analytics capabilities via new devices and user interfaces as they’ve become available. In the ‘80’s, we brought them to the PC. In the ‘90’s, we brought them to the Web, and then tablets. And now we’re on to this new user interface that’s penetrating the enterprise. It’s the next place for us to provide our Artificial Intelligence (AI) and analytical data value. As a Systems Architect, I’ve been looking at these emerging technologies to figure out, at an architectural level, how they fit. As part of that, I’ve been developing AR and VR for commercial use cases.

AREA: Can you tell us about some of the use cases you’ve been involved in?

MICHAEL THOMAS: One topical use case we’re tracking involves using AR for germ-fighting, along with the Internet of Things (IoT) and AI. IoT sensors are used to detect areas meriting closer scrutiny due to germ-spreading behavior, such as coughing. Custodians assigned to keeping those areas clean can then focus their efforts by using either headset AR or a spatial AR approach. Another example is in manufacturing – being able to use AR combined with IoT data and AI to give technicians the ability to more rapidly repair and proactively address issues to keep manufacturing equipment available and online. That also involves tying in remote experts. But while many remote expertise use cases are built around the idea that the expert sees the video that the proximate user is gathering with their headset, we go beyond that to take the IoT data from that piece of equipment, analyze it in real time, and give the most pertinent information to that remote expert. They can then use VR technology to better advise the remote technician.

AREA: In one of your blog posts, you argue that enterprises should not fixate on head-mounted AR devices and rather think more in broader terms of what you call “intelligent realities.” What do you mean by that?

MICHAEL THOMAS: Intelligent realities for workers means you improve work by making the reality of work better. SAS is not an AR vendor so, rather than thinking in terms of devices, we look at what form factor will enable us to manifest our value and make the customer better. It’s wide open. Does a tablet do what you need to do? If so, that’s great. We’ve had customers who have experimented with head-mounted devices and been disappointed. So they’ve shifted to pursuing other ways to make those realities more intelligent. That gets them into spatial AR, but also more pedestrian things like using transparent LED screens or projected light. As headsets get better, we expect some of that resistance to go away. But we’re just taking a broader perspective on how to make that reality better that isn’t just the latest technology.

AREA: What do you see as the next significant milestone in the adoption of AR?

MICHAEL THOMAS: I think this year will be a good year for headsets. We’re getting to a second generation of Mixed Reality headsets with a form factor where you can actually expect people to wear them for a long time. And then from there, as we get focused on commercial AR, we at SAS have the technology and the ability to give you the content that’s going to improve your reality right now. That’s our piece. And it’s going to be very exciting to see that new growth develop.

Michael Thomas has authored several thought leadership publications on Intelligent Realities that we would like to share with AREA readers. They include:

 




Enterprise Augmented Reality Solutions – Build or Buy?

Short of time? Listen to the accompanying podcast (~8 minutes) available here.

This is a question with no simple answers. As enterprises contemplate deploying AR solutions, one of the first questions to confront them is a fundamental one: should we build or buy? This AREA editorial explores the factors that may help organizations answer this critical question.

The build-or-buy decision essentially boils down to determining the relative priorities of cost, control, solution availability and time to market.  In traditional solution deployments, the advantages and disadvantages of each approach can be summarised as in the table below.

Consideration for Enterprise Augmented Reality

When it comes to enterprise AR deployment, the build-or-buy deliberations need to take in additional considerations. These may include some or all of the following:

  • Implementations on novel or unfamiliar hardware
  • Development of advanced computer vision capabilities
  • Application development based upon AR toolkits
  • Data processing, protection and optimisation (e.g. for 3D models)
  • Integration into enterprise business systems
  • Development of custom content for new methods of deployment and user interaction
  • Customisation of the base solution to meet specific needs

A previous AREA editorial explored how AR should be considered within the scope of a technology strategy. For the purposes of this editorial, we shall omit custom hardware development from the discussion, but rather, focus on the software build-or-buy decision. There may also be wider implications if significant customisations are needed or content must be created by internal or external personnel.

Here’s a typical set of steps leading to the decision-making phase:

  1. Identify business use case, perform investment analysis and secure budget.
  2. Define weighted requirements for the solution to the identified business problems or opportunities.
  3. Identify potential vendors and their commercial solutions.
  4. Perform a gap analysis between commercial offerings and solution requirements.
  5. Identify whether gaps can be closed with customisation or custom development.
  6. Perform cost analysis of internal/external development versus commercial solution.
  7. Evaluate options and make strategy decision.
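As a rough illustration of steps 2 through 6, the weighted gap analysis can be sketched in a few lines of code. The requirement names, weights, and coverage scores below are invented for illustration only; they are not from any real evaluation:

```python
requirements = {            # requirement -> weight (step 2)
    "remote annotation":      5,
    "ERP integration":        4,
    "3D model optimisation":  3,
    "offline operation":      2,
}

vendor_coverage = {  # requirement -> how well one offering meets it, 0.0-1.0 (step 4)
    "remote annotation":     1.0,
    "ERP integration":       0.5,
    "3D model optimisation": 1.0,
    "offline operation":     0.0,
}

def gap_analysis(reqs, coverage):
    """Return overall fit (0-1) and the weighted gaps, largest first (step 5)."""
    total = sum(reqs.values())
    fit = sum(w * coverage.get(r, 0.0) for r, w in reqs.items()) / total
    gaps = sorted(
        ((r, w * (1 - coverage.get(r, 0.0))) for r, w in reqs.items()),
        key=lambda item: -item[1],
    )
    return fit, [g for g in gaps if g[1] > 0]

fit, gaps = gap_analysis(requirements, vendor_coverage)
print(f"fit score: {fit:.2f}")  # weighted fraction of needs met
print("largest gaps:", gaps)    # candidates for customisation or custom build
```

Running this for each candidate vendor gives a comparable fit score per offering, and the ranked gap list feeds directly into the cost analysis of step 6.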

Target use cases are an important factor

It’s important to understand that enterprise AR-based solution needs may vary significantly according to the target use case. For example, the table below provides a view on how aspects of the solution needs vary across four example applications of AR for business use cases:

Such factors may play an important influence in the build-or-buy process. Take the AR-enhanced product demonstrator (sales) use case above, for example. The low levels of integration with business systems and data that this solution requires, coupled with other factors such as time criticality and reduced longevity needs, may make it appropriate to subcontract all software development and content creation to a third party.  

If your use case is unusual, then you may need to consider purchasing an AR platform that allows custom development (whether via drag’n’drop authoring, coding or other mechanisms).

Example checklists

Typical questions to consider when making the build-or-buy decision are as follows:

  • Have you identified the business applications (or problems to be solved)?
  • Have you developed the requirements needed to address the business problem?
  • Are there commercial offerings claiming to provide a solution for your use cases?
  • Are you confident that the solution meets your functional requirements?
  • Would more than one commercial product be needed to provide the solution?
  • Are you confident of the solution provider’s financial viability?
  • Are there gaps between the commercial solution and your requirements? Are these gaps important and/or able to be closed? Are there other edge cases to be considered?
  • Do you have the required skills in-house? Alternatively, are there vendors who can supply the skills within budget?
  • What toolkits are available that can help provide the underpinnings of a custom solution?
  • Is complete control and ownership of the solution important to your business (for reasons of market differentiation, security or others)?

The following table offers some additional considerations specific to an AR-based solution:

Choose wisely – and consult experts

This editorial has explored a number of considerations that are important when seeking to adopt AR in an enterprise setting. Companies may be tempted to develop prototype applications when first investigating AR, perhaps using one or more of the commercially available toolkits. However, there are clearly a number of important aspects to consider in reaching a build-or-buy decision.

It is unlikely that an industrial company will develop an in-house AR application from the ground up, as this requires significant expertise in numerous areas, including computer vision, 3D computer graphics, mobile device management, etc. If your use case is truly unique and there are no commercial products that support the use case, then your only option may be to develop the solution this way.

Far more likely, however, is the decision to purchase a commercial-off-the-shelf solution. As we’ve discussed, and depending upon your target use case, there may be significant requirements on systems integration, data processing, content creation and other forms of customisation required prior to considering a deployable solution.

As discussed above, the decision is often driven by requirements of cost, control and timing. If cost and timing are a higher priority, then a commercial offering is likely the more appropriate solution. If control is most important, then it is perhaps better to pursue internal development or, more likely, contracting the work to a third party.

Ultimately, the decision is yours. However, prior to making that decision, we recommend that you look at the offerings of the AREA solution provider members who will be happy to discuss and hopefully meet your requirements.




Mixing and Matching Standards to Ease AR Integration within Factories

AREA member Bill Bernstein of the National Institute of Standards and Technology (NIST) shares his organization’s early work to improve AR interoperability.  

Today, most industrial Augmented Reality (AR) implementations are based on prototypes built in testbeds designed to determine if some AR components are sufficiently mature to solve real-world challenges. Since manufacturing is a mature industry, there are widely accepted principles and best practices. In the real world, however, companies “grow” their factories organically. There’s a vast mixing and matching of domain-specific models (e.g., machining performance models, digital solid models, and user manuals) tightly coupled with domain-agnostic interfaces (e.g., rendering modules, presentation modalities, and, in a few cases, AR engines).

As a result, after organizations have spent years developing their own one-off installations, integrating AR for visualizing these models is still largely a pipe dream. Standards could ease the challenges of integration, but experience with tying them all together in a practical solution is severely lacking.

To address the needs of engineers facing an array of different technologies under one roof, standards development organizations, such as the Institute of Electrical and Electronics Engineers (IEEE), the Open Geospatial Consortium (OGC), and the Khronos Group, have proposed standard representations, modules, and languages. Since the experts of one standards development organization (SDO) are often isolated from the experts in another domain or SDO when developing their specifications, the results are not easily implemented in the real world, where there is a mixture of pre-existing and new standards. The problem of poor communication between SDOs during standards development is especially acute for domain-agnostic groups (e.g., the World Wide Web Consortium (W3C) and the Khronos Group) communicating with domain-heavy groups (e.g., The American Society of Mechanical Engineers, the MTConnect Institute, and the Open Platform Communications (OPC) Foundation).

However, both perspectives – domain-specific thinking (e.g., for manufacturing or field maintenance) and AR-specific and domain-agnostic concerns (e.g., real-world capture, tracking, or scene rendering) – are vital for successfully introducing and producing long term value from AR.  

Smart Manufacturing Environments 

In the case of smart manufacturing systems (SMS), SMS-specific standards (e.g., MTConnect and OPC-Unified Architecture) provide the necessary semantic and syntactic descriptions of concepts, such as information about devices, people, and materials. Figure 1 showcases the current state of an industrial AR prototype with examples of standards to inform processes.  

 

Figure 1: General workflow for generating industrial AR prototypes. The dotted purple lines signify flows that are currently achieved through significant human labor and expertise.  

From a high-level view, the AR community is focused on two separate efforts: 

  • Digitizing real-world information (shown on the left of Figure 1); 
  • Rendering and presenting AR scenes to the appropriate visualization modalities (shown on the right of Figure 1).  

To produce successful and meaningful AR experiences, it is vital to connect domain-specific models with domain-neutral technologies. In the current state of AR development, where few or no standards have been implemented by vendors, this task is expert-driven and requires many iterations, human hours, and experience. There are significant opportunities for improvement if these transformations (indicated by the purple dashed lines in Figure 1) could be automated.

In the Product Lifecycle Data Exploration and Visualization (PLDEV) project at NIST, we are experimenting with the idea of leveraging standards developed in two separate worlds: geospatial and smart manufacturing (Industry 4.0). One project, shown in Figure 2, integrates both IndoorGML, a standard to support indoor navigation, and CityGML, a much more detailed and expressive standard that can be used for contextually describing objects in buildings, with MTConnect, a standard that semantically defines manufacturing technologies, such as machine tools. All these standards have broad support in their separate communities. Seemingly every day, supporting tools that interface directly with these representations are pushed to public repositories.

Figure 2: One instance of combining disparate standards for quick AR prototype deployment for situational awareness and indoor navigation in smart manufacturing systems.  

In Figure 2, we show the use of IndoorGML and CityGML in a machine shop that has previously been digitalized according to the MTConnect standard. In doing so, we leverage existing AR visualization tools to render the scene. We then connect to the streaming data from the shop to indicate whether a machine is available (green), unavailable (yellow), or in-use (red). Though this is a simple example, it showcases that when standards are appropriately implemented and deployed, developers can acquire capabilities “for free.” In other words, we can leverage domain-specific and -agnostic tools that are already built to support existing standards, helping realize a more interoperable AR prototyping workflow.  
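As a rough sketch of the kind of glue code involved, the traffic-light mapping described above might look like the following. The XML snippet and tag layout are simplified stand-ins for illustration only, not the full MTConnectStreams schema:

```python
import xml.etree.ElementTree as ET

SAMPLE = """<Devices>
  <Device name="mill-1"><Availability>AVAILABLE</Availability>
                        <Execution>READY</Execution></Device>
  <Device name="lathe-2"><Availability>AVAILABLE</Availability>
                         <Execution>ACTIVE</Execution></Device>
  <Device name="drill-3"><Availability>UNAVAILABLE</Availability>
                         <Execution>UNAVAILABLE</Execution></Device>
</Devices>"""

def machine_colour(availability, execution):
    """Colour coding used in the demo: green = available,
    yellow = unavailable, red = in use."""
    if availability != "AVAILABLE":
        return "yellow"
    return "red" if execution == "ACTIVE" else "green"

def shop_status(xml_text):
    """Map each machine in the (simplified) status feed to a colour."""
    root = ET.fromstring(xml_text)
    return {
        dev.get("name"): machine_colour(
            dev.findtext("Availability"), dev.findtext("Execution"))
        for dev in root.iter("Device")
    }

print(shop_status(SAMPLE))  # {'mill-1': 'green', 'lathe-2': 'red', 'drill-3': 'yellow'}
```

In the real prototype the status stream comes from an MTConnect agent rather than a hard-coded string, but the principle is the same: because the shop floor is already digitalized to a standard, the AR layer only needs a small adapter to render live state.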

Future Research Directions 

This project has also revealed significant future research opportunities in sensor fusion for more precise geospatial alignment between the digital and real worlds. One example is fusing onboard sensors from automated guided vehicles (AGVs) with more contextually defined, static geospatial models described using the OGC standards IndoorGML and CityGML.

Moving forward, we will focus on enhancing geospatial representations with additional context.  For example, (1) leveraging such context for AGVs to treat task-specific obstacles (like worktables) differently than disruptive ones (like walls and columns) and (2) helping avoid safety hazards for human operators equipped with wearables by more intelligent rendering of digital objects.  We are currently collaborating with the Measurement Science for Manufacturing Robotics program at NIST to investigate these ideas.  

If successfully integrated, we will be able to demonstrate what we encourage others to practice: adoption of standards for faster and lower cost integrations as well as safer equipment installations and factory environments. Stay tuned for the next episode in this mashup of standards!  

Disclaimer 

No endorsement of any commercial product by NIST is intended.  Commercial materials are identified in this report to facilitate better understanding.  Such identification does not imply endorsement by NIST nor does it imply the materials identified are necessarily the best for the purpose. 




Taqtile Focuses on Eliminating the Skills Gap for Frontline Workers

There’s a growing problem affecting multiple industries. Experienced workers are retiring at an alarming rate, while equipment, plants, and infrastructure are increasing in complexity and sophistication. These trends are converging to create a significant shortage of skilled workers. New AREA member Taqtile is addressing this challenge with its Manifest software. Manifest is designed for easy knowledge capture and reuse, enabling non-technical subject-matter experts to capture how to operate and/or repair equipment step-by-step, incorporating audio, video, and other media. Stored in the cloud, that knowledge is then available for less experienced workers to access and follow the step-by-step instructions. We spoke recently with John Mathieu, Taqtile’s European Managing Director.

AREA: Why don’t you start by giving us a quick history of Taqtile?

Mathieu: Taqtile is backed by a team with many years of experience across the industry, including in mobile, apps, and more. Manifest started with the realization that there was an entire area of business that had not been touched by digital transformation but could really use digital capabilities to make jobs faster, safer, and easier. We saw a problem we could fix, and we had a team with a passion to make a change. Manifest allows organizations to capture all the knowledge already out there and make it available in a new and sexy way that can be scaled across the entire workforce.

AREA: When you visit the Taqtile website, the first thing that leaps out is this concept of “immediately deployable,” which in the AR world sounds too good to be true. How is that possible?

Mathieu: One of the things we recognized early on is that a lot of the companies and industries that could use this technology don’t have AR/VR programmers or 3D CAD/model experts on staff ready to start creating the very specific content required. Our goal is to make everyone an expert, so we chose to provide a solution that literally anyone can use, no programming experience required, making it immediately deployable. Additionally, we deploy our solution in the cloud, which allows it to be provisioned, deployed, and put to use immediately. We have enabled frontline workers to capture everything they already know by providing an application that literally anyone can pick up and use. As soon as they put on or pick up their Mixed-Reality-capable device, they can begin documenting and capturing the expertise they have right then and there. With Manifest, I’ll do a ten-minute training session on HoloLens, Magic Leap, or even iPad, and within about five minutes, subject matter experts are creating content.

AREA: And once that initial knowledge capture is done, it all gets saved in the cloud. Can you go back and build on that knowledge base iteratively?

Mathieu: Absolutely, and that’s one of the great features of the platform; you don’t need a developer, and no one needs to touch a PC. All captured content can be improved upon over time or changed as necessary. Users can put on or hold a Mixed-Reality-capable device, walk up to a manufacturing device, place a QR code on a piece of paper, scan it on an area of that machine to create a spatial anchor, and then just go. If you have existing training manuals, we do have the ability to use Manifest through our web portal in a copy-and-paste operation. So, if it’s step 1, you would copy and paste the text of step 1 into the text field on the web portal, and when you fire that template up, that text appears holographically. And then you click to the next step and you take some evidence. You might shoot a picture of that part of the machine or shoot a video of your hand manipulating that lever. And then once you’re done, you can go back, open that template you’ve already written, and add a step, or insert additional ancillary media.
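As a rough illustration of the kind of template Mathieu describes, steps with attached media that can be appended or inserted later, here is a hypothetical data model. The class and field names are invented for this sketch; they are not Taqtile’s actual Manifest schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# A hypothetical model of a step-by-step procedure template: text pasted
# per step via a web portal, with photo/video evidence attached later and
# new steps inserted iteratively. Names are illustrative only.
@dataclass
class Step:
    text: str
    media: list = field(default_factory=list)  # e.g. photo/video file refs

@dataclass
class Template:
    anchor_qr: str                             # QR code used as the spatial anchor
    steps: list = field(default_factory=list)

    def add_step(self, text: str, index: Optional[int] = None) -> Step:
        step = Step(text)
        # Append by default, or insert mid-procedure, mirroring the
        # "go back and add a step" workflow from the interview.
        self.steps.insert(len(self.steps) if index is None else index, step)
        return step

tpl = Template(anchor_qr="machine-07")
tpl.add_step("Open the access panel.")
tpl.add_step("Disengage the lever.").media.append("lever-demo.mp4")
tpl.add_step("Wear gloves before opening.", index=0)  # inserted earlier
print([s.text for s in tpl.steps])
```

The useful property here is that the template is ordinary structured data, so it can live in the cloud and be versioned and extended without any developer involvement.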

AREA: Are there particular vertical markets or industries where you’ve gained the most traction so far?

Mathieu: That’s a really great question. To date, we have gained the most traction in manufacturing, government and defense, and utilities. We’re working to solve a very large problem: over the next five years, up to 30 percent of the workforce is retiring across all industries all over the world. For instance, I was sitting across the table from the managing director of a major electric company, and she had calculated that 42 percent of her workforce would be retiring by 2021. If you can’t get people into a utility company to perform those tasks and literally keep the lights on, that’s going to impact an entire economy. So spatial computing solutions are absolutely huge in helping to solve these major skill and personnel gaps.

AREA: Given your title, your focus is Europe. Are you a global company at this point, or are there certain geographies that you’re focused on?

Mathieu: We’re becoming more global. I’m the Managing Director of Taqtile Europe, based in Paris, and we have representation in Sydney, Australia, that provides coverage for key markets in AsiaPac. And then of course we’re all over North America, which is where our company is headquartered.

AREA: We’re at the start of a new year. What can people expect from Taqtile over the next 12 months or so?

Mathieu: Taqtile has a lot of great things coming in 2020. We will be launching a new product that will enable our customers to expand their Manifest production deployments and further their expert training, maintenance, and safety measures. Additionally, we are continuing our work on leveraging the power of spatial computing, so that we can walk into any environment and Manifest will know what machine A is, what machine B is, what machine C is, and be able to leverage the capabilities of this next generation of machines. Taqtile has a lot of great things coming, and we are excited to show them all to you soon.

AREA: Tell us why you joined the AREA and what you hope to get from being a member.

Mathieu: We’re in an exciting, growing field, with new opportunities opening every day. Being a part of the AREA allows us to dig deep into our industry, and contribute to the research the AREA is developing, as well as market development. With every interaction we have, either with a customer or a competitor, we all get an opportunity to share knowledge in a space that is changing every day. The AREA allows all of us to share best practices, to help educate customers, and to expand the reach of AR in the industry.  We look forward to partnering with the AREA in outreach and market development as the AREA enables us all to benefit from sharing within the community.




How does AR fit into a company technology strategy?

Enterprise Augmented Reality (AR) offers countless opportunities to companies looking to improve the efficiency and effectiveness of their business. Many enterprises are pursuing Digital Transformation initiatives that focus on delivering technology strategies that drive innovation in support of the overall business goals.

Read on as we discuss the topic of technology strategies and how they relate to embracing enterprise AR in this, our latest AREA editorial. We’ve also created a complimentary, handy podcast (just over 12 minutes) for you to listen to on the go.

Robust technology strategies include the following components:

  1. Executive overview of strategic objectives

This covers the question: “What are the overall business drivers and how can technology advance them?” Such drivers can be evolutionary goals (e.g., improving profitability of certain activities within the business or reducing operating costs) or more revolutionary, for example, opening new lines of business.  

  2. Situational review

The technology strategy review should include a description of the current state of the business, what technologies are being used, and how well they are working. The situational review should also offer commentary on the areas of the business (or potential new opportunities) that need to be improved or offer the greatest potential. These can be specific financial objectives (e.g., “reduce costs and improve efficiency within the services business”) or may address “softer” objectives, such as reducing staff churn and thereby improving expertise transfer and retention.

  3. Technology assessment and selection

As the strategy development continues, it quickly becomes important to assess which technologies can assist in supporting the business needs. At this phase, it’s important to take an outside-in view and gain perspectives on industry trends, perhaps hiring external experts or engaging with industry affiliations such as the AREA in order to determine the selection of the most appropriate technology.

The AREA can, for example, provide a neutral and independent view on the current technology state-of-the-art, its application to specific use cases and example case studies showing how the technology is being used within various industrial sectors.

  4. Strategic planning, resourcing and leadership

Next comes the determination of the implementation plan of the technology strategy. This phase should clearly identify potential vendors, internal staffing requirements and, most importantly, the internal champions and leadership (stakeholders) necessary to ensure alignment and roll out the solutions.

It is often helpful in this section of the strategy definition to include a maturity model, providing an internal roadmap over time of what is typically a growing adoption and leverage of the technologies within the strategy.

  5. Deployment

Lastly, the strategy execution – i.e., the rollout – commences. This will often include staff training, systems integration, custom development and more. Many companies will also implement a governance model that ties key performance indicators back to the original goals defined in the strategy.

This framework is typically used to support significant technology overhauls or new implementations, but what does this mean in relationship to adopting enterprise AR technologies?

Depending upon how and where AR is to be used, one or more of the following considerations will arise:

  1. Process impacts

Often, the adoption of AR will involve changing how certain business processes are performed. This will involve IT impacts (new IT infrastructure to manage the process) and human impacts – how the “new way of working” is rolled out to the organization.

  2. New hardware implications

AR may involve the usage of new hardware technologies (e.g., digital eyewear, wearables) and therefore the IT organization must be involved in actively supporting the needs of this hardware, which, initially, may apply only to a select and small proportion of the workforce.

  3. The “content creation to consumption” pipeline

Many AR solutions require the development of new content or may incorporate reuse of existing digital assets. These may include procedural definitions (step-by-step instructions), 3D models (ideally derived from the CAD master models) and more. This data pipeline needs careful planning and architecting to ensure enterprise needs of scalability and cost-control are met.

  4. Data and systems integration

Some AR solution deployments harness AR’s unique ability to place digital content directly into the visual context of a user performing a task. As this is a unique selling point of AR, it is important to consider the architectural needs to ensure that data from enterprise business systems, such as PLM, SLM, ERP and IoT data streams, may be presented within the AR application. Ideally, the AR technology should incorporate mechanisms to complement existing technology platforms and tools by ensuring communication and display of information from these systems.

  5. Pace of change

As with any new technology domain, the pace of change can be dramatic. A robust technology strategy should be flexible in its definition in order to adapt to later developments or to offerings from new vendors, rather than be locked into a potentially obsolete technology or insolvent vendor.

  6. Human factors, safety and security

AR solutions exhibit other factors that should be incorporated into a robust technology strategy, including safety aspects (users are now watching a screen rather than their surroundings and may lose situational awareness), and security (AR devices may be delivering high-value intellectual property that must be secure against malicious acts), amongst others.
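To make the data and systems integration consideration above concrete, here is a minimal sketch of a translation layer that merges an IoT reading with PLM master data into a payload an AR client could render in context. All field names, the input shapes, and the alert threshold are illustrative assumptions, not a reference to any particular product’s API.

```python
# Hypothetical integration layer: enterprise systems (here, fake IoT and
# PLM records) feed a normalized payload the AR client renders next to the
# physical asset. Field names and the 80 °C threshold are invented.
def build_overlay(asset_id: str, iot_reading: dict, plm_record: dict) -> dict:
    return {
        "asset": asset_id,
        "label": plm_record.get("name", asset_id),        # from PLM master data
        "metrics": {
            "spindle_rpm": iot_reading.get("rpm"),
            "temp_c": iot_reading.get("temperature"),
        },
        # Simple threshold alert; a real system would pull alert rules
        # from the business systems rather than hard-coding them.
        "alert": iot_reading.get("temperature", 0) > 80,
    }

payload = build_overlay(
    "press-12",
    {"rpm": 1400, "temperature": 92},
    {"name": "Hydraulic Press 12"},
)
print(payload["alert"])  # True
```

The design point is that the AR application consumes one normalized payload, so PLM, SLM, ERP, or IoT sources can change behind the translation layer without touching the AR experience itself.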

Some of these challenges may be familiar to IT executives, while others may be new.

With these points in mind, and from the perspective of determining, planning and implementing a technology strategy, what does this mean to companies wishing to embrace enterprise AR?

Given the nature of the earlier points, and the depths of integration that may be required, one might think that AR needs to be considered only as part of a ground-up technology strategy definition. However, as with many technologies, integration and planning can happen at a later stage.

Mike Campbell, Executive VP, Augmented Reality Products at PTC, comments “Augmented Reality may be new, and its impact may be disruptive, but that doesn’t mean it can’t be woven seamlessly into your existing strategies. AR can plug into and enhance your existing technology stack, improving productivity and communications, helping to modernize training, and ultimately driving more contextual insights for employees.”

Mike makes an important point. Given that AR offers new “windows” into existing data and systems and provides new process methods, it remains important for many businesses that any disruption is a positive one for their business and not a negative one for their existing IT systems infrastructure. Meshing with existing infrastructure is key to enterprise adoption.

Mike Campbell continues: “Leaders in the AR industry work hard to make software and hardware scalable and simple for enterprise implementation. It can be integrated into a technology strategy to enhance the solutions you already have to offer in an efficient and engaging way to visualize information. You can leverage your existing CAD models or IoT data and extend their reach through AR, creating a strong digital thread in your organization and helping your employees access critical digital data in the context of the physical world where they’re doing their work.”

Given the fast pace of change in emerging technologies such as AR, businesses typically prefer not to be locked into the technological minutiae of specific vendors and clearly wish to leverage the investment in applications across multiple domains of their business, where it makes sense to do so.

Mike Campbell puts it this way: “Choosing a cross-platform AR technology that partners with powerful hardware, whether headsets or tablets, can give you more flexibility in how you want to deploy this information across your workforce, enabling you to provide solutions for employees in the field, on the factory floor, and even in the back office.”

AR can be considered a strategic technology initiative in its own right but the real power of AR is unleashed when it complements and supports other technology and business strategies. A common place for AR to really shine is at the intersection of Product Lifecycle Management (PLM), the Internet of Things (IoT) and, often, Service Lifecycle Management solutions.

AR is often used as an industrial sales and marketing tool, which typically requires only a thin veneer of strategic alignment with enterprise systems. However, the greatest value of enterprise AR comes when it is integrated with other technology strategies as part of a larger, holistic strategic technology arsenal to transform specific business areas.

Commenting on this, Mike Campbell opines: “How exactly you choose to deploy AR will depend on your business needs. If you have existing CAD models, you can build these into AR experiences to offer immersive training, maintenance, or assembly instructions that overlay these models on top of the physical machines with which they correspond.

This can drastically improve your workforce productivity and shorten the time it takes to train someone by offering in-context information where and when it’s needed. If you have IoT data, enabling employees to visualize this data in AR can provide real-time insights into the machines they’re working on, letting them quickly and easily identify problems while on the shop floor.”

In summary, considering how technology strategies are often defined, AR can be treated as revolutionary or evolutionary, enabling businesses to try, assess, learn and expand without disrupting existing IT infrastructure.  

We’ll conclude with one final thought from Mike Campbell: “The question really isn’t ‘how does AR fit into a company’s technology strategy’, but how do you want it to fit. There are countless ways AR can bring value to your business, and AR software and hardware providers are continually improving their technology to make integration powerful and simple.”

That is exactly what we’re supporting at the AREA. We’re helping a growing community of users and vendors of AR to share knowledge and tools along with developing expertise and best practices to ensure that AR adoption continues to grow in 2020 and beyond.

Within the AREA, we have several active committees that are committed to developing and driving best practices. To find out more, please visit thearea.org.

 




Arvizio’s Jonathan Reeves on Large-Scale AR

New AREA member Arvizio has been in the news recently, having just released version 4.0 of its MR Studio platform. The Ottawa-based company is led by CEO Jonathan Reeves, who has founded several successful startups, including Mangrove Systems, Sirocco Systems, and Sahara Networks, and has served as an Operating Partner at Bessemer Venture Partners and Chairman of CloudLink.

AREA: Jonathan, you’ve had an impressive track record as a serial entrepreneur and venture capitalist. You’ve been involved in a variety of companies and other areas of IT. What motivated you back in 2016 to say, “Now is the right time for me to get into the Mixed Reality business”?

JONATHAN REEVES: That’s a great question. As you noted, I have started a number of companies over the years. When considering a new venture, I always tend to look at where the markets are going and which areas of technology look the most interesting, the most challenging, or where I think there is an opportunity for innovation. It became apparent to me and the other founders of Arvizio that Augmented and Mixed Reality technology was getting close to a point of inflection where the market would develop quite rapidly in a timeframe that suited us. That is, when you start a technology company, you are trying to extrapolate where the market will be a couple of years ahead. Of course, things change along the way as the market develops and the dynamics become clear. We felt the combination of technologies, such as Microsoft HoloLens, the recently introduced Magic Leap, and the evolution of ARCore from Google and ARKit from Apple, indicated a general movement towards immersive technology, and the timing was interesting from an enterprise perspective. What we didn’t know at the time was just how rapidly 5G technology was going to enter into the mix. We are finding this to be an accelerant to the market. So, we’re rather pleased with the way the market is developing.

AREA: For our readers who are not familiar with Arvizio, can you tell us what distinguishes your company from its competitors?

JONATHAN REEVES: We believe there are several key AR challenges for the enterprise, particularly in the area of handling very large 3D models and point clouds – the type of data that’s often used in industries such as architecture, engineering, construction, energy, automotive and so forth. The 3D models used in these industries are much larger than the local rendering capability of most devices. We felt that addressing that problem would be of great interest – that is, the ability to take full-scale CAD models, BIM models from tools like Revit or Navisworks, or LiDAR scans, for example, and bring this data into Augmented and Mixed Reality experiences. We do not believe this need has been well served in the market, and we feel it presents a significant point of differentiation for us.

AREA: Do you plan to continue focusing exclusively on AEC, engineering, energy, and other large-scale enterprises?

JONATHAN REEVES: That has been our initial focus, but what we’re finding is that other industries also have similar problems. Specific types of manufacturing companies, such as automotive and shipbuilding, have similar challenges because they’re also working with immense models. While we began with a focus on the aforementioned sectors, we are finding the technology we’ve developed has relevance across industries.

AREA: Without naming any names, can you tell us about some of your customers and what they’re doing with MR Studio?

JONATHAN REEVES: A large, multinational top-10 AEC firm uses our platform extensively for visualization experiences in the presales or project pursuit phase. When a new building is to be visualized with a customer, they historically worked with Virtual Reality technology. But the downsides of VR are quite well understood: it’s not suitable for everybody, some people have motion sickness issues, and it tends to be an insular, single-person experience. We believe there are significant benefits for AR and Mixed Reality in the area of collaboration. They may wish to do a guided tour, as it were, of a new project, showing a customer, or our customer’s customer, how it’s going to appear, overlaid on the real world. We give them the ability to collaborate, share the experience with multiple users participating in the session, overlay the models on the real world, and bring Mixed Reality and Augmented Reality’s benefits to the fore. In these scenarios, you may be doing immersive walk-throughs and design reviews, looking at features before they’re built – or in brownfield scenarios, overlaying renovations and enhancements onto an existing physical space. These are a few examples of what our customers are doing with the platform.

AREA: It’s impressive how built-out and robust your platform is for a company that’s only three years old. Can you tell us a little bit about some of the more significant capabilities you’ve been adding recently and how they benefit customers?

JONATHAN REEVES: Absolutely. We are in the process of releasing our 4.0 version of the platform. One of the key features of that release is hybrid rendering. That’s the ability to split the rendering between on-device – a HoloLens, Magic Leap or mobile device – and a backend GPU-based server, where we run our MR Studio Director platform. This essentially allows you to enter a high-definition mode. For instance, if you’re looking at a mobile device in a tabletop format in a conference room or onsite and you want to get a much higher level of detail associated with the model, you can enter hybrid mode and stream the results to the mobile device from a remote server. This allows you to get a 50 to 100x increase in the available resolution of the model that you’re going to render. Imagine you’re doing a walk-through of a building or visualizing a particular piece of complex machinery and then you want a really high level of detail on a particular area. You can now achieve that in a seamless fashion.
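The hybrid-rendering decision Reeves describes can be sketched as a simple policy: render locally when the model fits the device budget, otherwise stream frames from the GPU server. The threshold value and the fallback behavior below are illustrative assumptions, not Arvizio’s actual logic.

```python
# Illustrative device budget: roughly "a few million points" on-device,
# per the interview. The exact number is an assumption for this sketch.
DEVICE_BUDGET_POINTS = 2_000_000

def choose_renderer(model_points: int, server_available: bool) -> str:
    """Pick a rendering path for a model of the given size."""
    if model_points <= DEVICE_BUDGET_POINTS:
        return "on-device"
    if server_available:
        return "hybrid-stream"        # remote GPU renders, device displays
    # No server reachable: fall back to a decimated (reduced) model.
    return "on-device-decimated"

print(choose_renderer(500_000, True))       # on-device
print(choose_renderer(300_000_000, True))   # hybrid-stream
print(choose_renderer(300_000_000, False))  # on-device-decimated
```

In practice such a policy would also weigh network latency and bandwidth, which is why the 5G testing described next matters to this mode of operation.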

This is actually being tested by a number of service providers as we speak. There’s a lot of interest in this mode of operation, running over 5G networks. Service providers such as Verizon, AT&T and Deutsche Telekom are working with us to test these capabilities and validate their operation over both existing LTE and future 5G networks.

In this scenario, latency becomes important. The ability to render models at high speed and the level of detail that you can achieve become significant. For example, point clouds, as produced in many industries by LiDAR scanners or photogrammetry, can contain hundreds of millions, or even billions, of points. Typically, mobile device processors, such as those used in HoloLens or Magic Leap or in a tablet or mobile phone, are only able to render a few million points on the device. There’s an order-of-magnitude discrepancy between what can be handled on-device and the size of the model to be rendered. In this case, we use high-performance GPUs running on edge compute servers or in the cloud to render the high-resolution model and then stream that to the mobile device. This is a key new feature that we showed recently at the Augmented World Expo show.
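One common way to bridge the on-device gap Reeves mentions is voxel-grid downsampling: partition space into cells and keep one representative point per cell. The pure-Python sketch below is only illustrative; production pipelines would use GPU- or octree-based tooling rather than a dictionary.

```python
def voxel_downsample(points, voxel_size):
    """Keep the first point seen in each voxel cell.

    points: iterable of (x, y, z) tuples; voxel_size: cell edge length.
    """
    seen = {}
    for x, y, z in points:
        # Integer cell coordinates identify the voxel a point falls in.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        if key not in seen:
            seen[key] = (x, y, z)
    return list(seen.values())

# Four points collapse into two: two near the origin share a voxel,
# and two near x=1.5 share another.
cloud = [(0.01, 0.02, 0.0), (0.02, 0.01, 0.0), (1.5, 0.0, 0.0), (1.6, 0.1, 0.0)]
print(len(voxel_downsample(cloud, voxel_size=1.0)))  # 2
```

Choosing the voxel size is the level-of-detail knob: larger voxels cut a billion-point scan down toward a device-renderable budget, at the cost of fine detail, which is exactly what the hybrid streaming mode avoids sacrificing.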

AREA: We spoke with Brian Vogelsang from Qualcomm a few weeks back and he was pointing out that same idea – that as we look to the future of AR technology, we will see more development activity devoted to dividing the computing burden between the device and computers at the network edge, figuring out where the work can best be done to deliver the best, fastest visualization and experience for the user.

JONATHAN REEVES: I think it’s an absolutely critical point, and Brian is a good friend of ours. Qualcomm featured our software running on a combination of their 5G handset reference design and the glasses from nreal at Mobile World Congress in Barcelona a few months ago, and we will also be part of their exhibit at the Enterprise Wearable Technology Summit in September. The ability to bring this kind of enterprise-class rendering and large-model handling to these new 5G-equipped headsets is of particular interest. The 5G work with Qualcomm and telecom providers is an important development in Augmented and Mixed Reality because this begins to shape the types of wireless connectivity that the headsets themselves will provide.

AREA: What is your sense of the state of Augmented Reality adoption and the pace of momentum behind it?

JONATHAN REEVES: I think that’s an interesting question that many in the industry are pondering. Several key things have happened in the last 12 months. First of all, the widespread adoption of ARKit and ARCore in mobile handsets has really opened the eyes of many to the benefits of Augmented Reality. We’re at a pivotal point.

Secondly, AR glasses are going through generational changes, so we think it is important to have a cross-platform strategy. In discussing our solution with customers, they often tell us they’re not going to select just one platform and stick with it. It is an evolving market, and we believe it is essential that users have a common or familiar experience whether using a phone, a tablet, a HoloLens, a Magic Leap, or new emerging glasses like the nreal; this has been an important part of our strategy, and we think it is extremely important for the evolution of the market.

We are also seeing some real changes in the tools that vendors like Unity are offering. Their AR Foundation is particularly helpful because it allows you to build a single application that can run on both ARKit and ARCore, for example. A lot of effort is spent ensuring the application we serve can be supported on multiple devices. In addition, we believe that the arrival of Magic Leap in the market and the HoloLens 2 represent a new generation of technology with more capabilities. The spatial mapping is significantly more powerful than before. And we think these developments put the market at somewhat of a tipping point, where during the second half of 2019 and into 2020 we will see a rapid acceleration. I believe this is being demonstrated in the market as we speak.

AREA: Why did you join the AREA and what do you hope to gain through your membership?

JONATHAN REEVES: We had been monitoring the AREA over the last couple of years and watching the evolution of the organization with interest. We have seen an increasing number of the partners we’re working with joining the AREA, and customers or prospective customers that are also joining. We felt this was a good time to join. And I must say, we’ve been very excited about the level of exposure it has already given us, and we’re very pleased and excited to be part of it.