
5 Reasons Why the DMDII/AREA Requirements Workshop Was a Milestone Event

At first glance, the two-day event promised to be a worthwhile exchange among parties with shared interests. On one side was the Digital Manufacturing and Design Innovation Institute (DMDII), which had invested considerable time and effort into creating a detailed set of requirements for enterprise AR with the assistance of American industry heavyweights Lockheed Martin, Procter & Gamble, and Caterpillar. On the other side was the AREA, the organization leading global efforts to drive adoption of AR in the enterprise. The AREA is to take over responsibility for the requirements document and its future.

But when the parties gathered in Chicago, the event proved to be more significant than anyone could have expected. Here’s why:

  1. It demonstrated the burgeoning interest in enterprise AR throughout the developing ecosystem. The event attracted 90 attendees from 45 companies – all deeply committed to AR and eager to share their thoughts with one another.
  2. It provided an unprecedented opportunity for AR hardware and software providers to engage directly with enterprise AR users. With the detailed requirements to refer to, participants were able to engage with each other substantively and specifically.
  3. It signified the beginning of a global effort to make the process of implementing AR projects simpler and more orderly. With a set of requirements that will grow, become more defined and use case-specific over time under the aegis of the AREA, enterprises will have the power to define their AR solution needs clearly and confidently. Our goal at the AREA is to make the requirements accessible and usable to the wider AR ecosystem.
  4. It gave AR solution providers a vital resource for shaping their product development roadmaps. The direct feedback of the user community made it clear to hardware and software providers where they need to invest their R&D budgets in the near and medium term.
  5. It created the basis for a more open, vibrant, and participatory AR ecosystem. As the AREA makes the requirements a “living document” to which all organizations can contribute, they will become an increasingly useful resource to a wider range of organizations and will accelerate the adoption of successful AR projects in the enterprise.

More information on how to review and participate in activities around the requirements will be announced soon at www.theAREA.org.




Augmented Reality and Industry 4.0: From Aerospace to Automotives

An article this week puts Augmented Reality in the spotlight for the changes it is bringing to aerospace and automotive manufacturing around the world. AR is not only being used increasingly on the factory floor but is revolutionizing it.

One of the AREA’s members, Lockheed Martin, is held up as an example of streamlining the manufacturing design phase and achieving operational efficiencies.

At Lockheed Martin, technicians can wear AR glasses that use cameras and depth and motion sensors to overlay images onto their work. This new method brings vastly improved accuracy and speed. At the company’s Collaborative Human Immersive Lab in Colorado, the technology is being used on a variety of spacecraft, including detailed virtual reality examination of the next Mars lander.

Augmented Reality technology is being used at many other manufacturing facilities around the world, as manufacturers begin to realise its potential for saving time and costs as well as accuracy and improved safety.

The manager of Lockheed Martin’s Collaborative Human Immersive Lab (CHIL) Darin Bolthouse says:

“I think virtual and Augmented Reality play into this idea of Industry 4.0 and automating and digitising how we do our work. So it’s the ability to more easily present complex sets of information to people. These technologies are tremendous.”




Augmented Reality and the Internet of Things boost human performance

Smart connected objects allow extensive optimizations and accurate predictions in the production line. However, this is not the only benefit that IoT can generate in industrial settings.

The purpose of this post is to explain how Augmented Reality (AR) can provide additional value to IoT data by serving as a visualization tool on the shop floor. Operators can achieve better results in less time in a number of use cases by using AR devices to consume up-to-date, contextually relevant information about IoT-enabled machines.

Industry 4.0 and the Internet of Things

The extensive use of Information and Communication Technologies (ICT) in industry is gradually leading the sector to what is called the “fourth industrial revolution,” also known as Industry 4.0. In the Industry 4.0 production line, sensors, machines, workers and IT systems will be more deeply integrated than ever before in the enterprise and in the value chain. This complete integration will ultimately optimize the industrial process, fostering its growth and driving greater competition within markets. A report from the Boston Consulting Group summarizes the nine technology advancements that are driving this revolution and will eventually define its success:

  • Big Data and Analytics
  • Autonomous Robots
  • Simulation
  • Horizontal and Vertical Integration
  • The Internet of Things
  • Cybersecurity
  • Cloud Computing
  • Additive Manufacturing
  • Augmented Reality

The Internet of Things (IoT) leads the advancements in the field as an enabling technology. The IoT concept is based on building intelligence into objects, equipment and machinery, and enabling data about their status to be transmitted over the Internet for human or software use. Through connectivity and unique addressing schemes, things are able to cooperate in order to reach a common goal. Research has identified three basic characteristics of smart objects:

  • to be identifiable through unique addresses or naming systems,
  • to be able to connect to a network,
  • to be able to interact with each other, end users or other automatic components.
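
To make these three characteristics concrete, here is a minimal Python sketch of a smart object that carries a unique identifier, serializes its status for transmission over a network, and reacts to messages from peers. The class and its methods are illustrative assumptions, not part of any specific IoT framework.

```python
import json
import uuid
from dataclasses import dataclass, field


@dataclass
class SmartObject:
    """Minimal model of a smart object with the three basic characteristics."""
    name: str
    # 1. Identifiable: every object carries a globally unique address.
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))

    # 2. Network-connected: status can be serialized and sent over the wire.
    def status_message(self, **readings) -> str:
        return json.dumps({"uid": self.uid, "name": self.name, "readings": readings})

    # 3. Interactive: objects can react to messages from peers or end users.
    def handle_message(self, message: str) -> None:
        payload = json.loads(message)
        print(f"{self.name} received data from {payload['name']}: {payload['readings']}")


# Two smart objects cooperating: a pump reports its pressure to a valve controller.
pump = SmartObject("pump-A")
valve = SmartObject("valve-7")
valve.handle_message(pump.status_message(pressure_bar=4.2))
```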

Industrial settings are paving the way for the introduction of IoT into modern society. In the Industrial IoT (IIoT) vision, any single segment of the production line can be constantly monitored through the introduction of sensors, intelligent machines and pervasive networking capabilities. Central data gathering systems can collect and analyze data about the status of the entire supply chain and dynamically react in case of failures, resource shortages and demand variations. The value brought to industry by IoT is cumulative, as more devices are brought online and their interactions captured and analyzed. In fact, gathering and aggregating supply chain variables can help to optimize production in terms of reduced waste of resources, reduced downtime, improved safety, sustainability and greater throughput.

Big Data Analytics and Machine Learning are the core technologies through which the enterprise can make sense of this enormous flow of data coming from industrial facilities. These enable the creation of mathematical models that constantly improve the precision with which they represent the real-world settings as more data feeds into them. Called “digital twins”, these models are then used not only to analyze and optimize the behavior of the equipment and the production line, but also to forecast potential failures (preventive maintenance is a byproduct of Big Data analysis).
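
As a toy illustration of how such a model grows more precise as data accumulates, the sketch below maintains running statistics for a single sensor channel and flags readings that drift outside the learned band. This is a minimal sketch only: a real digital twin would model far more than one channel, and every name and threshold here is a hypothetical assumption.

```python
import math


class SensorTwin:
    """Toy 'digital twin' of one sensor channel: a running mean and variance
    (Welford's algorithm) that become more precise as readings stream in."""

    def __init__(self, tolerance: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.tolerance = tolerance

    def std(self) -> float:
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

    def update(self, x: float) -> bool:
        """Feed one reading; return True if it deviates from the learned model."""
        anomalous = self.n > 10 and abs(x - self.mean) > self.tolerance * self.std()
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous


twin = SensorTwin()
for reading in [70.1, 69.8, 70.3, 70.0, 69.9, 70.2, 70.1, 69.7, 70.0, 70.4, 70.1, 95.0]:
    if twin.update(reading):
        print(f"Potential failure detected at reading {reading}")
```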

IoT as a tool for human effectiveness

The benefits described above, which come from the integration of IoT into advanced process automation (using technology to allow processes to take place without human input), are not the only advantages. The introduction of smart objects into industrial contexts also opens the possibility of greater effectiveness among the people working on the shop floor.

Data gathered from sensors is essential for on-site decision-making and the correct completion of tasks as workers operate smart equipment. Smart objects, also called cyber-physical systems, can support workers on several levels, improving proficiency and safety.

Design, maintenance, repair and fault diagnosis are complex tasks that require a human operator to interact with sophisticated machinery in the new industrial paradigm. The information needed to successfully carry out these tasks is proportional to the complexity of the tasks and the equipment involved. Real-time and historical data about the functional activities of the equipment are therefore critical for the decision-making process as the complexity of the systems increases. Access to this information on the site where the operator is performing these tasks becomes essential to correctly and efficiently perform them.

To give an example, the recovery procedure for a complex machine experiencing a failure needs to be informed by the current status of the machine’s components. Similarly, the proper configuration of complex mechanical systems is conditional on the values of certain internal variables measured by on-board sensors. The operator in charge of these procedures needs to be able to diagnose the problem and pinpoint the exact location of the failure while in front of the equipment, in order to immediately revert it to an optimal state. Generally this is done by analyzing real-time sensor data, computer-generated analyses or historically aggregated data.
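
A minimal sketch of this kind of fault localization appears below: given live readings keyed by component location, it returns the components whose values fall outside their allowed operating ranges. The component names, ranges and units are invented purely for illustration.

```python
# Hypothetical allowed operating ranges for the sensors on one machine.
OPERATING_RANGES = {
    "valve-1": (3.5, 5.0),          # pressure, bar
    "valve-2": (3.5, 5.0),          # pressure, bar
    "motor-bearing": (20.0, 90.0),  # temperature, deg C
}


def locate_faults(readings: dict) -> list:
    """Pinpoint the components whose live reading violates its allowed range."""
    faults = []
    for location, value in readings.items():
        lo, hi = OPERATING_RANGES[location]
        if not lo <= value <= hi:
            faults.append((location, value))
    return faults


# An AR overlay could highlight these locations directly on the equipment.
print(locate_faults({"valve-1": 4.2, "valve-2": 2.1, "motor-bearing": 95.0}))
# -> [('valve-2', 2.1), ('motor-bearing', 95.0)]
```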

Current issues with human consumption of IIoT data

In the current state of integration, where IoT technologies are deployed, the data is sent to central repositories, and operators in control rooms are in charge of monitoring and analyzing it. However, in most situations these central control rooms are distant from the location where the data is actually needed. The engineer in front of the machine in need of assistance is required to cooperate remotely with the central control room in order to diagnose a fault. The interaction in this scenario can be very slow: the on-site engineer must verbally interpret the information provided by the remote operator, while the operators in the control room lack the on-site engineer’s spatial reference to guide them. This slows the cooperation and increases the time required to solve the problem.

Some organizations have attempted to address this problem by deploying laptops on the shop floor that can access remote data. Despite being somewhat effective, laptops are only a partial solution, as the devices are usually not aware of the physical surroundings or the intention of the operator, thus dividing the operator’s attention between the object of interest and the mobile device. In general, the mobile devices currently used to interact with IoT data on the shop floor cannot interpret what the operator is looking at or the intent of the operation unless the operator manually interacts with the software interface to filter out the unneeded data.

Other companies are deploying advanced touch interfaces directly on the smart equipment. While this partially solves the issue, it also multiplies the number of screens on the shop floor and does not provide a solution for equipment that cannot be fitted with a screen (e.g., outdoor heavy machinery, oil and gas pipes, etc.).

Another crucial piece of information missing from current Human-Machine Interfaces (HMIs) is the spatial reference of the data stream. In certain situations, it is very important to visualize how the data sources are physically located in three-dimensional space in order to diagnose a fault. This information gets lost if the data streams are visualized exclusively using 2D interfaces or schematics that do not take into account the physical structure of the equipment. For example, the figure below shows two different visualizations of an oil pipeline with IoT-connected valves that stream data about their functional status. The representation on the left is not aware of the spatial disposition of the valves, while the visualization on the right makes it much easier to diagnose that the problems with the valves are caused by an external interference around the southern portion of the pipeline.

[Figure: Two different representations of the same pipeline. The one on the left does not take into account the spatial disposition of the system.]

AR and IoT: a match made in heaven

Augmented Reality provides an effective answer to all the aforementioned issues with IoT data consumption on the shop floor. Modern AR-enabled devices (both handheld and head-worn) provide a media-rich ubiquitous interface to any type of network data via wireless connection. Using sensing technologies, these devices are capable of understanding what the operator is looking at and therefore only display the data that is actually needed for the operation at hand. Using AR devices, the operator is empowered with the ability to visualize processed or unprocessed IoT data in an incredibly intuitive way.

The worker starts the interaction by pointing the AR-enabled device towards the piece of equipment in need of assistance. The device scans the equipment using cameras, identifies the object and reconstructs a spatial model of it. The application automatically gathers the list of available sensors connected to the machine by interrogating the central repository, and displays the gathered information on the equipment itself, in the exact locations where the sensors are measuring the data. Interacting via the interface, the operator can also search for historical data needed to diagnose the fault. The data visualized this way not only carries the same informative power as it does on other mobile devices, but also shows the operator the spatial relationship between the data and the machine itself.
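
A rough sketch of the overlay step is shown below: once the device has recognized a piece of equipment and estimated its pose, each sensor reading is anchored at the sensor’s physical position on the machine. The registry, equipment IDs and sensor offsets are stand-ins for a real central repository and tracking stack, not any particular product’s API.

```python
from dataclasses import dataclass

# Stand-in for the central IoT repository; a real deployment would query a
# server. Offsets are sensor positions in the equipment's local frame (meters).
SENSOR_REGISTRY = {
    "pump-A": [
        {"sensor": "inlet-pressure", "offset_m": (0.1, 0.4, 0.0), "value": 4.2, "unit": "bar"},
        {"sensor": "bearing-temp", "offset_m": (0.6, 0.2, 0.1), "value": 81.5, "unit": "C"},
    ]
}


@dataclass
class Label:
    text: str
    position_m: tuple  # world-space anchor for the floating label


def build_overlay(equipment_id: str, equipment_origin_m: tuple) -> list:
    """Place each sensor reading at the sensor's physical location, offset
    from the equipment pose estimated by the AR device's tracking."""
    ox, oy, oz = equipment_origin_m
    labels = []
    for s in SENSOR_REGISTRY.get(equipment_id, []):
        dx, dy, dz = s["offset_m"]
        labels.append(Label(f"{s['sensor']}: {s['value']} {s['unit']}",
                            (ox + dx, oy + dy, oz + dz)))
    return labels


# Object recognition and pose estimation would supply this origin at runtime.
for label in build_overlay("pump-A", equipment_origin_m=(2.0, 0.0, 5.0)):
    print(label)
```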

AR provides a display for anything. As all the objects/screens AR devices can render are completely digital, there are no restrictions as to how and where IoT data can be visualized. Even the dirtiest and most remote oil pipe, the hottest jet engine or the loudest metal printing machine can be overlaid with a number of virtual data visualizations for the operator to analyze during the process. All in all, if an object generates IoT data, AR can visualize it.

In addition, AR allows the same information to be displayed in different, more intuitive ways. Traditionally, sensor data is visualized using a mix of numbers, graphs and gauges. Using AR, however, new forms of visualization, customized for the purpose, can be designed. These visualizations can speed up the interpretation of data and better highlight faults. For example, the pressure and temperature measurements along a pump output pipe can be displayed as a color-mapped three-dimensional flow visualization overlaid directly on the pipe itself, allowing the operator to virtually “visualize” the behavior of the fluids inside the pipe, speeding up parameter tuning or fault detection.
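
To make the color-mapping idea concrete, the sketch below converts a temperature reading into an RGB color along a simple blue-to-red gradient; a renderer would apply such a mapping to every vertex of the overlaid pipe model. The temperature range is an arbitrary illustrative assumption.

```python
def temperature_to_rgb(value_c: float, lo: float = 20.0, hi: float = 120.0) -> tuple:
    """Linear blue-to-red colormap: cold readings render blue, hot ones red."""
    t = max(0.0, min(1.0, (value_c - lo) / (hi - lo)))  # normalize and clamp to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))        # (R, G, B)


# One color per sensing point along the pipe; the renderer would interpolate
# these colors across the overlaid 3D mesh.
readings_c = [25.0, 40.0, 75.0, 110.0]
print([temperature_to_rgb(r) for r in readings_c])
```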

Use cases

AR and IoT can be combined to address a number of use cases that benefit both private and public sectors. Most of these use cases share some common factors, such as mobile access to data in remote locations, the inaccessibility of certain parts of the equipment, the difficulty of fitting a screen on the object of interest, or the need for extreme operating precision.

  1. Complex machinery service efficiency: for organizations that operate and maintain large fleets of complex machinery, from aircraft to locomotives, service and repairs can be slow and costly. Without specific data on the particular components in need of repair, or the ability to predict when service is needed, assets may be taken out of service unexpectedly and service technicians may need to spend valuable time testing and isolating issues. Organizations can accelerate the process and improve efficiency by combining IoT and AR technologies. Fitting assets with sensors enables data to be streamed directly from the assets. Using this data to create digital twins of the assets, organizations can analyze and predict when and how components need to be maintained. Using AR, that data can be translated into visual information, for example highlighting which fuel injectors in an engine are causing oil pressure problems and need to be replaced. By guiding the repair technician immediately to the source of the issue, the AR/IoT combination limits the scope of the work to only what is needed. Step-by-step instructions delivered via AR ensure that the repair work is performed correctly and efficiently. GE Transportation is applying PTC’s ThingWorx and Predix software to realize efficiency gains in the 1,300 locomotive engines it repairs every year.

  2. Mechanical equipment monitoring and diagnosis: many mechanical parts, such as engines, pumps, pipelines and industrial machines, are fitted with a large number of sensors measuring physical variables such as temperature, pressure, speed, torque or humidity. These measurements are used not only to control the machine itself, but also to monitor and verify its correct functioning. During configuration and fault diagnosis, it is essential for the operator to visualize these values in real time in order to properly set up the machine in one case, and to correctly identify the root of the fault in the other. Using an AR device, the operator can visualize patterns from these real-time measurements directly on the components while the machine is operating, allowing for instantaneous functional diagnosis. DAQRI implemented a similar solution to help engineers at KSP Steel visualize data from heavy machinery directly on the shop floor.
  3. Data-driven job documentation and quality assurance:  Job documentation as well as product certification and testing usually involve long procedures during which operators test structural and functional variables of the equipment. These tests are then documented in lengthy manually written reports that are sent to a central database to serve as the basis for certification and quality assessment. The whole process can be made faster and more accurate using AR devices; the operator goes through the procedure in a step-by-step fashion, approving or rejecting the measurements taken using IoT-enabled equipment. Using AR interfaces, measurements can be visualized on the component being tested and any anomaly can be reported using automatically generated non-conformance reports sent directly to the central database alongside the related IoT data coming from the machine itself or the measurement equipment.
  4. Product design visualization: during the process of designing electro-mechanical objects, testing prototypes is very important for identifying design flaws as early as possible. However, many of the variables analyzed during this process are not visible to the human eye; after being measured through embedded sensors, they are analyzed to provide feedback for the following design iterations. In some cases, AR can provide instantaneous visual feedback on these variables so that design teams can discuss the issues during the test phase and simultaneously tune the object’s settings at run-time, accelerating the decision-making process. This video presentation by PTC president Jim Heppelmann includes an example of how CAD tools and IoT can be combined with AR to provide real-time feedback on design choices for physical objects.

  5. Smart urban infrastructure maintenance:  similar reasoning can be applied to the public sector. Most urban infrastructure is located outdoors and in hard-to-access areas, making embedded screens very difficult to use. Operators can use AR to scan large objects and detect the point of failure from real-time data visualizations. In addition, they can easily document the status of infrastructure in a digital, data-rich manner, just by pointing the device at the system.

  6. Enhanced operator safety:  AR can also be used to provide safety information to operators interacting with machines that can cause physical harm if improperly handled. DAQRI shows how a thermal camera can be used not only to visualize a thermal map, but also to indicate to the operator where it is safe to touch the object. Although the technology used by DAQRI involves the use of a thermal camera mounted on a hard hat, the same result can be easily obtained using thermal (and other types of) sensors installed directly on the machine to inform the operator of potential hazards.

The challenges

Despite being a suitable solution to the unsolved problems of IoT data consumption on the shop floor, AR still presents challenges that AR providers are currently working to overcome in order to make it more practical and useful in real-life scenarios.

The first challenge relates to the way IoT data is displayed on AR devices. As mentioned earlier, sensor data can be displayed in new, intuitive ways using bespoke 3D visualizations, facilitating on-site decision-making. However, it is difficult to create and scale up this type of visualization automatically. Providers are working on systems that integrate 3D CAD models with real-time IoT data to automatically generate “datafied” 3D models that can be overlaid on top of physical objects to display extra layers of information.
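
One plausible shape for that CAD-to-data integration is sketched below: live sensor topics are bound to named nodes of the CAD model, and the renderer reads the latest value per node on each frame to tint or annotate the corresponding part of the overlay. The topic scheme, node names and dispatch function are assumptions for illustration; a real system would receive the messages over a protocol such as MQTT.

```python
# Hypothetical binding of CAD model nodes to the IoT topics that feed them.
CAD_BINDINGS = {
    "cad://pump-A/impeller": "plant/pump-A/vibration",
    "cad://pump-A/inlet": "plant/pump-A/inlet-pressure",
}

# Latest value per CAD node; the AR renderer reads this each frame to
# "datafy" the matching part of the overlaid 3D model.
node_state = {}


def on_sensor_message(topic: str, value: float) -> None:
    """Route an incoming sensor message to every CAD node bound to its topic."""
    for node, bound_topic in CAD_BINDINGS.items():
        if bound_topic == topic:
            node_state[node] = value


# Simulated message arrivals; a real deployment would subscribe to a broker.
on_sensor_message("plant/pump-A/vibration", 0.8)
on_sensor_message("plant/pump-A/inlet-pressure", 4.1)
print(node_state)
```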

In addition, the problem of visualizing multiple data points in one single visual entity is still an open issue. While there are consolidated methods that work for traditional displays (such as sub-menus or scrollable areas), UI/UX designers are currently working on techniques to condense large amounts of data and make them interactive on AR displays.

Another important challenge has to do with data security and integration. As operators perform their jobs with mobile-connected AR devices that access sensitive data, providers must ensure, through both software and hardware security protocols, that these devices are not vulnerable to threats. The AREA has recently issued a Request for Research Proposals to members in order to foster an investigation into the issue and propose some solutions.

The future

IoT data is currently used mostly for offline processing. Many techniques allow the creation of very accurate mathematical models of the production line that enable not only cost reduction and production optimization, but also predictions of equipment performance. However, the value of this data also resides in its real-time consumption. The valuable insights generated from the real-time information produced by machines and equipment can greatly accelerate many procedures and integrate human labor even further into industrial information systems. Not taking advantage of this side of IoT means partially wasting the deployment investment.

AR is considered one of the best tools for workers and engineers to access real-time IoT data on the shop floor, directly where it is needed. AR devices are aware of the spatial configuration of the environment around the worker and can intuitively visualize real-time data, filtering out unnecessary information. As these devices get smaller and lighter, the number of use cases to which this combination of technologies can be applied is growing rapidly, covering scenarios that could not be addressed before.

Eventually, the convergence of AR and IIoT will empower human operators with greater efficacy and add to their skills in a knowledge-intensive working environment. With the advent of fully integrated automation and robotics, AR provides a great opportunity for workers to retain the indisputable value of human labor and decision-making.

What the AREA is doing

The AREA is a great supporter of the integration of AR with the rest of the Industry 4.0 technologies. For this reason the AREA recently partnered with the Digital Manufacturing and Design Innovation Institute (DMDII) for a two-day workshop on AR requirements for Digital Manufacturing. The result of this workshop – a list of hardware and software requirements for the introduction of AR technology in the factory of the future – will guide both providers and users towards efficient AR adoption.




Augmented Reality Stakeholders Convene to Move Technology Forward

Katie Mulligan of UI Labs has posted a blog entry that offers the organization’s perspective on the recent Global AR Requirements Workshop convened by DMDII and the AREA:

Sometimes in the Wild West of rapidly evolving technology, we’re stronger working together than alone.  A recent workshop proved that’s the mindset of Lockheed Martin, Caterpillar, and Procter and Gamble, which teamed up to lead a discussion of augmented reality (AR) functional requirements in hopes of moving this new technology forward and training the workforce of the future.

AR has essential applications for today’s manufacturing landscape, filling the gaps in expertise emerging as an older generation retires.

“Young people aren’t going to school to be mechanics,” said Lonny Johnson, an AR subject matter expert formerly with Caterpillar who helped facilitate the workshop. “We need to give them tools to help them learn quicker and easier.”

Unlike its cousin, virtual reality, AR is used when a machine or tool is present—to project work instructions onto an assembly line or to highlight steps to fix a machine under repair, for example. The technology’s uses run the gamut, but AR providers have struggled to understand the needs of industry, slowing wide-scale adoption by manufacturers.

To help form a consensus about the necessary functional requirements for AR, Lockheed, Caterpillar, and P&G hosted the Augmented Reality Workshop on March 1-2 in conjunction with UI LABS and the Augmented Reality for Enterprise Alliance (AREA), a global membership organization focused on reducing barriers to and accelerating the adoption of AR technologies.

“The AR Workshop was a truly groundbreaking event as it was the first time that enterprises, AR providers, and non-commercial organizations worked together and drafted a set of global AR requirements,” said Mark Sage, Executive Director of AREA. “These requirements will be used to help develop the AR ecosystem, and AREA is looking forward to communicating and driving future changes.”

The three corporate leaders worked collaboratively in advance to develop functional requirements; they then solicited feedback from DMDII members and other attendees during the workshop, held at the UI LABS Innovation Center on Chicago’s Goose Island. The effort originated as a Partner Innovation Project, or PIP, in which DMDII partners come together to engage in R&D outside of the traditional project call process and fund their project without government dollars.

Representatives from the three industry hosts led the discussion, addressing aspects across both software and hardware, including wearable technology, Skype, voice controls, and remote support. The moderators took feedback from an audience of nearly 100 participants from more than 50 organizations—including industry, AR providers, universities, and government agencies—who shared their needs and hopes for the future of AR, and described challenges they face using the technology today.

The output of the discussion—which will form the basis for a forthcoming report—will help educate enterprises and AR providers, serve as a tool to aid product planning, and give AR service providers insight into what enterprises want.

As with any new technology, one of the greatest challenges surrounding augmented reality is persuading users—in this case, manufacturers—to adopt it. If an individual has a bad experience, “it will die on the vine,” said Johnson. But as people discover its usefulness and it begins to infiltrate the workplace, we’ll likely see wider adoption.

“One success builds on another success, which builds on another. It’s all about culture,” he added.

Ensuring the technology is ready for widespread use requires cooperation. Lockheed, Caterpillar, P&G, and the other organizations at the workshop recognize the value of face-to-face collaboration, and the importance of working through these issues together in the name of innovation.




Enterprise AR Requirements Workshop: Day 2

The focus of Day 2 was software requirements, a session brilliantly managed by Roland Joseph from Procter & Gamble. The day started with a real buzz of anticipation. The evening networking and demo event had created new partnerships and friendships, and conversations continued over breakfast. As each requirement was explained, there were detailed questions, insights and clarifications from both the audience and the Requirements Team.

With over 22 sections and many “sub-requirements” within each, there were a lot of requirements to review. However, the great interaction and insight on offer meant the time flew, and very soon all the requirements had been covered.

The final part of the day was a “next steps” section where Mark Sage, Executive Director of The AREA, thanked the team for their work before outlining his thoughts on the next steps and asking for feedback from the audience.

The proposed next steps include:

  1. Creating an AREA Requirements Committee to own, manage and update the Enterprise benchmark requirements
  2. Committing to communicate the requirements to the wider AR ecosystem
  3. Creating ways for the wider ecosystem to provide feedback
  4. Creating processes and tools to ensure feedback is reviewed and that the Committee updates the requirements accordingly
  5. Creating further analysis and benchmarking activities

In summary, the event was a great success, as proven by the amazing feedback we received from the closing survey:

  • 100% of respondents said they would recommend the event to others
  • Anonymous feedback comments included:
  • “You’ll have double the attendees next time because people will realize that they should have been at this meeting”
  • “A Homerun”
  • “Event was the first of its kind and it was impressive”
  • “Wow! What a great event!”

If you are interested in joining the AREA, please contact [email protected] and keep an eye on theAREA.org for future updates.




AREA Interview: Ken Lee of VanGogh Imaging

AREA: Tell us about VanGogh Imaging and how the company started.

KEN LEE: The reason I started VanGogh was I noticed an opportunity in the market. From 2005 to 2008, I worked in medical imaging where we mainly used 3D models and would rarely go back to 2D images. 3D gives you so much more information and a much better visual experience than flat 2D images. But creating 3D content was a very difficult and lengthy process. This is the one huge problem that we are solving at VanGogh Imaging.

We started when the Microsoft Kinect first introduced low-cost 3D sensing technology. It allowed you to map in a three-dimensional way, where you can see objects and scenes and capture and track them. VanGogh started in this field around 2011 and we’ve been steadily improving our 3D capture technology for over five years, working with several clients and differentiating ourselves by delivering the highest quality and easiest way to capture 3D models.

AREA: What is Dynamic SLAM and how does it differ from standard SLAM?

KEN LEE: Standard SLAM has been around for years. It works well when the environment is fairly static – no movement, a steady scene, no lighting changes. Dynamic SLAM is a form of SLAM that can adjust to these factors, from moving objects and changing scenes to people walking in front of the camera and heavy occlusion.

AREA: Are there certain use cases or applications that are particularly suited to dynamic SLAM?

KEN LEE: Dynamic SLAM is perfect for the real world, real-time environment. In our case, we are using dynamic capture mostly to enhance the 3D capture capability – so making 3D capture much easier, but still capturing at a 3D photorealistic level and fully automating the entire capture process plus dealing with any changes.

Let’s say you’re capturing a changing scene. You can update the 3D models in real time, just as you would capture 2D images with a video camera. We can do the same thing, but every output will be an updated 3D model at that given point. That’s why Dynamic SLAM is great. You can use dynamic SLAM just for tracking – for AR and VR – but that’s just one aspect. Our focus is on having the best tracking, not just for tracking purposes, but really to use that tracking capability to capture models very easily and update them in real time.

AREA: Once you have that model, can you use it for any number of different processes and applications?

KEN LEE: Sure. For example, you can do something as basic as creating 3D content to show people remotely. Let’s say I have a product on my desk and I want to show it to you. I can take a picture of it, or in less than a minute, I can scan that product, email it, and you immediately get a 3D model. Microsoft is updating its PowerPoint software next year so you will be able to embed 3D models.

There are other applications. You can use the 3D model for 3D printing. You can also use it for AR and VR, enabling users to visualize objects as true 3D models. One of the biggest challenges in both the VR and AR industry is content generation. It is very difficult to generate true 3D content in a fully automated process, on a real-time basis, that enables you to interact with other people using that same 3D model! That’s the massive problem we’re solving. We’re constantly working on scene capture, which we want to showcase this year, using the same Dynamic SLAM technology. Once you have that, anyone anywhere can instantly generate a 3D model. It’s almost as easy as generating a 2D image.

AREA: Does it require a lot of training to learn how to do the 3D capture?

KEN LEE: Absolutely not. You just grab the object in your hand, rotate it around and make sure all the views are okay, press the button, and then boom, you’ve got a fully-textured high-resolution 3D model. It takes less than a minute. You can teach a five-year-old to do it.

AREA: Tell us about your sales model. You are selling to companies that are embedding the technology in their products, but are you also selling directly to companies and users?

KEN LEE: Our business model is a licensing model, so we license our SDK on a per-unit basis. We want to stay with that. We want to stay as a core technology company for the time being. We don’t have any immediate plan for our own products.

AREA: Without giving away any trade secrets, what’s next in the product pipeline for VanGogh Imaging?

KEN LEE: We just filed a patent on how to stream 3D models to remote areas in real time. Basically, we’ll be able to immediately capture any object or scene, as soon as you turn on the camera, as a true 3D model streaming in real time, through a low bandwidth wireless data network.

AREA: Do you have any advice for companies that are just getting into augmented reality and looking at their options?

KEN LEE: At this moment, Augmented Reality platforms are still immature. I would recommend that companies focus, not on technology, but on solving industry problems. What are the problems that the companies are facing and where could AR add unique value? Right now, the biggest challenge in the AR industry, and the reason why it hasn’t taken off yet, is that so much money has gone into building platforms, but no one has built real solutions for companies. I think they should look for opportunity in those spaces.




Themes and Challenges in Enterprise Wearables

Although the Augmented Reality smart glasses market is growing, there are still challenges; an article by AREA member BrainXChange claims devices are still lacking.

Wearables in the workplace are becoming the ‘norm’ and are benefitting business; however, there are still some challenges ahead for this emerging technology.

Devices are not meeting industry regulations, and in some fields, such as the military, this could have serious repercussions; the article notes problems including hardware that is not reliable, ergonomic, or intrinsically safe.

There are also some limitations when it comes to the working environment itself, as in the Oil and Gas industry. Vincent Higgins of Optech4D, an AREA member organization, has pointed this out before: the Oil and Gas industry normally involves operations in explosion-prone, harsh environments, which makes putting infrastructure in place to accommodate wearables extremely difficult.

These disadvantages need to be overcome so that wearables can be taken advantage of in all fields. Many enterprises are already putting wearables to good use; however, these challenges need to be addressed before widespread adoption by all businesses is possible.

For resources on overcoming barriers to augmented reality adoption, search our large bank of resources, including webinars.




Thalmic Labs Gesture Control Wearables

An article on ZDNet in December 2016 draws together information from a variety of sources to offer clues as to the product that Thalmic Labs is on the verge of introducing: a revolutionary new gesture control wearable tech product.

The article links to various news sources that help identify clues as to what the new gesture control product might be.

The article includes a video of someone wearing a band called the Myo ($199), which contains eight sensors that measure electromyographic pulses in his upper arm: electric pulses sent there by his brain to move muscles that no longer exist.

These pulses are then transmitted to a computer that analyzes them, works out what movement the person is thinking of, and commands the prosthetic limb attached to his skeleton to perform it.

Now, apparently, Thalmic Labs is building on its pioneering work in gesture control, which has been used to manipulate all manner of things, from computers and phones to drones, video games, touch screens, surgical robots, PowerPoint presentations and more.

The article speculates on whether voice control may have a part to play. The fact that Intel is participating may indicate that IoT functionality is involved, with wearables playing a prominent role.

To sum things up, based on these reports we have the possibility of gesture control, voice control and IoT functionality all rolled into one product. Other clues from the article:

  • It raised $120 million in September of this year and built a new factory.
  • There has been a marketing ‘buzz’ about a “revolutionary new product that people say will radically change the way we engage with virtual reality, gaming, smartphones, manufacturing or pretty much anything you do that can be replaced with a few subtle finger taps, swipes and gestures made in thin air.”
  • The company was founded in 2012 out of the mechatronics program at the University of Waterloo by Stephen Lake, Matthew Bailey and Aaron Grant.
  • Thalmic Labs has relocated its manufacturing from China to Waterloo, a 45-minute drive from Toronto, and its San Francisco office has been hiring.



AR Smart Glasses at CES 2017

Augmented Reality headsets were a running theme at CES Las Vegas this year, with many companies introducing headsets designed for business and industrial use, reports PC Mag.

First came the Lenovo New Glass C200, available in June, which combines AR and artificial intelligence for the enterprise, while Vuzix added the M3000 Smart Glasses to its enterprise headset line.

Osterhout Design Group has paired up with Vuforia to develop two new products: the consumer-focused R-8 and the enterprise-focused R-9 glasses. Both are set to support the creation of augmented reality applications, with the R-9 already running Qualcomm’s new Snapdragon 835 processor.

Augmented reality is already being explored by a number of enterprises, from smartphone and tablet-based AR apps to head-mounted experiences. ODG, Vuzix and Lenovo are only a small selection of the businesses developing business-focused AR glasses. Across the wider hardware industry, companies focusing on enterprise AR are developing two types of headsets: binocular and monocular.

Binocular smart glasses display content in both eyes, allowing users to see 3D content aligned with the physical world. However, this type of smart glasses, which enables true AR experiences, is known to be very difficult to create.

Monocular glasses, like the Lenovo and Vuzix headsets mentioned earlier, let users view content just outside their field of view. This allows them to see information about the objects they are looking at, although it is not aligned with the underlying world.




Smartglasses Have Value in Healthcare

A report on mHealth Intelligence dated January 12, 2017 states that hospitals and health systems are finding value in smart glasses. Digital health companies such as Augmedix and Pristine are noted, and the article features quotes from Augmedix CEO Ian Shakil. Augmedix recently closed a $23 million round of funding, six months after a capital injection of $17 million, and is partnering with four national health systems.

Benefits brought to the healthcare industry by smart glasses include:

  • Time saved on administrative work (2-3 hours per day)
  • Hands-free access to information for physicians while they are in front of the patient
  • “Dramatically more humane conversation with patients”

The article states that tech-enhanced glasses are found everywhere from operating rooms to patients’ bedsides to offices and clinics, with technological advancements continuing in areas like battery power, CPU performance, Wi-Fi capability and software. The simplicity of smart glasses may give the product an advantage over bulkier and more expensive VR and AR headsets, which can affect vision and interfere with patient conversations. Doctors and patients seem to like and value smart glasses.

In the future, the focus could move beyond scribing and data retrieval to functions like care assistance reminders, content delivery, task management and analytics.