
Calling all AR Startups: Now There’s an AREA Membership Just for You

Are you an AR startup that would like to join the AREA but has lacked the resources for a full Contributor membership? Now you can enjoy all the benefits of belonging to the AREA through our new Startup Membership.

The time-limited Startup membership offers you the full benefits of an AREA Contributor member:

  • Create awareness of your startup
  • Gain access to AREA thought leadership content
  • Attend AREA member events
  • Network with enterprises that are looking for AR solutions
  • Participate in AREA committees and help define the ecosystem
  • Get discounts to events negotiated by the AREA
  • Receive synopses of AREA research
  • Gain entry into the AREA marketplace (in development)
  • Contribute thought leadership content to the AREA blog
  • Get Contributor member voting rights
  • Be part of the only alliance focusing on AR in the Enterprise!

It’s a great way to develop your AR network and gain visibility with prospective enterprise customers. You get all this for $1500 per year – that’s $3500 less than the lowest annual fee for Contributor membership.

AREA Startup membership is limited to organizations that meet the following criteria:

  • Your total annual revenue is under $1 million.
  • Your staff size is 10 or fewer full-time and/or freelance employees.
  • Your organization has been trading for less than three years.

The AREA Startup membership package is only available for a two-year period. After the two years have elapsed, your company must choose a Contributor or Sponsor membership to continue as an AREA member.

Click here to take advantage of this exciting offer.




No more passing the buck on AR security

This is the third in a series of blog posts following the progress being made on the AREA’s first research project. Read the first two installments here and here.

In February 2016, the AREA performed a preliminary analysis of the field of enterprise AR security. We learned that virtually nothing had been published on the topic on the Web, that understanding among customers and suppliers was low, and that only a few firms, such as AREA member Augmate, were exploring how to identify and address the issues.

As has recently been demonstrated by the global “ransomware” attack and by the Distributed Denial of Service attack caused by security breaches on webcams and IoT devices in October 2016, governments, businesses and consumers reliant on Internet-connected computers are increasingly vulnerable. In the final days of 2016, analysts and cybersecurity experts predicted that “2017 will be a critical year for security, starting with how it’s built into technology. DevOps and security will change the way they work together as they realize the need to integrate with each other in order to survive.”

Unfortunately, very little attention has been focused on enterprise AR security risks since the exploratory project in early 2016, but it is my conviction that no one in the AR ecosystem can afford to continue ignoring or denying the security issues. Many AREA members agree that there is potentially a problem.

In April, the AREA kicked off its Research Committee’s first project with Brainwaive LLC. Brainwaive’s team of cybersecurity experts has been digging into topics pertaining to data security risks when introducing new, wearable Augmented Reality devices in the enterprise. I’m managing this project on behalf of the AREA’s members.

The project team is preparing reports to help AREA members understand the issues and prepare for the mitigation of risks. These reports are based on experience in security mitigation frameworks and tactics in IoT and other fields, interviews with different AR ecosystem stakeholders, and online research, as well as hands-on testing of wearable AR devices. The hands-on testing exposed many interesting risks as well as opportunities. The value these reports contain can’t be conveyed in a few posts on a blog. The reports will deliver practical approaches to those who will study them carefully.

What concerns me greatly, as I have listened to the interviews Brainwaive has recently conducted, is the apparent desire of many stakeholders involved in wearable AR device development (and the greater AR experience design and development value chain) to pass the buck on security. There’s a widespread assumption that wearable AR devices will be managed in the same fashion as other mobile devices. The weakness in this thinking is that, unlike the wearable AR displays currently being furnished for developer use, mobile devices deployed for enterprise use are security-hardened.

Sooner or later, the prevalent “it’s not my problem” mindset must change if we expect enterprise IT managers to embrace new devices and support systems enabling the changes that AR promises to deliver. For the mindset to change we need:

  • AR customers to put security mitigation as high on their list of requirements as low latency, wide Field of View and ease of use; and
  • enterprise AR technology providers to collaborate with security community leaders to design wearable AR displays with security by default, not an add-on.

If you are an AR customer who has already put data security features on your AR requirements list, please use the comments section of this blog post to share with others in our community how you have stated those requirements.

If you are a wearable AR device manufacturer who has included security features by design, please make those features clear so that the Brainwaive team, among others, can more easily evaluate them and include them in the AREA’s upcoming security framework.

Those who wish to preserve their anonymity while contributing to this important project are invited to contact Tony Hodgson, CEO of Brainwaive, directly via e-mail at [email protected].




Addressing the Security Challenges of Wearable Devices for Enterprises

Read the first installment here.

The topic of security in enterprise AR environments is both under-addressed and vital. Our cybersecurity team at Brainwaive is excited about the opportunity to work with the AREA to protect companies’ information and assets through this first-ever AREA-funded research project. The objective is to develop and popularize a reliable, repeatable means of assessing security when adopting AR headsets/glassware solutions in industrial/enterprise settings.

With several weeks of R&D behind us, the Brainwaive cybersecurity team is beginning to finalize the scope and structure of an AR Security Framework and Testing Protocol. While most of our initial focus is on security threats and the defensive posture of wearable AR devices themselves, it’s important to recognize that the headset or smart glasses are just one element in an end-to-end AR “solution stack.” The Security Framework will eventually address all the unique elements of the AR stack, including wireless networking, data gateways, cloud services, applications, and more. Additionally, full enterprise protection requires development and governance of sound use policies and procedures, and training to develop end-user competence with the systems.

From a security standpoint, wearable AR devices may seem similar to common mobile devices like smartphones and tablet computers. However, we’ve identified multiple important factors that make AR systems unique, and we’re mapping the new trust boundaries and roles of the users. The Brainwaive team will elaborate on these in the final report and in our presentation at the upcoming Augmented World Expo. Also, in this initial project, we’re focusing only on characterizing the inherent design of the wearable device hardware and software from a security perspective. In follow-on projects, we’ll perform active penetration testing to determine the robustness of device designs and their level of defense against malicious attacks.

Knowledge is power when it comes to protecting your enterprise assets from bad actors trying to break in and steal sensitive information or disrupt your operations. Employing the AREA AR Security Framework and Testing Protocol, enterprise users will be better equipped to select and use AR headset solutions providing the proper types and levels of security for their specific use cases.

Tony Hodgson is CEO of Brainwaive LLC.




Mark Your Calendar for May 17 – AREA Webinar on AR and IoT

Imagine an aircraft service facility where the maintenance crew has the actual performance data of each plane’s engine components right at their fingertips as soon as it arrives – including identification of faulty parts and step-by-step instructions on how to replace them.

That combination of IoT data and AR visualization is incredibly powerful. It promises to reduce downtime, ensure timely and appropriate maintenance, and prevent more costly repairs. And because these technologies can guide service technicians instantly to only the areas in need of repair – and provide hard data on when to replace a worn part before it fails – they can make service technicians significantly more productive and assets more reliable. However, there are still questions about this integration of AR and IoT:

  • How close is that scenario to reality?
  • What technologies are essential to making it happen?
  • What obstacles stand in the way?

To get the answers to these and other questions, you don’t want to miss the AREA’s upcoming webinar, Friends or Enemies – What is the Relationship Between Augmented Reality and IoT?  The event will be held May 17, 2017 at 8 AM Pacific/11 AM Eastern/4 PM UK/5 PM CET.

Speakers on the program include: Marc Schuetz, Director of ThingWorx Studio Product Management at PTC; Pontus Blomberg, Founder & VP, Business Development at 3D Studio Blomberg Ltd.; Carl Byers, Chief Strategy Officer of Contextere; and Giuseppe Scavo, Researcher, AR for Enterprise Alliance (AREA). AREA Executive Director Mark Sage will host.

What other use cases will benefit from the intersection of IoT and AR? What types of organizations and industries are best positioned to derive value from such solutions? Find out at the free webinar.  Register now to join us.




AREA Research Project Takes on the Challenges of Enterprise AR Security

In 2015, cybercrime damage cost the world $3 trillion, according to one estimate. By 2021, that number is expected to grow to $6 trillion. So any enterprise contemplating new IT investments is paying particular attention to the security ramifications. AR is no exception. When introducing mobile, wearable AR systems to the enterprise, there is a high level of concern about data security. Headset and smart glasses designers are rarely data security experts, and their unconventional connected systems can represent new kinds of cyber threats to enterprise businesses.

The AREA recently commissioned an important study with Brainwaive LLC, headquartered in Huntsville, AL, to evaluate this mission-critical topic and help AREA members better understand and mitigate these risks. Tony Hodgson, CEO of Brainwaive – a cybersecurity and emerging technology advisory firm serving enterprise clients – explained elements of the study.

“Initially, we’re creating the first-ever comprehensive report to identify and characterize the data security risks enterprise IT managers should be concerned about,” said Hodgson. “Our veteran cyber experts are then drawing from similar experiences they’ve had leading initiatives, such as development of the Industrial Internet Security Framework for IoT solutions (IISF) and IEEE data security standards, to create an AREA-branded Enterprise AR Security Framework.

“We’re also creating a powerful AR Device Testing Protocol, so enterprise IT managers can thoroughly evaluate threat vectors and use-case suitability of different wearable AR systems.”

Also, AR device manufacturers will have new tools to evaluate their solutions before sending them into the marketplace.

“No one can eliminate all these evolving threats, but it will certainly help AR system developers sleep better at night knowing they’ve run their device through a comprehensive analysis to understand their defensive posture,” said Hodgson. “It will also provide them with a strong and supportable answer when clients ask, ‘How safe are your systems, anyway?’”

Tony Hodgson is looking forward to making an impact with the research project.

“It’s exciting because AR-enabled systems are really beginning to emerge on the enterprise scene,” he said. “But the menagerie of devices and all the different ways they can be used presents new, invisible routes that malicious actors will take to dodge your defenses and infect your networks. This work sponsored by the AREA will certainly help companies understand what’s under the hood of these unique devices, so they can identify and mitigate these risks.”




IoT Solutions World Congress 2017

The IoT Solutions World Congress will take place in Barcelona, October 3–5, 2017; the call for papers closes on April 15, 2017. This is the leading international event linking the Internet of Things with industry. Its congress will focus on IoT solutions for industries and use cases in six dedicated areas: Manufacturing, Energy & Utilities, Connected Transport, Healthcare, Buildings & Infrastructure, and Open Industry (Retail, Agriculture, Mining, Hospitality and other industries).

The event will also offer multiple networking opportunities and activities, such as the IoT Solutions Awards Gala, a hackathon, and side events organized by event partners.

Whether you are an enterprise end user, an organization looking for sales leads, a researcher, an association member, or a developer, the IoT Solutions World Congress offers a high return on investment.

The IoTSWC is organized by Fira de Barcelona in partnership with the Industrial Internet Consortium, the Industrial IoT organization founded by AT&T, Cisco, General Electric, IBM, and Intel to bring together organizations and technology with the goal of accelerating the growth, adoption, and widespread use of industrial IoT.

More details about the event can be found at: http://www.iotsworldcongress.com/




5 Reasons Why the DMDII/AREA Requirements Workshop Was a Milestone Event

At first glance, the two-day event promised to be a worthwhile exchange among parties with shared interests. On one side was the Digital Manufacturing and Design Innovation Institute (DMDII), which had invested considerable time and effort into creating a detailed set of requirements for enterprise AR with the assistance of American industry heavyweights Lockheed Martin, Procter & Gamble, and Caterpillar. On the other side was the AREA, the organization leading global efforts to drive adoption of AR in the enterprise. The AREA is to take over responsibility for the requirements document and its future.

But when the parties gathered in Chicago, the event proved to be more significant than anyone could have expected. Here’s why:

  1. It demonstrated the burgeoning interest in enterprise AR throughout the developing ecosystem. The event attracted 90 attendees from 45 companies – all deeply committed to AR and eager to share their thoughts with one another.
  2. It provided an unprecedented opportunity for AR hardware and software providers to engage directly with enterprise AR users. With the detailed requirements to refer to, participants were able to engage with each other substantively and specifically.
  3. It signified the beginning of a global effort to make the process of implementing AR projects simpler and more orderly. With a set of requirements that will grow, become more defined and use case-specific over time under the aegis of the AREA, enterprises will have the power to define their AR solution needs clearly and confidently. Our goal at the AREA is to make the requirements accessible and usable to the wider AR ecosystem.
  4. It gave AR solutions providers a vital resource for developing their product roadmaps. The direct feedback of the user community made it clear to hardware and software providers where they need to invest their R&D budgets in the near and medium term.
  5. It created the basis for a more open, vibrant, and participatory AR ecosystem. As the AREA makes the requirements a “living document” to which all organizations can contribute, they will become an increasingly useful resource to a wider range of organizations and will accelerate the adoption of successful AR projects in the enterprise.

More information on how to review and participate in activities around the requirements will be announced soon at www.theAREA.org.




Augmented Reality and the Internet of Things Boost Human Performance

Smart connected objects allow extensive optimizations and accurate predictions in the production line. However, this is not the only benefit that IoT can generate in industrial settings.

The purpose of this post is to explain how Augmented Reality (AR) can provide additional value to IoT data serving as a visualization tool on the shop floor. Operators can achieve better results in less time in a number of use cases by using AR devices to consume up-to-date contextually relevant information about IoT-enabled machines.

Industry 4.0 and the Internet of Things

The extensive use of Information and Communication Technologies (ICT) in industry is gradually leading the sector to what is called the “fourth industrial revolution,” also known as Industry 4.0. In the Industry 4.0 production line, sensors, machines, workers and IT systems will be more deeply integrated than ever before in the enterprise and in the value chain. This complete integration will ultimately optimize the industrial process, fostering its growth and driving greater competition within markets. A report from the Boston Consulting Group summarizes the nine technology advancements that are driving this revolution and will eventually define its success:

  • Big Data and Analytics
  • Autonomous Robots
  • Simulation
  • Horizontal and Vertical Integration
  • The Internet of Things
  • Cybersecurity
  • Cloud Computing
  • Additive Manufacturing
  • Augmented Reality

The Internet of Things (IoT) leads these advancements as an enabling technology. The IoT concept is based on building intelligence into objects, equipment and machinery and enabling data about their status to be transmitted over the Internet for human or software use. Through connectivity and unique addressing schemes, things are able to cooperate in order to reach a common goal. Research has identified three basic characteristics of smart objects:

  • to be identifiable through unique addresses or naming systems,
  • to be able to connect to a network,
  • to be able to interact with each other, end users or other automatic components.

Industrial settings are paving the way for the introduction of IoT into modern society. In the Industrial IoT (IIoT) vision, any single segment of the production line can be constantly monitored through the introduction of sensors, intelligent machines and pervasive networking capabilities. Central data gathering systems can collect and analyze data about the status of the entire supply chain and dynamically react to failures, resource shortages and demand variations. The value brought to industry by IoT is cumulative, as more devices are brought online and their interactions captured and analyzed. In fact, gathering and aggregating supply chain variables can help to optimize production in terms of reduced waste of resources, reduced downtime, improved safety, sustainability and greater throughput.

Big Data Analytics and Machine Learning are the core technologies through which the enterprise can make sense of this enormous flow of data coming from industrial facilities. These enable the creation of mathematical models that constantly improve the precision with which they represent the real-world settings as more data feeds into them. Called “digital twins”, these models are then used not only to analyze and optimize the behavior of the equipment and the production line, but also to forecast potential failures (preventive maintenance is a byproduct of Big Data analysis).
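
To make the “digital twin” idea slightly more concrete, here is a deliberately minimal sketch in Python. It is only an illustration under assumed names (the DigitalTwin class and its ingest method are invented for this post): it keeps a rolling baseline for a single sensor channel and flags readings that stray far from it, standing in for the far richer physics- and machine-learning-based models real digital twins use to forecast failures.

    from dataclasses import dataclass, field
    from statistics import mean, pstdev

    @dataclass
    class DigitalTwin:
        """Toy stand-in for a digital twin of one sensor channel on a machine."""
        window: int = 500                       # how much history to keep
        readings: list = field(default_factory=list)

        def ingest(self, value: float) -> bool:
            """Check a new reading against the learned baseline, then learn from it."""
            anomalous = False
            if len(self.readings) >= 3:
                mu, sigma = mean(self.readings), pstdev(self.readings)
                anomalous = sigma > 0 and abs(value - mu) > 3 * sigma
            self.readings.append(value)
            if len(self.readings) > self.window:
                self.readings.pop(0)            # drop the oldest reading
            return anomalous

    # Feed simulated vibration readings from an IoT gateway into the twin.
    twin = DigitalTwin()
    for reading in [0.9, 1.1, 1.0, 1.05, 4.8]:  # the last value simulates a fault
        if twin.ingest(reading):
            print(f"Potential failure precursor: vibration={reading}")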

IoT as a tool for human effectiveness

The abovementioned benefits that come from the integration of IoT into advanced process automation (using technology to allow processes to take place without human input) are not the only advantages. The introduction of smart objects into industrial contexts also makes the people working on the shop floor more effective.

Data gathered from sensors is essential for on-site decision-making and the correct completion of tasks as workers operate smart equipment. Smart objects, also called cyber-physical systems, can support workers on several levels, improving proficiency and safety.

Design, maintenance, repair and fault diagnosis are complex tasks that require a human operator to interact with sophisticated machinery in the new industrial paradigm. The information needed to successfully carry out these tasks is proportional to the complexity of the tasks and the equipment involved. Real-time and historical data about the functional activities of the equipment are therefore critical for the decision-making process as the complexity of the systems increases. Access to this information on the site where the operator is performing these tasks becomes essential to correctly and efficiently perform them.

To give an example, the recovery procedure of a complex machine experiencing a failure needs to be informed by the current status of the components of the machine itself. Similarly, the proper configuration of complex mechanical systems is conditional on the values of certain internal variables measured by equipped sensors. The operator in charge of these procedures needs to be able to diagnose the problem and pinpoint the exact location of the failure when in front of the equipment in order to immediately revert it to an optimal state. Generally this is done by analyzing real-time sensor data, computer-generated analyses or historically aggregated data.

Current issues with human consumption of IIoT data

In the current state of integration, in cases where IoT technologies are deployed, the data is sent to central repositories where operators in control rooms are in charge of monitoring and analyzing it. However, in most situations, these central control rooms are distant from the location where the data is actually needed. The engineer in front of the machine in need of assistance is required to cooperate remotely with the central control room in order to diagnose a fault. The interaction in this scenario can be very slow as the on-site engineer needs to verbally interpret the information provided by the remote operator, while the operators in the control room do not have the on-site engineer’s spatial reference information to guide them, thereby slowing down the cooperation and increasing the time required to solve the problem.

Some organizations have attempted to address this problem by deploying laptops on the shop floor that can access remote data. Despite being somewhat effective, laptops are only a partial solution, as the devices are usually not aware of the operator’s physical surroundings or intent, dividing the operator’s attention between the object of interest and the interaction with the mobile device. In general, the mobile devices currently used to interact with IoT data on the shop floor cannot interpret what the operator is looking at or the intent of the operation unless the operator manually interacts with the software interface to filter out the unneeded data.

Other companies are deploying advanced touch interfaces directly on the smart equipment. While this partially solves the issue, it also multiplies the number of screens on the shop floor and does not provide a solution for equipment that cannot be fitted with a screen (e.g., outdoor heavy machinery, oil and gas pipes, etc.).

Another crucial piece of information missing from current Human-Machine Interfaces (HMIs) is the spatial reference of the data stream. In certain situations, it is very important to visualize how the data sources are physically located in three-dimensional space in order to diagnose a fault. This information gets lost if the data streams are visualized exclusively using 2D interfaces or schematics that do not take into account the physical structure of the equipment. For example, the figure below shows two different visualizations of an oil pipeline with IoT-connected valves that stream data about their functional status. The representation on the left is not aware of the spatial disposition of the valves, while the visualization on the right makes it much easier to diagnose that the problems with the valves are caused by an external interference around the southern portion of the pipeline.

Two different representations of the same pipeline. The one on the left does not take into account the spatial disposition of the system.

AR and IoT: a match made in heaven

Augmented Reality provides an effective answer to all the aforementioned issues with IoT data consumption on the shop floor. Modern AR-enabled devices (both handheld and head-worn) provide a media-rich ubiquitous interface to any type of network data via wireless connection. Using sensing technologies, these devices are capable of understanding what the operator is looking at and therefore only display the data that is actually needed for the operation at hand. Using AR devices, the operator is empowered with the ability to visualize processed or unprocessed IoT data in an incredibly intuitive way.

The worker starts the interaction by pointing the AR-enabled device towards the piece of equipment in need of assistance. The device scans the equipment using cameras, identifies the object and reconstructs a spatial model of it. The application automatically gathers the list of available sensors connected to the machine by interrogating the central repository, and displays the gathered information on the equipment itself, in the exact locations where the sensors are taking their measurements. Interacting via the interface, the operator can also search for historical data needed to diagnose the fault. The data visualized this way carries the same informative power as it would on any other mobile device, but it also gives the operator the spatial relationship between the data and the machine itself.
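
That flow can be sketched in a few lines of Python. Everything here is illustrative and assumed: the repository URL, the response format, and the recognize_asset and render_label calls stand in for whatever object-recognition and rendering services a particular AR platform actually provides.

    import requests  # assumes the central repository exposes a simple HTTP API

    REPO_URL = "https://iot-repository.example.com"  # hypothetical endpoint

    def overlay_sensor_data(camera_frame, ar_session):
        """Recognize the equipment in view and pin live readings onto it."""
        # 1. Identify the asset and recover its pose from the camera frame.
        asset_id, pose = ar_session.recognize_asset(camera_frame)

        # 2. Ask the central repository which sensors are attached to this asset,
        #    including each sensor's position in the asset's own coordinate frame.
        sensors = requests.get(f"{REPO_URL}/assets/{asset_id}/sensors").json()

        # 3. Fetch the latest reading for each sensor and render it at the point
        #    on the physical machine where the measurement is actually taken.
        for sensor in sensors:
            reading = requests.get(f"{REPO_URL}/sensors/{sensor['id']}/latest").json()
            position = pose.transform(sensor["local_position"])
            ar_session.render_label(
                position=position,
                text=f"{sensor['name']}: {reading['value']} {reading['unit']}",
            )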

AR provides a display for anything. As all the objects/screens AR devices can render are completely digital, there are no restrictions as to how and where IoT data can be visualized. Even the dirtiest and most remote oil pipe, the hottest jet engine or the loudest metal printing machine can be overlaid with a number of virtual data visualizations for the operator to analyze during the process. All in all, if an object generates IoT data, AR can visualize it.

In addition, AR allows the same information to be displayed in different, more intuitive ways. Traditionally, sensor data is visualized using a mix of numbers, graphs and gauges. Using AR, however, new forms of visualization, customized for the purpose, can be designed. These visualizations can speed up the interpretation of data and better highlight faults. For example, the pressure and temperature measurements along a pump output pipe can be displayed as a color-mapped three-dimensional flow visualization overlaid directly on the pipe itself, allowing the operator to virtually “see” the behavior of the fluid inside the pipe and speeding up parameter tuning or fault detection.
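
As a small illustration of the color-mapping idea, the fragment below (plain Python, with an assumed list of temperature probes spaced along the pipe) converts each reading into an RGB color between blue (cool) and red (hot) that an AR renderer could paint onto the corresponding pipe segment.

    def temperature_to_rgb(value, lo=20.0, hi=120.0):
        """Map a temperature in degrees C onto a blue-to-red gradient."""
        t = max(0.0, min(1.0, (value - lo) / (hi - lo)))  # clamp to [0, 1]
        return (int(255 * t), 0, int(255 * (1 - t)))      # (R, G, B)

    # Hypothetical readings from probes spaced along a pump output pipe.
    pipe_temperatures = [24.5, 31.2, 58.0, 96.7, 112.3]

    segment_colors = [temperature_to_rgb(v) for v in pipe_temperatures]
    # An AR renderer would now tint each pipe segment with its color, so the
    # operator literally sees the hot spot forming near the outlet.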

Use cases

AR and IoT can be combined to address a number of use cases that benefit both the private and public sectors. Most of these use cases share some common factors, such as mobile access to data in remote locations, the inaccessibility of certain parts of the equipment, the difficulty of fitting a screen on the object of interest, or the need for extreme operating precision.

  1. Complex machinery service efficiency: for organizations that operate and maintain large fleets of complex machinery, from aircraft to locomotives, service and repairs can be slow and costly. Without specific data on the particular components in need of repair or the ability to predict when service is needed, assets may be taken out of service unexpectedly and service technicians may need to spend valuable time testing and isolating issues. Organizations can accelerate the process and improve efficiency by combining IoT and AR technologies. Arming assets with sensors enables them to stream data directly. Using this data to create digital twins of the assets, organizations can analyze and predict when and how components need to be maintained. Using AR, that data can be translated into visual information – for example, highlighting which fuel injectors in an engine are causing oil pressure problems and need to be replaced. By guiding the repair technician immediately to the source of the issue, the AR/IoT combination limits the scope of the work to only what is needed. Step-by-step instructions delivered via AR ensure that the repair work is performed correctly and efficiently. GE Transportation is applying PTC’s ThingWorx and GE’s Predix software to realize efficiency gains in the 1,300 locomotive engines it repairs every year.

  2. Mechanical equipment monitoring and diagnosis:  many mechanical parts, such as engines, pumps, pipelines and industrial machines, are fitted with a large number of sensors to control physical variables, such as temperature, pressure, speed, torque or humidity. These measurements are used not only to control the machine itself, but also to monitor and verify its correct functioning. During configuration and fault diagnosis, it is essential for the operator to visualize these values in real time in order to properly set up the machine in one case, and correctly identify the root of the fault in the other. Using an AR device, the operator can visualize patterns directly from these real-time measurements on the components while the machine is operating, allowing for instantaneous functional diagnosis. DAQRI implemented a similar solution to help engineers at KSP Steel to visualize data from heavy machinery directly on the shop floor.
  3. Data-driven job documentation and quality assurance: job documentation as well as product certification and testing usually involve long procedures during which operators test structural and functional variables of the equipment. These tests are then documented in lengthy, manually written reports that are sent to a central database to serve as the basis for certification and quality assessment. The whole process can be made faster and more accurate using AR devices: the operator goes through the procedure in a step-by-step fashion, approving or rejecting the measurements taken using IoT-enabled equipment (a minimal sketch of this flow appears after this list). Using AR interfaces, measurements can be visualized on the component being tested, and any anomaly can be reported using automatically generated non-conformance reports sent directly to the central database alongside the related IoT data coming from the machine itself or the measurement equipment.
  4. Product design visualization: during the process of designing electro-mechanical objects, testing prototypes is very important for identifying design flaws as early as possible. However, many of the variables analyzed during this process are not visible to the human eye; after being measured through embedded sensors, they are analyzed to provide feedback for subsequent design iterations. In some cases, AR can provide instantaneous visual feedback on these variables so that design teams can discuss issues during the test phase and simultaneously tune the object’s settings at run-time, accelerating the decision-making process. This video presentation by PTC president Jim Heppelmann includes an example of how CAD tools and IoT can be combined with AR to provide real-time feedback on design choices for physical objects.

  5. Smart urban infrastructure maintenance:  similar reasoning can be applied to the public sector. Most urban infrastructure is located outdoors and in hard-to-access areas, making embedded screens very difficult to use. Operators can use AR to scan large objects and detect the point of failure from real-time data visualizations. In addition, they can easily document the status of infrastructure in a digital, data-rich manner, just by pointing the device at the system.

  6. Enhanced operator safety:  AR can also be used to provide safety information to operators interacting with machines that can cause physical harm if improperly handled. DAQRI shows how a thermal camera can be used not only to visualize a thermal map, but also to indicate to the operator where it is safe to touch the object. Although the technology used by DAQRI involves the use of a thermal camera mounted on a hard hat, the same result can be easily obtained using thermal (and other types of) sensors installed directly on the machine to inform the operator of potential hazards.
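
As a minimal sketch of the documentation flow in use case 3 above, the fragment below compares measurements pulled from IoT-enabled test equipment against tolerance ranges and packages any violation as a non-conformance report. The tolerance values, field names and the idea of posting the JSON report to a central database are illustrative assumptions, not a description of any specific product.

    import json
    from datetime import datetime, timezone

    # Assumed tolerance ranges for the component under test (illustrative values).
    TOLERANCES = {"torque_nm": (45.0, 55.0), "gap_mm": (0.10, 0.25)}

    def check_step(measurements: dict) -> dict:
        """Compare IoT measurements for one procedure step against tolerances."""
        failures = {
            name: value
            for name, value in measurements.items()
            if name in TOLERANCES
            and not (TOLERANCES[name][0] <= value <= TOLERANCES[name][1])
        }
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "measurements": measurements,
            "conforms": not failures,
            "non_conformances": failures,
        }

    # The operator approves or rejects each step in the AR interface; a rejected
    # step is packaged as a non-conformance report automatically.
    report = check_step({"torque_nm": 58.3, "gap_mm": 0.18})
    if not report["conforms"]:
        print(json.dumps(report, indent=2))  # in practice: sent to the central database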

The challenges

Despite being a suitable solution to the unsolved problems of IoT data consumption on the shop floor, AR still presents challenges that AR providers are currently working to overcome in order to make it more practical and useful in real-life scenarios.

The first challenge relates to the way IoT data is displayed on AR devices. As mentioned earlier, sensor data can be displayed in new, intuitive ways using bespoke 3D visualizations, facilitating on-site decision-making. However, it is difficult to create and scale up this type of visualization automatically. Providers are working on systems that integrate 3D CAD models with real-time IoT data to automatically generate “datafied” 3D models that can be overlaid on top of physical objects to display extra layers of information.
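
One way to picture such a “datafied” model is as a binding table between nodes of the CAD model and IoT data streams; a renderer that already knows how to draw the model can then decorate each bound part with its live value. The sketch below is purely illustrative, and the node names, stream identifiers and styles are invented.

    # Hypothetical bindings between CAD model nodes and IoT data streams.
    BINDINGS = [
        {"cad_node": "pump_housing/bearing_front", "stream": "vibration_rms", "style": "heatmap"},
        {"cad_node": "pump_housing/seal",          "stream": "temperature_c", "style": "gauge"},
        {"cad_node": "outlet_pipe/segment_03",     "stream": "pressure_bar",  "style": "heatmap"},
    ]

    def datafy_model(latest_values: dict) -> list:
        """Produce per-node overlay instructions from the latest stream values."""
        return [
            {"node": b["cad_node"], "style": b["style"], "value": latest_values.get(b["stream"])}
            for b in BINDINGS
        ]

    overlays = datafy_model({"vibration_rms": 2.4, "temperature_c": 81.0, "pressure_bar": 6.3})
    # Each entry tells the AR renderer which part of the 3D model to decorate and how.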

In addition, the problem of visualizing multiple data points in one single visual entity is still an open issue. While there are consolidated methods that work for traditional displays (such as sub-menus or scrollable areas), UI/UX designers are currently working on techniques to condense large amounts of data and make them interactive on AR displays.

Another important challenge has to do with data security and integration. As operators are performing their jobs with mobile-connected AR devices that access sensitive data, providers must be sure that these devices are not vulnerable to threats using both software and hardware security protocols. The AREA has recently issued a Request for Research Proposals to members in order to foster an investigation into the issue and propose some solutions.

The future

IoT data is currently used mostly for offline processing. Many techniques allow the creation of very accurate mathematical models of the production line that enable not only cost reduction and production optimization, but also predictions of equipment performance. However, the value of this data also resides in its real-time consumption. The valuable insights generated from the real-time information produced by machines and equipment can greatly accelerate many procedures and integrate human labor even further into industrial information systems. Not taking advantage of this side of IoT means partially wasting the deployment investment.

AR is considered one of the best tools for workers and engineers to access real-time IoT data on the shop floor, directly where it is needed. AR devices are aware of the spatial configuration of the environment around the worker and can intuitively visualize real-time data, filtering out unnecessary information. As these devices get smaller and lighter, the number of use cases to which this combination of technologies can be applied is growing rapidly, covering scenarios that could not be addressed before.

Eventually, the convergence of AR and IIoT will empower human operators with greater efficacy and will add to their skills in a knowledge-intensive working environment. With the advent of fully integrated automation and robotics, AR provides a great opportunity for workers to retain the indisputable value of human labor and decision-making.

What the AREA is doing

The AREA is a great supporter of the integration of AR with the rest of the Industry 4.0 technologies. For this reason, the AREA recently partnered with the Digital Manufacturing and Design Innovation Institute (DMDII) for a two-day workshop on AR requirements for Digital Manufacturing. The result of this workshop – a list of hardware and software requirements for the introduction of AR technology in the factory of the future – will guide both providers and users towards efficient AR adoption.




Three Lessons We’ve Learned Developing AR Solutions

Industries and enterprises are adopting AR solutions to strengthen their competitive advantage and get customers engaged in business activities. As an organization that has faced AR development challenges every day, we at Program-Ace have learned three essential lessons that could be handy for those seeking to create powerful and engaging augmented reality experiences.

1. AR helps tell a product’s story, so make it important for users.

Augmented reality technology enables storytelling. It makes us see everyday objects in a different light by making visible what has been invisible, enabling us to visualize 2D images in new ways, and bringing inanimate objects to life. In other words, it has the capacity to humanize the technology. This, in turn, dramatically increases the value and recognition of the product (or any other object of your choice). A good story not only strengthens the sense of presence but also brings users closer to the product and engages them in the tech community.

To deliver valuable applications for the business world, Program-Ace conducts extensive marketing research that studies existing products, possible competitors, and consumer behavior in both B2B and B2C markets to discover weaknesses and consider the most profitable potential opportunities. In our development adventures, the Program-Ace team has drawn one important conclusion: AR development is not just about the smooth integration of CG content with the physical environment; it is about allowing consumers to be connected to the virtual realm. Moreover, the app ideation process (the phase in which you create the concept, define the technological feasibility, and understand the time constraints) can also be supported with product usage data and information regarding solutions already available in the market along with their strengths and weaknesses.

2. Gamification can be a successful way to drive user acceptance and productivity.

Augmented reality technology has had a significant influence on the development of various wearables, headsets, and head-mounted devices. And, of course, gamers are among the early adopters of these advanced accessories. For that reason, many people hold the opinion that it is necessary to develop games in order to be noticeable in the market. While that might be true for some industries, such as education and defense, when it comes to retail, government, or banking, you need a serious approach to the business. Still, gamification can be an effective approach for you.

Even though it originated in the gaming world, gamification has proved to be an extremely effective tool for user acquisition, virality, and customer conversion. At Program-Ace, we have long realized that companies should focus on what the gaming experience can bring to the AR application, instead of creating games. When you deliver proofs of concept to your clients using basic and advanced gamification features, such as multi-layered storytelling, competition, rewards, lifelike avatars, etc., you can drive user engagement and increase productivity.

3. Platform-specific apps are an endangered species.

Contrary to the conventional wisdom that one winning platform will soon become an AR market monopolist, we do not see any indication of this yet. Instead, the market is full of various products designed for different user needs and demands, and it is highly unlikely that the diversity of platforms will disappear in the next five years. Accordingly, our experience has taught us to build platform-agnostic applications using a cross-platform approach that has worked well for our customers for more than 20 years, helping them to pursue market leadership while remaining platform-independent and responsive to user requirements.

Multi-platform (or cross-platform) AR development means creating one application that can be deployed to any platform and is customized up front to respect the features of each particular platform or device (a generic sketch of this idea follows below). However, in some cases this approach is ineffective, especially when the target audience is used to native apps. In such situations, our team creates experiences aimed at a specific type of device. For instance, one of our mini games, Archy the Rabbit, was initially designed cross-platform for iOS and Android. With the introduction of HoloLens, we ported it to that platform by changing the game UI, adding new features, and programming the app to recognize gestures, voice, and gaze. A combination of the Unity game engine and HoloToolKit helped our team develop important app functionality such as spatial sound, voice recognition, and spatial mapping with minimal effort and improved human-computer interaction (HCI).
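
The platform-agnostic approach described above boils down to keeping the application logic behind a thin, shared interface and isolating device specifics (touch, gesture, voice, gaze) in per-platform adapters. The sketch below is a generic illustration in Python rather than the Unity/C# code an actual HoloLens port would use; all class and method names are invented, and the adapters are stubbed where a real build would call the platform SDK.

    from abc import ABC, abstractmethod

    class InputAdapter(ABC):
        """Platform-agnostic contract that the shared game logic depends on."""

        @abstractmethod
        def primary_select(self) -> bool:
            """Did the user trigger the 'select' action this frame?"""

        @abstractmethod
        def pointing_direction(self) -> tuple:
            """Direction the user is pointing, as a 3D vector."""

    class TouchscreenAdapter(InputAdapter):
        """Mobile build: taps and screen-space rays (stubbed here)."""
        def primary_select(self) -> bool:
            return True                     # real build: query the touch API
        def pointing_direction(self) -> tuple:
            return (0.0, 0.0, 1.0)          # real build: ray from the last touch point

    class HeadsetAdapter(InputAdapter):
        """Head-worn build: air-tap gesture and head gaze (stubbed here)."""
        def primary_select(self) -> bool:
            return True                     # real build: query the gesture recognizer
        def pointing_direction(self) -> tuple:
            return (0.1, -0.2, 0.97)        # real build: the current gaze ray

    def game_update(inputs: InputAdapter) -> None:
        """Shared game logic never touches platform APIs directly."""
        if inputs.primary_select():
            print("spawn object along", inputs.pointing_direction())

    # The same game_update runs unchanged on either build.
    game_update(TouchscreenAdapter())
    game_update(HeadsetAdapter())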

Shaping the future

As the next phase of computing, augmented reality offers an opportunity to shape the future of HCI and technology itself. In order to be creative and deliver compelling AR experiences, we have begun to focus on the principles above. These lessons have enabled us to design applications that maximize the value of the technology. By remembering these AR development lessons, you can crystallize your thinking and focus your efforts on developing successful and engaging AR applications.

 

Anastasiia Bobeshko is the Chief Editor at Program-Ace.




Features Worth Seeking in an Augmented Reality SDK

Interest in AR SDKs has intensified since last year, when one of the leading solutions, Metaio, was sold to Apple, leaving an estimated 150,000+ developers in search of a replacement. Vuforia remains the market leader, but there are many good alternatives in the marketplace, some of which are already quite well known, such as EasyAR, Blippar, and Wikitude.

So, what criteria should a developer apply in evaluating AR SDKs? The answer to that question will vary. There are many factors developers need to consider in choosing an SDK, including key features and cost. Portability is another issue, since some SDKs only work on certain hardware.

However, there are a handful of key features and capabilities that all developers should look for when evaluating their next AR SDK:

  • Cloud-based storage to support a greater number of 2D markers. 2D object tracking is the most basic form of mapping and allows an application to recognize a flat surface, which can then be used to trigger a response, such as making a 3D image or effect appear on top of it, or playing a movie trailer where a poster used to be. This is simple to do and all SDKs support it; however, a key difference among SDKs is the number of markers that can be recognized. Many SDKs support around 100 markers as standard, but others allow for a nearly unlimited number of markers by using very fast cloud storage software to hold a much larger database of markers (a minimal sketch of this lookup flow appears after this list). When an AR application can recognize more 2D objects, developers can create more robust applications that trigger more AR effects.
  • 3D object tracking. 3D object tracking expands the opportunities for AR developers by allowing 3D objects, such as a cup or a ball, to be used as AR markers that can then be recognized by the app to trigger an AR effect. This can be useful for advertising-related applications, and also for use in games. For example, toys can come alive and talk in AR because they can be recognized as unique entities by this type of tracking. While 3D tracking is not yet a universal capability among SDKs, it is becoming more common and affords a developer greater latitude in creating compelling, lifelike AR applications.
  • SLAM support. Simultaneous Localization And Mapping has become an increasingly desirable feature in an AR SDK because it allows for the development of much more sophisticated applications. In layman’s terms, SLAM allows the application to build a map of the environment while simultaneously tracking its own movement through the environment it is mapping. When done well, it gives the application enough depth information to know where things are in a room. For example, if an AR image is appearing over a table, SLAM allows the application to remember where the table is and to keep the AR image over it. SLAM also allows users to look around a 3D image, and move closer to it or farther from it. It combines several different input formats and is very hard to do accurately. Some SDKs offer this functionality, but it is quite challenging and processor-intensive to make it work smoothly, particularly with a single camera. Look for an SDK that can handle SLAM effectively with a single camera.
  • Unity support + native engine. For some applications, it is important that an SDK supports the Unity cross-platform game engine. Unity is one of the most accessible ways to produce games and other entertainment media, and it also simplifies the development process, since Unity applications can run on almost all hardware. Most SDKs operate through Unity to allow for some very sophisticated AR experiences. However, using Unity as a framework can be disadvantageous in certain applications because it is highly resource-intensive and can slow down AR experiences. As a result, some SDKs offer their own engines that function natively on iOS or Android devices, without the need for Unity. These native engines can deliver much smoother experiences with robust tracking on each platform, although they do introduce the need for a separate coding team for each platform. This is not an issue if a developer is only planning to release on one platform; in that case, a developer may find that an application runs substantially faster when coded natively rather than through a Unity plug-in.
  • Wearables support. Smart glasses and other wearables allow AR experiences to be overlaid on the world we see before us, while offering a hands-free experience. As the use of wearables grows, developers producing content for future devices need to ensure that the software they are working with will support the devices they are building for.
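
As a rough illustration of the cloud-marker lookup mentioned in the first bullet above, the sketch below (Python; the recognition-service URL, request format and response fields are all invented, and real SDKs expose this through their own native or Unity APIs) sends a fingerprint of the current camera frame to a cloud marker database and triggers whatever content is associated with the best match.

    import requests  # assumes the cloud marker service exposes an HTTP API

    MARKER_SERVICE = "https://markers.example.com/recognize"  # hypothetical endpoint

    def recognize_and_trigger(frame_fingerprint: bytes, content_player) -> None:
        """Look up a 2D marker in the cloud database and trigger its content."""
        response = requests.post(MARKER_SERVICE, data=frame_fingerprint, timeout=0.5)
        match = response.json()  # e.g. {"marker_id": ..., "score": 0.93, "content_url": ...}

        if match.get("score", 0.0) < 0.8:  # below the confidence threshold: no marker in view
            return

        # The cloud database can hold far more markers than would fit on the device;
        # the device only needs the single entry that matched this frame.
        content_player.play(match["content_url"])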

When you have narrowed down your candidate SDKs based on these and other evaluation criteria, I recommend that you try them out. Many providers offer free trial versions that may include a subset of the features found in their professional versions. Trying an SDK will enable you to determine whether its interface suits your style of working and the type of application you are developing.

My final piece of advice is to examine the costs of SDKs carefully. Some have licensing models that are priced on the number of applications downloaded or AR toys sold. This may be the most critical purchase criterion, particularly for independent developers.

Albert Wang is CTO of Visionstar Information Technology (Shanghai) Co., Ltd., an AREA member and developer of the EasyAR SDK.