When a Developer Needs to Author AR Experiences, Part 2

This post is a continuation of the topic introduced in another post on the AREA site.

Choose a Development Environment

Someday, the choice of an AR development environment will be as easy as choosing a CMS for the Web or an engineering software package for generating 3D models. Today, it’s a lot more complicated for AR developers.

Most apps that can present AR experiences are created using a game development environment, such as Unity 3D. When the developer publishes an iOS, Windows 10 or Android app from Unity 3D, it is usually ready to load and will run using only local components (i.e., it contains the MAR Scene, all the media assets and the AR Execution Engine).

Although Unity has a substantial learning curve, its developer community and the systems that support that community are very well developed. And once a developer has learned Unity, they are not limited to creating only apps with AR features. The cost of the product for professional use is not insignificant, but many are able to justify the investment.

An alternative to using a game development environment and AR plugin is to choose a purpose-built AR authoring platform. This is appropriate if the project has requirements that can’t be met with Unity 3D.

Though they are not widely known, there are over 25 software engineering platforms designed specifically for authoring AR experiences.

[Image: AR authoring landscape]

Table 1. Commercially Available AR Authoring Software Publishers and Solutions (Source: PEREY Research & Consulting).

The table above lists the platforms I identified in early 2016 as part of a research project. Please contact me directly if you would like to obtain more information about the study and the most current list of solutions.

Many of the AR authoring systems are very intuitive (featuring drag-and-drop actions and widgets presented through a Web-based interface); however, most remain unproven and their respective developer communities are relatively small.

Some developers of AR experiences won’t have to learn an entirely new platform because a few engineering software publishers have extended their platforms designed for other purposes to include authoring AR experiences as part of their larger workflow.

Or Choose a Programming Language

Finally, developers can write an AR execution engine and the components of the AR experience into an app “from scratch” in the programming language of their choice.

To take advantage of and optimize AR experiences for the best possible performance on a specific chipset or AR display, some developers write in a compiled language such as C++ that produces native machine code the AR display device can run directly.

Many developers already using JavaScript can leverage their skills to access valuable resources such as WebGL, but an application written in this language alone can be slow and, depending on the platform, may fail to perform at the levels users expect.

To reduce some of the effort and build upon the work of other developers, Argon.js and AWE.js are Open Source JavaScript frameworks for adding Augmented Reality content to Web applications.
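
To make that concrete, here is a minimal sketch of the browser building blocks such frameworks wrap. This is not the Argon.js or AWE.js API (their interfaces differ); the label text and fixed coordinates are hypothetical placeholders for what a real tracker would compute.

```javascript
// Minimal web AR skeleton: live camera video with a canvas overlay.
// A sketch of standard browser APIs only; a framework such as Argon.js
// or AWE.js adds the tracking that decides where overlays belong.
const video = document.createElement('video');
video.setAttribute('playsinline', ''); // inline playback on mobile browsers
video.muted = true;                    // helps satisfy autoplay policies
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
document.body.append(canvas);

navigator.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
  .then((stream) => {
    video.srcObject = stream;
    return video.play();
  })
  .then(() => {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    requestAnimationFrame(renderFrame);
  })
  .catch((err) => console.error('Camera access failed:', err));

function renderFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  // A real AR engine would place this label using a tracked object's pose;
  // the fixed coordinates here are hypothetical placeholders.
  ctx.fillStyle = 'yellow';
  ctx.font = '24px sans-serif';
  ctx.fillText('Augmented label', 40, 60);
  requestAnimationFrame(renderFrame);
}
```

Served over HTTPS, this draws the camera feed and a static label on every frame; most of what an AR framework contributes amounts to replacing those fixed coordinates with poses derived from detection and tracking.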

Results Will Depend on the Developer’s Training and Experience

In my experience, it’s difficult to draw a line between the selection of an AR authoring tool or approach and the quality or richness of the final AR application. The sophistication and quality of the AR experience in an app is a function of both the tools chosen and the skills of the engineers. When those behind the scenes (a) ensure the digital content is well suited to delivery in AR mode; (b) choose the components that match requirements; and (c) design the interactions well, a project will have the best possible impact.

As with most things, the more experience the developer has with the components that the project requires, the better the outcomes will be. So, while the developer has the responsibility for choosing the best authoring tool, it is the AR project manager’s responsibility to choose the developer carefully.




How Optical Character Recognition Makes Augmented Reality Work Better

Today, companies in many industries seek to develop AR and VR applications for their needs, with the range of existing Augmented Reality solutions extending from gimmicky marketing apps to B2B software. Helping manufacturing companies train their workers on the job by augmenting service steps onto broken machines is one such solution.

Augmented Reality could help designers or architects see a product while it is still in development. It could facilitate a marketing and sales process, because customers can already “try on” a product from a digital catalog. Or it could assist warehouse systems, so that users get support in the picking and sorting process.

The list of opportunities is endless, and new use cases are constantly arising. The whole point of using AR is to make processes easier and faster. While Augmented Reality and devices like smart glasses seemed far too futuristic at first, new use cases make them increasingly suitable for everyday life in the workplace.

Recognizing Objects and Characters

Augmented Reality is based on a vital capability: object recognition. For a human being, recognizing a multitude of different objects is not a challenge; even objects partially obscured from view can still be identified. For machines and devices, however, this remains difficult, and for Augmented Reality it is crucial.

A smartphone or smart glasses can’t display augmented overlays without recognizing the object first. For correct augmentation, the device has to be aware of its surroundings and adapt its display in real time to each situation, even as the camera’s viewing angle changes. Augmented Reality applications use object detection and recognition to determine what information should be added to the display. They also use object tracking technologies to continually follow an object’s movements rather than re-detecting it in every frame. That way the object remains in the frame of reference even if the device is moved around.
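
The detect-then-track pattern can be sketched as a simple loop. The detectObject and trackObject functions below are hypothetical stand-ins for a computer vision library’s calls; the point is only why updating a known pose is cheaper than re-running detection on every frame.

```javascript
// Hypothetical detect-then-track loop. detectObject() and trackObject()
// stand in for a real computer vision library's calls.
let trackedPose = null; // last known position/orientation of the target

function processFrame(frame) {
  if (trackedPose === null) {
    // Full detection is expensive, so it runs only while no target is known.
    trackedPose = detectObject(frame);
  } else {
    // Tracking updates the known pose cheaply from frame to frame.
    trackedPose = trackObject(frame, trackedPose);
    if (trackedPose === null) {
      // Target lost (occluded or out of view): fall back to detection.
      trackedPose = detectObject(frame);
    }
  }
  return trackedPose; // the pose determines where the overlay is drawn
}
```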

Character recognition is also crucial for a device’s understanding of the environment, as it not only needs to recognize objects but, depending on the use case, may also have to “read” them. This provides an even better discernment of the types of information that are important to process.

[Image: OCR scanning with Anyline]

Optical Character Recognition

Optical Character Recognition (OCR) deals with the problem of recognizing optically processed characters, such as those in the image above. Both handwritten and printed characters may be recognized and converted into computer-readable text. Any kind of serial number or code consisting of numbers and letters can be transformed into digital output. Put very simply, the captured image is preprocessed, and the characters are then extracted and recognized. Many current applications, especially in the field of automation and manufacturing, use this technology.
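
As a sketch of that pipeline, the snippet below uses the open source Tesseract.js library; the call shape matches its commonly documented promise-based API, but treat the exact signature as an assumption, and the image filename is hypothetical.

```javascript
// Minimal OCR sketch with Tesseract.js: image in, machine-readable text out.
import Tesseract from 'tesseract.js';

// In a real pipeline the captured frame would be preprocessed first
// (cropping, deskewing, binarization) before recognition.
Tesseract.recognize('serial-plate.jpg', 'eng')
  .then(({ data }) => {
    // data.text holds the recognized characters as computer-readable text.
    console.log('Recognized:', data.text.trim());
  })
  .catch((err) => console.error('OCR failed:', err));
```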

What OCR doesn’t take into account is the actual nature of the object being scanned. It simply “looks” at the text that should be converted. Putting together Augmented Reality and OCR therefore provides new opportunities; not only is the object itself recognized, but so is the text printed on that object. This boosts the amount of information about the environment gathered by the device, and increases the decision-support capabilities offered to users.

The Potential of OCR

Data capture via OCR still requires significant processing power and camera resolution, and it can be expensive. Nevertheless, OCR offers a viable alternative to voice recognition or input via typing.

Using OCR with smart glasses offers improvements for different kinds of business processes. Imagine a warehouse worker who needs both hands free to do his job. Using smart glasses to overlay virtual information on his environment can make him more efficient, and the ability to automatically scan codes printed on objects just by glancing at them frees his hands for other tasks.

Another example is the automation of meter reading. When a device identifies the meter hanging on a wall, along with its shape and size, and then automatically scans its values, a greater number of meters can be read per day. This use case could be valuable to energy providers.

When you look around, you will realize how many numbers, letters and codes need to be either written down or typed into a system every single day. Such processes, which can be very error prone, can become much less painful using OCR.




A Partnership Model for Augmented Reality Enterprise Deployments

Due to its potential to radically change user engagement, Augmented Reality has received considerable and growing attention in recent months. Pokémon Go certainly has helped and, in turn, generated many expectations for the advancement of AR-based solutions. In fact, the game has provided the industry with a long overdue injection of mass appeal and, as a result, significant investment from (and among) tech giants around the world.

From corner shops to large utility providers, the spike in popularity of this technology has everyone buzzing about how it could improve their business. The flexibility of implementation, from improving processes to stand-out marketing solutions, has also altered the expectations of these prospective clients as they seek personalized enterprise-level AR-based solutions. Consequently, the time has come for vendors and suppliers to consider a new model when it comes to managing customer expectations.

When deploying Augmented Reality solutions in an enterprise context, it is essential to build strong partnerships with your customers, and in many cases to take on the role of a trusted advisor. This becomes more important through the stages of delivering a project: starting with defining a proof of concept (POC), then implementing bleeding-edge solutions with operational teams, and ultimately reaching the end users of the technology.

While the primary value of Augmented Reality systems is to allow for the contextual overlay of information to enable better decision making, the visual data overlay and the various data sources, devices and location sensors that feed it all come into play, converging in the form of a complex mesh. Vendors must note that partnerships are key to solving the pieces of this puzzle.

Service Delivery—Creating Value from the Complex Mesh

This complex mesh is what ultimately generates value, as the assimilation of these technologies creates new and innovative social and business ecosystems and associated processes. When addressing enterprise adoption, one must be aware of the following questions:

  • How best can value be driven into workable solutions in an enterprise?
  • How well does it integrate with existing legacy systems?
  • Would new skills be required to introduce and manage the change?
  • Does the solution deliver increased productivity or efficiency, i.e., better utilization of resources or better decision making through information?
  • Does the solution enable new revenue models for the organization that are consistent with the existing product and service offerings?
  • In turn, how does this solution affect the profitability of the organization?
  • Last, but not least, is the business rationale clear for the implementation of such a solution?

The move towards customer-centric systems means that your customer (or your customer’s customer) is at the center of all decision making. This may be a shift from their existing system practices, meaning it’s even more critical that the chosen change management process be well aligned to the client’s corporate culture.

The Client’s Point of View—Questions to Ask When Going Beyond the POC

Some of the questions that vendors need to consider when it comes to implementing the solution beyond the POC are:

  • What is changing?
  • Why are we making the change?
  • Who will be impacted by the change?
  • How will they react to the change?
  • What can we do to proactively identify and mitigate their resistance to the change?
  • Will the solution introduce new business or revenue models?

Working as one with your customers, from innovation through to operations, is a key factor for success. The complex mesh of AR, VR, IoT and Big Data technologies makes this even more critical as enterprises see an integration of their digital content, systems and processes.

It is essential to take a partnership mindset—where the Augmented Reality innovation solution is built both for and with the customer, and through a customer-implemented change management process—to quickly and easily create ROI as well as tangible, actionable outcomes.




Two Months In: An Update From the Executive Director

Further to my last post about the AWE ’16 conference, I want to share some thoughts and areas for future focus from my first two months as the Executive Director of the AREA.

It’s exciting to be involved in such a vibrant and dynamic ecosystem of AR providers, customers and research institutions. I’m amazed at the sheer breadth of the kinds of member organizations, their offerings, skills, achievements and desire to work with the AREA to help achieve our shared mission of enabling greater operational efficiencies through smooth introduction and widespread adoption of interoperable AR-enabled enterprise systems.

Success and Challenges

Through my initial conversations with the members, I’ve learned of many success stories and also the challenges of working in a relatively young and rapidly changing industry.

For example, AREA members talk about the prototypes they’re delivering with the support of software, hardware and service providers. However, I would like to see more examples of wider rollouts, beyond the prototype stage, which will encourage more buying organizations to investigate AR and understand its massive potential.

The AWE conference in Santa Clara in June and the subsequent AREA Members Meeting added emphasis to my initial thoughts. The AR in Enterprise track of AWE, sponsored by the AREA, highlighted a number of organizations that are already using AR to create real benefits, ranging from enabling real time compliance and better use of resources to applying the most relevant data and reducing time, errors and costs. It was great to see that many member companies understand the benefit of working together to enable the whole AR ecosystem to become successful.

Carrying on the Momentum

My continued focus over the coming weeks will be to carry on the great momentum that has been started. I’m briefing more organizations from all over the world about the benefits of becoming an AREA member. I’ll continue the focus on developing and curating thought leadership content, including case studies, frameworks and use cases, and delivering it via the AREA website, webinars and social media. We’re enhancing our value proposition through our development of research committees that increase the capabilities of the industry.

This is an exciting time for the enterprise AR industry and the AREA; I’m very interested in any feedback or comments you may have so please contact me at [email protected]. I look forward to hearing from you and working with our growing membership to meet our goals of realizing the potential of Augmented Reality in the workplace.




Augmented Reality and Gartner’s Hype Cycle

Industry watcher and analyst firm Gartner has been studying emerging technologies for over 20 years. The company has become widely recognized for publishing its annual Hype Cycle, the chart that captures Gartner analysts’ assessments of the maturity of emerging information and communication technologies.

Interpreting the Hype Cycle

As stated on Gartner’s website, the chart is designed for the firm’s clients: “Clients use Hype Cycles to get educated about the promise of an emerging technology within the context of their industry and individual appetite for risk.”

The sidebar on the same page goes on to suggest that the Hype Cycle:

  • Separates hype from the real drivers of a technology’s commercial promise
  • Reduces the risk of your technology investment decisions
  • Compares your understanding of a technology’s business value with the objectivity of experienced IT analysts

In my slides introducing the March AREA webinar on the topic of forecasting the growth of enterprise Augmented Reality, I provided 15 Hype Cycle figures from the years between 2000 and 2015, showing where Gartner placed Augmented Reality. These figures were compiled by Dr. Robin (Rab) Scott of the AMRC, an AREA member, and used in the webinar with Dr. Scott’s permission.

The figures show how this influential firm has followed Augmented Reality for over a decade. I pointed out in my remarks that readers should not interpret the position of any technology on the Gartner curve as definitive.

Looking at Gartner’s positioning of Augmented Reality over the years, and anticipating the publication of the 2016 Hype Cycle, I am recommending in this post that Gartner consider treating Augmented Reality and its associated technologies as separate nodes on the cycle. By giving more attention to AR’s enabling technologies, Gartner will help its clients better achieve their goals and better serve our industry.

Augmented Reality Isn’t One Technology

My primary concern about Augmented Reality appearing as a dot on the Gartner 2015 Hype Cycle is that it suggests that Augmented Reality is one technology. I don’t think this was ever the case in the past and it certainly isn’t today.

In its press release about last year’s Hype Cycle, the company stated that more than 2,000 technologies were studied. It would be helpful if the firm pointed out which of the hundreds of AR-enabling technologies it considered in positioning the “whole AR” on its cycle.

In my opinion, Gartner needs to begin explaining how technologies are treated differently. For instance, some technologies on the cycle are “general” (representing many enablers at different stages of evolution), and others are not. In 2015, for example, brain-computer interfaces were in the first phase. Gesture control technologies, another relatively precise technology label, were on the Slope of Enlightenment. Another example is natural language question answering (a very specific technology in my framework, but probably also composed of many enablers), which was positioned on the line between the Peak of Inflated Expectations and the Trough of Disillusionment. And, by the way, when will the questions asked be answered correctly all the time?

On the other hand, Augmented Reality is not the only example of the ambiguity and confusion caused when a general category is represented as a dot on the cycle. For example, wearables and the Internet of Things are other labels (represented as dots) on the 2015 Hype Cycle that could benefit from being represented by an array of enabling technologies (or are they enablers themselves?).

In my opinion, the company would better serve its clients and readers by tracing the progress of some of the important enablers or components for AR and other technologically-powered systems, such as autonomous driving vehicles. A few components that I have recently studied for a technology maturity assessment, and that I believe should be added to the Hype Cycle, include:

  • Depth-sensing technologies
  • Computer vision-based 3D target object recognition and tracking
  • Optics for use in wearable displays
  • Gaze detection and tracking technologies

Enlightenment Is a Process

Enlightenment about the benefits of a technology does not happen by simply turning on a light. The processes by which technologies move from barely understood to mainstream use differ widely.

Twenty-five years ago I began reading and writing about the future of multimedia information. Multimedia was not on the Gartner curve in 2000 because it had already reached something approaching maturity; now it’s an archaic term. I have been an outspoken proponent of the adoption of mobile technologies for over 12 years. Mobile technology was not a dot on the Gartner curve in 2004, but its enablers, such as MMS and 802.11g, certainly were.

Would it not be better to leave Augmented Reality off of the 2016 and future Hype Cycle figures and, rather, to point the spotlight on the state of dozens of key enablers?

Do you feel the Gartner Hype Cycle correctly portrays the state of Augmented Reality? What would you like to see added or removed from the Gartner Hype Cycle in 2016?




Augmented Reality Boosts Efficiency in Logistics

Fulfilling customer orders at a warehouse, or order picking, can be costly. A well-known study on warehouse management cited the typical costs of order picking as being nearly 20% of all logistics costs and up to 55% of the total cost of warehousing. The use of technology to streamline order picking offers an important opportunity to reduce cost.  

While great strides have been made in automating warehouse processes, customer expectations also continue to rise. For example, Amazon offers same-day delivery in many US metropolitan areas and this is becoming a standard elsewhere. Increasing fulfillment and delivery speeds may result in increased errors that are not caught prior to shipment.


Augmented Reality can significantly increase order picking efficiency. An AR-enabled device can display task information in the warehouse employee’s field of view. Logistics companies such as DHL, TNT Innight and others have been collaborating with providers of software and hardware systems to test the use of Augmented Reality in their warehouses.

A recent study by Maastricht University conducted in partnership with Realtime Solutions, Evolar and Flos brings to light the impact smart glasses can have on order fulfillment. The research sought to:

  • Confirm prior research that smart glasses improve efficiency compared with paper-based approaches
  • Study usability, required physical and mental effort and potential empowerment effects of the technology in a real world environment
  • Assess the impact of an individual’s technology readiness on previously introduced performance and well-being measures

Design of the Study

Sixty-five business students at Maastricht University participated in a three-day study conducted in a controlled environment. Participants were instructed to pick individual items from storage bins and place them into the appropriate customer bins:

  • One group picked items from 28 bins using item IDs printed on paper and then matched those to IDs on customer bins. The study assessed order picking efficiency by measuring the ability and speed of participants to place the items in the correct customer bins.
  • The other group used AR-enabled smart glasses to scan barcodes in item bins and follow the displayed instructions to place them in the customer bins.

The researchers evaluated metrics such as:

  • Performance measures of error rates and picking times per bin
  • Health and psychological measures such as heart rate variability, cognitive load and psychological empowerment
  • Usability measures such as perceived ease of use
  • “Technology readiness” on a scale measuring personal characteristics such as optimism for, and insecurity with, new technologies

[Image: View through smart glasses]

Faster with Smart Glasses

The researchers found that smart glasses using code scanners permitted users to work 45% faster than those using paper-based checklists, while reducing error rates to 1% (smart glasses users made one-tenth as many picking errors as the control group).

The smart glasses group also expended significantly less mental effort to find the items, while showing the same heart rate variability as the group using paper.

Overall, the use of smart glasses empowered users and engendered positive attitudes toward their work and the technology: in comparison with the group following checklists, they felt the successful completion of tasks was more attributable to their own behavior. This corroborates other studies of efficiency gains, such as this one, and demonstrates the level of impact Augmented Reality can have in the workplace.

You can read about more Augmented Reality research from Maastricht University and other university partners at this portal.





Factories of the Future

In a blog post last month, Giuseppe Scavo explored the Industrial Internet of Things (IIoT) and the growing trend of connected devices in factories. Smart devices and sensors can bring down production and maintenance costs while providing data for visualization in Augmented Reality devices.

Connecting AR and IIoT requires applied research. In this article we’ll look at the EU-sponsored SatisFactory project, which focuses on improving employee satisfaction in factories through the introduction of new technologies.

Innovation in Industrial Production

In 2014, the European Union launched Horizon 2020, a seven-year research and innovation program (ending in 2020) dedicated to enhancing European competitiveness. Horizon 2020 is a partnership between public and private entities and receives nearly $90 billion in public funds. As the program’s website describes, Horizon 2020 aims to drive smart, sustainable and inclusive growth and jobs.


Within this push is the Factories of the Future initiative, a roadmap providing a vision and plan for adding new manufacturing technologies to the European production infrastructure. Objectives of the Factories of the Future initiative include:

  • Increasing manufacturing competitiveness, sustainability, automation
  • Promoting energy-efficient processes, attractive workplaces, best practices and entrepreneurship
  • Supporting EU industrial policies and goals

To meet these objectives, ten partner companies and institutions from five European countries founded the SatisFactory consortium in 2015. SatisFactory is a three-year project that aims to develop and deploy technologies such as Augmented Reality, wearables and ubiquitous computing (e.g., AR-enabled smart glasses), along with customized social communication and gamification platforms, for context-aware control and adaptation of manufacturing processes and facilities.

SatisFactory-developed solutions seek higher productivity and flexibility, on-the-job education of workers, incident management, proactive maintenance and, above all, a balance between workers’ performance and satisfaction. The solutions are currently being validated at three pilot sites (one small- and two large-scale industrial facilities) pending release for use at industrial facilities throughout Europe.


Industry 4.0

SatisFactory’s vision of Industry 4.0 includes a framework with four sets of technologies:

  • Smart sensors and data analytics for collecting and processing multi-modal data of all types. The results of this real time data aggregation will include diagnosing and predicting production issues, understanding the evolution of the workplace occupancy model (e.g., balancing numbers of workers per shift) and enhancing context-aware control of production facilities (e.g., semantically enhanced knowledge for intra-factory information concerning production facilities, re-adaptation of production facilities, etc.).
  • Decision support systems for production line efficiency and worker safety and well-being. These systems can take many forms, ranging from Augmented Reality for human visualization of data to systems for incident detection and radio frequency localization.
  • Tools for collaboration and knowledge sharing, including knowledge bases and social collaboration platforms. Augmented Reality for training by remote instructors will provide flexibility and increase engagement. Collaborative tools also allow employees to exchange information and experiences, and these tools are combined with learning systems.
  • Augmented Reality and gamification can increase engagement. SatisFactory will use tools previously developed by consortium partners and, in pilot projects, explore use of smart glasses and human-machine interfaces. Interaction techniques and ubiquitous interfaces are also being explored.


Pilot Sites

SatisFactory solutions are being tested at the pilot sites of three European companies:

  • The Chemical Process Engineering Research Institute (CPERI) is a non-profit research and technological development organization based in Thessaloniki, Greece. The institute provides a test site for continuous manufacturing processes.
  • Comau S.p.A is a global supplier of industrial automation systems and services and is based in Turin, Italy. The company provides manufacturing systems for the automotive, aerospace, steel and petrochemical industries.
  • Systems Sunlight S.A. is headquartered in Athens, Greece, and produces energy storage and power systems for industrial, advanced technology and consumer applications.

In the next post, we’ll look at activities at the sites and how the project is applying Augmented Reality at the different production facilities.




Unity Gives Augmented Reality the Nod during Vision Summit 2016

If you saw the headlines coming out of Unity’s Vision Summit, you probably noticed a trend: Virtual Reality was the star of Vision Summit 2016. Valve’s Gabe Newell gave everyone an HTC Vive Pre. The Oculus Rift will come with a four-month Unity license. Unity is getting native support for Google Cardboard. At the summit, the expo floor had long lines for the “big three” head-mounted displays (HMDs): Sony’s PlayStation VR, Oculus Rift and HTC Vive.

It’s not that Augmented Reality was absent from what was billed as “The Definitive Event for Innovators in VR/AR,” but rather that the technology was in the minority. This is the year of Virtual Reality, with the big three VR providers launching major products in March (Oculus), April (HTC) and sometime in the fall (Sony). The event was hosted by Unity, which caters almost exclusively to game developers needing comprehensive cross-platform development tools, and gaming in VR is expected to be huge. Virtual Reality was even the focus of the keynote, but astute observers might have noticed something.

Best Days of Augmented Reality Are Ahead

Unity’s own keynote referenced a report by Digi-Capital which predicts that the AR industry will have negligible revenue in 2016 but will surpass VR in 2019. By 2020, the AR industry is predicted to be worth three times as much as the VR industry. Take this with a grain of salt; Unity is in the business of selling licenses for its cross-platform game development toolset, so it is incentivized to predict massive growth. But even heavily discounting these numbers shows massive promise in a new field.

Most of this growth may be in gaming, but the AR presence on the expo floor leaned toward enterprise use. Epson was demonstrating its Moverio line of smart glasses, which has been around since 2012. Vuzix had its M-100 available to try, and was eager to tout its upcoming AR3000 smart glasses. In its booth, Vuforia demonstrated a Mixed Reality application on Gear VR that allowed the viewer to disassemble a motorcycle and view each part individually, which could be handy for vehicle technicians.

Of course, you can learn the most from hands-on experience with enterprise AR, which is exactly what NASA presented. They showed how they replaced complicated written procedures with contextual, relevant, clear AR instructions using HoloLens. They also had a suite of visualization tools for collaborating on equipment design.

I presented the results of a year-long collaboration between Float and the CTTSO to develop an AR application designed to assist users in operational environments. We discussed the ins and outs of developing a “true AR” experience from the ground up, in addition to all of the lessons we learned doing image processing, using Project Tango, and more. At the end, I demonstrated the finished app, with integrated face recognition, text recognition, and navigation assistance supported either on an Epson Moverio or the Osterhout R-6.

An Increasing Focus

Vision Summit 2016 may have been largely focused on VR, but that’s not a reflection of a lack of interest in AR. In our own research, we estimated that AR was lagging behind VR in technology readiness level by a few years. This was confirmed at the Vision Summit, but there’s still plenty of AR to get excited about. Valve even stated that it would let developers access the external camera on the HTC Vive “in the long run” for Augmented and Mixed Reality applications. Expect next year’s Vision Summit to have a much larger focus on AR as this industry begins to truly take shape.

Did you attend Vision Summit 2016? What did you observe? Do you plan to attend the Unity event in 2017?




Augmented Reality: the Human Interface with the Industrial Internet of Things

Are you noticing an emerging trend in manufacturing? After years of hype about Industry 4.0 and digital manufacturing, companies with industrial facilities are beginning to install Internet-connected sensors organized in networks of connected devices, also known as the Industrial Internet of Things (IIoT), in growing numbers.

Industrial IoT Is Not a Fad

According to a recent report published by Verizon, the number of IoT connections in the manufacturing sector rose 204% from 2013 to 2014. These connections feed data from industrial machines to services that provide alerts and instructions on consoles in control rooms, reducing plant downtime. The same Verizon study provides many examples of IIoT benefits in other industries as well: companies that move merchandise are reducing fuel consumption using data captured, transmitted and analyzed in near real time. Connected “smart” streetlights require less planned maintenance when their sensors send an alert for needed repairs. Other examples include smart meters in homes, which reduce the cost of operations for utilities. An analysis from the World Economic Forum describes other near-term advantages of globally introducing IIoT, such as operational cost reduction, increased worker efficiency and data monetization. These are only the tip of the iceberg of benefits.

Many predict that as a result of IIoT adoption, the global industrial landscape is shifting towards a more resource efficient, sustainable production economy. Part of the equation includes combining IIoT with other technologies. Companies that deploy IIoT must also build and maintain advanced systems to manage and mine Big Data.

Big Data

To act upon and even predict factory-related events, companies need to mine Big Data and continually detect patterns in large-scale data sets with Deep Learning technologies. Combined with vast processing power “for hire” in the cloud, these technologies are putting cost-saving processes like predictive maintenance and dynamic fault correction within reach of many more companies. With predictive technologies, managers can optimize responses and adapt their organizations more quickly to address incidents. A study from General Electric in collaboration with Accenture highlights that, for this reason, two managers out of three are already planning to implement Big Data mining as a follow-up to IIoT implementation.

Data and Objects Also Need Human Interfaces

Having post-processing analytics and predictive technologies is valuable to those in control centers, but what happens when a technician is dispatched to the field or the factory floor to service a connected machine? Augmented Reality provides the human workforce with an interface between the data from these sensors and the real world.

The real time visualization (or “consumption”) of sensor data is an important component of the larger equation. Sensor tracking protocols are not new; in fact, SCADA can be traced back to the ‘70s. But when combined with Augmented Reality, new options become available. As industrial equipment becomes more and more complex, workers constantly face long procedures that often involve monitoring and decision-making. When assisted by Augmented Reality during this process, a worker with contextual guidance and all the up-to-date information required for successful decision-making can perform tasks more quickly and with fewer errors.

How It Works

Let’s examine a compelling use case for AR and IIoT: maintenance of Internet-connected machines. A worker servicing a machine with a fault needs access to the real time readings of the internal variables of all the machine’s components in order to diagnose the problem and choose the right procedure to apply. In current scenarios, the worker needs to phone the central control room to access the data or, in some cases, retrieve the readings from a nearby terminal and then return to the machine. With an AR-enabled device, the worker can simply point the device at the machine, visualize the real time internal readings overlaid on top of the respective components, and decide on the best procedure (as shown in the ARise event presentation about data integration). The same device can then provide guidance for the procedure, presenting the worker with the contextual data needed at every step.
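
A minimal sketch of that interaction, assuming a hypothetical WebSocket telemetry endpoint and message format (a real plant would expose readings through its own gateway, e.g., an MQTT bridge):

```javascript
// Hedged sketch: stream live machine readings into an AR overlay.
// The endpoint URL and message shape are hypothetical assumptions.
const socket = new WebSocket('wss://plant.example.com/machines/42/telemetry');

// Element the AR view keeps anchored over the recognized component.
const overlay = document.getElementById('ar-overlay');

socket.onmessage = (event) => {
  // e.g., event.data === '{"component":"pump","tempC":81.4}'
  const reading = JSON.parse(event.data);
  overlay.textContent = `${reading.component}: ${reading.tempC.toFixed(1)} °C`;
};

socket.onerror = () => {
  overlay.textContent = 'Telemetry unavailable';
};
```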

Another use case that can benefit from the combination of AR and IoT is job documentation. Through interaction with real time sensor data, workers can document the status of machines during each step, feeding data directly into ERP systems without having to fill out long paper-based forms as part of their service documentation. Procedures can be documented with greater precision, eliminating the possibility of human error during data gathering.

Big Data and Augmented Reality

When deploying IoT in industrial contexts, entrepreneurs should take into account the two facets of the value of the data this technology produces. The offline processing capabilities of Big Data mining algorithms provide a powerful prediction and analysis tool. In parallel, implementing Augmented Reality allows those who are in the field to reap the benefits of real time, onsite contextual data.

Some AREA members are already able to demonstrate the potential of combining sensors, Big Data and Augmented Reality. Have you heard of projects that tap IIoT in new and interesting ways with Augmented Reality? Share with us in the comments of this post.




Efficiency Climbs Where Augmented Reality Meets Building Information Management

At Talent Swarm we envisage that by using pre-existing platforms and standards for technical communication, our customers will reach new and higher levels of efficiency. Our vision relies on video calling to make highly qualified remote experts available on demand, and the data from Building Information Management (BIM) systems will enhance those live video communications using Augmented Reality.

Converging Worlds

There have been significant improvements in video calling and data sharing platforms and protocols since their introduction two decades ago. The technologies have expanded in terms of features and ability to support large groups simultaneously. Using H.264 and custom extensions, a platform or “communal space” permits people to interact seamlessly with remote presence tools.  The technology for these real time, parallel digital and physical worlds is already commonplace in online video gaming. 

But there are many differences between what gamers do at their consoles and what enterprise employees do on job sites. As our professional workforce increasingly uses high-performance mobile devices and networks, these differences will decline. Protocols and platforms will connect a global, professionally certified talent pool to collaborate with their peers on-site.

Enterprises also have the ability to log communications and activities in the physical world in a completely accurate, parallel digital world.

Growth with Lower Risk

We believe that introducing next generation Collaborative Work Environments (CWE) will empower managers in many large industries, such as engineering, construction, aviation and defense. They will begin tapping the significant infrastructure now available to address the needs of technical personnel, as well as scientific research and e-commerce challenges. When companies in these industries put the latest technologies to work for their projects, risks will decline.

Most IT groups in large-scale engineering and construction companies now have an exhaustive register of 3D models that describe every part of a project. These are developed individually and used from initial design through construction. But they have yet to be put to their full use. One reason is that they are costly to produce, and companies are not able to re-use models created by third parties. There are no codes or systems that help companies’ IT departments determine the origins of models or whether a proposed model is accurate. The risks of relying on uncertified models, then learning that there is a shortcoming or that the model is not available when needed, are too great.

Another barrier to our vision is that risk-averse industries and enterprises are slow in evaluating and adopting new hardware. Meanwhile, hardware evolves rapidly. In recent years, video conferencing has matured in parallel with faster processors and runs on many mobile platforms. Specialized glasses (such as ODG’s R-7s, Atheer Air and, soon, Microsoft’s HoloLens), helmets (DAQRI’s Smart Helmet), real time point-cloud scanners (such as those provided by Leica or Dot Products) or even tablets and cell phones can capture the physical world to generate “virtual environments.”

With enterprise-ready versions of these tools, coupled with existing standards adopted for use in specific industries, the digital and physical worlds can be linked, with data flowing bi-directionally in real time. For example, a control room operator can see a local operator as an avatar in the digital world. By viewing the video streamed from a camera mounted on the local operator’s glasses, the control room operator can provide guidance in real time.

Standards are Important Building Blocks

At Talent Swarm, we have undertaken a detailed analysis of the standards in the construction industry and explored how to leverage and extend these standards to build a large-scale, cloud-based repository for building design, construction and operation.

We’ve concluded that Building Information Management (BIM) standards are reaching a level of maturity that makes them well suited for developing a parallel digital world as we suggest. Such a repository of 3D models of standard parts and components will permit an industry, and eventually many disparate industries, to reduce significant barriers to efficiency. Engineers will not need to spend days or weeks developing the models they need to describe a buttress or other standard components.

Partnerships are Essential

The project we have in mind is large and we are looking for qualified partners in the engineering, construction and oil and gas industries, and with government agencies, to begin developing initial repositories of 3D models of the physical world.

By structuring these repositories during the design phase, and maintaining and adding to this information in real time from on-site cameras, we will be able to refine and prove CWE concepts and get closer to delivering on the promise.

Gradually, throughout the assembly and construction phases we will build a database that tracks the real world from cradle to grave. Analyzing these databases of objects and traces of physical world changes with Big Data tools will render improvement and maintenance insights previously impossible to extract from disjointed, incomplete records. We believe that such a collaborative project will pave the way towards self-repairing, sentient systems.

We look forward to hearing from those who are interested in testing the concepts in this post and collaborating towards the development of unprecedented Collaborative Work Environments.