Watch the AREA Research Committee Webinar Recording Featuring Dr. Rafael Radkowski
19th July 2019
At an AREA-hosted webinar on July 15th, Dr. Radkowski shared his latest work on:
Computer vision capabilities for tracking and scene understanding;
Developing AR visualization and tracking capabilities for manual inspection support; and
Designing AR/MR visual widgets for AR-assisted training.
The webinar was recorded and is now available for viewing here. Discover more about how different computer vision techniques for tracking and scene understanding can benefit AR-supported tasks.
How to Get Beyond the “Cool Demo” to Full Deployment
19th July 2019
A cool demo is often the first step toward enterprise AR adoption. But the cool demo can result in a so-called “proof of concept purgatory,” where enterprises get locked into a sequence of demonstrations but fail to move beyond them to actual solution deployment within their businesses.
In keeping with the AREA’s commitment to advancing the AR ecosystem for
the benefit of technology suppliers and enterprise users, we believe this is an
important consideration to overcome. That’s why we asked AREA members for their
perspectives on how best to proceed from the cool demo to enterprise adoption.
Here’s what they told us:
Peter Antoniac, CTO, Augumenta:
An industrial AR project should always start with solving a concrete customer problem. A cool demo does not mean the solution is useful for the end user. A best practice is to start with a clear problem and find a usable, efficient way to solve it for the end user – taking into account all the variables, such as device usability, the environment, and workers’ habits – and narrowing it down to the most reliable approach possible, including picking the best hardware for the deployment. That means working very closely with end users, listening to their feedback, and responding to it as diligently as possible.
Harry Hulme, Marketing and Communications Manager, RE’FLEKT:
Scaling AR solutions into production and breaking through pilot purgatory is a problem faced by many businesses today. Countless companies make substantial technological investments but fail to plan correctly before implementation. Like any investment, it is unwise to simply rush in. Instead, you should optimize AR deployments around the factors that will make or break their success.
The name of the game is to set up an AR deployment to succeed. That happens by winning over the key stakeholders who can share in its victory (people); solving the biggest operational problems (product); and doing all of the above in ways that are methodical, strategic and follow best practices (process).
The success gained by following these steps will protect your technology investment. After investing time and money in vetting AR, launching pilots and proving its value, that value will only be realized if the deployment is given the chance to succeed. And once it does succeed, there is real bottom-line value to be gained.
Damien Douxchamps, Head of R&D, Augumenta:
In manufacturing use cases, deployment requires integration with the factory backend, and that’s where the big challenge lies. In addition, sturdy hardware, reliable applications and dependable means of interaction are needed as the user base grows. With that larger user base also come different people, and the hardware and the software must fit each and every one of them.
It is commonplace for an organisation that wants to start an XR project either to go to an external agency or to develop a capability in-house as a limited-scope proof-of-concept (PoC). That’s because it is difficult to go “beyond the cool demo” until you know in some detail what you need to do and how XR will benefit your organisation. So, the only way to get the answers is to run a PoC.
The problem with this approach is that the scope of the exercise either hasn’t been fully considered, or it is extremely restricted simply because it is a PoC. Getting buy-in from senior leadership is difficult because you are seeking approval for something that hasn’t been tried and tested, so the budget is usually only sufficient for the one use case within the scope of the PoC. Of course, I have identified these as negatives, but if there is a likelihood that the PoC will not lead on to something bigger, or might fail, then this is the best and most pragmatic approach, isn’t it? But what if it doesn’t fail?
It doesn’t have to be this way. There are now technologies and technology partners that can help develop the business case. The technology doesn’t have to be suitable only for that one-off PoC – but if you develop in isolation (i.e., in-house or with a creative agency), then it probably will be.
One of the largest problems with this technology is getting the 3D content into XR in the first place. There are lots of importers on the market that are transactional (i.e., they do conversions one at a time, manually), which may be fine for a PoC, but it isn’t scalable. If, for example, your use case is manufacturing, then you don’t want to be manually importing 3D CAD assemblies every time something changes. You’ll need a scalable, automated process, and there is absolutely nothing wrong with identifying this as a must-have requirement right from the start. Just having some isolated data in XR will not adequately prove that your solution is fit for purpose; you’ll only be testing that one aspect.
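To make “scalable, automated process” concrete, here is a minimal sketch of a watch-folder conversion step in Python. The `cad2gltf` command-line converter and the folder names are hypothetical placeholders; the point is the unattended, repeatable shape of the pipeline, not any specific tool.

```python
"""Minimal sketch of an automated CAD-to-AR content pipeline.

Assumptions (hypothetical, for illustration only): a command-line
converter called `cad2gltf` exists, and CAD assemblies land in
./cad_drop as .step files.
"""
import subprocess
import time
from pathlib import Path

CAD_DIR = Path("./cad_drop")      # where engineering publishes assemblies
OUT_DIR = Path("./ar_ready")      # where the AR platform picks content up
OUT_DIR.mkdir(parents=True, exist_ok=True)
seen = {}                         # path -> last-processed modification time

while True:
    for src in CAD_DIR.glob("*.step"):
        mtime = src.stat().st_mtime
        if seen.get(src) == mtime:
            continue              # unchanged since the last conversion
        dst = OUT_DIR / (src.stem + ".glb")
        # Hypothetical converter invocation; swap in your real pipeline tool.
        result = subprocess.run(["cad2gltf", str(src), "-o", str(dst)])
        if result.returncode == 0:
            seen[src] = mtime     # only mark done on success, so failures retry
    time.sleep(60)                # poll once a minute
```

However rough the tooling, the design point stands: content flows through without a human in the loop, so a CAD change propagates to XR automatically.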
So you need to really understand why you think you need XR: what is the value to your business, and how would you implement it if you weren’t doing “just a PoC”? In fact, if you don’t do this, then your PoC isn’t really valid. Often, we are so keen to get a project started that we skip past these steps, and even reduce the scope in order to get just enough money to “have a go” with exciting new technology. You must resist the urge to do this; whilst it may get your project off the starting block, it will not do you any favours further downstream.
To prove the value, you must specify your project thoroughly enough that all of the requirements can be validated. If achieving the business value you require depends on regular 3D data changes, then specify that. If you require a collaborative experience, then specify that too. You must also consider the output device; with new devices popping up on the market every few weeks, make sure you specify a device-agnostic approach.
Tero Aaltonen, CEO, Augumenta:
Measuring results is vital for making any decision about continuing from a pilot to a wider deployment. You should make sure that there are proper metrics in place to observe productivity, safety, quality and other factors, so that customers can calculate the ROI based on facts, not opinions and guesses.
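As a purely illustrative sketch of what “ROI based on facts” can look like, the following Python snippet turns pilot measurements into an ROI figure. Every number here is a hypothetical placeholder standing in for a metric actually observed during the pilot.

```python
# Hedged sketch: turning pilot measurements into a fact-based ROI figure.
# All numbers are hypothetical placeholders; the point is that each input
# comes from a metric observed during the pilot, not from opinion.

baseline_task_minutes = 42.0      # measured before the pilot
ar_task_minutes = 31.0            # measured during the pilot
tasks_per_worker_per_day = 8
workers = 25
working_days_per_year = 220
loaded_cost_per_minute = 1.10     # fully loaded labour cost, currency units

minutes_saved_per_year = (
    (baseline_task_minutes - ar_task_minutes)
    * tasks_per_worker_per_day * workers * working_days_per_year
)
annual_benefit = minutes_saved_per_year * loaded_cost_per_minute

deployment_cost = 150_000.0       # hardware, software, integration (hypothetical)
roi = (annual_benefit - deployment_cost) / deployment_cost

print(f"Annual benefit: {annual_benefit:,.0f}; first-year ROI: {roi:.0%}")
```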
Two common themes run through these perspectives from AREA members. The first is that diligence and planning, as in most successful endeavors, are critical to ensure there is a tangible way forward from the cool demo to enterprise deployment; this helps dispel any perception that the cool demo is simply a dead end. The second is that the cool demo must add identifiable business value by solving a problem or enabling an opportunity.
These are just some ideas for getting past the cool demo to full deployment. You can find more ideas and advice at thearea.org.
A New Division of Labor: IoT, Wearables and the Human Workforce
19th July 2019
As in previous generations of technology innovation, the deployment of desktop computers initially involved a considerable amount of abstraction and a steep learning curve: creating even a simple sketch on a screen required coding and math skills. Experts and many mediating layers of knowledge were required to use this newly created work resource effectively. As computers and software evolved, using them became more intuitive, but their users were still tied to desks.
The ensuing era of mobility helped greatly. It unchained the device and led to the creation of wholly new solutions that overcame the challenges of location, real time and visual consumption of the world.
But there is one group that – comparatively speaking – benefited much less from all these changes: the legions of non-desk workers — those on the factory floor, on telephone poles, in mines, on oil rigs or on the farm for whom even a rugged laptop or tablet is impractical or inconvenient. The mobile era unchained desk workers from their desks but its contribution to workers in the field, to the folks who work on things rather than information, was negligible. Working on things often requires both hands to get the job done, and also doesn’t map well to a desktop abstraction.
Enter the wearable device, a new device class enabled by mobile-driven miniaturization of components, the proliferation of affordable sensor technology, and the movement to the cloud.
Wearable devices started as a consumer phenomenon (think smartwatches), mostly built around sensors. Initially, they focused on elevating the utility of the incorporated sensor and their market success was commensurate with how well the sensor data stream could be laddered up to meaningful and personalized insights. With the entrance of the “traditional” mobile actors, wearables’ role expanded into facilitating access, in a simplified way, to the more powerful devices in a user’s possession (e.g., their smartphone). The consumer market for wearables continues to pivot around the twin notions of access and self-monitoring. However, to understand the deeper and longer-term implications of the emergence of intelligent wearable devices, we need to look to the industrial world.
An important, new chapter in wearable history was written by Google Glass, the first affordable commercial Head-Mounted Display (HMD). Although it failed as a consumer device, it successfully catalyzed the introduction of HMDs in the enterprise. Perhaps even more importantly, this new device type led the way in integrating with other enterprise systems, aggregating the compute power of a node and the cloud – centered on a wearer. Unlike the shift to mobile devices, however, this has the potential to drive profound changes in the lives of field workers and could be a harbinger of even deeper changes in how all of us interact with the digital world.
Division of Labor: Re-empowering the Human Workforce
Computers and handheld devices had a limited impact on non-desk workers. But technological changes such as automation, robotics, and the Internet of Things (IoT) had a profound impact, effectively splitting the industrial world into work that is fit for robots and work that isn’t. And the line demarcating this division itself is in continuous motion.
Early robotic systems focused on automating precise, repetitive, and often physically demanding activities. More recent advances in analytics and decision support technology (e.g., Machine Learning and Artificial Intelligence [AI]) and integration via IoT have led to the extension of physical robots into the digital domain, coupling them with software counterparts (software agents, bots, etc.) capable of more dynamic response to the world around them. Automation is thus becoming more autonomous and, as it does so, it’s increasingly moving out of its isolated, tightly controlled confines and becoming ever more entwined with human activity.
Because automation inherently displaces human participation in industrial processes, the rapid advances in analytics, complex event processing, and digital decision-making have prompted concerns about the possibility of “human obsolescence.” In terms of the role of bulk labor, this is a real concern. However, the AI community has perpetually underestimated the sophistication of the human brain and the limits to AI-based machine autonomy in the real world have remained clear: creativity, decision-making, complex, non-repetitive activity, untrainable pattern recognition, self-directed evolution, and intuition are still largely the domains of the human workforce, and are likely to remain so for some time.
Even the most sophisticated autonomous machines can only operate in a highly constrained environment. Self-driving vehicles, for example, depend on well-marked, regular roads and the goal of an “unattended autonomous vehicle” is very likely to require extensive orchestration and physical infrastructure, and the resolution of some very serious security challenges. By contrast, the human brain is extraordinarily well adapted to operating in the extreme fuzziness of the real world and is a marvel of efficiency. Rather than try to replace it with fully digital processes, a safer, and more cost-effective strategy would be to find ever better and closer ways to integrate human processing with the digital world. The role of wearable technology provides a first path forward in this regard.
Initial industrial use cases for wearables have tended to emphasize human productivity through the incorporation of monitoring and “field appropriate” access to task-specific information. The first use cases included training and enabling less experienced field personnel to operate with less guidance and oversight. Good examples are Librestream’s Onsight, which creates “virtual experts,” Ubimax’s xPick, which guides warehouse pickers, and Atheer’s AR-Training solutions. Honeywell’s Connected Plant solution goes a step further: it is an “Industrial Internet of Things (IIoT) style” platform that already connects industrial assets and processes for diagnostic and maintenance purposes – a new dimension of value.
The introduction of increasingly robust autonomous machines and the consideration of productivity and monitoring across more complex use cases involving multiple workers and longer spans of time will drive the next generation of use cases.
Next Reality
Consider the following – still hypothetical, although reality based – use case:
Iron ore mining is a complex operation involving machines (some of which are very large), stationary objects and human workers – all sharing the same confined space with limited visibility. It is critical not only to be able to direct the flow of these participants for safety reasons but also to optimize it for maximum productivity.
The first step in accomplishing this requires deploying sensors at the edge that create awareness of context: state, condition, location. Sensors on large machines and objects are not new, and increasingly miners carry an array of sensors built into their hard hats, vests, and wrist-worn devices. But “sense” is not enough – optimization requires a change in behavior. For this, a feedback loop is needed, which is comparatively easy to accomplish with machines. For workers, a display mounted on the hard hat and haptic actuators embedded in the vest and wrist devices close the feedback loop.
Thus equipped, both human and machine participants in the mining ecosystem can be continuously aware of each other, getting a heads up – or even a warning – about proximity. Beyond awareness, this also allows for independent action: for example, stopping vehicles or giving directional instructions via the HMD or haptic feedback.
Being connected in this way helps to promote safety, but isn’t enough for optimization. For that, a backend system is required that uses historical data, rules and ML algorithms to predict and ultimately prescribe optimum paths. This provides humans with key decision support capabilities and a means of guiding machines without explicitly having to operate them. Practically speaking, they operate machines via their presence. In a confined environment, this means that sometimes the worker needs to give way to the 50-ton hauler and sometimes the other way around. What needs to happen is deduced from the actual conditions and decided in real time, at the edge.
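A minimal, hypothetical sketch of such an edge-side right-of-way decision might look like the following. The participant model, distance thresholds and action names are illustrative assumptions, not any vendor’s API.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class Participant:
    id: str
    kind: str                  # "worker" or "hauler"
    position: tuple            # (x, y) from wearable / vehicle sensors
    heading_priority: int      # mission-assigned right-of-way, higher wins

WARN_METRES = 30.0
STOP_METRES = 10.0

def edge_decision(a: Participant, b: Participant) -> list:
    """Return (participant_id, action) pairs for one proximity check.

    A toy stand-in for the edge logic in the text: who yields is decided
    from live conditions (distance) plus mission context (priority), not
    by a fixed rule that the human always gives way.
    """
    actions = []
    d = dist(a.position, b.position)
    if d > WARN_METRES:
        return actions
    yielder = a if a.heading_priority < b.heading_priority else b
    if d <= STOP_METRES:
        actions.append((yielder.id, "STOP"))            # vehicle brake or HMD alert
    else:
        actions.append((yielder.id, "REROUTE_LEFT"))    # haptic / HMD guidance cue
    return actions

# Example: a worker and a 50-ton hauler approaching in a confined drift.
worker = Participant("w-17", "worker", (0.0, 0.0), heading_priority=1)
hauler = Participant("h-03", "hauler", (0.0, 22.0), heading_priority=5)
print(edge_decision(worker, hauler))   # [('w-17', 'REROUTE_LEFT')]
```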
As this use case illustrates, wearable devices are emerging as a new way for humans to interact with machines (physical or digital). The sensors on these devices are also being used in a new and more dynamic way. Whereas each sensor in a traditional industrial context provides a very tightly defined window into a specific operating parameter of a specific asset, sensor data in the emerging paradigm is interpreted situationally. Temperature, speed, vibration may carry very different meanings depending on the task and situation at hand. The Key Performance Indicators (KPIs) to be extracted from these data streams are also task- and situation-specific, as are the ways in which these KPIs are used to validate, certify, and optimize both the individual tasks and the overarching process or mission in which these tasks are embedded.
A key takeaway in considering this new human-machine interaction paradigm is that almost everything is dynamic and situational. And, at least in the industrial context, the logical container for managing all of this is what we’re calling the “Mission.” This has important ramifications for considering what systems need to be in place to enable workers and machines to interoperate in this way and to make possible an IIoT that effectively leverages the unique features of the human brain.
A bit about the authors:
Keith Deutsch is an experienced CTO, VP Engineering, Chief Architect based in the San Francisco Bay area. Peter Orban, based out of New York, leads, builds and supports experience-led, tech-driven organizations globally to grow the share of problems they can solve effectively.
What is Assisted Reality – and How Can You Benefit from It?
19th July 2019
AREA: Jay, perhaps you could begin by giving us a quick update on the progress of Upskill.
Kim: As the person in charge of our product roadmap and product development, I’ve been very busy taking advantage of the advancements happening in this space. Upskill’s flagship software platform for industrial AR applications, Skylight, had historically focused on simpler, wearable 2D Augmented Reality applications.
However, over the last year or so we’ve been expanding our portfolio to include Mixed Reality solutions running on Microsoft HoloLens, for which we announced a partnership with Microsoft, as well as supporting mobile phones and tablets, transforming Skylight into a multi-experience platform. We announced this a week before Mobile World Congress in February, which was very exciting because a number of our customers had already been taking advantage of these feature sets and are excited that they can talk about them publicly. We continue to do well on the business front, engaging with customers globally and focusing on bringing the digital enterprise to the hands-on worker. There’s a lot to do and a lot of growth happening within our business, which is also a microcosm of the AR industry.
AREA: In the AREA-sponsored enterprise track at AWE, you delivered a really interesting and exciting keynote that introduced the phrase “Assisted Reality,” a concept that Upskill has brought to the ecosystem. Can you explain more about what Assisted Reality is and how it differs from Augmented Reality?
Kim: Assisted Reality is a wearable, non-immersive visualization of content that still has contextual awareness. So, what do we mean by non-immersive? A good hardware example would be what Google introduced with their Glass product – a heads-up display that sits in your line of sight so you can glance at the content, but the user experience isn’t designed to deliver object tracking or any kind of immersion – no 3D visualizations, object overlays, or the like.
It’s really intended to deliver pre-existing information – text, diagrams, images, maybe short videos – as-is, to help the user understand what needs to be done at any given point in time and enhance the person’s situational awareness. The goal is no different from Augmented Reality.
Assisted Reality was born as Upskill strove to define and differentiate the various user experiences within the broader Augmented Reality context. Augmented Reality can be delivered in mobile forms and wearable forms; it can even be a projected display or an audible experience. The term is actually very broad. So, Assisted Reality was coined to refer specifically to non-immersive wearable experiences that boost the person’s situational awareness. We consider Assisted Reality a subdomain or an experience within the Augmented Reality spectrum.
AREA: What are some of the benefits of Assisted Reality?
Kim: The benefits span several areas. First of all, because Assisted Reality is a wearable user experience, it’s important to talk about the different types of devices it supports. Generally speaking, Assisted Reality devices tend to be more wearable than their Mixed Reality counterparts. That gap may be closing a bit with the introduction of HoloLens 2, but this has historically been the case. Because Assisted Reality has less stringent hardware requirements, it can deliver a positive user experience when worn for a full work shift, and the battery life is quite good. Assisted Reality devices are simpler and, frankly, a bit cheaper. The leading vendors in that space would be companies like RealWear, which has been gaining a tremendous amount of traction recently, and Vuzix, which invested in the industry very early on, as did Google. These companies have been driving quite a bit of success, and that’s certainly one benefit!
The other benefit to enterprises is that Assisted Reality often doesn’t require any kind of data preparation or formatting. It’s really focused on delivering the content that was historically delivered on paper forms and PCs to hands-on workers on the manufacturing floor, out in the field, or moving about the warehouse.
So, it takes away the need to figure out your 3D content pipeline, work out how to convert content into an AR-ready format, and other such tasks. Enterprises can focus on leveraging content that already exists within their organization, which significantly cuts the cost, as well as the time to get that initial return on investment from the solution.
So, it’s a lower cost and a faster time to implement. That’s how we’ve been able to steer a number of our customers towards starting the journey with Assisted Reality and eventually building up their capabilities with more immersive Mixed Reality solutions.
AREA: What would you recommend to a company reading this blog post? What steps should they take to learn more about Assisted Reality or the broader Augmented Reality spectrum?
Kim: Because there’s been so much activity in the marketplace, we’re very fortunate as a community to have a number of very strong case studies describing successful implementations of Assisted Reality, Mixed Reality, mobile phone AR, projection AR – you name it, they’re all there. Chances are, another company within your particular industry has successfully deployed AR solutions and has spoken about them at events or published papers about them.
I would highly encourage some peer learning, and of course, the onus is on the people who want to experiment with the technology that’s out there – whether that’s starting with a proof of concept and graduating to a pilot and hopefully getting into full deployment, or making deeper initial investments because you have a better sense of the business case. Either way, getting started in some capacity is critical for your learning. There’s only so much you can learn from research. But you can certainly be inspired by the hundreds of companies out there now deploying these kinds of solutions, reading about what worked well for them and what didn’t.
My other advice is to talk to your own end users within your organization to better understand their pain points. AR, like most successful tools, shouldn’t be considered a hammer looking for nails, but rather a solution to a well-defined set of problems.
And of course, I would be remiss if I didn’t mention that people in the learning phase would benefit most from joining the AREA. The AREA is a global community of organizations that have been doing this for a very long time, whether they’re providers, end users, or research institutions. The formal and informal interactions that people can have as part of the AREA could really accelerate your learning.
AREA: Jay, thanks to you and Upskill for bringing the term Assisted Reality to the fore, because it’s a really important part of the solutions spectrum.
Kim: You’re welcome. We’re very excited about the continued growth of the industry and look forward to working with the rest of the community.
Research: Augmented Reality Marketing Can Be Effective
19th July 2019
A Study on Augmented Reality Marketing and Branding
The study was conducted by AREA research partner Prof. Philipp Rauschnabel (Universität der Bundeswehr München, Germany) in partnership with Prof. Reto Felix (University of Texas Rio Grande Valley, USA) and Prof. Chris Hinsch (Grand Valley State University, USA) and published in the Journal of Retailing and Consumer Services.
In their study, the authors measured consumers’ brand attitudes before and after using a branded AR application. Half of the respondents used an IKEA app and half used an app for a German hip-hop band. Even among the IKEA app users, the authors detected improvements in brand evaluations. This is significant because attitudes towards established brands are notoriously difficult to change.
The researchers also asked consumers to rate their evaluation of the app and how inspired they felt after using it. Based on statistical driver analyses, they could then explain why and when brand evaluations improved.
Counterintuitively, the extent to which consumers rate an app as positive or negative seems to be unrelated to overall brand attitude. However, the extent to which consumers felt inspired is a major driver of improvements in brand attitude. More specifically, among highly inspired consumers, the brand improvements were about four times stronger than among less inspired users. In addition, the quality of the augmentation is a main driver of inspiration. Users who experienced problems with the AR technology (e.g., a virtual object behaving unrealistically) felt less inspired than those who did not.
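For readers unfamiliar with driver analyses, the following sketch uses synthetic data to show the core idea: regress the change in brand attitude on app evaluation and felt inspiration, and read the drivers off the coefficients. The data and effect sizes below are invented to mirror the reported pattern; they are not taken from the study.

```python
# Hedged sketch of a "driver analysis" in the study's spirit, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
app_eval = rng.normal(0, 1, n)              # how positively the app is rated
inspiration = rng.normal(0, 1, n)           # how inspired users felt
# Synthetic ground truth mirroring the finding: inspiration drives the
# attitude change, app evaluation barely does.
attitude_change = 0.05 * app_eval + 0.60 * inspiration + rng.normal(0, 1, n)

# Ordinary least squares: attitude_change ~ intercept + app_eval + inspiration
X = np.column_stack([np.ones(n), app_eval, inspiration])
beta, *_ = np.linalg.lstsq(X, attitude_change, rcond=None)
print(f"intercept={beta[0]:.2f}, app_eval={beta[1]:.2f}, inspiration={beta[2]:.2f}")
```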
Findings: AR can be effective!
The study provides some key findings and calls to action for marketers:
Augmented Reality Marketing can improve brand attitudes and positively impact a brand’s bottom line. Marketers should consider adding AR apps to their marketing and branding toolbox.
The degree to which the AR app inspired the user was more predictive of brand attitude change than an evaluation of the app itself. Marketers should measure the degree to which app users are inspired by the app.
A bad augmentation of the real world can negatively impact evaluations of the overall brand. Marketers interested in pursuing AR should invest in high-quality 3D content and state-of-the-art AR technology.
As the study authors wrote, “Consumers will operate in a reality that is consistently enriched with virtual content, and marketers need to find ways to integrate these new realities into their marketing strategies.”
The entire research report can be downloaded here for free during the month of April 2019. After April, the report can be found here. For a more academic summary, please visit Philipp Rauschnabel’s personal website.
Augmented Reality a hot topic at MWC Barcelona 2019
19th July 2019
By Christine Perey, PEREY Research & Consulting, AREA Board member and chair of the AREA’s Membership and Research Committees.
Augmented Reality, and how it converges with IoT, AI, Cloud and Edge Computing technologies, was among the loudest and brightest themes of Mobile World Congress (MWC) Barcelona 2019.
AREA Completes Safety and Human Factors Research Project
19th July 2019
The AREA Research Committee recently distributed to members two deliverables produced as part of the organization’s third research project, Assessing Safety and Human Factors of AR in the Workplace. This groundbreaking, member-exclusive research project produced the first framework for assessing and managing safety and human factors risks when introducing AR in the workplace. In addition to a tool to support decision-making, members also received an in-depth report of findings based on primary research.
Through the knowledge of its members and detailed interviews and research conducted with the wider enterprise AR ecosystem, the AREA’s reusable framework will promote a consistent approach to assessing safety and human factors of AR solutions.
“For the first time, AREA members have a framework that will enable them to consider important requirements from the perspectives of key project roles and at each stage of the AR project,” said Perey. “The framework and supporting report are invaluable tools, built on the experience and knowledge gained by members and the larger community through many AR projects.”
“Through a combination of desk research and interviews with experts in the enterprise AR field, we captured rich and comprehensive insights into best practices and potential issues to overcome in these previously under-researched areas,” noted Amina Naqvi of the MTC, the author of the framework and research paper.
“This is another great example of the value the AREA brings to its members and the wider enterprise AR ecosystem,” said Mark Sage, Executive Director of the AREA. “By working together and learning from our fellow members, we’ve been able to produce research results that bring real benefits, and help to reduce the barriers to adoption for AR projects.”
The AREA has prepared a free Executive Summary of the Best Practice Report and a case study for non-members, “Assessing AR for Safety and Usability in Manufacturing” to help companies in the AR ecosystem to adopt or design safer and more usable wearable AR solutions.
If you’d like access to these resources, please follow the links below to download them.
AREA Launches Research into AR Manufacturing Barriers
19th July 2019
Hot on the heels of delivering its third research project, the AREA has launched a new project, defined and voted for by the AREA members, targeting barriers to AR adoption in manufacturing.
While many manufacturers have implemented AR trials, proofs of concept, and tests, relatively few have rolled out fully industrialized solutions throughout their organizations. The goal of the fourth AREA research project is to identify issues and possible strategies to overcome these barriers.
This is the first AREA research project that focuses on a single industry in which there are many use cases that can improve performance, productivity and safety, and reduce risks and downtime. The project will have both quantitative and qualitative components and the deliverables will include an AREA member-exclusive report and a framework for identification of common barriers and the best mitigation strategies. In addition, there will be a case study illustrating the use of the framework that will be published for the AR community.
Dr. Philipp Rauschnabel of the xReality Lab at Universität der Bundeswehr in Munich and his team will be leading this research. Enterprises interested in providing input to the project may complete this form or send an email to [email protected].
AREA Mark Sage AR&VR World Interview: “The Ecosystem’s Beginning to Mature”
19th July 2019
What is the AREA doing to foster the adoption of Augmented Reality in the enterprise? What kinds of benefits are AREA member organizations beginning to realize from their AR deployments? AREA Executive Director Mark Sage provided the answers to these and other questions in a video interview by TechTV at the AR & VR World conference at the TechXLR8 event in London in mid-June. Watch the interview here.
Putting the ‘work’ into ‘AR Workshop’
19th July 2019
Deep in the snow of a wintery Chicago, the annual AREA/DMDII workshop was a hotbed of activity!
The sessions attracted around 120 attendees comprising speakers, exhibitors, academics and those representing both commercial AR technology providers and companies using or looking to use AR within their business. Given the rarity of having such a collection of AR practitioners in one place, Glen Oliver (Lockheed Martin) and I wanted to harness this collective brainpower! Together, we represented the AREA Requirements Committee whose remit is to develop a set of industry requirements and use cases to help promote the adoption of AR.
The AREA Requirements Committee strongly believes that, in order to benefit the entire ecosystem, we need to articulate clearly and compellingly how AR technology can be applied to business problems, what capabilities are needed within AR solutions and, perhaps most importantly, what the business value of these tools is. This will help both vendors and users of AR.
So, with three hours allotted from a precious agenda, how best to use this time? The approach we took was to introduce the importance of developing a linked and connected schema of needs, followed by group activities. Here’s how it went:
Backdrop
We began with a summary of the requirements capture already started at the previous AREA/DMDII workshop. At that session, we captured 96 requirements, split roughly equally between hardware and software. Whilst this was a great start, the outcome was a list of requirements with little context, structure or priority, and limited ability for the community to contribute towards them. At the same time, the AREA has collected a number of great use cases that are valuable to companies wishing to investigate where AR may be applied, but the current use cases need more detail to be actionable and to be linked to derived requirements. More needed to be done!
So, we presented a proposed ‘AREA Schema of Needs’, as shown below.
The idea is quite simple. We need to build a hierarchically linked set of needs, in various technology areas, with bi-directional linkages to the use cases that incorporate the requirements. In turn, the use cases are linked to scenarios which define an end-to-end business process.
These scenarios occur in various settings (including engineering, manufacturing, field service, user operation, etc.) and, ultimately, are relevant in one or more industries (automotive, health care, industrial equipment and other industry verticals).
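One possible encoding of that hierarchy is sketched below in Python; the class and field names are illustrative assumptions, not the committee’s actual data model.

```python
from dataclasses import dataclass, field
from typing import List

# Requirements roll up into use cases; use cases compose scenarios;
# scenarios sit in a setting and are relevant to one or more industries.

@dataclass
class Requirement:
    text: str
    category: str                     # e.g. "hardware" or "software"
    maturity_level: int = 1           # position on a levels-of-maturity scale

@dataclass
class UseCase:
    name: str
    requirements: List[Requirement] = field(default_factory=list)

@dataclass
class Scenario:                       # an end-to-end business process
    name: str
    setting: str                      # engineering, manufacturing, field service...
    industries: List[str] = field(default_factory=list)
    use_cases: List[UseCase] = field(default_factory=list)

# A fragment of the field service example discussed below:
remote_expert = UseCase(
    "Remote expert call",
    [Requirement("Annotate over a shared live view", "software", maturity_level=2)],
)
scenario = Scenario(
    "Generator repair",
    setting="field service",
    industries=["industrial equipment"],
    use_cases=[remote_expert],
)
```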
In order to set the scene, the presenters walked through examples of each of the taxonomy fields. For example, a sample field service scenario was provided as follows:
A field service technician arrives at the site of an industrial generator. They use their portable device to connect to a live data stream of IoT data from the generator to view a set of diagnostics and service history of the generator.
Using the AR device and app they are able to pinpoint the spatial location of the reported error code on the generator. The AR service app suggests a number of procedures to perform. One of the procedures requires a minor disassembly.
The technician is presented with a set of step-by-step instructions, each of which provides an in-context 3D display of the step.
With a subsequent procedure, there is an anomaly which neither the technician nor the app is able to diagnose. The technician makes an interactive call to a remote subject matter expert who connects into the live session. Following a discussion, the SME annotates visual locations over the shared display, resulting in a successful repair.
The job requires approximately one hour to perform. The device should allow for uninterrupted working during the task.
With the job finished, the technician completes the digital paperwork and marks the job complete (which is duly stored in the on-line service record of the generator).
In this example, the items in blue are links to underlying use cases which need to be supported in order to enable this scenario. Similar examples were presented for use cases and requirements.
We also introduced the notion of “Levels of Maturity.” This is a useful concept as it enables both users and suppliers of technology solutions to identify roadmap progression, with an eye on future, richer adoption or delivery. Alternatively, not all users of the technology need the most advanced solution now, but they can identify what might make business sense to them in the shorter term.
Group Exercise
With the backdrop complete, we moved into the interactive portion of the session. The audience was split into 17 table groups, each with a mix of industrial users, commercial suppliers and academics. The idea was to get a blend of perspectives for the group activity.
Delegates hard at work!
Armed with a set of templates furnished by Glen, the 17 teams were set the following exercise:
Choose your industry and setting
Provide a written definition of the scenario
Highlight the “use case” chunks that form the scenario
Describe at least three of the supporting use cases
Capture some of the derived requirements/needs
Construct a maturity model
BONUS: Describe the value proposition of using AR in this scenario
Whilst each team was given a high-level scenario (e.g. “manufacturing operation” or “design review”), they were free to choose their own, if they wished.
It was great to see the level of discussion taking place across all of the tables! One of our objectives for the exercise was to use the outputs from the teams as further content to help populate a future database. However, the primary point of the exercise was to mix the attendees and have them focus on articulating scenarios, use cases and requirements in a structured way that can be tied back to business value.
At the end of the session, a spokesperson for each team stood up and summarised the results of their work.
Outcome
Each team duly handed in their handwritten efforts, which were transcribed into a more usable digital form and are now available to AREA members via the link below to the transcription of the groups’ outputs.
The teams supplied an impressive number of ideas, which are summarised in the PDF. One unfortunate aspect is that we were unable to capture the clearly detailed and illuminating discussions taking place across all of the tables. In some ways, perhaps, the ability to discuss these topics openly was more valuable to the teams than what was written down.
The scenarios discussed included (but were not limited to) the following:
Remote design review
City building planning
Factory floor – optics manufacturing
Optimising manufacturing operations
Onsite field service task
New product training – customer
New equipment commissioning
Domestic owner repair procedure
Assembly assistance
Maintenance for new staff
Collaborative landing gear inspection
‘Unusual’ field service tasks
Construction design change optimisation
Multi-stakeholder design review
Additionally, these scenarios were described within a number of industries and settings.
Furthermore, we received some very positive anecdotal feedback from the delegates. One person stated, “This exercise was worth the trip in itself!”
One of the aims of the AREA Requirements Committee is to develop an online database to enable community participation in defining these needs and use cases. This exercise was a great incremental step in that journey. We look forward to building out this model for the benefit of the AR ecosystem and encouraging all to participate.
Acknowledgements
Thanks to the DMDII team for onsite support and to all of the workshop delegates for making this a highly productive exercise.