A New Division of Labor: IoT, Wearables and the Human Workforce

As in previous generations of technology innovation, the deployment of desktop computers initially demanded considerable abstraction and steep learning curves: creating even a simple sketch on a screen required coding and math skills. Experts and many mediating layers of knowledge were needed to use this newly created work resource effectively. As computers and software evolved, using them became more intuitive, but their users were still tied to desks.

The ensuing era of mobility helped greatly. It unchained the device and led to the creation of wholly new solutions that overcame the challenges of location, real-time interaction, and visual consumption of the world.

But there is one group that – comparatively speaking – benefited much less from all these changes: the legions of non-desk workers — those on the factory floor, on telephone poles, in mines, on oil rigs or on the farm for whom even a rugged laptop or tablet is impractical or inconvenient. The mobile era unchained desk workers from their desks but its contribution to workers in the field, to the folks who work on things rather than information, was negligible. Working on things often requires both hands to get the job done, and also doesn’t map well to a desktop abstraction.

Enter the wearable device, a new device class enabled by mobile-driven miniaturization of components, the proliferation of affordable sensor technology, and the movement to the cloud.

Wearable devices started as a consumer phenomenon (think smartwatches), mostly built around sensors. Initially, they focused on elevating the utility of the incorporated sensor and their market success was commensurate with how well the sensor data stream could be laddered up to meaningful and personalized insights. With the entrance of the “traditional” mobile actors, wearables’ role expanded into facilitating access, in a simplified way, to the more powerful devices in a user’s possession (e.g., their smartphone). The consumer market for wearables continues to pivot around the twin notions of access and self-monitoring. However, to understand the deeper and longer-term implications of the emergence of intelligent wearable devices, we need to look to the industrial world.

An important, new chapter in wearable history was written by Google Glass, the first affordable commercial Head-Mounted Display (HMD). Although it failed as a consumer device, it successfully catalyzed the introduction of HMDs in the enterprise. Perhaps even more importantly, this new device type led the way in integrating with other enterprise systems, aggregating the compute power of a node and the cloud – centered on a wearer. Unlike the shift to mobile devices, however, this has the potential to drive profound changes in the lives of field workers and could be a harbinger of even deeper changes in how all of us interact with the digital world.

Division of Labor: Re-empowering the Human Workforce

Computers and handheld devices had a limited impact on non-desk workers. But technological changes such as automation, robotics, and the Internet of Things (IoT) have had a profound impact, effectively splitting the industrial world into work that is fit for robots and work that isn’t. And the line demarcating this division is itself in continuous motion.

Early robotic systems focused on automating precise, repetitive, and often physically demanding activities. More recent advances in analytics and decision support technology (e.g., Machine Learning and Artificial Intelligence [AI]) and integration via IoT have led to the extension of physical robots into the digital domain, coupling them with software counterparts (software agents, bots, etc.) capable of more dynamic response to the world around them. Automation is thus becoming more autonomous and, as it does so, it’s increasingly moving out of its isolated, tightly controlled confines and becoming ever more entwined with human activity.

Because automation inherently displaces human participation in industrial processes, the rapid advances in analytics, complex event processing, and digital decision-making have prompted concerns about the possibility of “human obsolescence.” In terms of the role of bulk labor, this is a real concern. However, the AI community has perpetually underestimated the sophistication of the human brain, and the limits to AI-based machine autonomy in the real world have remained clear: creativity, decision-making, complex non-repetitive activity, untrainable pattern recognition, self-directed evolution, and intuition are still largely the domains of the human workforce, and are likely to remain so for some time.

Even the most sophisticated autonomous machines can only operate in a highly constrained environment. Self-driving vehicles, for example, depend on well-marked, regular roads, and the goal of an “unattended autonomous vehicle” is very likely to require extensive orchestration and physical infrastructure, as well as the resolution of some very serious security challenges. By contrast, the human brain is extraordinarily well adapted to operating in the extreme fuzziness of the real world and is a marvel of efficiency. Rather than trying to replace it with fully digital processes, a safer and more cost-effective strategy is to find ever better and closer ways to integrate human processing with the digital world. Wearable technology provides a first path forward in this regard.

Initial industrial use cases for wearables have tended to emphasize human productivity through the incorporation of monitoring and “field appropriate” access to task-specific information. The first use cases included training and enabling less experienced field personnel to operate with less guidance and oversight. Good examples are Librestream’s Onsight, which creates “virtual experts”; Ubimax’s xPick, which guides warehouse pickers; and Atheer’s AR training solutions. Honeywell’s Connected Plant solution goes a step beyond: it is an “Industrial Internet of Things (IIoT) style” platform that already connects industrial assets and processes for diagnostic and maintenance purposes, a new dimension of value.

The introduction of increasingly robust autonomous machines and the consideration of productivity and monitoring across more complex use cases involving multiple workers and longer spans of time will drive the next generation of use cases.

Next Reality

Consider the following – still hypothetical, although reality-based – use case:

Iron ore mining is a complex operation involving machines (some of which are very large), stationary objects and human workers – all sharing the same confined space with limited visibility. It is critical not only to be able to direct the flow of these participants for safety reasons but also to optimize it for maximum productivity.

The first step in accomplishing this requires deploying sensors at the edge that create awareness of context: state, condition, location. Sensors on large machines or objects are not new, and increasingly, miners carry an array of sensors built into their hard hats, vests, and wrist-worn devices. But “sense” is not enough – optimization requires a change in behavior. For this, a feedback loop is needed, which is comparatively easy to accomplish with machines. For workers, a display mounted on the hard hat and haptic actuators embedded in their vest and wrist devices close the feedback loop.

Thus equipped, both human and machine participants in the mining ecosystem can be continuously aware of each other, getting a heads-up – or even a warning – about proximity. Beyond awareness, this also allows for independent action: for example, stopping vehicles or giving directional instructions via the HMD or haptic feedback.

Being connected in this way helps to promote safety, but it isn’t enough for optimization. That requires a backend system that uses historical data, rules, and ML algorithms to predict and ultimately prescribe optimum paths. This provides humans with key decision-support capabilities and a means to guide machines without explicitly having to operate them. Practically speaking, they operate machines via their presence. In a confined environment, this means that sometimes the worker needs to give way to the 50-ton hauler, and sometimes the other way around. What needs to happen is deduced from actual conditions and decided in real time, at the edge.
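
To make this concrete, here is a minimal sketch (in Python) of the kind of real-time, edge-side right-of-way logic described above. It is an illustration only: the participant model, thresholds, actuator commands, and rules are all hypothetical, not drawn from any actual deployment.

    from dataclasses import dataclass
    import math

    @dataclass
    class Participant:
        """A tracked participant in the mine: a worker or a machine."""
        ident: str
        x: float        # position in metres, from edge location sensors
        y: float
        mass_t: float   # mass in tonnes; a 50-tonne hauler cannot stop quickly

    WARN_RADIUS_M = 30.0   # illustrative thresholds, not real safety values
    STOP_RADIUS_M = 12.0

    def right_of_way(worker: Participant, machine: Participant) -> list[str]:
        """Decide at the edge, in real time, who yields and what feedback
        to send; returns a list of (hypothetical) actuator commands."""
        d = math.hypot(worker.x - machine.x, worker.y - machine.y)
        if d > WARN_RADIUS_M:
            return []                                  # no interaction needed
        if d <= STOP_RADIUS_M:
            # Too close: stop the machine and alert the worker on all channels.
            return [f"{machine.ident}:STOP",
                    f"{worker.ident}:HAPTIC_VEST:URGENT",
                    f"{worker.ident}:HMD:'{machine.ident} stopped - clear area'"]
        if machine.mass_t >= 40.0:
            # A heavy hauler keeps right of way; guide the worker aside.
            return [f"{worker.ident}:HAPTIC_WRIST:PULSE",
                    f"{worker.ident}:HMD:'give way to {machine.ident}'"]
        # Lighter machines yield to the worker instead.
        return [f"{machine.ident}:SLOW",
                f"{worker.ident}:HMD:'{machine.ident} slowing for you'"]

    # Example: a worker 20 m from a 50-tonne hauler is guided aside.
    print(right_of_way(Participant("W-07", 0.0, 0.0, 0.1),
                       Participant("H-03", 12.0, 16.0, 50.0)))

In a real system the thresholds and rules would come from the backend’s historical data and ML models rather than being hard-coded; the point is that the final decision is evaluated locally, at the edge.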

As this use case illustrates, wearable devices are emerging as a new way for humans to interact with machines (physical or digital). The sensors on these devices are also being used in a new and more dynamic way. Whereas each sensor in a traditional industrial context provides a very tightly defined window into a specific operating parameter of a specific asset, sensor data in the emerging paradigm is interpreted situationally. Temperature, speed, and vibration may carry very different meanings depending on the task and situation at hand. The Key Performance Indicators (KPIs) to be extracted from these data streams are also task- and situation-specific, as are the ways in which these KPIs are used to validate, certify, and optimize both the individual tasks and the overarching process or mission in which these tasks are embedded.
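
As a hedged illustration of such situational interpretation, the short Python sketch below evaluates one and the same vibration stream against two different task profiles; the task names, KPI names, and limits are invented for the example.

    from statistics import mean

    # Hypothetical task profiles: the same raw stream is judged against a
    # different KPI and ceiling depending on the task at hand.
    TASK_PROFILES = {
        "drilling":     ("bit_engagement_ok", 9.0),  # high vibration is normal
        "final_torque": ("fastening_quality", 2.5),  # must be nearly still
    }

    def evaluate(task: str, vibration_mm_s: list[float]) -> dict:
        """Turn a raw vibration stream (mm/s) into a task-specific verdict."""
        kpi, ceiling = TASK_PROFILES[task]
        level = mean(vibration_mm_s)
        return {"task": task, "kpi": kpi, "level": level, "pass": level <= ceiling}

    stream = [3.1, 2.8, 3.4]                 # identical readings...
    print(evaluate("drilling", stream))      # ...acceptable while drilling
    print(evaluate("final_torque", stream))  # ...out of bounds for fine torquing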

A key takeaway in considering this new human-machine interaction paradigm is that almost everything is dynamic and situational. And, at least in the industrial context, the logical container for managing all of this is what we’re calling the “Mission.” This has important ramifications for considering what systems need to be in place to enable workers and machines to interoperate in this way and to make possible an IIoT that effectively leverages the unique features of the human brain.

A bit about the authors:

Keith Deutsch is an experienced CTO, VP of Engineering, and Chief Architect based in the San Francisco Bay Area. Peter Orban, based in New York, leads, builds, and supports experience-led, technology-driven organizations globally, helping them grow the share of problems they can solve effectively.




Interview with Brian Vogelsang of Qualcomm

AREA: How would you describe Qualcomm’s role in the enterprise AR ecosystem?

Vogelsang: We’re a technology provider in the ecosystem, delivering chipsets that power AR experiences. Our Qualcomm Snapdragon platform provides the best silicon/chipset that we can customize to meet the needs of the XR enterprise ecosystem. You’ll see them in products today from customers like Vuzix and RealWear. Then there’s the Microsoft HoloLens 2 that was announced at Mobile World Congress; it uses our Snapdragon 850 Mobile Platform. Vuzix also announced their M400 platform at Mobile World Congress, which is powered by the Qualcomm Snapdragon XR1 platform. Finally, there are new, emerging OEMs, such as nreal, Realmax, Shadow Creator, and ThirdEye. Our goal is to optimize technology to put more capability in lighter-weight designs that can drive more immersive experiences at the lowest possible power levels, but with full connectivity.

AREA: People might have thought that Qualcomm was getting out of AR when it sold the Vuforia business to PTC three years ago, but the company is still very much committed to VR and AR, isn’t it?

Vogelsang: That’s correct. We’ve been working for over a decade in this space. We have a long history of computer vision expertise and of exploring how to build the technology and optimize it in hardware in ways that allow more immersive experiences while running at the lowest possible power. To date, that has been predominantly on smartphones. However, our long-term vision is that within a decade, we will start transitioning from a handheld device (the smartphone) to a head-worn device or sleek AR glasses that people use the whole day. And that’s really what we’re looking at: how do we accelerate that innovation and make those kinds of experiences happen – initially for enterprises, but long term for consumers.

AREA: So, you expect enterprises to be the early adopters of wearables, then the consumer market will develop from there?

Vogelsang: That’s right. Today, in the wearable form factor, there’s a spectrum of devices, from Assisted Reality devices for remote expert or guided work instructions, to full augmented or mixed reality devices like HoloLens or Magic Leap. Enterprises are willing to adopt these technologies if they solve a problem and deliver an ROI – and we’re excited about that. But long term, we think the technology needs to get smaller, lighter weight, and more ergonomic – more like your standard eyeglasses. Because of these size requirements, that’s going to be particularly challenging technically. Delivering an immersive experience at the lowest possible power requires deep systems expertise. That’s right in Qualcomm’s wheelhouse. It’s going to take a few years for the industry to deliver mass adoption of consumer-class AR eyewear. So for the short term, the enterprise is going to be doing a lot to drive the market.

AREA: How closely do you work with wearables manufacturers?

Vogelsang: We work really closely with them on their products and roadmaps, collaborating with them to achieve their market objectives. There are always tradeoffs as OEMs balance cost, weight, form factor and ergonomics, optics and display capability, performance, and thermals – and often these impact immersiveness. So we work really closely with them to understand their use cases and objectives, and then help them with hardware, software, and support to meet those objectives. We also give them insight into future technology developments, and their future requirements inform our chipset roadmap. We can’t solve all the problems. Things like displays and optics, as well as camera modules, are a big part of the equation in building an AR device, and while we don’t build those technologies, we work closely with the suppliers of these components and assist OEMs with integration through our reference designs and HMD Accelerator Program, which pre-validates and qualifies components so OEMs can get to market more quickly.

AREA: It seems as if technologies are starting to converge in new ways: 5G networks, Artificial Intelligence, the Internet of Things, and AR. Do you get that impression as well?

Vogelsang: Definitely. We see 5G as the connectivity fabric that’s going to allow the mobile network not only to interconnect people, but also to interconnect and control machines, objects, and devices. 5G is going to deliver performance and efficiency that will enable these new experiences and connect new industries, delivering multi-gigabit-per-second connectivity at ultra-low latency. Latency is hugely important when it comes to Augmented and Virtual Reality experiences. And of course, 5G means more capacity. Meanwhile, AI is already being used in Augmented Reality experiences, enabling things like head tracking, hand tracking, 3D reconstruction, object recognition, and light estimation. AI is a really important part of that. And I think 5G will also enable some capabilities to be moved off the device to the edge of the mobile network – taking some capability and moving it to be processed at the network edge. That ultimately will help us enable lighter-weight designs with richer, more immersive graphics at the low power threshold that we need. So all three – 5G, AI, and AR – are coming together. And I think IoT will be a part of AR in terms of syndicating contextual information about the environment in an enterprise to an AR experience. IoT will feed the insights, which will be bubbled up as AR experiences.

AREA: What do you hope to get out of being a member of the AREA?

Vogelsang: Qualcomm’s customers are OEMs. We don’t sell to end customers, the people who would buy those devices or experiences. However, we do need to understand what their needs are so that we can better evolve our technology roadmap to support where those end users want to go. So, one of the things that excites us about becoming a member of the AREA is to begin hearing directly from some of the end customers who are deploying wearable AR technology. We know this is a marathon, and we believe XR – spanning both Augmented and Virtual Reality – will be the next computing platform. So, we’re taking a long-term view and investing now in the technology that will enable this market. As a result, we’re very interested in learning from other AREA members about how the technology is being applied today to solve concrete problems in the enterprise so we can inform our roadmap. Those learnings will help us deliver products that can accelerate the pace of innovation and grow the overall AR wearable market.

We’re doing some trials and proofs of concept and other things where we get more directly engaged with end-customer use cases. So, being able to collaborate with other AREA members in that space would be really good. Also, we’d like to get involved in the committees. We have a human factors team here, and I’d like to get them engaged with the work that’s being done on the human factors side. While we don’t build end devices ourselves, we still need to understand, as we’re building out technology, how human factors such as weight, size, or thermals impact the user experience and ergonomics.

We’d also like to get involved in requirements. We think we’d really benefit from learning more about requirements from a horizontal cross-section of the AREA membership. And finally, I think we’d like to get involved on the marketing side as well. We would be interested in using our platform to help tell the story and accelerate industry adoption.

AREA: Where do you see things headed in XR over the next three to five years? What are the next big milestones people should be looking for?

Vogelsang: I think that we’ll see a transition from smart glasses or Assisted Reality experiences to more Augmented Reality or spatial immersive computing type experiences. Over the next few years, that transition will really start to accelerate. We’re already seeing the early promise of what’s to come with technology such as HoloLens or Magic Leap. I’m really excited about seeing the companies who are deploying smart glasses or Assisted Reality experiences today start to adopt Augmented Reality or immersive computing in a much larger way.




What is Assisted Reality – and How Can You Benefit from It?

AREA: Jay, perhaps you could begin by giving us a quick update on the progress of Upskill.

Kim: As the person in charge of our product roadmap and product development, I’ve been very busy taking advantage of the advancements that have been happening in this space. Upskill’s flagship software platform for industrial AR applications, Skylight, had historically been focused on simpler, wearable 2D Augmented Reality applications.

However, we’ve been busy over the last year or so expanding our portfolio to include Mixed Reality solutions running on Microsoft HoloLens, for which we announced a partnership with Microsoft, as well as supporting mobile phones and tablets and transforming Skylight into a multi-experience platform. We announced this a week before Mobile World Congress in February, which was very exciting because a number of our customers had already been taking advantage of these feature sets and are excited that they can publicly talk about them. We continue to do well on the business front, engaging with customers globally and really focusing on bringing the digital enterprise to the hands-on worker. There’s a lot to do and there’s a lot of growth happening within our business, which is also a microcosm of the AR industry.

AREA: In the AREA-sponsored enterprise track at AWE, you delivered a really interesting and exciting keynote talk that introduced the phrase “Assisted Reality,” a concept that Upskill has brought to the ecosystem. Can you explain more about what Assisted Reality is and how it’s different from Augmented Reality?

Kim: Assisted Reality is a wearable, non-immersive visualization of content that still has contextual awareness. So, what do we mean by non-immersive? A good hardware example would be what Google introduced with their Glass product – a heads-up display that’s in your line of sight and you can glance at the content, but the goal of the user experience isn’t to deliver object tracking or any kind of immersion – no 3D visualizations, object overlays, or the like.

It’s really intended to deliver pre-existing information, text, diagrams, images – maybe short videos – as-is to help the user understand what needs to be done at any given point in time and enhance the person’s situational awareness. The goal is no different from Augmented Reality.

Assisted Reality was born as Upskill was striving to define and differentiate the various user experiences within the broader Augmented Reality context. If you think about Augmented Reality, it can be delivered in mobile forms and wearable forms; it can even be a projected display or an audible experience. The term is actually very broad. So, Assisted Reality was coined to specifically focus on non-immersive wearable experiences that boost the person’s situational awareness. We consider Assisted Reality a subdomain or an experience within the Augmented Reality spectrum.

AREA: What are some of the benefits of Assisted Reality?

Kim: The benefits span several areas. First of all, because Assisted Reality is a wearable user experience, it’s important to talk about the different types of devices that it supports. Generally speaking, Assisted Reality devices tend to be more wearable than their Mixed Reality counterparts. That gap may be closing a bit with the introduction of HoloLens 2, but it has historically been the case. Because Assisted Reality has less stringent hardware requirements, it can deliver a positive user experience when worn for a full work shift, and the battery life is quite good. Assisted Reality devices are simpler and, frankly, a bit cheaper. The leading vendors in that space would be companies like RealWear, which has been gaining a tremendous amount of traction recently, and Vuzix, a vendor that invested in the industry very early on, as did Google. These companies have been driving quite a bit of success with this, and that’s certainly one benefit!

The other benefit to enterprises is that Assisted Reality often doesn’t require any kind of data preparation or formatting. It’s really focused on delivering content that was historically delivered to your hands-on workers – on the manufacturing floor, out in the field, or moving about the warehouse – on paper forms and PCs.

So, it takes away the need to go and figure out your 3D content pipeline, understand how to convert that into an AR-ready format, and other tasks. Enterprises can focus on leveraging content that already exists within their organization, which significantly cuts back on the cost, as well as the time to get that initial return on investment from the solution.

So, it’s a lower cost and a faster time to implement. That’s how we’ve been able to steer a number of our customers towards starting the journey: beginning with Assisted Reality and eventually building up their capabilities with more immersive, Mixed Reality solutions.

AREA: What would you recommend to a company reading this blog post? What steps do they need to take to learn more about Assisted Reality or the broader Augmented Reality spectrum?

Kim: Because there’s been so much activity in the marketplace, we’re very fortunate as a community to have a number of very strong case studies describing successful implementations of Assisted Reality, Mixed Reality, mobile phone AR, projection AR – you name it, they’re all there. Chances are, there is another company within your particular industry that has successfully deployed AR solutions and has actually spoken about them at events or published papers about them.

I would highly encourage some peer learning and, of course, the onus is on the people who want to experiment with the technology that’s out there – whether that’s starting with a proof of concept and graduating to a pilot and hopefully getting into full deployments, or making deeper initial investments because you have a better sense of the business case. It doesn’t matter; being able to get started in any kind of capacity is critical for your learning. There’s only so much that you can learn from research. But you can certainly be inspired by the hundreds of companies out there now deploying these kinds of solutions, reading about what worked well for them and what hasn’t.

My other advice is to talk to your own end users within your organization to better understand their pain points. AR, like most successful tools, shouldn’t be considered a hammer looking for nails, but rather a solution to a well-defined set of problems.

And of course, I would be remiss if I didn’t mention that people who are in the learning phase would actually benefit the most by joining the AREA. The AREA is a global community of organizations that have been doing this for a very long time, whether they’re providers, end users, or research institutions. The formal and informal interactions that people can have as a part of the AREA could really accelerate your learning.

AREA: Jay, thanks to you and Upskill for bringing the term Assisted Reality to the fore, because it’s a really important part of the solutions spectrum.

Kim: You’re welcome. We’re very excited for the continued growth of the industry and look forward to working with the rest of the community.




Research: Augmented Reality Marketing Can Be Effective


A study on Augmented Reality Marketing and Branding

The study was conducted by AREA research partner Prof. Philipp Rauschnabel (Universität der Bundeswehr München, Germany) in partnership with Prof. Reto Felix (University of Texas Rio Grande Valley, USA) and Prof. Chris Hinsch (Grand Valley State University, USA) and published in the Journal of Retailing and Consumer Services.

In their study, the authors measured consumers’ brand attitudes before and after using a branded AR application. Half of the respondents used an IKEA app and half used an app for a German hip-hop band. Even among the IKEA app users, the authors detected improvements in brand evaluations. This is significant because attitudes towards established brands are notoriously difficult to change.

The researchers also asked consumers to rate their evaluation of the app and how inspired they felt after using it. Based on statistical driver analyses, they could then explain why and when brand evaluations improved.
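
To illustrate what a driver analysis of this kind can look like, here is a minimal Python sketch that regresses attitude change on app evaluation and inspiration. The data are synthetic, constructed merely to mirror the reported pattern; none of the numbers come from the study itself.

    import numpy as np

    # Synthetic respondents: app evaluation, felt inspiration, and the change
    # in brand attitude (post minus pre). Built so that inspiration drives the
    # change while app evaluation barely does, echoing the published finding.
    rng = np.random.default_rng(0)
    n = 200
    app_eval = rng.normal(0, 1, n)
    inspiration = rng.normal(0, 1, n)
    attitude_change = 0.05 * app_eval + 0.60 * inspiration + rng.normal(0, 0.5, n)

    # Ordinary least squares "driver analysis": which predictor explains
    # improvements in brand attitude?
    X = np.column_stack([np.ones(n), app_eval, inspiration])
    coef, *_ = np.linalg.lstsq(X, attitude_change, rcond=None)
    print(f"app_eval: {coef[1]:.2f}   inspiration: {coef[2]:.2f}")
    # The inspiration coefficient dominates; app evaluation is near zero.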

Counterintuitively, the extent to which consumers rate an app as positive or negative seems to be unrelated to overall brand attitude. However, the extent to which consumers felt inspired is a major driver of improvements in brand attitude. More specifically, among highly inspired consumers, the brand improvements were about four times stronger than among less inspired users. In addition, the quality of the augmentation is a main driver of inspiration. Users who experienced problems with the AR technology (e.g., a virtual object behaving unrealistically) felt less inspired than those who did not.

Findings: AR can be effective!

The study provides some key findings and calls to action for marketers:

  1. Augmented Reality Marketing can improve brand attitudes and positively impact a brand’s bottom line. Marketers should consider adding AR apps to their marketing and branding toolbox.
  2. The degree to which the AR app inspired the user was more predictive of brand attitude change than an evaluation of the app itself. Marketers should measure the degree to which app users are inspired by the app.
  3. A bad augmentation of the real world can negatively impact evaluations of the overall brand. Marketers interested in pursuing AR should invest in high-quality 3D content and state-of-the-art AR technology.

As the study authors wrote, “Consumers will operate in a reality that is consistently enriched with virtual content, and marketers need to find ways to integrate these new realities into their marketing strategies.”

The entire research report can be downloaded here for free during the month of April 2019. After April, the report will be found here. To read a more academic summary, please visit Philipp Rauschnabel’s personal website.




The AREA’s Annual Workshop

The Advanced Manufacturing Research Centre (AMRC) kindly hosted the workshop, which saw more than 70 participants from a range of industries – including energy/utilities, buildings and infrastructure, aerospace, defence, industrial equipment, mining, automotive and consumer high tech – converge on the shop floor of Factory 2050 for a jam-packed series of presentations, interactive workshops, demonstrations and networking.

Day 1 was opened by AREA Executive Director Mark Sage and AREA President Paul Davies, who delivered a high-level overview of AR, supported by leading companies and AREA members who have deployed AR. ExxonMobil, Welsh Water and Boeing all helped paint a detailed picture by sharing their use cases, experiences and challenges.

We then heard from Jordi Boza of Vuzix, who shared his thoughts and ideas on how to get started in AR, followed by a presentation by Atheer that took attendees through a case study showing how Porsche transformed automotive dealer services with AR.

The last session of the day was an intense, hands-on session presented by the AREA’s Dr. Michael Rygol, who helped attendees get under the skin of AR by discussing and documenting use cases and their key requirements in working groups. Presentations by attendees led to some healthy debate and interesting insights. The day was finished off with an informal networking session where participants had the opportunity to take a closer look at some of the organisations who were there with demo tables and to connect with colleagues both old and new.

The second day began at 8am with a presentation from Theorem Solutions on the cognitive gap and the potential of XR technologies, followed by a lively panel discussion on workforce challenges led by AREA Board member Christine Perey of PEREY Research & Consulting, with representation from Boeing, ExxonMobil and VW Group UK. We then explored the AREA’s research capability by looking at past projects before jumping into a master class on AR human-centred design from London-based ThreeSixtyReality. A full agenda took us into a presentation on human factors and related safety challenges, a pre-recorded session on overcoming the challenges of AR security, and a polished presentation from Microsoft on their MR strategy and the eagerly anticipated HoloLens 2. A three-minute provider pitch finished off a jam-packed day before participants headed home.

In summary, the depth and range of content and sessions provided participants with a framework within which to navigate (or continue navigating) their own AR journeys. Among the takeaways:

  • Staying in the AR game is tough. Organisations should consider both the opportunities and limitations of the current, evolving environment.
  • The AR supplier ecosystem is continuing to grow, offering new and varied opportunities.
  • A clearer understanding and definition of the barriers to adoption (including safety, security and user experience) and of the paths forward to overcome them is essential.
  • Sound, appropriate use cases are key to learning more about AR. The number of use cases where AR delivers value continues to grow (and we need to capture and share these – hence the AREA’s ASoN initiative).
  • Digital eyewear to support AR is maturing rapidly (e.g., new models from Vuzix and Microsoft). Ensure you stay informed on new developments.
  • There is broad interest in AR across a number of industries – from industrial flooring to mining.
  • Considering the business benefits of AR is essential to obtaining buy-in from stakeholders and decision-makers.
  • There may be significant issues around safety and security where AR is concerned. Don’t ignore them.

The AREA annual workshop is an opportunity for members and non-members to connect, learn and share more on AR. We at the AREA are fortunate to have the opportunity to do this annually, and it wouldn’t have been possible this year without the valuable support of AREA members and our sponsors: Theorem Solutions, PTC, Vuzix and Atheer.




Embry-Riddle Prof. Barbara Chaparro on the Human Factors Aspects of AR

AREA: Tell us how you became interested in joining the AREA.

Dr. Chaparro: I first heard about the AREA from Brian Laughlin at Boeing. Brian was my human factors doctoral student when I was at Wichita State, and we’ve kept in touch over the years. I’ve seen the kinds of things he’s been working on at Boeing and how they overlap with my research interests in human-computer interaction, usability, and user experience. I saw an opportunity to pursue them further through the AREA group.

AREA: Could you tell us more about your background as it relates to AR?

Dr. Chaparro: My background is in the area of usability and user experience. I have worked with a number of different companies and technologies, focusing on implementing design principles to make it as easy as possible for people to use devices and tools.

I became interested in AR when Google Glass was introduced. I could see the potential in industries such as aviation, medical, and consumer products. My initial interest with Glass was to use it as a training tool for my students. I also worked with a colleague at Wichita State to study user interactions with Glass versus a cell phone.

And then HoloLens came out, and for a year and a half now, we have been exploring the user experience side of HoloLens. We want to get an idea of how the average person experiences this technology. For instance: What are some of the issues from a UX standpoint? The gesturing, window manipulation, texting, voice input – all of these methods of interaction bring usability and user experience issues to the human-technology interaction. A lot of the literature is focused on the usability of a particular app, but there is very little out there on the integration of multiple technologies, working across a multitude of tasks at the same time, or task-switching between the physical and augmented environment. That is my interest, and then seeing the application of this to a variety of domains. I consult, for example, with healthcare professionals who believe that AR has great potential. Whatever the domain, there is going to be this core issue of usability that will determine whether it takes off or not. Eventually, it comes down to the comfort and the seamlessness of the user experience in the tasks that they are doing.

AREA: How do you expect to benefit from your membership in the AREA?

Dr. Chaparro: I see the AREA as a fantastic mix of academic researchers and industries that are applying the technology. Human factors is an applied field, so we’re always looking for practical applications of the things we’re studying in the lab. So I see that as a huge benefit of the AREA. Then we’ll benefit from the work of the various committees. We’ve been participating in the Safety and Research Committees, and hopefully the Human Factors Committee in the future. We need to understand what the issues are, because any problem that an industry is having is a potential research project for one of my students. And that’s the other benefit: to recognize the needs of industries that will need to hire students who have knowledge of this technology. We want to understand what those needs are so we can build them into our curriculum if they are not already there.

AREA: Based on what you have learned so far, what do you see as the major outstanding issue that needs to be addressed to make AR more usable for the average person?

Chaparro: With these new glasses and head-mounted devices, certainly comfort is an issue, especially in industries where they will need to be worn for an extended period of time. That’s going to be huge. And not just from a comfort standpoint but also visually – going back and forth between the physical and augmented world and what that experience is like.

AREA: In addition to the research projects you mentioned, what other areas of AR are being explored at Embry-Riddle?

Chaparro: My colleague Dr. Joseph Keebler has been conducting research related to marker-based AR in medical training. His area of expertise is medical human factors, teams, and training, so he is excited about the technology both from a training standpoint and as a real-time tool for high-performing teams. The issue is that, while it appears that this technology is great and effective, we really need more research to demonstrate how and when it works, and how best to integrate it into modern-day systems.

One challenge is that there’s a novelty-effect problem. For instance, there are research projects being done that show AR is better for performing a task, but it is really hard to tease away the novelty side of that. In other words – are people improving due to increased learning from the AR system? Or is it simply the fact that it’s this fascinating and visually impressive technology that is garnering people’s interest and keeping them engaged? Joe and I are interested in how to structure a study so that we are looking at the true effectiveness of the technology above and beyond the effects of its potential novelty. Joe has published a few papers on AR, including a chapter in the Cambridge Handbook of Workplace Training and Employee Development (Keebler, Patzer, Wiltshire, & Fiore, 2017)[1].

Another one of our colleagues, Dr. Alex Chaparro, has been working on the use of AR in transportation. For example, AR has many applications in aviation, maintenance documentation, and driving environments. His main interest is in using AR and VR in these environments to train individuals to perform complex tasks.

We also have a VR gaming lab. Joe and I have also done some psychometric work on the validation of a new satisfaction instrument for video games that we’re now trying to apply to the AR world (Phan, Keebler, & Chaparro, 2016)[2]. We definitely see the benefits of this technology and would like to see it succeed.


[1] Keebler, J. R., Patzer, B. S., Wiltshire, T. J., & Fiore, S. M. (2017). Augmented Reality Systems in Training. In The Cambridge Handbook of Workplace Training and Employee Development, 278.

[2] Phan, M. H., Keebler, J. R., & Chaparro, B. S. (2016). The development and validation of the Game User Experience Satisfaction Scale (GUESS). Human Factors, 58(8), 1217-1247.




The AREA & NIST Survey on AR Standards for Industry

The survey takes approximately 5 minutes to complete and aims to gather valuable information that will help drive and inform standards development strategies for the enterprise AR industry.

Please access the survey by following this link: https://survey.zohopublic.com/zs/OwB3Gq




New AREA Statement of Needs Tool Helps You Capture and Manage Requirements

Initially available to AREA members, with a full launch to the wider ecosystem later this year, the tool, known as ASoN (AREA Statement of Needs), has been developed as a key aid to help fuel the growth of the enterprise Augmented Reality ecosystem. It enables users and suppliers of AR technologies and solutions to collaborate on defining a set of requirements to satisfy the growing needs of industrial enterprises.

By providing a hierarchical, linked and tagged taxonomy of business processes, use cases, requirements, personas and value propositions that can be targeted to specified industries and solutions, ASoN aims to help the AR community develop a set of contextualised and actionable requirements that support both enterprise users and suppliers of AR solutions.

The ASoN tool enables participants in the ecosystem to:

  • Submit new and review existing content
  • Tag content against relevant industries, industrial settings and other key terms
  • Collaborate with others by commenting on requirements, etc.
  • Subscribe to specific content items and be notified if there are discussions or changes
  • Bulk load existing content (e.g., lists of existing requirements)
  • Create links between items (e.g., between individual requirements needed to support a specific use case)
  • Search, using rich query tools, to, for example, find requirements within a field service setting in the automotive industry
  • Generate comprehensive reports for subsequent re-use
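
ASoN’s internal implementation is not public, but as a loose illustration of what a hierarchical, linked and tagged item catalogue of this kind might look like, here is a small Python sketch; the item types, fields and query helper are hypothetical, not the tool’s actual schema or API.

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        """A hypothetical taxonomy entry: use case, requirement, persona..."""
        item_id: str
        kind: str
        title: str
        tags: set[str] = field(default_factory=set)   # industries, settings
        links: set[str] = field(default_factory=set)  # ids of related items

    def find(items: list[Item], kind: str, *required_tags: str) -> list[Item]:
        """A toy 'rich query': items of one kind carrying all given tags."""
        return [i for i in items
                if i.kind == kind and set(required_tags) <= i.tags]

    catalog = [
        Item("UC-1", "use_case", "Remote expert for service technicians",
             tags={"automotive", "field_service"}),
        Item("REQ-9", "requirement", "Hands-free video call on an HMD",
             tags={"automotive", "field_service"}, links={"UC-1"}),
    ]

    # e.g., find requirements within a field service setting in automotive:
    print([i.item_id for i in
           find(catalog, "requirement", "field_service", "automotive")])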

Value to the Enterprise AR Community

ASoN helps enterprises streamline the research, planning and implementation of their AR strategies by providing ready-built, actionable use cases and requirements, which can accelerate the development of RFP/RFQ proposals and reduce the effort spent scouting for and developing their own statements of need.

Moreover, ASoN helps suppliers of AR technologies and solutions plan their development roadmaps by providing an industry-centric view of both what is needed and by whom.

The AREA, as a neutral resource, offers all of its members the opportunity to improve their businesses by harnessing this tool and its content.

Stuart Thurlby, CEO at Theorem Solutions, commented, “The AREA’s Statements of Needs management tool is an important enabler to drive the enterprise AR ecosystem forward. The AREA’s provision of a neutral repository in a structured and actionable format, based around the needs of the AR community, will likely become a first stop shop for companies using or developing enterprise AR solutions.”

Find out more:

Find out more about The AREA Requirements Committee




Augmented Reality a hot topic at MWC Barcelona 2019

By Christine Perey, PEREY Research & Consulting, AREA Board member and chair of the AREA’s Membership and Research Committees.

Augmented Reality, and how it converges with IoT, AI, Cloud and Edge Computing technologies, was among the loudest and brightest themes of Mobile World Congress (MWC) Barcelona 2019.




AREA to Lead Workshop at LiveWorx 19

LiveWorx has earned a spot on everyone’s calendar of must-attend events due to its content-packed agenda. The June conference in Boston really does deliver a year’s worth of technical learning in four days, addressing all the major technologies driving digital transformation – from AR and IIoT to blockchain and robotics.