How Secure Is Your AR Deployment?

To help organizations answer that question, the AREA developed the AR Security Maturity Self-Assessment: a free, practical tool for benchmarking your AR security posture. Whether you’re just starting with AR or scaling up, the assessment guides you through key domains such as device management, data protection, user authentication, and incident response.

How does it work?

The self-assessment is structured around best practices and real-world scenarios, drawing on AREA’s extensive research and member expertise. You’ll answer questions about your current policies, controls, and processes. At the end, you’ll receive a maturity score and tailored recommendations for improvement.

The assessment covers five critical domains:

1. Device Management: How do you provision, update, and manage AR devices throughout their lifecycle? This includes everything from initial setup to decommissioning, ensuring devices remain secure and compliant with organizational policies.

2. Data Protection: What measures are in place to protect sensitive data processed by AR applications? This encompasses encryption, data classification, access controls, and data residency requirements.

3. User Authentication: How do you verify user identities and manage access to AR systems? Strong authentication mechanisms are crucial, especially for shared or multi-user AR devices.

4. Network Security: How do you secure communications between AR devices and enterprise systems? This includes network segmentation, traffic monitoring, and secure connectivity protocols.

5. Incident Response: Do you have procedures for detecting, responding to, and recovering from AR-related security incidents? Preparation is key to minimizing impact and ensuring business continuity.

Why take the assessment?

  • Identify gaps in your AR security strategy before they become vulnerabilities
  • Prioritize actions based on risk and organizational maturity
  • Access AREA’s library of security infographics and webinars for deeper learning
  • Benchmark your progress over time as your AR program matures
  • Connect with AREA’s community of security professionals and practitioners

The assessment isn’t just about identifying weaknesses—it’s about building a roadmap for improvement. Each domain includes specific recommendations and links to additional resources, including AREA’s comprehensive security framework and best practice guides.

Real-world insights from AREA members show that organizations using the self-assessment report improvements in their security posture within 6 to 12 months. One manufacturing company discovered critical gaps in its device management processes, leading to a complete overhaul of its AR deployment strategy. Another enterprise found that its incident response procedures didn’t account for AR-specific scenarios, prompting the development of new protocols.

Next steps:

After completing the assessment, explore AREA’s fireside chats and video resources for expert insights and case studies. Security is a journey, not a destination—start yours with a clear map.

The assessment takes approximately 30-45 minutes to complete and provides immediate results with actionable recommendations. It’s designed to be revisited quarterly or after significant changes to your AR environment.

Remember: AR security isn’t just about protecting data—it’s about protecting people, processes, and your organization’s reputation. In an era where AR is becoming mission-critical for many enterprises, security can’t be an afterthought.

Take the first step toward securing your AR future. Your assessment awaits.




Stakeholder Management and Best Practices for AR ROI Success

The Human Factor in AR ROI

AREA research consistently shows that successful ROI analysis relies on implementations that involve the end user throughout the entire process. From defining and measuring the use case, through pilot testing, to final implementation, effective stakeholder engagement delivers significant benefits to the ROI process.

Identify and Engage Stakeholders Early

Think beyond the C-suite. Include IT, operations, HR, and end users. Each group has unique concerns and priorities:

  • Executives: Focus on financial impact and strategic alignment
  • IT: Address integration, security, and support requirements
  • Operations: Highlight process improvements and efficiency gains
  • End Users: Emphasize ease of use, training, and day-to-day benefits

AREA best practices emphasize involving stakeholders early, especially those who will use the technology daily. This approach maximizes user buy-in, ensures the integrity of time and motion studies, and uncovers potentially unforeseen costs associated with different work environments.

Tailor Your Message with Data and Stories

Combine hard numbers from the AREA ROI Calculator with real-world examples. The AREA case study demonstrates how one company achieved remarkable results:

  • 45% reduction in mean-time-to-repair
  • 20% reduction in customer churn (from 2.5% to 2%)
  • 90% reduction in parts inventory
  • 25% reduction in audit costs

Use these concrete examples alongside your own projections to build credibility with different stakeholder groups.

Best Practices for Cross-Team Collaboration

AREA research identifies several critical success factors:

Establish Clear Metrics: Target specific business outcomes using Key Performance Indicators (KPIs). Common metrics for MRO applications include Mean Time to Failure (MTTF), Mean Time to Repair (MTTR), and Overall Equipment Effectiveness (OEE).
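
To make these KPIs concrete, the short Python sketch below shows how they might be computed from basic operational data. It is illustrative only; the function names and figures are hypothetical and are not taken from the AREA calculator or case studies.

```python
# Illustrative only: computing common MRO KPIs from basic operational data.
# All names and numbers are hypothetical.

def mean_time_to_repair(repair_minutes: list[float]) -> float:
    """MTTR: average time to restore equipment, in minutes."""
    return sum(repair_minutes) / len(repair_minutes)

def mean_time_to_failure(uptime_hours: list[float]) -> float:
    """MTTF: average operating time before a failure, in hours."""
    return sum(uptime_hours) / len(uptime_hours)

def overall_equipment_effectiveness(availability: float, performance: float, quality: float) -> float:
    """OEE = Availability x Performance x Quality, each expressed as a fraction."""
    return availability * performance * quality

# Compare MTTR before and after an AR-assisted workflow (hypothetical samples).
baseline_mttr = mean_time_to_repair([120, 135, 128])
ar_assisted_mttr = mean_time_to_repair([66, 72, 70])
print(f"MTTR improvement: {1 - ar_assisted_mttr / baseline_mttr:.0%}")
print(f"MTTF: {mean_time_to_failure([900, 1100, 1000]):.0f} hours")
print(f"OEE: {overall_equipment_effectiveness(0.90, 0.95, 0.98):.1%}")
```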

Ensure Financial Rigor: Collaborate with finance teams before pilots to ensure buy-in on business problems and measurement metrics. This cross-team collaboration is essential for ROI analysis that stands up to scrutiny.

Manage Change Effectively: Involve end users throughout the process to minimize “human” costs related to ongoing process change during deployment. Give users the ability to provide input on solutions, from hardware comfort to application value.

Assign a Champion: Organizations committed to maximizing ROI assign a “Champion” with a sound grasp of both business and technology challenges. These individuals ensure the projected ROI is realized after deployment and manage issues that could impact cost estimates.

Scaling Best Practices

For organizations moving beyond pilots to enterprise-wide deployments, AREA research recommends:

  • Evaluate ROI on each pilot using the same framework
  • Explore environmental factors that may differ from the pilot
  • Standardize your approach to business case development
  • Create rules-based frameworks for integration cost allocation

Leveraging AREA Resources

The AREA community provides extensive resources for stakeholder engagement:

  • Case studies and best practice documents
  • ROI Calculator with detailed instructions
  • Research reports on specific use cases and industries
  • Community insights and benchmarks
  • Templates and frameworks for standardized analysis

Conclusion

With the right tools, data, and stakeholder engagement, you can turn AR from a “nice-to-have” into a strategic advantage. ROI is your bridge between innovation and impact—use it wisely, and leverage the proven methodologies developed by the AREA community to ensure success.

As AREA research concludes: “Understanding the potential ROI and following best practices is important for enabling the broader development of the enterprise AR ecosystem and driving AR solutions into the mainstream”.




How to Measure AR ROI—A Practical Guide Using the AREA Calculator

The AREA’s comprehensive research on AR ROI best practices reveals that organizations pioneering AR technology deployments and realizing superior ROI exhibit five common practices. This guide walks you through the practical application of these insights.

Step 1: Define Your Use Case and Gather Data

Start by clarifying your AR use case—maintenance, training, remote support, or something else. The AREA case study of an AR-enabled Maintenance Repair Operations (MRO) application shows how one healthcare services company achieved a 42% annual rate of return.

Gather baseline data through time and motion studies. In the AREA case study, the company conducted detailed studies with five maintenance technicians, revealing an average time savings of 58 minutes per repair task—a 45% improvement.

Step 2: Map Out Costs and Benefits

The AREA ROI Calculator helps you break down:

Direct Costs:

  • Software licenses and annual maintenance fees
  • Hardware (AR devices, network infrastructure, servers)
  • Labor costs (deployment, training, ongoing management)
  • Consulting and professional services

Direct Benefits:

  • Productivity improvements (reduced task time, fewer errors)
  • Cost reductions (eliminated hardware, software, services)
  • Reduced inventory carrying costs
  • Lower audit and accounting costs

Indirect Benefits:

  • Increased profits from improved customer satisfaction
  • Reduced customer churn
  • Enhanced safety and compliance

The AREA research emphasizes the importance of translating all benefits into corresponding cash flows. For example, productivity improvements should be calculated using fully loaded annual costs (wages plus benefits) for all affected employees.
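
As a rough illustration of that translation, the sketch below converts a per-task time saving into an annual cash flow using fully loaded labor costs. All inputs are hypothetical placeholders, not figures from the AREA case study.

```python
# Illustrative only: turning a productivity improvement into an annual cash flow
# using fully loaded labor costs (wages plus benefits). All inputs are hypothetical.

minutes_saved_per_task = 58            # e.g., measured in a time and motion study
tasks_per_technician_per_year = 400
technicians = 5
fully_loaded_annual_cost = 85_000.0    # per technician, wages plus benefits (EUR)
working_hours_per_year = 1_800.0

hourly_cost = fully_loaded_annual_cost / working_hours_per_year
hours_saved = minutes_saved_per_task / 60 * tasks_per_technician_per_year * technicians
annual_productivity_benefit = hours_saved * hourly_cost

print(f"Hours saved per year: {hours_saved:,.0f}")
print(f"Annual productivity benefit: EUR {annual_productivity_benefit:,.0f}")
```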

Step 3: Account for Digital Readiness

One of the most significant challenges in AR ROI analysis is accounting for “digital readiness” costs. Converting paper-based operations to digital formats can consume significant resources. AREA best practices suggest:

  • Developing a formal or informal corporate digitization strategy
  • Accounting for digital readiness costs separately from project-based ROI
  • Digitizing resources in manageable chunks
  • Leveraging existing digital assets whenever possible

Step 4: Analyze and Communicate Results

The AREA ROI Calculator provides clear outputs: payback period, net present value, and ROI percentage. In the case study example, the MRO deployment delivered:

  • 42% annual rate of return
  • €129,000 annual average net benefit
  • Payback period of 3 years and 47 days
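
For readers who want to see the mechanics behind these outputs, here is a minimal, hypothetical sketch. It uses standard finance definitions rather than the calculator’s internal formulas; the annual net benefit echoes the case-study figure, while the initial cost and discount rate are invented for illustration.

```python
# Illustrative only: payback period, net present value, and ROI from a simple
# cash-flow series. This does not reproduce the AREA ROI Calculator's formulas.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of yearly cash flows; index 0 is the upfront (year-0) amount."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simple_payback_years(initial_cost: float, annual_net_benefit: float) -> float:
    """Years needed to recover the initial cost from a constant annual net benefit."""
    return initial_cost / annual_net_benefit

initial_cost = 400_000.0            # hypothetical upfront investment (EUR)
annual_net_benefit = 129_000.0      # annual average net benefit
cash_flows = [-initial_cost] + [annual_net_benefit] * 5  # years 0 through 5
discount_rate = 0.10                # example discount rate

print(f"NPV: EUR {npv(discount_rate, cash_flows):,.0f}")
print(f"Simple payback: {simple_payback_years(initial_cost, annual_net_benefit):.1f} years")
print(f"5-year ROI: {(sum(cash_flows[1:]) - initial_cost) / initial_cost:.0%}")
```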

Pro Tips from AREA Best Practice:

  • Conduct time and motion studies for accurate baseline data
  • Involve end users throughout the process to ensure buy-in
  • Use pilot projects to validate assumptions and refine your model
  • Document both tangible and intangible benefits for a holistic view
  • Assign a “Champion” to follow through on implementation

For a visual walkthrough, check out the AREA’s Introduction to ROI Calculator and Use Cases video available on their website.

 




Why ROI is the Key to Unlocking Enterprise AR Adoption

Why ROI Matters

ROI is more than a financial metric. It’s a strategic tool that helps organizations:

  • Justify AR investments to leadership and budget holders
  • Prioritize projects with the highest impact
  • Track progress and optimize deployments over time

According to AREA research, acceptable minimum threshold rates of return on investment for IT infrastructure and software range from ten percent to fifteen percent depending on the industry. But AR’s value isn’t always obvious. Benefits like reduced downtime, faster training, and improved safety can be both tangible and intangible. That’s why a structured approach to ROI is essential.

“We learned early on to quantify everything – time saved, errors reduced – so that leadership understands AR is a real value driver.” – Paul Davies, Boeing – Technical Fellow, Immersive Technologies

The Challenge of Digital Transformation

The digital transformation of the enterprise through software-as-a-service-driven business models, agile development processes, and connected technologies like IoT and AR has created new opportunities but also new challenges for ROI analysis. These trends are impacting AR ROI in three important ways:

1. Significant upfront investment needed for “digital readiness”
2. Shorter payback timeframes due to operational expense models
3. Stronger focus on revenue-generating investments

The AREA ROI Calculator: Your Starting Point

The AREA ROI Calculator is a free, purpose-built tool for evaluating AR investments. Developed by TechInsights in partnership with the AREA, it guides you through identifying, quantifying, and comparing costs and benefits, so you can build a compelling business case. Whether you’re just starting or scaling up, it’s the foundation for any AR ROI conversation.

The calculator prompts users to enter real or estimated metrics in formulas that then calculate the ROI. It addresses the unique challenges of evaluating early-stage technologies like AR, where operating systems, data protocols, and formats are often proprietary compared to more standardized platforms.
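
As a simplified illustration of that flow, the sketch below shows the general shape of an ROI check against a hurdle rate. It is hypothetical and does not reflect the calculator’s actual fields or formulas.

```python
# Illustrative only: a basic ROI check against a hurdle rate, driven by user-entered
# estimates. This is a simplification, not the AREA ROI Calculator's actual formulas.

def roi(total_benefits: float, total_costs: float) -> float:
    """ROI as a fraction: (benefits - costs) / costs."""
    return (total_benefits - total_costs) / total_costs

total_costs = 500_000.0     # licenses, hardware, labor, integration (estimated)
total_benefits = 680_000.0  # productivity gains, cost reductions, churn impact (estimated)
hurdle_rate = 0.15          # upper end of the 10-15% threshold range noted above

projected = roi(total_benefits, total_costs)
verdict = "meets" if projected >= hurdle_rate else "falls below"
print(f"Projected ROI: {projected:.0%} ({verdict} the hurdle rate)")
```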

“Clear ROI can turn skeptics into AR champions overnight. For organizations, adoption hinges on proving measurable value – like cutting downtime by 50% – while also elevating the quality of work employees deliver. When both the bottom line and performance improve, AR stops being a novelty and becomes an indispensable tool.” – Brian Hamilton, DigiLens Inc. – Vice President, Sales & Marketing

 




The AREA Welcomes ShapesXR as a Member

The Augmented Reality for Enterprise Alliance (AREA) today announced that ShapesXR has joined the consortium.

ShapesXR is an enterprise-focused, collaborative design platform built to accelerate 3D prototyping and spatial design across organizations. Used by industry leaders such as Mayo Clinic, Mondelez, Chanel, and Microsoft, it enables cross-functional teams to rapidly ideate, iterate, and communicate spatial concepts, including VR training scenarios and AR remote assistance. By bridging the gap between design and development, ShapesXR helps enterprises reduce time-to-market, minimize costly misalignments, and align stakeholders more effectively around shared visions.

As part of our commitment to advancing enterprise AR, ShapesXR has joined the AREA. The AREA provides a highly curated network of AR experts, structured engagement opportunities through workshops and working groups, and a platform to exchange best practices in human-centered design and spatial computing. It also offers valuable visibility for our solutions among decision-makers and thought leaders, as well as access to a wide range of member-exclusive resources. Joining the AREA reinforces our focus on shaping the future of immersive collaboration for the enterprise sector.

“By joining the AREA, we aim to contribute to the advancement of enterprise AR by supporting the creation of high-quality spatial content. As a creative tool purpose-built for designing XR experiences, ShapesXR is committed to empowering teams to bring their ideas to life and shaping the standards for immersive collaboration across industries.”

“We are proud to announce ShapesXR as a member of the AREA,” said Mark Sage, executive director of AREA. “Their experience with enterprise-focused solutions for 3D prototyping and spatial design is an excellent addition to the AREA as we work on enterprise AR adoption.”

About ShapesXR

ShapesXR is an advanced, collaborative design platform that allows users to prototype products and experiences in 3D within minutes. Its core mission is to democratize 3D content creation, enabling designers, developers, and business stakeholders to ideate, prototype, and communicate in 3D, without requiring prior experience in game engines or coding. For more information, visit https://www.shapesxr.com/.

About the AR for Enterprise Alliance (AREA)

The AR for Enterprise Alliance (AREA) is the only global membership-funded alliance dedicated to accelerating the adoption of enterprise AR by supporting the growth of a comprehensive ecosystem of enterprises, providers, and research institutions. AREA is a program of Object Management Group® (OMG®). For more information, visit the AREA website.

Object Management Group and OMG are registered trademarks of the Object Management Group. For a listing of all OMG trademarks, visit https://www.omg.org/legal/tm_list.htm. All other trademarks are the property of their respective owners.




Beyond the Headset: Empowering Frontline Inspections with the Untapped Potential of Voice

Inspections form the bedrock of safe and efficient operations across vital industries like manufacturing, aviation, utilities, and logistics. Yet, all too often, these critical processes are weighed down by the inefficiencies of paper trails, the tedium of manual data entry, and the inconsistencies of human interpretation. As organizations embrace digital transformation, hands-free technologies like voice interfaces and augmented reality are emerging as powerful allies, promising significant gains in safety, accuracy, and overall productivity.

The Augmented Reality for Enterprise Alliance (AREA), a leading voice in the adoption of immersive technologies, recently shed light on this evolution with their report, “The Adoption of Real-Time AR-assisted Inspections for Quality and Compliance”. This insightful report highlights the transformative potential of AR headsets in frontline inspection scenarios. However, the fundamental needs identified within – hands-free access to information, real-time documentation capabilities, and intuitive guided workflows – resonate just as powerfully with the ruggedized mobile devices that remain the workhorse platform for a vast majority of frontline teams.

The Voice Advantage: Adding a Layer of Seamless Efficiency to Inspections

While AR offers a visually immersive experience, voice interfaces provide an equally compelling – and often more readily deployable – pathway to hands-free operation. Imagine inspection teams, whether clad in gloves or navigating complex machinery, interacting effortlessly with their devices simply by speaking. This capability translates directly into tangible benefits:

  • Enhanced Safety: By freeing up hands and eyes, voice interfaces allow workers to maintain complete situational awareness, crucial in potentially hazardous environments. No more fumbling with screens or looking down to input data.
  • Boosted Productivity: Voice commands streamline workflows, accelerate data capture, and provide instant access to information, significantly reducing the time spent on each inspection.
  • Improved Accuracy: Verbal confirmations and dictation minimize errors associated with manual data entry and ensure consistent adherence to protocols.
  • Reduced Cognitive Load: Voice-guided workflows and hands-free data input allow inspectors to focus on the task at hand, reducing mental fatigue and improving the quality of their observations.

Drawing inspiration from the AREA report, several compelling voice-enablement use cases emerge that directly address the evolving needs of industries where frontline workers often operate in challenging conditions. These applications aren’t confined to the realm of AR headsets; they seamlessly integrate with the ruggedized mobile devices already deployed across field service, logistics, and industrial landscapes. Voice interfaces provide a natural and intuitive way for workers to navigate complex procedures, capture critical data in real-time, and access essential information without ever breaking their stride or compromising their focus. And when powered by robust on-device speech recognition, these advantages are delivered with unwavering reliability and security, even in noisy, offline, or sensitive environments.

Unlocking Efficiency: Key Voice Use Cases for Inspection Workflows

  1. Voice-Driven Digital Checklists: Ensuring Safety and Compliance, Hands-Free
  • How it works: Replace cumbersome paper-based forms and error-prone on-screen taps with simple verbal confirmations. Voice prompts guide inspectors through each required step, demanding spoken verification to ensure thoroughness and adherence to regulations (see the sketch after this list).
  • Value: Dramatically enhances safety protocols, improves inspection accuracy by ensuring no steps are missed, accelerates the process, and reduces the mental burden on inspectors.
  2. Voice-Activated App Navigation and Control: Keeping Focus Where it Matters Most
  • How it works: Empower workers to navigate intricate inspection workflows or access digital Standard Operating Procedures (SOPs) using intuitive voice commands.
  • Value: Preserves critical focus and maintains complete situational awareness, significantly mitigating the risk of accidents, especially in potentially hazardous operational zones.
  3. Voice-Based Data Entry and Inspection Logging: Capturing Insights in Real-Time
  • How it works: Enable inspectors to verbally dictate their findings and observations and log inspection results directly while performing the task.
  • Value: Eliminates the redundant and time-consuming step of manual data entry, significantly improves the accuracy and immediacy of documentation, and creates a robust and easily auditable trail of inspections.
  4. Voice Search for Critical Reference Material: Knowledge at the Speed of Speech
  • How it works: Allow frontline teams to instantly retrieve essential manuals, detailed procedures, or historical inspection reports using simple voice commands.
  • Value: Minimizes operational downtime caused by searching for information and provides immediate access to the critical knowledge needed to make informed decisions in the field.
  5. Guided Procedures with Voice Interaction: Ensuring Consistency and Accelerating Training
  • How it works: Voice interfaces can provide clear, step-by-step verbal guidance through complex inspection workflows, acting as a virtual assistant.
  • Value: Significantly accelerates the onboarding process for new technicians and ensures consistent adherence to best practices across all inspections, regardless of experience level.
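
To make the first use case more concrete, here is a minimal, hypothetical Python sketch of a voice-driven checklist loop. The listen() and speak() helpers are placeholders standing in for an on-device speech recognizer and prompt output, not a real SDK API; in this sketch the recognizer is simulated with typed input so the example runs as-is.

```python
# Illustrative only: a minimal sketch of a voice-driven digital checklist loop.
# listen() and speak() are hypothetical placeholders for an on-device speech
# recognizer and a text-to-speech prompt; they do not represent a real SDK API.
import time

CHECKLIST = [
    "Confirm lockout-tagout is applied",
    "Inspect hydraulic lines for leaks",
    "Record pressure gauge reading",
]

def listen() -> str:
    """Placeholder for on-device speech recognition; simulated here with typed input."""
    return input("(simulated voice input) > ")

def speak(prompt: str) -> None:
    """Placeholder for text-to-speech or an on-screen prompt."""
    print(prompt)

def run_checklist() -> list[dict]:
    """Walk the inspector through each step, logging spoken confirmations or observations."""
    log = []
    for step in CHECKLIST:
        speak(f"Step: {step}. Say 'confirmed', or dictate an observation.")
        utterance = listen().strip().lower()
        log.append({"step": step, "response": utterance, "timestamp": time.time()})
    return log

if __name__ == "__main__":
    audit_trail = run_checklist()
    speak(f"Inspection complete. {len(audit_trail)} steps logged.")
```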

The Undeniable Value of On-Device Processing for Inspections

In the demanding environments where inspections take place, the advantages of on-device voice processing become even more pronounced. Eliminating the reliance on often unreliable or unavailable network connectivity in industrial facilities, remote sites, or high-security zones is paramount. On-device speech recognition guarantees immediate response times, keeps sensitive inspection data secure within the device, and reduces dependence on complex backend infrastructure. This translates directly into greater operational reliability, significantly lower latency, and a safer, more efficient experience for every frontline worker.

 

Keen Research provides the cutting-edge KeenASR SDK to voice-enable your frontline workflows. Contact us today at https://keenresearch.com or [email protected] to discuss your needs.




The Growing Irish Immersive Technology Sector

Note: This article is shared on behalf of a member company, EIRMERSIVE, and does not represent the work of the Augmented Reality for Enterprise Alliance (AREA).

The Irish immersive technology sector is emerging as a significant player on the international stage, with organizations generating over €92 million annually. According to the Irish Immersive Economy report 2022, the sector was valued at over €43 million. The global immersive technology market, which includes augmented reality (AR), virtual reality (VR), spatial computing, and mixed reality (MR), is currently valued at $65.5 billion and is projected to grow to $936.6 billion by 2030.

To capitalize on this growth potential, Cultural & Creative Industries Skillnet (CCIS) and Eirmersive have developed the Irish Immersive Technology Strategy for Growth (IITSG). This strategy aims to address the barriers to growth and provide strategic support to ensure Ireland’s place in the global market.

However, without immediate and sustained investment, Ireland risks falling behind other European countries that are actively investing in their immersive technology sectors, such as Finland with its “Finnish Metaverse Initiative”.

The IITSG was developed with input from a diverse range of stakeholders, including industry, government, large enterprises, SMEs, research, and education sectors. The strategy will be regularly updated to reflect ongoing developments in the field.

Read the full article here: Irish Immersive Technology Strategy for Growth




The evolution of delivering immersive media over 5G/Cloud

Guest blog from AREA member, Ericsson

This blog post introduces a white paper from Ericsson, an AREA Member. The full paper can be read here.

Introduction

With the availability of more Augmented Reality (AR) and Virtual Reality (VR) headsets, people are starting to experience more realistic and interactive immersive services. Thanks to the advanced technology embedded in headsets, devices are becoming more powerful, able to compute and render images of increasing resolution and quality. Yet the development of longer and more realistic experiences is progressing slowly, limited by battery consumption, device form factor, and heat dissipation constraints. Many service providers have started to deploy services in the cloud to address these issues. However, running the application in the cloud imposes additional challenges: latency, bandwidth, reliability, and availability of the service. 5G cloud architecture can overcome these issues with solutions that can be applied incrementally, each differently affecting the complexity of the application, but each improving the ultimate experience for the user. Additionally, the ultimate vision for 5G architecture as it applies to immersive experiences calls for new relationships among the ecosystem members – the consumer, communications service provider, hyperscale cloud provider, and developer/service provider.

This paper examines key aspects of launching an immersive service using 5G cloud infrastructure. It first reviews recent offerings and developments, then walks through a set of use cases, each exploiting progressively more offload to the cloud. We follow with a description of the 5G technologies that satisfy the use cases and, finally, reflect on the evolution of the stakeholders’ ecosystem in relation to the technical and commercial relationships needed to establish an immersive service using 5G.




Augment IT Breaks New Ground with Paraverse-Platform for Paraplegics and Prepares it for Apple Vision Pro

Augment IT, a leading international Extended Reality (XR) company, continues to develop the groundbreaking Paraverse platform for AR devices. The platform, which is specifically tailored to the needs of paraplegics, is already available for Magic Leap 2 and will be optimized for use with Apple Vision Pro.

Augment IT developed the Paraverse platform on the initiative of the Swiss Paraplegic Center (SPC) and tested it in close collaboration with the center. Over more than a year, the team gained valuable insight into what tetraplegics and paraplegics need and how those needs can best be met. People with spinal cord injuries are currently confined to their beds and require constant assistance, even to turn a page in a book.


The Paraverse platform provides users with barrier-free access to the digital world. This includes core functions such as making phone calls, reading messages, surfing the Internet, or continuing education and entertainment on video platforms – all controlled with the eyes.

In addition, high-resolution panoramic photos and videos, especially the Vision Pro’s 3D videos, allow users to relive memories in a whole new way – as if they’re part of the experience. With the new 3D Personas, users can connect with family and friends outside the clinical environment using FaceTime or Microsoft Teams and thus maintain relationships.

Apple Vision Pro’s precise eye tracking allows them to make decisions completely on their own. This greatly enhances their quality of life. Privacy is also protected because, unlike traditional screens, only the patient is able to see the content in the headset.

The feedback from the limited number of initial trials of the Apple Vision Pro has been very encouraging. Patients have been very enthusiastic, and we have gained valuable insights for further development this year.

The goal is to make the platform available to hospitals and specialty clinics worldwide. The software currently runs on the Magic Leap 2 and will soon be available for the Apple Vision Pro.

Luca Jelmoni, CEO of SPC, emphasizes the importance of the Paraverse platform for patients: “The possibilities to communicate with their loved ones, reflect on experiences, learn new things independently, or transform the space you see every day into a completely new world – these possibilities can transform and significantly enrich the lives of our patients.”

The Paraverse platform is more than just a technological innovation. It improves the quality of life and can be a great help in coping with everyday life, especially in the beginning. This enormous added value has been confirmed several times by patients in the early stages of the project. The product is now being continuously developed and will gradually be made available to other clinics.

Reto Grob, CEO of Augment IT, is especially pleased with the positive feedback: “It was clear to us that immersive technologies create entirely new user experiences in many application areas. Paraverse is an initiative close to our heart because the value for the user is enormous. We have already proven this with Magic Leap 2 – and now with the launch of Apple Vision Pro, we have another technically outstanding platform in our long-term plan to significantly improve the lives of people with disabilities.”

About Augment IT
Augment IT is a leading international company in the field of Extended Reality (XR) with a clear focus on industry, transportation, and healthcare. The company delivers innovative XR software solutions that create real value for its customers. With well-known customers such as ÖBB, Hilti, and Arxada, and offices in Switzerland, Germany, and North Macedonia, Augment IT is consolidating its position as an ambitious XR start-up.




Top 2024 Enterprise AR Trends To Watch

Christine Perey, Spime Wrangler, PEREY Research & Consulting

As we ease out of the first month of 2024, we are now fully engaged in the new year. In the past 30 days, I’ve had an opportunity to learn from my peers, such as Tom Emrich of Niantic (trend watches on his newsletter) and the co-chair of the AREA Research Committee, Samuel Neblett of Boeing, and to reflect on the projects in which I’m involved.

I’ve compressed my vague sense of hope and excitement down into a few enterprise AR trends I will be watching over the next 11 months. These are not predictions but significant areas of focus that I believe will drive innovation and the adoption of enterprise AR. I’m now officially keeping track of these trends to see where, how, and if they come about.

Please share these with your colleagues and your partners. Do you have evidence that either confirms or questions any of these trends in your companies? I hope you will share your evidence, feedback, and ideas with me at [email protected].

Artificial Intelligence

The convergence of AI and AR is the most significant and least surprising of the trends to watch in 2024. The signs are everywhere.

#1 Enterprises are beginning to internally test Generative AI (GenAI), including LLM lakes and private co-pilot solutions. Early adopters will increasingly combine these capabilities with AR tools. There are dozens of ways that the use of AI improves workflows and reduces the costs of enterprise AR. Well-positioned and programmed AI can extract relevant content from corporate data sets for visualization. Here are a few examples of where and how GenAI could boost AR:

Using Digital Twins as a baseline and AI for detecting and matching features in 3D environments (rare in 2023), we expect enterprises to expand their interest in and need for spatially-aware apps and services. For example, we will see a proliferation of AR-assisted Visual Positioning Services for navigation and risk detection based on 3D maps.

Combined with advances in hardware (see below), GenAI will permit the automatic generation of richer AR experiences for hundreds of use cases, including but not necessarily limited to 3D spatial maps. Multi-modal LLMs, an advanced type of AI that can understand and generate not just text but other types of data, such as images, audio, and possibly even video, are on the rise. These multi-modal AI models incorporate previously captured scenes into new instructions. They will detect sounds from the environment and predict risks or propose that the user respond in specific ways without being programmed or coded in advance.

#2 AI and computer vision advancements could address concerns over privacy in data collection and handling. Privacy and sensitivity to security risks from the use of cameras and other sensors in the workplace continue to be obstacles to large-scale AR deployments. With AI, real-time image and feature detection, blurring, and obfuscation methods can be combined with AR displays (or their associated services and software) with lower cost and power. Enterprise AR solutions for protecting the privacy of things, places, and people (AR device users and those around them) with AI in the loop will proliferate in response to the need for compliance with corporate privacy policies as well as national and international regulations.

 

Hardware

#3 Aside from a few roles (e.g., architects or those viewing medical imagery), knowledge workers don’t need to spend their time or money on large virtual screens (aka Apple Vision Pro). Video see-through isn’t a viable substitute for optical see-through in the workplace, where employee tasks require hands-free AR and peripheral vision. Video quality issues, including distortion, fixed camera IPD, high ISO, low dynamic range, low camera resolution, and low frame rate, are exceedingly difficult (think: high power use) to overcome. However, a lot of money will be invested, and marketing campaigns will make people try. Try though they will, the entire video see-through headset push will not make a significant dent in reducing the optical see-through requirement for enterprise AR displays. I’ve heard repeatedly that any risk manager who approves the use of video see-through XR displays in a production environment where risks are high is risking their employment.

#4 Smaller, more powerful, and less power-consuming sensors will be more economical to deploy and manage. In addition to the lower cost of implementation and management of IoT, more specialized semiconductor solutions, especially those specialized in computer vision but also for processing audio and motion, are increasingly being added to AR display devices. Imagine sensors on the device detecting the user’s need for corrective lenses and then generating the corrected version of the real world (enhanced with AR, of course) without the user’s being aware or needing to wear two pairs of glasses. The improvements in display capabilities, combined with cheaper hardware distributed in the user’s environment (think: intelligent spaces) and connected to AI in the display or on edge computing hardware, are making context awareness less expensive to acquire and more reliable. A deeper understanding of context translates to many of the other trends identified below.

#5 More companies will introduce lightweight, cheaper (and less capable) AR glasses to the market. Not all users need or want a full “computer” on their heads. There are more ways to add value than a helmet or a heavy and powerful wearable AR display. Some devices are offloading processing to tethered phones. Others offer wireless, monocular AR glasses to display only heads-up messages to users. We will also watch for the audio-only AR glasses segment to expand where voice prompts and AI-enabled audio responses satisfy the use case requirements.

UX

#6 New modes of interaction are beginning to complement/replace/displace the need for controllers and virtual keyboards. We are already starting to see more use of eye tracking, gaze, and natural gestures (e.g., pointing with better hand tracking) for inputs. Improvements in hand gesture tracking technologies will, in many cases, translate to lower cognitive loads and lower computational loads. Neural inputs using a headband or muscular signals via a wristband allow users to control all their digital devices using natural human interfaces. The user’s tongue might even become a source of input. Also, look out for brain sensing with EEG.

#7 Similarly to #6, due to new and different sensors in devices, there will be developments in how users receive/perceive the digital data in context in the workplace. In addition to animations, video clips, still images, and text, we will see rapid experimentation and exciting opportunities to use spatial audio and to provide just-in-time instructions and information to users using combinations with other wearables (e.g., watches and smart garments).

Infrastructure

#8 Private 5G networks, combined with 5G-compatible hardware and cloud and edge computing, will permit richer experiences without heavier or more power-hungry devices. While the verdict is still out on the cost-effectiveness of private 5G networks based on current implementations and use cases, they are gradually improving. There will be more 5G support in the next generation of AR displays. These core enabling technologies will lead to increased adoption of AR experience streaming and collaborative AR experiences.

#9 Security for AR experiences may be addressed in the network using improvements in off-device and automatic authentication of AR users and devices. Ensuring corporate cybersecurity is an enormous concern for all IT departments, and most AR devices are ill-equipped to meet all the requirements. Expertise in security risk reduction is not a core competency of most AR providers. Innovations that ensure strong corporate data protection and privacy, and that reduce exposure from intentional or inadvertent AR user actions, will come from network technology providers. They and their service provider customers have solutions emerging from research that will be tested in the near future.

Software

#10 Low-code/no-code will continue to gain traction with the assistance of AI. There are now dozens of low-code/no-code solutions available. The problem is figuring out which ones meet enterprise requirements, including but not limited to security concerns. While AI eats away at the need to manually code experiences, subject matter experts are becoming the authors of more and more custom experiences. The biggest winner from this trend will be medium-sized companies without the engineering resources to meet all their AR use case needs. With low-code/no-code options reaching greater maturity and ease of use, the need for dedicated and highly paid AR experience developers and tools with steep learning curves will diminish.

#11 Standards are increasingly relevant and, combined with the expanded support of open-source libraries, reduce the need to develop and maintain display-specific apps and content for delivering experiences across a range of AR devices. Although W3C WebXR continues to evolve slowly, the processing requirements for Web-based solutions are being increasingly met by the hardware in a broader range of AR display devices. The improvements in network infrastructure also make more edge processing possible. Using the Web to provide AR experience content is highly scalable and can be entirely deployed in a company’s Intranet. Khronos Group’s OpenXR is already widely adopted on AR hardware and, combined with support for glTF, is significantly simplifying the development of content creation platforms (fueling the no-code/low-code trend). We expect that other standards will be adopted for AR experiences.

#12 AR developers’ skill sets and tools will become more specialized, and the learning curves will become steeper. While AI and the adoption of standards simplify and accelerate the creation of AR experiences, they also introduce new risks. These are golden opportunities for specialization. AR developers and those with expertise in adjacent fields will increasingly have new offerings, such as deeper integrations with Learning Management Systems, Enterprise Resource Planning, and Product Lifecycle Management platforms. Editing of AR experience recordings to preserve knowledge and accelerate its transfer will combine AR expertise with AI tools.