
Calling all AR Startups: Now There’s an AREA Membership Just for You

Are you an AR startup that would like to join the AREA but has lacked the resources for a full Contributor membership? Now you can enjoy all the benefits of belonging to the AREA through our new Startup Membership.

The time-limited Startup membership offers you the full benefits of AREA Contributor membership:

  • Create awareness of your startup
  • Gain access to AREA thought leadership content
  • Attend AREA member events
  • Network with enterprises that are looking for AR solutions
  • Participate in AREA committees and help define the ecosystem
  • Get discounts to events negotiated by the AREA
  • Receive synopses of AREA research
  • Gain entry into the AREA marketplace (in development)
  • Contribute thought leadership content to the AREA blog
  • Get Contributor member voting rights
  • Be part of the only alliance focusing on AR in the Enterprise!

It’s a great way to develop your AR network and gain visibility with prospective enterprise customers. You get all this for $1500 per year – that’s $3500 less than the lowest annual fee for Contributor membership.

AREA Startup membership is limited to organizations that meet the following criteria:

  • Your total annual revenue is under $1 million.
  • Your staff size is 10 or fewer full-time and/or freelance employees.
  • Your organization has been trading for less than three years.

The AREA Startup membership package is only available for a two-year period. After the two years have elapsed, your company must choose a Contributor or Sponsor membership to continue as an AREA member.

Click here to take advantage of this exciting offer.




Global Smart Glass Market 2014-2021

The following is a summary of a report by DecisionDatabases.com, titled “The Global Smart Glass Market Research Report – Industry Analysis, Size, Share, Growth, Trends and Forecast”. The report provides value chain analysis, market attractiveness analysis, and company share analysis, along with complete profiles of the key players.

Information about smart glass:

Smart glass, also known as switchable glass, is glass that can alter its light transmission properties when voltage, light, or heat is applied. It is used in windows, skylights, doors, and partitions, and its range has extended into automotive, aircraft, and marine applications.

The smart glass market is segmented by type into architectural, electronics, solar power generation, and transportation, with the architectural segment being the largest.

According to this report, the market is expected to grow at a significant rate over the next few years. The major sectors driving this increase are architectural and transportation; however, the report states that energy-efficient building technologies will also contribute to the growth.

Some key facts about this Market Report:

  • The electronics segment is expected to be a promising market, owing to innovation and research aimed at producing highly advanced devices such as digital eyeglasses and screens.
  • Certain factors are inhibiting the growth of the global smart glass market: costs comparable to those of substitutes and a lack of awareness of its benefits.
  • North America accounts for the largest share of the global smart glass market.
  • The European market is expected to overtake the North American smart glass market during the forecast period, driven by increasing demand for large, advanced windows in residential and commercial buildings.
  • The market is further segmented into the regions of Latin America, Asia-Pacific, Western Europe, Eastern Europe, and the Middle East & Africa.

 




Mixed Reality: Just One Click Away

Author: Aviad Almagor, Director of the Mixed Reality Program, Trimble, Inc.

Though best known for GPS technology, Trimble is a company that integrates a wide range of positioning technologies with application software, wireless communications, and services to provide complete commercial solutions. In recent years, Trimble has expanded its business in building information modeling, architecture and construction, particularly since the company’s 2012 acquisition of SketchUp 3D modeling software from Google. Mixed Reality is becoming a growing component of that business. This guest blog post by Trimble’s Aviad Almagor discusses how Trimble is delivering mixed reality solutions to its customers.

Many industries – from aerospace to architecture/engineering/construction (AEC) to mining – work almost entirely in a 3D digital environment. They harness 3D CAD packages to improve communication, performance, and the quality of their work. Their use of 3D models spans the full project lifecycle, from ideation to conceptual design and on to marketing, production, and maintenance.

Take AEC, for example. Architects design and communicate in 3D. Engineers design buildings’ structures and systems in 3D. Owners use 3D for marketing and sales. Facility managers use 3D for operation and maintenance.

And yet, we still consume digital content the same way we have for the last 50 years: behind a 2D screen. For people working in a 3D world, the display technology has become a limiting factor. Most users of 3D content have been unable to visualize the content their jobs depend on in full 3D in the real world.

However, mixed reality promises to change that. Mixed reality brings digital content into the real world and supports “real 3D” visualization.

The challenge

There are several reasons why mixed-reality 3D visualization has not yet become an everyday reality. Two of the primary reasons are the user experience and the processing requirements.

For any solution to work, it needs to let engineers, architects, and designers focus on their core expertise and tasks, following their existing workflow. Any technology that requires a heavy investment in training or major changes to the existing workflow faces an uphill battle.

Meanwhile, 3D models have become increasingly detailed and complex. It is a significant challenge – even for beefy desktop workstations – to process large models and support visualization in 60fps.

One way around that problem is to use coding and specialized applications and workflows, but that approach is only acceptable to early adopters and innovation teams within large organizations – not the majority of everyday users.

To support real projects and daily activities – and be adopted by project engineers — mixed reality needs to be easily and fully integrated into the workflow. At Trimble, we call this “one-click mixed reality” – getting data condensed into a form headsets can handle, while requiring as little effort from users as possible.

Making one-click mixed reality possible

The lure of “one-click” solutions is strong. Amazon has its one-click ordering. Many software products can be downloaded and installed with a single click. The idea of one-click mixed reality is to bring that ease and power to 3D visualization.

Delivering one-click mixed reality requires a solution that extends the capabilities of existing tools by adding mixed reality functionality without changing the current workflow. It must be a solution that requires little or no training. And any heavy processing that’s required should be done in the background. From a technical standpoint, that means any model optimization, including polycount reduction, occlusion culling, and texture handling, is performed automatically, without manual, time-consuming specialized processes.
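In highly simplified form, that kind of background optimization pipeline might look like the sketch below. All names and thresholds here are hypothetical illustrations, not Trimble APIs; a real pipeline operates on actual mesh and texture data rather than summary statistics:

```python
from dataclasses import dataclass, replace

# Illustrative sketch of an automated "one-click" optimization pipeline.
# Polycount and texture budgets are invented for illustration.

@dataclass(frozen=True)
class Model:
    polycount: int    # number of triangles in the mesh
    texture_px: int   # texture resolution (square, in pixels)

def decimate(model: Model, target_polys: int) -> Model:
    """Reduce polygon count toward what a headset can render at 60 fps."""
    return replace(model, polycount=min(model.polycount, target_polys))

def downscale_textures(model: Model, max_px: int) -> Model:
    """Clamp texture resolution to the device's memory budget."""
    return replace(model, texture_px=min(model.texture_px, max_px))

def optimize_for_headset(model: Model) -> Model:
    """Run every step automatically -- the user just clicks 'publish'."""
    model = decimate(model, target_polys=100_000)
    model = downscale_textures(model, max_px=2048)
    return model

detailed = Model(polycount=5_000_000, texture_px=8192)
ready = optimize_for_headset(detailed)
print(ready)  # Model(polycount=100000, texture_px=2048)
```

The point of the sketch is the shape of the workflow: each step is deterministic and chained, so the whole sequence can run unattended after a single user action.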

At Trimble, we’re working to deliver one-click mixed reality by building on top of existing solutions. Take SketchUp for example, one of the most popular 3D packages in the world. We want to make it possible for users to design a 3D model in SketchUp, click to publish it, and instantly be able to visualize and share their work in mixed reality.

We’re making sure that we support users’ existing workflow in the mixed reality environment. For example, we want to enable users to use scenes from SketchUp, maintain layer control, and collaborate with other project stakeholders in the way they’re accustomed.

And we’re taking it one step further by making it possible to consume models directly from SketchUp or from cloud-based environments, such as SketchUp 3D Warehouse or Trimble Connect. This will eliminate the need to install SketchUp on the user’s device in order to visualize the content in mixed reality. As a next step, we are exploring with our pilot customers a cloud-based pre-processing solution which will optimize models for 3D visualization.

We’re making good progress. For example, in his Packard Plant project (which was selected to represent the US at the Venice Architecture Biennale), architect Greg Lynn used SketchUp and SketchUp Viewer for Microsoft HoloLens to explore and communicate his design ideas. In this complex project, a pre-processing solution was required to support mixed reality visualization.

“Mixed reality bridges the gap between the digital and the physical. Using this technology I can make decisions at the moment of inception, shorten design cycles, and improve communication with my clients.”

– Architect Greg Lynn

One-click mixed reality is coming to fruition. For project teams, that means having the ability to embed mixed reality as part of their daily workflow. This will enable users to become immediately productive with the technology, gain a richer and more complete visualization of their projects, and build on their existing processes and tools.

The advent of one-click mixed reality indicates that the world of AR/VR is rapidly approaching the time when processing requirements, latency, and user experience issues will no longer be barriers.

Aviad Almagor is Director of the Mixed Reality Program at Trimble, Inc.




AREA Members Featured in IndustryWeek Article on AR in Manufacturing

AREA members Newport News Shipbuilding (NNS), DAQRI, and Upskill and AREA Executive Director Mark Sage are featured in an article on AR at IndustryWeek, the long-running manufacturing industry publication. The article explores the state of AR adoption in manufacturing, weaving in the experiences and insights of NNS’ Patrick Ryan, DAQRI’s Matt Kammerait, and Upskill’s Jay Kim, along with observations from executives of GE Digital and Plex Systems. Find the article here.




The 1st AREA Ecosystem Survey is Here!

The Augmented Reality (AR) marketplace is evolving so rapidly, it’s a challenge to gauge the current state of market education, enterprise adoption, provider investment, and more. What are the greatest barriers to growth? How quickly are companies taking pilots into production? Where should the industry be focusing its efforts? To answer these and other questions and create a baseline to measure trends and momentum, we at the AREA are pleased to announce the launch of our first annual ecosystem survey.

Please click here to take the survey. It won’t take more than five minutes to complete. Submissions will be accepted through February 8, 2017. We’ll compile the responses and share the results as soon as they’re available.

Make sure your thoughts and observations are captured so our survey will be as comprehensive and meaningful as possible. Thank you!




GE’s Sam Murley Scopes Out the State of AR and What’s Next

General Electric (GE) has made a major commitment to Augmented Reality. The industrial giant recently announced that it plans to roll out AR in three business divisions in 2017 to help workers assemble complex machinery components. In his role leading Innovation and Digital Acceleration for Environmental Health & Safety at General Electric, Sam Murley is charged with “leading, generating and executing digital innovation projects to disrupt and streamline operations across all of GE’s business units.” To that end, Sam Murley evangelizes and deploys immersive technologies and digital tools, including Augmented Reality, Virtual Reality, Artificial Intelligence, Unmanned Aerial Vehicles, Natural Language Processing, and Machine Learning.

As the first in a series of interviews with AREA members and other ecosystem influencers, we recently spoke with Sam to get his thoughts on the state of AR, its adoption at GE, and his advice for AR novices.

AREA: How would you describe the opportunity for Augmented Reality in 2017?

SAM MURLEY: I think it’s huge — almost unprecedented — and I believe the tipping point will happen sometime this year. This tipping point has been primed over the past 12 to 18 months with large investments in new startups, successful pilots in the enterprise, and increasing business opportunities for providers and integrators of Augmented Reality.

During this time, we have witnessed examples of proven implementations – small-scale pilots, larger-scale pilots, and companies rolling out AR in production — and we should expect this to continue to increase in 2017. You can also expect to see continued growth of assisted reality devices, scalable for industrial use cases in manufacturing and services, as well as new adoption of mixed reality and augmented reality devices, spatially aware and consumer focused, for automotive, retail, gaming, and education use cases. We’ll see new software providers emerge, existing companies take the lead, key improvements in smart eyewear optics and usability, and probably a few strategic partnerships form.

AREA: Since it is going to be, in your estimation, a big year, a lot of things have to fall into place. What do you think are the greatest challenges for the Augmented Reality industry in 2017?

SAM MURLEY: While it’s getting better, one challenge is interoperability and moving from proprietary and closed systems into connected systems and open frameworks. This is really important. All players — big, medium and small — need to work towards creating a connected AR ecosystem and democratize authoring and analytical tools around their technology. A tool I really like and promote is Unity3D as it has pretty quickly become the standard for AR/VR development and the environment for deployment of AR applications to dozens of different operating systems and devices.

It’s also important that we find more efficient ways to connect to existing 3D assets that are readily available, but too heavy to use organically for AR experiences. CAD files that are in the millions of polygons need some finessing before they can be imported and deployed as an Augmented Reality object or hologram. Today, a lot of texturing and reconstruction has to be performed to keep the visual integrity intact without losing the engineering accuracy. Hopefully companies such as Vuforia (an AREA member) will continue to improve this pipeline.

For practical and wide-scale deployment in an enterprise like GE, smart glasses need to be intrinsically safe, safety rated, and out-of-the box ready for outdoor use. Programmatically, IT admins and deployment teams need the ability to manage smart glasses as they would any other employee asset such as a computer or work phone.

AREA: GE seems to have been a more vocal, public proponent of Augmented Reality than a lot of other companies. With that level of commitment, what do you hope to have accomplished with Augmented Reality at GE within the next year? Are there certain goals that you’ve set or milestones you hope to achieve?

SAM MURLEY: Definitely. Within GE Corporate Environmental Health & Safety we have plans to scale AR pilots that have proven to be valuable to a broader user base and eventually into production.

Jeff Immelt, our Chairman and CEO, in a recent interview with Microsoft’s CEO Satya Nadella, talked specifically about the use of Microsoft HoloLens in the enterprise. He put it perfectly, “If we can increase productivity by one percent across the board, that’s a no brainer.” It’s all about scaling to increase productivity, scaling to reduce injuries, and scaling based on user feedback. In 2017, we will continue to transform our legacy processes and create new opportunities using AR to improve worker performance and increase safety.

AREA: Do you have visibility into all the different AR pilots or programs that are going on at GE?

SAM MURLEY: We’re actively investigating Augmented Reality and other sister technologies, in partnership with our ecosystem partners and the GE Businesses. Look, everyone knows GE has a huge global footprint and part of the reward is finding and working with other GE teams such as GE Digital, our Global Research Centers, and EHS Leaders in the business units where AR goals align with operational goals and GE’s Digital Industrial strategy.

At the 2016 GE Minds + Machines conference, our Vice President of GE Software Research, Colin Parris, showed off how the Microsoft HoloLens could help the company “talk” to machines and service malfunctioning equipment. It was a perfect example of how Augmented Reality will change the future of work, giving our customers the ability to talk directly to a Digital Twin — a virtual model of that physical asset — ask it questions about recent performance, anomalies, and potential issues, and receive answers back in natural language. We will see Digital Twins of many assets, from jet engines to compressors. Digital Twins are powerful – they allow tweaking and changing aspects of your asset in order to see how it will perform, prior to deploying in the field. GE’s Predix, the operating system for the Industrial Internet, makes this cutting-edge methodology possible. “What you saw was an example of the human mind working with the mind of a machine,” said Parris. With Augmented Reality, we are able to empower the workforce with tools that increase productivity, reduce downtime, and tap into the Digital Thread and Predix. With Artificial Intelligence and Machine Learning, Augmented Reality quickly allows language to be the next interface between the Connected Workforce and the Internet of Things (IoT). No keyboard or screen needed.

However, we aren’t completely removing mobile devices and tablets from the AR equation in the short term. Smart glasses still have some growing and maturing to do. From a hardware adoption perspective, smart glasses are very new – it’s a new interface, a new form factor and the workforce is accustomed to phones, tablets, and touch screen devices. Mobile and tablet devices are widely deployed in enterprise organizations already, so part of our strategy is to deploy smart eyewear only when absolutely needed or required and piggyback on existing hardware when we can for our AR projects.

So, there is a lot going on and a lot of interest in developing and deploying AR projects in 2017 and beyond.

AREA: A big part of your job is navigating that process of turning a cool idea into a viable business model. That’s been a challenge in the AR world because of the difficulty of measuring ROI in such a new field. How have you navigated that at GE?

SAM MURLEY: That’s a good question. To start, we always talk about and promote the hands-free aspects of using AR when paired with smart glasses to access and create information. AR in general though, is a productivity driver. If, during a half-hour operation or maintenance task out in the field, we can save a worker just a few minutes, save them from having to stop what they’re doing, go back to their work vehicle, search for the right manual, find the schematic only to realize it’s out of date, and then make a phone call to try and solve a problem or get a question answered, an AR solution can pay for itself quickly as all of that abstraction is removed. We can digitize all of that with the Digital Twin and supply the workforce with a comfortable, hands-free format that also keeps them safe from equipment hazards, environmental hazards, and engaged with the task at hand.

Usability is key though – probably the last missing piece in all of this – to the tipping point. Our workforce is so accustomed and trained to use traditional devices – phones, tablets, workstations, etc. Introducing smart glasses needs to be handled with care and with an end-user focus. The best AR device will be one that requires little to no learning curve.

It is important to run a working session at the very start. Grab a few different glasses if you can and let your end users put them on and listen to their feedback. You need to baseline your project charter with pre-AR performance metrics and then create your key performance indicators.

AREA: At a company like GE, you’ve got the size and the resources to be able to explore these things. What about smaller companies?

SAM MURLEY: That’s definitely true. I hope we see some progress and maturation in the AR ecosystem so everyone can benefit – small companies, large organizations, and consumers. The cost of hardware has been a challenge for everyone. Microsoft came out with the HoloLens and then announced a couple of months later that their holographic engine in the system was going to be opened to OEMs. You could have an OEM come in and say, maybe I don’t need everything that’s packed in the HoloLens, but I still want to use the spatial sensing. That OEM can potentially build out something more focused on a specific application for a fraction of the cost. That’s going to be a game changer because, while bigger companies can absorb high-risk operations and high-risk trials, small to medium size companies cannot and may take a big hit if it doesn’t work or rollout is slow.

Hopefully we’ll see some of the prices drop in 2017 so that the level of risk is reduced.

AREA: Can you tell us about any of the more futuristic applications of AR that you’re exploring at GE?

SAM MURLEY: The HoloLens demo at Minds + Machines mentioned earlier is a futuristic but not-that-far-off view of how humans will interact with data and machines. You can take it beyond that, into your household. Whether it’s something you wear or something like the Amazon Echo sitting on your counter, you will have the ability to talk to the things around you as if you were carrying on a conversation with another person. Beyond that, we can expect that things such as refrigerators, washing machines, and lights in our houses will be powered by artificial intelligence and have embedded holographic projection capabilities.

The whole concept around digital teleportation or Shared Reality is interesting. Meron Gribetz, Meta’s CEO, showcased this on stage during his 2016 TEDx – A Glimpse of the Future Through an Augmented Reality Headset. During the presentation, he made a 3D photorealistic call to his co-founder, Ray. Ray passed a digital 3D model of the human brain to Meron as if they were standing right next to each other even though they were physically located a thousand miles apart.

That’s pretty powerful. This type of digital teleportation has the potential to change the way people collaborate, communicate, and transfer knowledge amongst each other. Imagine a worker being out in the field and he or she encounters a problem. What do they do today? They pick up their mobile device and call an expert or send an email. The digital communication stack of tomorrow won’t involve phones or 2D screens, rather holographic calls in full spatial, photorealistic, 3D.

This is really going to change a lot of, not only heavy industrial training or service applications, but also applications well beyond the enterprise over the next few decades.

AREA: One final question. People are turning to the AREA as a resource to learn about AR and to figure out what their next steps ought to be. Based on your experience at GE, do you have any advice for companies that are just embarking on this journey?

SAM MURLEY: Focus on controlled and small scale AR projects to start as pilot engagements. Really sharpen the pencil on your use case and pick one performance metric to measure and go after it. Tell the story, from the start to the end about how and what digital transformation can and will do when pitching to stakeholders and governing bodies.

My other recommendation is to leverage organizations like the AREA. The knowledge base within the AREA organization and the content that you push out on almost a daily basis is really good information. If I were just dipping my toe in the space, those are the types of things that I would be reading and would recommend other folks dig into as well. It’s a really great resource.

To sum up: stay focused with your first trial, determine what hardware is years away from real-world use and what is ready today, find early adopters willing to partner in your organization, measure effectiveness with insightful metrics and actionable analytics, reach out to industry experts for guidance, and don’t be afraid to fail.




The AR Market in 2017, Part 4: Enterprise Content is Not Ready for AR

Previous: Part 3: Augmented Reality Software is Here to Stay

 

As I discussed in a LinkedIn Pulse post about AR apps, we cannot expect users to run a different app for each real world target they want to use with AR or one monolithic AR application for everything in the physical world. It is unscalable (i.e., far too time-consuming and costly). It’s unclear precisely when, but I’m confident that we will, one day, rely on systems that make content ready for AR presentation as a natural result of digital design processes.

The procedures or tools for automatically converting documentation or any digital content into AR experiences for enterprise use cases are not available. Nor will they emerge in the next 12 to 18 months. To begin the journey, companies must develop a path that leads from current procedures that are completely separate from AR presentation to the ideal processes for continuous AR delivery.

Leaders need to collaborate with stakeholders to focus on areas where AR can make a difference quickly.

Boiling the Ocean

There are hundreds of AR use cases in every business. All AR project managers should maintain a catalog of possible use cases. Developing a catalog of use cases begins with identification of challenges that are facing a business. As simple as this sounds, revealing challenges increases exposure and reduces confidence in existing people and systems. Most of the data for this process is buried or burned before it escapes. Without data to support the size and type of challenges in a business unit, the AR advocate is shooting in the dark. The risk of not focusing on the best use case and challenges is too high.

There need to be initiatives to help AR project managers and engineers focus on the problems most likely to be addressed with AR. Organizational change management would be a likely group to drive such initiatives once these managers are, themselves, trained to identify the challenges best suited for AR.

In 2017, I expect that some systems integration and IT consulting companies will begin to offer programs that take a methodical approach through the AR use case development process, as part of their services to clients.

Prioritization is Key

How do stakeholders in a company agree on the highest priority content to become AR experiences for their top use cases? It depends. On the one hand there must be consistent AR technology maturity monitoring and, in parallel, the use case requirements need to be carefully defined.

To choose the best use case, priorities need to be defined. If users perceive a strong need for AR, that should weigh heavily. If content for use in the AR experience is already available, then the costs and time required to get started will be lower.

A simple method of evaluating the requirements appears below. Each company needs to define their own priorities based on internal drivers and constraints.


A simple process for prioritizing AR use cases (Source: PEREY Research & Consulting).
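One way to make such a prioritization concrete is a weighted scoring sheet. The sketch below is purely illustrative – the criteria, weights, and example use cases are invented, and, as noted above, each company must define its own priorities:

```python
# Illustrative weighted-scoring sketch for prioritizing AR use cases.
# Criteria and weights are hypothetical; each company defines its own.

WEIGHTS = {
    "user_need": 0.4,      # a strong perceived need for AR weighs heavily
    "content_ready": 0.3,  # existing content lowers cost and startup time
    "tech_maturity": 0.3,  # required AR technology is mature enough today
}

def score(use_case: dict) -> float:
    """Weighted sum of 1-5 ratings for each criterion."""
    return sum(WEIGHTS[c] * use_case[c] for c in WEIGHTS)

use_cases = [
    {"name": "guided assembly",   "user_need": 5, "content_ready": 4, "tech_maturity": 3},
    {"name": "remote inspection", "user_need": 3, "content_ready": 2, "tech_maturity": 4},
]

ranked = sorted(use_cases, key=score, reverse=True)
for uc in ranked:
    print(f'{uc["name"]}: {score(uc):.1f}')
```

Ranking candidate use cases this way forces stakeholders to make their drivers and constraints explicit before any pilot begins.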

Markers Won’t Stick

One of the current trends in enterprise AR is to use markers as the target for AR experiences. Using computer vision with markers shows the user where to point their device and focus their attention, consumes less power, and can be more robust in real-world conditions than 3D tracking technologies.

However, for many enterprise objects that are subject to sun, wind and water, markers are not a strategy that will work outside the laboratory. Those companies that plan to use AR with real-world targets that can’t have markers attached need to begin developing a new content type: trackables using natural features.

In 2017 more enterprise AR project managers will be asking for SDKs and tools to recognize and track the physical world without markers. For most, the technologies they will test will not meet their requirements. If well managed, the results of testing in 2017 will improve the SDKs as suggested in our post about AR software.

The AR Ecosystem and Technology are Immature

While the title of this post suggests that enterprise content is not yet in the formats, or associated with the metadata, needed to make AR experiences commonplace, the reverse statement is also true: not all the required AR components are ready for enterprise introduction.

Projects I’ve been involved with in 2016 have shown that while there are a few very solid technologies (e.g., tracking with markers on print), most components of AR solutions with which we are working are still very immature. The hardware for hands-free AR presentation is one area that’s changing very rapidly. The software for enterprise AR experience authoring is another. As more investments are made, improvements in the technology components will come, but let’s be clear: 2017 will not be the year when enterprise AR goes mainstream.

For those who have seen the results of one or two good proofs of concept, there will be many people who will need your help to be educated about AR. One of the important steps in that education process is to participate in the activities of the AREA and to share with others in your company or industry how AR could improve workplace performance.

When your team is ready to introduce AR, call in your change management group. You will need all the support you can get to bring the parts of this puzzle together in a successful AR introduction project!

Do you have some predictions about what 2017 will bring enterprise AR? Please share those with us in the comments to this post. 




The AR Market in 2017, Part 3: Augmented Reality Software is Here to Stay

Previous: Part 2: Shiny Objects Attract Attention

 

There are some who advocate for integrating AR directly and deeply into enterprise content management and delivery systems in order to leverage the IT systems already in place. Integration of AR features into existing IT reduces the need for a separate technology silo for AR. I fully support this school of software architecture. But, we are far from having the tools for enterprise integration today. Before this will be possible, IT groups must learn to manage software with which they are currently unfamiliar.

An AR Software Framework

Generating and presenting AR to users requires combining hardware, software and content. Software for AR serves three purposes:

  1. To extract the features, recognize, track and “store” (manage and retrieve the data for) the unique attributes of people, places and things in the real world;
  2. To “author” interactions between the human, the digital world and real world targets found in the user’s proximity, and publish the runtime executable code that presents AR experiences; and
  3. To present the experience to, and manage the interactions with, the user while recognizing and tracking the real world.
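As an illustration only – these interfaces are hypothetical and not drawn from any real SDK – the three purposes above might be sketched as three separate contracts:

```python
from abc import ABC, abstractmethod

# Hypothetical interfaces illustrating the three purposes of AR software.

class RecognitionAndTracking(ABC):
    """Purpose 1: extract, recognize, track, and store real-world features."""
    @abstractmethod
    def extract_features(self, sensor_frame): ...

    @abstractmethod
    def recognize(self, features): ...

    @abstractmethod
    def track(self, target): ...

class Authoring(ABC):
    """Purpose 2: author interactions and publish runtime experiences."""
    @abstractmethod
    def author_interaction(self, target, content): ...

    @abstractmethod
    def publish(self, experience): ...

class Presentation(ABC):
    """Purpose 3: present the experience and manage user interaction."""
    @abstractmethod
    def render(self, experience, pose): ...

    @abstractmethod
    def handle_input(self, user_event): ...

# A complete AR platform covers all three; a viewer app might implement
# only Presentation while relying on an SDK for RecognitionAndTracking.
```

Separating the three contracts makes the point of the framework concrete: an application is a consumer of these capabilities, not a fourth category alongside them.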

We will see changes in all three of these segments of AR software in 2017.

Wait, applications are software, aren’t they? Why aren’t they on the list? Before reading further about the AR software trends I’m seeing, I recommend you read a post on LinkedIn Pulse in which I explain why the list above does not include thousands of AR applications.

Is it an AR SDK?

Unfortunately, there is very little consistency in how AR professionals refer to the three types of software in the framework above, so some definitions are in order. A lot of professionals just refer to everything having to do with AR as SDKs (Software Development Kits).

In my framework AR SDKs are tools with which developers create or improve required or optional components of AR experiences. They are used in all three of the purposes above. If the required and optional components of AR experiences are not familiar to you, I recommend reviewing the post mentioned above for a glimpse of (or watching this webinar for a full introduction to) the Mixed and Augmented Reality Reference Model.

Any software that extracts features of the physical world in a manner that captures the unique attributes of the target object or that recognizes and tracks those unique features in real time is an AR SDK. Examples include PTC Vuforia SDK, ARToolkit (Open Source SDK), Catchoom CraftAR SDK, Inglobe ARmedia, Wikitude SDK and SightPath’s EasyAR SDK. Some AR SDKs do significantly more, but that’s not the topic of this post.
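To make "recognize unique features" concrete, here is a toy, SDK-agnostic sketch of one common building block: matching binary feature descriptors (the kind ORB-style detectors produce) by Hamming distance. The function names are invented for illustration, and real SDKs add ratio tests, geometric verification and much more.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match(scene: list, stored: list, max_dist: int = 2) -> list:
    """Greedy nearest-neighbor matching of scene descriptors to stored ones.

    Returns (scene_index, stored_index) pairs whose best match is within
    max_dist bits; anything farther is treated as "not recognized".
    """
    pairs = []
    for i, s in enumerate(scene):
        dists = [hamming(s, t) for t in stored]
        best = min(range(len(stored)), key=dists.__getitem__)
        if dists[best] <= max_dist:
            pairs.append((i, best))
    return pairs

# Descriptors are short integers here; real ones are 256-bit strings.
stored = [0b10110010, 0b01001101]
scene = [0b10110011, 0b11111111]   # first differs by 1 bit from stored[0]
print(match(scene, stored))        # [(0, 0)]
```

The second scene descriptor is 4 bits away from both stored descriptors, so it is correctly rejected; tuning that threshold against lighting, blur and viewpoint changes is a large part of what separates the commercial SDKs listed above.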

Regardless of what it’s called, the technology to recognize and track real world targets is fundamental to Augmented Reality. We must have some breakthroughs in this area if we are to deliver the benefits AR has the potential to offer enterprises.

There are promising developments in the field and I am hopeful that these will be more evident in 2017. Each year the AR research community meets at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) and there are always exciting papers focused on tracking. At ISMAR 2016, scientists at Zhejiang University presented their Robust Keyframe-based Monocular SLAM. It appears much more tolerant of fast motion and strong rotation, which we can expect to see more frequently when people who are untrained in the challenges of visual tracking use wearable AR displays such as smart glasses.

In another ISMAR paper, a group at the German Research Center for Artificial Intelligence (DFKI) published that they have used advanced sensor fusion employing a deep learning method to improve visual-inertial pose tracking. While using acceleration and angular velocity measurements from inertial sensors to improve visual tracking has shown promise for years, we have yet to see these benefits materialize in commercial SDKs.
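The basic idea behind visual-inertial fusion can be shown with the simplest possible case: a complementary filter on one rotation axis. This toy model (far simpler than the deep-learning fusion in the DFKI paper, and with invented parameter values) illustrates why the combination works: the gyro is fast but drifts, while the visual tracker is slower but absolute.

```python
def complementary_filter(angle, gyro_rate, visual_angle, dt, alpha=0.98):
    """Fuse a fast-but-drifting gyro with a slow-but-absolute visual estimate.

    The integrated gyro reading dominates short-term (alpha near 1), while
    the visual tracker's absolute angle slowly corrects accumulated drift.
    """
    gyro_estimate = angle + gyro_rate * dt
    return alpha * gyro_estimate + (1 - alpha) * visual_angle

# A gyro biased 1.0 deg/s high would drift to 5.0 deg over 5 seconds if
# integrated alone; the visual angle (ground truth: 0) keeps pulling the
# fused estimate back, so it settles near 0.49 deg instead.
angle = 0.0
for _ in range(500):  # 5 seconds at 100 Hz
    angle = complementary_filter(angle, gyro_rate=1.0, visual_angle=0.0, dt=0.01)
print(round(angle, 2))  # 0.49
```

Production systems replace this fixed blend with Kalman filters or learned models, but the division of labor between sensors is the same.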

As with any software, the choice of an AR SDK should be based on project requirements, but in practical terms the factors most important to developers today appear to be a combination of time to market and support for Unity. I hope that, through technology transfer from projects like those presented at ISMAR 2016, improved sensor fusion can be implemented in commercial solutions (in the OS or at the hardware level) in 2017.

Unity Dominates Today

A growing number of developers are learning to author AR experiences. Many developers find the Unity 3D game development environment highly flexible and the rich ecosystem of developers valuable. But, there are other options worthy of careful consideration. In early 2016 I identified over 25 publishers of software for enterprise AR authoring, publishing and integration. For an overview of the options, I invite you to read the AREA blog post “When a Developer Needs to Author AR Experiences.”

Products in the AR authoring group are going to slowly mature and improve. With a few mergers and acquisitions (and some complete failures), the number of choices will decline and I believe that by the end of 2017, fewer than 10 will have virtually all the market share.

By 2020 there will be a few open source solutions for general-purpose AR authoring, similar to what is available now for authoring Web content. In parallel with the general purpose options, there will emerge excellent AR authoring platforms optimized for specific industries and use cases.

Keeping Options for Presenting AR Experiences Open

Today the authoring environment defines the syntax for the presentation, so the user has little alternative but to install and run the AR execution engine published by the authoring environment's provider.

I hope that we will see a return of the browser model (or the emergence of new Web apps) so that it will be possible to separate the content for experiences from the AR presentation software. To achieve this separation and lower the overhead for developers to maintain dozens of native AR apps, there needs to be consensus on formats, metadata and workflows.
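The separation argued for here resembles how the Web works: content in an agreed format, rendered by any compliant browser. As a hedged sketch (the JSON format below is invented for illustration, not a proposed standard), an AR experience could be pure declarative data that any presentation engine interprets:

```python
import json

# Hypothetical declarative AR experience: pure content, no runtime code.
# Any presentation engine agreeing on this (invented) format could render it.
experience_doc = json.dumps({
    "target": {"type": "image", "id": "pump-7-faceplate"},
    "overlays": [
        {"kind": "text", "value": "Torque: 25 Nm", "anchor": [0.1, 0.2]},
        {"kind": "arrow", "anchor": [0.5, 0.5]},
    ],
})

def render(doc: str) -> list:
    """A minimal 'AR browser': interprets content it did not author."""
    exp = json.loads(doc)
    return [f"{o['kind']}@{tuple(o['anchor'])}" for o in exp["overlays"]]

print(render(experience_doc))  # ['text@(0.1, 0.2)', 'arrow@(0.5, 0.5)']
```

The authoring tool that produced `experience_doc` and the engine running `render` could come from different vendors, which is precisely the decoupling that consensus on formats, metadata and workflows would enable.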

Although not in 2017, I believe some standards (it’s unclear which) will emerge to separate all presentation software from the authoring and content preparation activities. 

Which software are you using in your AR projects and what are the trends you see emerging?

 

Next: Navigating the way to continuous AR delivery




The AR Market in 2017, Part 2: Shiny Objects Attract Attention

Previous: Part 1, Connecting the Dots

 

There’s a great deal of attention being paid to the new, wearable displays for Augmented Reality. Hardware permits us to merge the digital and physical worlds in unprecedented ways. Wearable hardware delivers AR experiences while the user is also able to use one or both hands to perform tasks. The tendency to pay attention to physical objects is not unique to AR industry watchers. It is the result of natural selection: genes that gave early humans the ability to detect and respond quickly to fast moving or bright and unusual objects helped our ancestors survive while others lacking those genes did not.

Although this post focuses on the hardware for Augmented Reality, I don’t recommend focusing exclusively on advancements in AR hardware when planning for success in 2017. The hardware is only valuable when combined with the software, content and services for AR in specific use cases.

Now, considering primarily AR hardware, there are important trends that we can’t ignore. This post only serves to highlight those that, in my opinion, are the most important at an industry-wide level and will noticeably change in 2017.

Chips accelerate changes

Modern Augmented Reality hardware benefits hugely from the continued reduction in size and cost in hardware components for mass market mobile computing platforms. We need to thank all those using smart phones and watches for this trend.

As the semiconductor manufacturers gain experience and hard-code more dedicated vision-related computation into their silicon-based mix, performance of complete AR display devices is improving. Intel RealSense, combined with the technology Intel recently acquired from Movidius (which will produce significant improvements in wearable display performance beyond 2017), is an example of a chip-driven technology to monitor. Other offerings will likely follow from NVIDIA and Apple in 2017.

When available for production, the improvements in semiconductors for wearable AR devices will be measurable in terms of lower latency to recognize a user’s environment or a target object, less frequent loss of tracking, higher stability in the digital content that’s rendered, lower heat and longer battery life. All these are gradual improvements, difficult to quantify but noticeable to AR experts.

As a result of the optimization of key computationally intensive tasks (e.g., 3D capture, feature extraction, graphics rendering) in lower cost hardware, the next 12 to 18 months will bring new models of AR display devices – not just a few models, or many models in small batches.

These next-generation wearable display models with dedicated silicon will deliver at least a basic level of AR experience (delivery of text and simple recognition) for an entire work shift. Customers will begin to place orders for dozens and even, in a few cases, hundreds of units.

Optics become sharper

In addition to semiconductors, other components will be changing rapidly within the integrated wearable AR display. The next most important developments will be in the display optics. Signs of this key trend were already evident in 2016 – for example, when Epson announced the OLED optics designed for the Moverio BT-300.

It’s no secret that over the next few years, optics will shrink in size, drop in weight and demand less power. In 2017, the size and weight of fully functional systems based on improved optics for AR will decline. Expect smart glasses to weigh less than 80 grams. Shrinking the optics will make longer, continuous and comfortable use more likely.

Developers raised issues about color quality and fidelity when testing devices introduced in 2015 and 2016. Color distortion (such as an oil-spill rainbow effect) varies depending on the type of optics and the real-world scene the user is looking at (the oil-spill pattern is particularly noticeable on large white surfaces). The 2017 models will offer “true” black and higher fidelity colors in a wider range of settings. Again, the experts will feel these improvements first and “translate” them to their customers.

Another key area of improvement will be the Field of View. Some manufacturers will announce optics with 50° diagonal (a few might even reach 80° diagonal) in 2017. When combined with advanced software and content, these changes in optics will be particularly important for making AR experiences appear more realistic.

Combined with new polychromatic materials in lenses, lower weight and stronger material in the supports, optics will be more tolerant of changes in environmental conditions, such as high illumination, and will fit in more ruggedized packages.

More options to choose from

Speaking of packaging, as of 2016 there are three form factors for AR displays:

  • monocular “assisted reality” hardware that clips onto other supports (frames) or can be worn over a user’s ear,
  • smart glasses that sit on the user’s nose bridge and ears, and
  • head-worn displays that use straps and pads and a combination of ears, noses and the user’s skull for support.

The first form factor does not offer an immersive experience and isn’t appropriate for all use cases, but assisted reality systems have other significant advantages (e.g., lower cost, longer battery life, lighter weight, easy to store) so they will remain popular in 2017 and beyond.

At the opposite end of the spectrum, the highly immersive experiences offered by head-worn devices will also be highly appealing for different reasons (e.g., depth sensing, accuracy of registrations, gesture-based interfaces).

We need to remember that the use cases for enterprise AR are very diverse, and so are the displays available to users. The new wearable AR display device manufacturers entering the fray in 2017 will stay with the same three general form factors but offer more models.

In addition to diversity within these three form factors there will be extensions and accessories for existing products – for example, charging cradles, corrective lenses, high fidelity audio and materials specifically designed to tolerate adverse conditions in the workplace environment.

The results of this trend are likely to include:

  • those selling wearable displays will be challenged to clearly explain new features to their potential customers and translate these features into user benefits,
  • those integrating AR displays will be more selective about the models they support, becoming partners with only a few systems providers (usually leaning towards the bigger companies with brand recognition), and
  • buyers will need to spend more time explaining their requirements and aligning their needs with the solutions available in their budget range.

Wearable display product strategists will realize that with so many use cases, a single user could need to have multiple display models at their disposal. One possible consequence of this trend could be reduced emphasis on display systems that are dedicated to one user. We could see emergence of new ways for multiple users in one company or group to reserve and share display systems in order to perform specific tasks on schedule.

Rapid personalization, calibration and security will offer new opportunities to differentiate wearable AR display offerings in 2017.

Enterprise first

All of these different form factors and options are going to be challenging to sort out. Outside enterprise settings, consumers will not be exposed to the hardware diversity in 2017. They simply will not invest the time or the money.

Instead, companies offering new hardware, even the brands that have traditionally marketed to mass market audiences, will target their efforts toward enterprise and industrial users. Enterprises will increase their AR hardware budgets and develop controlled environments in which to compare AR displays before they can really make informed decisions at corporate levels. Third party services that perform rigorous product feature evaluations will be a new business opportunity.

While this post highlights the trends I feel are the most important when planning for success with AR hardware in 2017, there are certainly other trends on which companies could compete.

To learn more about other options and trends in wearable AR displays in 2016, download the EPRI Technology Innovation report about Smart Glasses for AR in which I offer more details.

What are the trends you think are most important in AR hardware and why do you think they will have a significant impact in 2017?

 

Next: AR software matures and moves toward standardization




The AR Market in 2017, Part 1: Connect the Dots

In your profession, you’re among those most aware of future technologies. The proof is that you have discovered Augmented Reality and decided it’s sufficiently important to dedicate at least a few minutes or hours to getting oriented and staying informed about the trends.

That’s the first step. But you know enough not to believe everything you read or see in a YouTube video.

The next step, if you haven’t done so already, is to train yourself to separate the biggest hype from the facts. This is not easy, but you should be able to hit this milestone by attending industry events where AR is being demonstrated and you can put your hands on the products in action, even under highly controlled conditions. Visiting one or more of the AREA members in their offices or inviting them to visit your facility will be even more valuable.

You’ll see some mock ups and, if you ask tough questions, you will also see some of the weaknesses and begin to glimpse the complexity of the problems facing adoption of these technologies. Keep a log of these experiences you have with Augmented Reality and the impressions they leave on you.

If you really want to understand the strengths and weaknesses “up close” and have budget, you can develop a project or participate in a group project that focuses on a well-defined use case.

Share what you learn

Once you’ve seen and captured notes about more than 10 live demonstrations in your journal and have personally touched AR, you can begin to “translate” for others what you’re seeing and doing.

But, wait! The insights you’ve acquired could offer a strategic advantage to your company so, why would you share them? Even if you are thinking that you should keep what you’ve gathered to yourself, I encourage you to share because AR is more than just another new technology offering you or your group a competitive advantage. This is going to be a major crowd-sourced, multi-year project. When more people are looking into AR technology, it will improve faster than when only a few are focusing on and investing in it in isolation.

Once AR is good enough to be used weekly (or daily) in more than one use case, it is going to push operational performance to new levels. Then you will be able to use it to full advantage.

AR may become as transformational to your company and industry as the Web and mobile devices have been during your professional career. But it requires more than one or two examples and adopters in an industry. Reaching a threshold level of adoption in your industry will be necessary. And, to begin meaningful adoption, there need to be a few experts. We need people like you to translate the theory and potential of AR in your industry into practice and reality.

I’ve found that I can translate for others what I’m observing by breaking it down into four interrelated topics: hardware, software, content and services. For over a decade I’ve used these four legs of the AR platform to organize projects, to review the history of AR and to capture current status.

In a series of AREA blog posts I am sharing developments I believe will be important in AR in 2017 using this simple framework.

Connecting the dots around us

One observer can’t see all the details of the entire AR landscape, certainly not in all industries where the technology will apply. Fortunately, the AREA is a network of very bright minds that are all looking forward, and in other directions, at the same time.

Many AREA members in the trenches of Augmented Reality take on a forward-looking challenge when, at the end of each year, they begin preparing their forecast for the following year.

I hope these posts will help you find your place and connect, and that in your comments you will compare and contrast what you’ve observed with my experience.

If we each take a few minutes, hours or a day in this last quarter of 2016 to connect our dots together we will all be better equipped to concretely plan for an exciting year ahead!

Next: What’s new for AR hardware in 2017?