Global Smart Glass Market 2014-2021

The following is a summary of a report by DecisionDatabases.com titled “The Global Smart Glass Market Research Report – Industry Analysis, Size, Share, Growth, Trends and Forecast”. The report provides value chain analysis, market attractiveness analysis, and company share analysis, along with complete profiles of the key players.

Information about smart glass:

Smart glass, also known as switchable glass, is glass that can alter its light transmission properties when voltage, light, or heat is applied. It is used in windows, skylights, doors, and partitions, and its applications have extended into the automotive, aircraft, and marine industries.

The smart glass market is segmented by type into architectural, electronics, solar power generation, and transportation, with the architectural segment being the largest.

According to the report, the market is estimated to grow at a significant rate over the next few years. The architectural and transportation sectors are expected to be the main drivers of this growth; the report also states that energy-efficient building technologies will contribute.

Some key facts about this Market Report:

  • The electronics segment is expected to be a promising market, owing to ongoing innovation and research into highly advanced devices such as digital eyeglasses and screens.
  • Certain factors are inhibiting the growth of the global smart glass market: costs comparable to those of its substitutes, and a lack of awareness of its benefits.
  • North America accounts for the major share of the global smart glass market.
  • The European market is expected to overtake the North American market during the forecast period, as a result of increasing demand for large, advanced windows in residential and commercial buildings.
  • Further, the market is divided into the regions of Latin America, Asia-Pacific, Western Europe, Eastern Europe, and the Middle East & Africa.

 




AREA Interview: Ken Lee of VanGogh Imaging

AREA: Tell us about VanGogh Imaging and how the company started.

KEN LEE: The reason I started VanGogh was that I noticed an opportunity in the market. From 2005 to 2008, I worked in medical imaging, where we mainly used 3D models and would rarely go back to 2D images. 3D gives you so much more information and a much better visual experience than flat 2D images. But creating 3D content was a very difficult and lengthy process. That is the one huge problem we are solving at VanGogh Imaging.

We started when Microsoft first introduced its low-cost Kinect 3D sensing technology. It allowed you to map in a three-dimensional way, where you can see objects and scenes and capture and track them. VanGogh started in this field around 2011, and we’ve been steadily improving our 3D capture technology for over five years, working with several clients and differentiating ourselves by delivering the highest-quality and easiest way to capture 3D models.

AREA: What is Dynamic SLAM and how does it differ from standard SLAM?

KEN LEE: Standard SLAM has been around for years. It works well when the environment is fairly static: no movement, a steady scene, no lighting changes. Dynamic SLAM is SLAM that can adjust to these factors, from moving objects and changing scenes to people walking in front of the camera and heavy occlusion.
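To make the distinction concrete, here is a minimal Python/OpenCV sketch of one common ingredient of dynamic-scene handling: features whose motion does not fit the dominant camera-motion model are treated as moving objects and dropped. This illustrates the general idea only, not VanGogh Imaging's implementation; the function name and thresholds are hypothetical.

```python
import cv2
import numpy as np

def track_static_features(prev_gray, cur_gray, prev_pts):
    """Track features between frames, keeping only those consistent with
    a single global (camera) motion; RANSAC outliers are likely moving
    objects, one of the disturbances Dynamic SLAM must tolerate."""
    # prev_pts: (N, 1, 2) float32 array, e.g. from cv2.goodFeaturesToTrack.
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  prev_pts, None)
    ok = status.ravel() == 1
    prev_ok, cur_ok = prev_pts[ok], cur_pts[ok]
    if len(prev_ok) < 4:
        return cur_ok                      # too few points to fit a model
    # Fit one global motion model; inliers correspond to the static scene.
    _, inliers = cv2.findHomography(prev_ok, cur_ok, cv2.RANSAC, 3.0)
    if inliers is None:
        return cur_ok
    return cur_ok[inliers.ravel() == 1]
```

A full Dynamic SLAM system does far more (map maintenance, relocalization, lighting robustness), but this static/dynamic separation is the core intuition.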

AREA: Are there certain use cases or applications that are particularly suited to dynamic SLAM?

KEN LEE: Dynamic SLAM is perfect for the real world, real-time environment. In our case, we are using dynamic capture mostly to enhance the 3D capture capability – so making 3D capture much easier, but still capturing at a 3D photorealistic level and fully automating the entire capture process plus dealing with any changes.

Let’s say you’re capturing a changing scene. You can update the 3D models in real time, just as you would capture 2D images with a video camera. We can do the same thing, but every output will be an updated 3D model at that given point. That’s why Dynamic SLAM is great. You can use dynamic SLAM just for tracking – for AR and VR – but that’s just one aspect. Our focus is on having the best tracking, not just for tracking purposes, but really to use that tracking capability to capture models very easily and update them in real time.

AREA: Once you have that model, can you use it for any number of different processes and applications?

KEN LEE: Sure. For example, you can do something as basic as creating 3D content to show people remotely. Let’s say I have a product on my desk and I want to show it to you. I can take a picture of it, or in less than a minute, I can scan that product, email it, and you immediately get a 3D model. Microsoft is updating its PowerPoint software next year so you will be able to embed 3D models.

There are other applications. You can use the 3D model for 3D printing. You can also use it for AR and VR, enabling users to visualize objects as true 3D models. One of the biggest challenges in both the VR and AR industry is content generation. It is very difficult to generate true 3D content in a fully automated process, on a real-time basis, that enables you to interact with other people using that same 3D model! That’s the massive problem we’re solving. We’re constantly working on scene capture, which we want to showcase this year, using the same Dynamic SLAM technology. Once you have that, anyone anywhere can instantly generate a 3D model. It’s almost as easy as generating a 2D image.

AREA: Does it require a lot of training to learn how to do the 3D capture?

KEN LEE: Absolutely not. You just grab the object in your hand, rotate it around and make sure all the views are okay, press the button, and then boom, you’ve got a fully-textured high-resolution 3D model. It takes less than a minute. You can teach a five-year-old to do it.

AREA: Tell us about your sales model. You are selling to companies that are embedding the technology in their products, but are you also selling directly to companies and users?

KEN LEE: Our business model is a licensing model, so we license our SDK on a per-unit basis. We want to stay with that. We want to stay as a core technology company for the time being. We don’t have any immediate plan for our own products.

AREA: Without giving away any trade secrets, what’s next in the product pipeline for VanGogh Imaging?

KEN LEE: We just filed a patent on how to stream 3D models to remote areas in real time. Basically, we’ll be able to immediately capture any object or scene, as soon as you turn on the camera, as a true 3D model streaming in real time, through a low bandwidth wireless data network.

AREA: Do you have any advice for companies that are just getting into augmented reality and looking at their options?

KEN LEE: At this moment, Augmented Reality platforms are still immature. I would recommend that companies focus, not on technology, but on solving industry problems. What are the problems that the companies are facing and where could AR add unique value? Right now, the biggest challenge in the AR industry, and the reason why it hasn’t taken off yet, is that so much money has gone into building platforms, but no one has built real solutions for companies. I think they should look for opportunity in those spaces.




Mixed Reality: Just One Click Away

Author: Aviad Almagor, Director of the Mixed Reality Program, Trimble, Inc.

Though best known for GPS technology, Trimble is a company that integrates a wide range of positioning technologies with application software, wireless communications, and services to provide complete commercial solutions. In recent years, Trimble has expanded its business in building information modeling, architecture and construction, particularly since the company’s 2012 acquisition of SketchUp 3D modeling software from Google. Mixed Reality is becoming a growing component of that business. This guest blog post by Trimble’s Aviad Almagor discusses how Trimble is delivering mixed reality solutions to its customers.

Many industries – from aerospace to architecture/engineering/construction (AEC) to mining – work almost entirely in a 3D digital environment. They harness 3D CAD packages to improve communication, performance, and the quality of their work. Their use of 3D models spans the full project lifecycle, from ideation to conceptual design and on to marketing, production, and maintenance.

Take AEC, for example. Architects design and communicate in 3D. Engineers design buildings’ structures and systems in 3D. Owners use 3D for marketing and sales. Facility managers use 3D for operation and maintenance.

And yet, we still consume digital content the same way we have for the last 50 years: behind a 2D screen. For people working in a 3D world, the display technology has become a limiting factor. Most users of 3D content have been unable to visualize the content their jobs depend on in full 3D in the real world.

However, mixed reality promises to change that. Mixed reality brings digital content into the real world and supports “real 3D” visualization.

The challenge

There are several reasons why mixed-reality 3D visualization has not yet become an everyday reality. Two of the primary reasons are the user experience and the processing requirements.

For any solution to work, it needs to let engineers, architects, and designers focus on their core expertise and tasks, following their existing workflow. Any technology that requires a heavy investment in training or major changes to the existing workflow faces an uphill battle.

Meanwhile, 3D models have become increasingly detailed and complex. It is a significant challenge – even for beefy desktop workstations – to process large models and support visualization at 60 fps.

One way around that problem is to use coding and specialized applications and workflows, but that approach is only acceptable to early adopters and innovation teams within large organizations – not the majority of everyday users.

To support real projects and daily activities – and be adopted by project engineers — mixed reality needs to be easily and fully integrated into the workflow. At Trimble, we call this “one-click mixed reality” – getting data condensed into a form headsets can handle, while requiring as little effort from users as possible.

Making one-click mixed reality possible

The lure of “one-click” solutions is strong. Amazon has its one-click ordering. Many software products can be downloaded and installed with a single click. The idea of one-click mixed reality is to bring that ease and power to 3D visualization.

Delivering one-click mixed reality requires a solution that extends the capabilities of existing tools by adding mixed reality functionality without changing the current workflow. It must be a solution that requires little or no training. And any heavy-lifting processing that’s required should be done in the background. From a technical standpoint, that means any model optimization, including polycount reduction, occlusion culling, and texture handling, is performed automatically, without manual, time-consuming specialized processes.
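As an illustration of the kind of background processing this implies, here is a minimal numpy sketch of one classic polycount-reduction technique, vertex clustering. It is a simplified stand-in, not Trimble's actual pipeline; the function name and grid size are hypothetical, and production systems layer occlusion culling and texture handling on top.

```python
import numpy as np

def decimate_by_clustering(vertices, faces, cell_size):
    """Reduce polycount by snapping vertices to a coarse grid and
    merging every vertex that lands in the same cell."""
    # Assign each vertex to a grid cell.
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # One cluster per occupied cell; `inverse` maps vertex -> cluster.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n = inverse.max() + 1
    # Each new vertex is the centroid of its cluster.
    counts = np.bincount(inverse, minlength=n).astype(float)
    new_vertices = np.stack([
        np.bincount(inverse, weights=vertices[:, d], minlength=n) / counts
        for d in range(3)], axis=1)
    # Re-index faces and drop triangles that collapsed to a line or point.
    f = inverse[faces]
    keep = (f[:, 0] != f[:, 1]) & (f[:, 1] != f[:, 2]) & (f[:, 2] != f[:, 0])
    return new_vertices, f[keep]

# Usage (hypothetical units): verts is an (N, 3) float array, tris an
# (M, 3) int index array; a coarser cell_size removes more detail.
# verts2, tris2 = decimate_by_clustering(verts, tris, cell_size=5.0)
```

A coarse grid can cut the triangle count dramatically at the cost of fine detail, which is exactly the trade-off a one-click pipeline must automate.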

At Trimble, we’re working to deliver one-click mixed reality by building on top of existing solutions. Take SketchUp for example, one of the most popular 3D packages in the world. We want to make it possible for users to design a 3D model in SketchUp, click to publish it, and instantly be able to visualize and share their work in mixed reality.

We’re making sure that we support users’ existing workflow in the mixed reality environment. For example, we want to enable users to use scenes from SketchUp, maintain layer control, and collaborate with other project stakeholders in the way they’re accustomed.

And we’re taking it one step further by making it possible to consume models directly from SketchUp or from cloud-based environments, such as SketchUp 3D Warehouse or Trimble Connect. This will eliminate the need to install SketchUp on the user’s device in order to visualize the content in mixed reality. As a next step, we are exploring with our pilot customers a cloud-based pre-processing solution which will optimize models for 3D visualization.

We’re making good progress. For example, in his Packard Plant project (which was selected to represent the US at the Venice Architecture Biennale), architect Greg Lynn used SketchUp and SketchUp Viewer for Microsoft HoloLens to explore and communicate his design ideas. In this complex project, a pre-processing solution was required to support mixed reality visualization.

“Mixed reality bridges the gap between the digital and the physical. Using this technology I can make decisions at the moment of inception, shorten the design cycle, and improve communication with my clients.”

– Architect Greg Lynn

One-click mixed reality is coming to fruition. For project teams, that means having the ability to embed mixed reality as part of their daily workflow. This will enable users to become immediately productive with the technology, gain a richer and more complete visualization of their projects, and build on their existing processes and tools.

The advent of one-click mixed reality indicates that the world of AR/VR is rapidly approaching the time when processing requirements, latency, and user experience issues will no longer be barriers.

Aviad Almagor is Director of the Mixed Reality Program at Trimble, Inc.




AREA Members Featured in IndustryWeek Article on AR in Manufacturing

AREA members Newport News Shipbuilding (NNS), DAQRI, and Upskill and AREA Executive Director Mark Sage are featured in an article on AR at IndustryWeek, the long-running manufacturing industry publication. The article explores the state of AR adoption in manufacturing, weaving in the experiences and insights of NNS’ Patrick Ryan, DAQRI’s Matt Kammerait, and Upskill’s Jay Kim, along with observations from executives of GE Digital and Plex Systems. Find the article here.




The 1st AREA Ecosystem Survey is Here!

The Augmented Reality (AR) marketplace is evolving so rapidly, it’s a challenge to gauge the current state of market education, enterprise adoption, provider investment, and more. What are the greatest barriers to growth? How quickly are companies taking pilots into production? Where should the industry be focusing its efforts? To answer these and other questions and create a baseline to measure trends and momentum, we at the AREA are pleased to announce the launch of our first annual ecosystem survey.

Please click here to take the survey. It won’t take more than five minutes to complete. Submissions will be accepted through February 8, 2017. We’ll compile the responses and share the results as soon as they’re available.

Make sure your thoughts and observations are captured so our survey will be as comprehensive and meaningful as possible. Thank you!




The AREA Issues Call for Proposals for an AR Research Project

The AREA has issued a request for proposals for a funded research project that its members will use to better understand relevant data security risks associated with wearable enterprise AR and mitigation approaches.

Organizations with expertise in the field of data security risks and mitigation and adjacent topics are invited to respond to the invitation by January 30, 2017.

The goals of the AREA-directed research project are:

  • To clarify questions about enterprise data security risks when introducing enterprise AR using wearables
  • To define and perform preliminary validation of protocols that companies can use to conduct tests and assess risks to data security when introducing wearable enterprise AR systems

The research project will produce:

  • An AREA-branded in-depth report that: details the types of data security risks that may be of concern to IT managers managing AR delivery devices and assets; classifies the known and potential threat to data security according to potential severity levels; and proposes risk mitigation measures
  • An AREA-branded protocol for testing wearable enterprise AR devices for their hackability or data exposure threat levels
  • An AREA-branded report documenting the use of the proposed protocol to test devices for their security exposure threat levels.

All proposals will be evaluated by the AREA research committee co-chairs on the following criteria:

  • Demonstrated knowledge and use of industry best practices for research methodology
  • Clear qualifications of the research organization and any partners in the domain of data security threats and mitigation and, if possible, AR
  • Review of prior research reports and testing protocol samples
  • Feedback from references

The AREA will provide detailed replies to submitters on or before February 13, 2017. The research project is expected to be completed and finished deliverables produced by May 1, 2017.

Full information on the request for proposals, including a submission form, can be found here.

 




GE’s Sam Murley Scopes Out the State of AR and What’s Next

General Electric (GE) has made a major commitment to Augmented Reality. The industrial giant recently announced that it plans to roll out AR in three business divisions in 2017 to help workers assemble complex machinery components. In his role leading Innovation and Digital Acceleration for Environmental Health & Safety at General Electric, Sam Murley is charged with “leading, generating and executing digital innovation projects to disrupt and streamline operations across all of GE’s business units.” To that end, Sam Murley evangelizes and deploys immersive technologies and digital tools, including Augmented Reality, Virtual Reality, Artificial Intelligence, Unmanned Aerial Vehicles, Natural Language Processing, and Machine Learning.

As the first in a series of interviews with AREA members and other ecosystem influencers, we recently spoke with Sam to get his thoughts on the state of AR, its adoption at GE, and his advice for AR novices.

AREA: How would you describe the opportunity for Augmented Reality in 2017?

SAM MURLEY: I think it’s huge — almost unprecedented — and I believe the tipping point will happen sometime this year. This tipping point has been primed over the past 12 to 18 months with large investments in new startups, successful pilots in the enterprise, and increasing business opportunities for providers and integrators of Augmented Reality.

During this time, we have witnessed examples of proven implementations: small-scale pilots, larger-scale pilots, and companies rolling out AR in production. We should expect this to continue in 2017. You can also expect continued growth of assisted reality devices, scalable for industrial use cases in manufacturing and service industries, as well as new adoption of spatially aware, consumer-focused mixed reality and augmented reality devices for automotive, retail, gaming, and education use cases. We’ll see new software providers emerge, existing companies take the lead, key improvements in smart eyewear optics and usability, and a few strategic partnerships will probably form.

AREA: Since it is going to be, in your estimation, a big year, a lot of things have to fall into place. What do you think are the greatest challenges for the Augmented Reality industry in 2017?

SAM MURLEY: While it’s getting better, one challenge is interoperability and moving from proprietary and closed systems into connected systems and open frameworks. This is really important. All players — big, medium and small — need to work towards creating a connected AR ecosystem and democratize authoring and analytical tools around their technology. A tool I really like and promote is Unity3D as it has pretty quickly become the standard for AR/VR development and the environment for deployment of AR applications to dozens of different operating systems and devices.

It’s also important that we find more efficient ways to connect to existing 3D assets that are readily available, but too heavy to use organically for AR experiences. CAD files that are in the millions of polygons need some finessing before they can be imported and deployed as an Augmented Reality object or hologram. Today, a lot of texturing and reconstruction has to be performed to keep the visual integrity intact without losing the engineering accuracy. Hopefully companies such as Vuforia (an AREA member) will continue to improve this pipeline.

For practical and wide-scale deployment in an enterprise like GE, smart glasses need to be intrinsically safe, safety rated, and out-of-the box ready for outdoor use. Programmatically, IT admins and deployment teams need the ability to manage smart glasses as they would any other employee asset such as a computer or work phone.

AREA: GE seems to have been a more vocal, public proponent of Augmented Reality than a lot of other companies. With that level of commitment, what do you hope to have accomplished with Augmented Reality at GE within the next year? Are there certain goals that you’ve set or milestones you hope to achieve?

SAM MURLEY: Definitely. Within GE Corporate Environmental Health & Safety we have plans to scale AR pilots that have proven to be valuable to a broader user base and eventually into production.

Jeff Immelt, our Chairman and CEO, in a recent interview with Microsoft’s CEO Satya Nadella, talked specifically about the use of Microsoft HoloLens in the enterprise. He put it perfectly, “If we can increase productivity by one percent across the board, that’s a no brainer.” It’s all about scaling to increase productivity, scaling to reduce injuries, and scaling based on user feedback. In 2017, we will continue to transform our legacy processes and create new opportunities using AR to improve worker performance and increase safety.

AREA: Do you have visibility into all the different AR pilots or programs that are going on at GE?

SAM MURLEY: We’re actively investigating Augmented Reality and other sister technologies, in partnership with our ecosystem partners and the GE Businesses. Look, everyone knows GE has a huge global footprint and part of the reward is finding and working with other GE teams such as GE Digital, our Global Research Centers, and EHS Leaders in the business units where AR goals align with operational goals and GE’s Digital Industrial strategy.

At the 2016 GE Minds + Machines conference, our Vice President of GE Software Research, Colin Parris, showed off how the Microsoft HoloLens could help the company “talk” to machines and service malfunctioning equipment. It was a perfect example of how Augmented Reality will change the future of work, giving our customers the ability to talk directly to a Digital Twin — a virtual model of that physical asset — and ask it questions about recent performance, anomalies, and potential issues, and receive answers back in natural language. We will see Digital Twins of many assets, from jet engines to compressors. Digital Twins are powerful – they allow tweaking and changing aspects of your asset in order to see how it will perform, prior to deploying in the field. GE’s Predix, the operating system for the industrial Internet, makes this cutting-edge methodology possible. “What you saw was an example of the human mind working with the mind of a machine,” said Parris. With Augmented Reality, we are able to empower the workforce with tools that increase productivity, reduce downtime, and tap into the Digital Thread and Predix. With Artificial Intelligence and Machine Learning, Augmented Reality quickly allows language to be the next interface between the Connected Workforce and the Internet of Things (IoT). No keyboard or screen needed.

However, we aren’t completely removing mobile devices and tablets from the AR equation in the short term. Smart glasses still have some growing and maturing to do. From a hardware adoption perspective, smart glasses are very new – it’s a new interface, a new form factor and the workforce is accustomed to phones, tablets, and touch screen devices. Mobile and tablet devices are widely deployed in enterprise organizations already, so part of our strategy is to deploy smart eyewear only when absolutely needed or required and piggyback on existing hardware when we can for our AR projects.

So, there is a lot going on and a lot of interest in developing and deploying AR projects in 2017 and beyond.

AREA: A big part of your job is navigating that process of turning a cool idea into a viable business model. That’s been a challenge in the AR world because of the difficulty of measuring ROI in such a new field. How have you navigated that at GE?

SAM MURLEY: That’s a good question. To start, we always talk about and promote the hands-free aspects of using AR when paired with smart glasses to access and create information. AR in general though, is a productivity driver. If, during a half-hour operation or maintenance task out in the field, we can save a worker just a few minutes, save them from having to stop what they’re doing, go back to their work vehicle, search for the right manual, find the schematic only to realize it’s out of date, and then make a phone call to try and solve a problem or get a question answered, an AR solution can pay for itself quickly as all of that abstraction is removed. We can digitize all of that with the Digital Twin and supply the workforce with a comfortable, hands-free format that also keeps them safe from equipment hazards, environmental hazards, and engaged with the task at hand.
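The arithmetic behind that claim is easy to make explicit. Below is a back-of-the-envelope payback sketch; every number in it is a hypothetical assumption chosen for illustration, not a GE figure.

```python
# Hypothetical payback estimate for a hands-free AR deployment.
# All numbers below are illustrative assumptions, not GE data.
workers            = 50
tasks_per_day      = 6       # field tasks per worker per day
minutes_saved      = 5       # time saved per task by AR guidance
loaded_rate_hr     = 75.0    # fully loaded labor cost, $/hour
work_days_per_year = 220

hours_saved = workers * tasks_per_day * minutes_saved / 60 * work_days_per_year
annual_saving = hours_saved * loaded_rate_hr

device_cost   = 2000.0       # per smart-glasses unit
software_cost = 500.0        # per seat, per year
total_cost    = workers * (device_cost + software_cost)

print(f"Annual saving:   ${annual_saving:,.0f}")   # ~$412,500
print(f"First-year cost: ${total_cost:,.0f}")      # ~$125,000
print(f"Payback: {total_cost / annual_saving:.1f} years")
```

Even with conservative inputs, shaving a few minutes off frequent field tasks pays back the hardware in well under a year, which is why the "one percent productivity" framing resonates.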

Usability is key though, probably the last missing piece on the way to the tipping point. Our workforce is accustomed and trained to use traditional devices: phones, tablets, workstations, and so on. Introducing smart glasses needs to be handled with care and with an end-user focus. The best AR device will be one that requires little to no learning curve.

It is important to run a working session at the very start. Grab a few different glasses if you can and let your end users put them on and listen to their feedback. You need to baseline your project charter with pre-AR performance metrics and then create your key performance indicators.

AREA: At a company like GE, you’ve got the size and the resources to be able to explore these things. What about smaller companies?

SAM MURLEY: That’s definitely true. I hope we see some progress and maturation in the AR ecosystem so everyone can benefit – small companies, large organizations, and consumers. The cost of hardware has been a challenge for everyone. Microsoft came out with the HoloLens and then announced a couple of months later that their holographic engine in the system was going to be opened to OEMs. You could have an OEM come in and say, maybe I don’t need everything that’s packed in the HoloLens, but I still want to use the spatial sensing. That OEM can potentially build out something more focused on a specific application for a fraction of the cost. That’s going to be a game changer because, while bigger companies can absorb high-risk operations and high-risk trials, small to medium size companies cannot and may take a big hit if it doesn’t work or rollout is slow.

Hopefully we’ll see some of the prices drop in 2017 so that the level of risk is reduced.

AREA: Can you tell us about any of the more futuristic applications of AR that you’re exploring at GE?

SAM MURLEY: The HoloLens demo at Minds + Machines mentioned earlier is a futuristic but not-that-far-off view of how humans will interact with data and machines. You can take it beyond that, into your household. Whether it’s something you wear or something like the Amazon Echo sitting on your counter, you will have the ability to talk to the things around you as if you were carrying on a conversation with another person. Beyond that, we can expect that things such as refrigerators, washing machines, and lights in our houses will be powered by artificial intelligence and have embedded holographic projection capabilities.

The whole concept of digital teleportation, or Shared Reality, is interesting. Meron Gribetz, Meta’s CEO, showcased this on stage during his 2016 TEDx talk, “A Glimpse of the Future Through an Augmented Reality Headset.” During the presentation, he made a photorealistic 3D call to his co-founder, Ray. Ray passed a digital 3D model of the human brain to Meron as if they were standing right next to each other, even though they were physically located a thousand miles apart.

That’s pretty powerful. This type of digital teleportation has the potential to change the way people collaborate, communicate, and transfer knowledge. Imagine a worker out in the field who encounters a problem. What do they do today? They pick up their mobile device and call an expert or send an email. The digital communication stack of tomorrow won’t involve phones or 2D screens, but rather holographic calls in full spatial, photorealistic 3D.

This is really going to change a lot of, not only heavy industrial training or service applications, but also applications well beyond the enterprise over the next few decades.

AREA: One final question. People are turning to the AREA as a resource to learn about AR and to figure out what their next steps ought to be. Based on your experience at GE, do you have any advice for companies that are just embarking on this journey?

SAM MURLEY: Focus on controlled, small-scale AR projects to start as pilot engagements. Really sharpen the pencil on your use case and pick one performance metric to measure and go after it. When pitching to stakeholders and governing bodies, tell the story, from start to end, of what digital transformation can and will do.

My other recommendation is to leverage organizations like the AREA. The knowledge base within the AREA organization and the content that you push out on almost a daily basis is really good information. If I were just dipping my toe in the space, those are the types of things that I would be reading and would recommend other folks dig into as well. It’s a really great resource.

To sum up: stay focused with your first trial, determine what hardware is years away from real-world use and what is ready today, find early adopters willing to partner in your organization, measure effectiveness with insightful metrics and actionable analytics, reach out to industry experts for guidance, and don’t be afraid to fail.




The AR Market in 2017, Part 4: Enterprise Content is Not Ready for AR

Previous: Part 3: Augmented Reality Software is Here to Stay

 

As I discussed in a LinkedIn Pulse post about AR apps, we cannot expect users to run a different app for each real-world target they want to use with AR, or one monolithic AR application for everything in the physical world. That approach is unscalable (far too time-consuming and costly). It’s unclear precisely when, but I’m confident that we will one day rely on systems that make content ready for AR presentation as a natural result of digital design processes.

The procedures or tools for automatically converting documentation or any digital content into AR experiences for enterprise use cases are not available. Nor will they emerge in the next 12 to 18 months. To begin the journey, companies must develop a path that leads from current procedures that are completely separate from AR presentation to the ideal processes for continuous AR delivery.

Leaders need to collaborate with stakeholders to focus on areas where AR can make a difference quickly.

Boiling the Ocean

There are hundreds of AR use cases in every business. All AR project managers should maintain a catalog of possible use cases. Developing such a catalog begins with identifying the challenges facing a business. As simple as this sounds, revealing challenges increases exposure and reduces confidence in existing people and systems, so most of the data for this process is buried or burned before it escapes. Without data to support the size and type of challenges in a business unit, the AR advocate is shooting in the dark. The risk of not focusing on the best use cases and challenges is too high.

There need to be initiatives to help AR project managers and engineers focus on the problems most likely to be addressed with AR. Organizational change managers would be a likely group to drive such initiatives, once they are themselves trained to identify the challenges best suited for AR.

In 2017, I expect that some systems integration and IT consulting companies will begin to offer programs that take a methodical approach through the AR use case development process, as part of their services to clients.

Prioritization is Key

How do stakeholders in a company agree on the highest-priority content to become AR experiences for their top use cases? It depends. There must be consistent monitoring of AR technology maturity and, in parallel, the use case requirements need to be carefully defined.

To choose the best use case, priorities need to be defined. If users perceive a strong need for AR, that should weigh heavily. If content for use in the AR experience is already available, then the costs and time required to get started will be lower.

A simple method of evaluating the requirements appears below. Each company needs to define its own priorities based on internal drivers and constraints.


A simple process for prioritizing AR use cases (Source: PEREY Research & Consulting).
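The original figure cannot be reproduced here, so the sketch below illustrates the general idea of such an evaluation with hypothetical criteria, weights, and scores; each company would substitute its own.

```python
# Hypothetical weighted scoring of candidate AR use cases.
# Criteria, weights, and 1-5 scores are illustrative only.
weights = {
    "user_need":         0.35,  # how strongly users perceive a need for AR
    "content_available": 0.30,  # existing 3D/documentation content to reuse
    "tech_maturity":     0.20,  # maturity of the required AR technology
    "business_impact":   0.15,  # expected effect on cost, time, or safety
}

use_cases = {
    "assembly guidance": {"user_need": 5, "content_available": 4,
                          "tech_maturity": 3, "business_impact": 4},
    "remote inspection": {"user_need": 3, "content_available": 2,
                          "tech_maturity": 4, "business_impact": 3},
    "warehouse picking": {"user_need": 4, "content_available": 3,
                          "tech_maturity": 5, "business_impact": 2},
}

def priority(scores):
    """Weighted sum of criterion scores; weights sum to 1.0."""
    return sum(weights[c] * s for c, s in scores.items())

for name, scores in sorted(use_cases.items(),
                           key=lambda kv: -priority(kv[1])):
    print(f"{name:20s} {priority(scores):.2f}")
```

Note how the weighting encodes the priorities discussed above: perceived user need and available content dominate, so a use case with both will rise to the top even if the technology is less mature.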

Markers Won’t Stick

One of the current trends in enterprise AR is to use markers as the targets for AR experiences. Computer vision with markers shows users where to point their device and focus their attention; it also consumes less power and can be more robust in real-world conditions than 3D tracking technologies.
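For a concrete sense of the marker approach, here is a minimal sketch using OpenCV's aruco module (the functional API shown belongs to the opencv-contrib builds of this era; it was reorganized in OpenCV 4.7). The image filename is a hypothetical placeholder.

```python
import cv2

# Load a frame and look for 4x4 ArUco fiducial markers in it.
frame = cv2.imread("machine_panel.jpg")   # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)

if ids is not None:
    # Each detected marker yields an ID plus four corner points,
    # enough to anchor an AR overlay on the tagged equipment.
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    print("Found markers:", ids.ravel().tolist())
```

The simplicity and low compute cost of this detection loop explain the appeal of markers; the next paragraph explains why they nonetheless fail for many enterprise objects.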

However, for many enterprise objects that are exposed to sun, wind, and water, markers are not a strategy that will work outside the laboratory. Companies that plan to use AR with real-world targets that can’t have markers attached need to begin developing a new content type: trackables based on natural features.

In 2017 more enterprise AR project managers will be asking for SDKs and tools to recognize and track the physical world without markers. For most, the technologies they will test will not meet their requirements. If well managed, the results of testing in 2017 will improve the SDKs as suggested in our post about AR software.

The AR Ecosystem and Technology are Immature

While the title of this post suggests that enterprise content is not yet in the formats, or associated with the metadata, needed to make AR experiences commonplace, the reverse statement is also true: not all the required AR components are ready for enterprise introduction.

Projects I’ve been involved with in 2016 have shown that while there are a few very solid technologies (e.g., tracking with markers on print), most components of AR solutions with which we are working are still very immature. The hardware for hands-free AR presentation is one area that’s changing very rapidly. The software for enterprise AR experience authoring is another. As more investments are made, improvements in the technology components will come, but let’s be clear: 2017 will not be the year when enterprise AR goes mainstream.

If you have seen the results of one or two good proofs of concept, there will be many people who need your help to learn about AR. One of the important steps in that education process is to participate in the activities of the AREA and to share with others in your company or industry how AR could improve workplace performance.

When your team is ready to introduce AR, call in your change management group. You will need all the support you can get to bring the parts of this puzzle together in a successful AR introduction project!

Do you have some predictions about what 2017 will bring enterprise AR? Please share those with us in the comments to this post. 




The AR Market in 2017, Part 3: Augmented Reality Software is Here to Stay

Previous: Part 2: Shiny Objects Attract Attention

 

There are some who advocate for integrating AR directly and deeply into enterprise content management and delivery systems in order to leverage the IT systems already in place. Integration of AR features into existing IT reduces the need for a separate technology silo for AR. I fully support this school of software architecture. But, we are far from having the tools for enterprise integration today. Before this will be possible, IT groups must learn to manage software with which they are currently unfamiliar.

An AR Software Framework

Generating and presenting AR to users requires combining hardware, software and content. Software for AR serves three purposes:

  1. To extract the features, recognize, track and “store” (manage and retrieve the data for) the unique attributes of people, places and things in the real world;
  2. To “author” interactions between the human, the digital world and real world targets found in the user’s proximity, and publish the runtime executable code that presents AR experiences; and
  3. To present the experience to, and manage the interactions with, the user while recognizing and tracking the real world.

We will see changes in all three of these segments of AR software in 2017.

Wait, applications are software, aren’t they? Why aren’t they on the list? Before reading further about the AR software trends I’m seeing, I recommend you read a post on LinkedIn Pulse in which I explain why the list above does not include thousands of AR applications.

Is it an AR SDK?

Unfortunately, there is very little consistency in how AR professionals refer to the three types of software in the framework above, so some definitions are in order. A lot of professionals just refer to everything having to do with AR as SDKs (Software Development Kits).

In my framework AR SDKs are tools with which developers create or improve required or optional components of AR experiences. They are used in all three of the purposes above. If the required and optional components of AR experiences are not familiar to you, I recommend reviewing the post mentioned above for a glimpse of (or watching this webinar for a full introduction to) the Mixed and Augmented Reality Reference Model.

Any software that extracts features of the physical world in a manner that captures the unique attributes of the target object or that recognizes and tracks those unique features in real time is an AR SDK. Examples include PTC Vuforia SDK, ARToolkit (Open Source SDK), Catchoom CraftAR SDK, Inglobe ARmedia, Wikitude SDK and SightPath’s EasyAR SDK. Some AR SDKs do significantly more, but that’s not the topic of this post.
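As a toy example of that first purpose, extracting and recognizing the unique natural features of a real-world target, here is a markerless recognition sketch in Python with OpenCV. It is in the spirit of the SDKs listed above but far simpler than any of them; the filename and match threshold are illustrative assumptions.

```python
import cv2
import numpy as np

MIN_MATCHES = 15  # arbitrary threshold for declaring a recognition

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# "Enroll" the target once: extract its unique natural features.
target = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
t_kp, t_desc = orb.detectAndCompute(target, None)

def locate_target(frame_gray):
    """Return the homography mapping the target into the frame, or None."""
    f_kp, f_desc = orb.detectAndCompute(frame_gray, None)
    if f_desc is None:
        return None
    matches = matcher.match(t_desc, f_desc)
    if len(matches) < MIN_MATCHES:
        return None
    src = np.float32([t_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([f_kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # pose for anchoring AR content on the target
```

Commercial SDKs add robust multi-target databases, 3D object tracking, and drift correction on top of this basic detect-match-estimate loop.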

Regardless of what it’s called, the technology to recognize and track real world targets is fundamental to Augmented Reality. We must have some breakthroughs in this area if we are to deliver the benefits AR has the potential to offer enterprises.

There are promising developments in the field, and I am hopeful that these will be more evident in 2017. Each year the AR research community meets at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), and there are always exciting papers focused on tracking. At ISMAR 2016, scientists at Zhejiang University presented their Robust Keyframe-based Monocular SLAM. It appears much more tolerant of fast motion and strong rotation, which we can expect to see more frequently when people untrained in the challenges of visual tracking use wearable AR displays such as smart glasses.

In another ISMAR paper, a group at the German Research Center for Artificial Intelligence (DFKI) reported that they have used advanced sensor fusion employing a deep learning method to improve visual-inertial pose tracking. While using acceleration and angular velocity measurements from inertial sensors to improve visual tracking has shown promising results for years, we have yet to see these benefits materialize in commercial SDKs.
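The DFKI work uses deep learning, but the basic intuition of visual-inertial fusion can be shown with a classical complementary filter. The toy 1-D sketch below, with synthetic data and made-up noise figures, is nothing like the paper's method; it only shows why fusing a fast-but-drifting gyro with slow-but-drift-free visual fixes pays off.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, alpha = 1000, 0.01, 0.98     # 10 s at 100 Hz; trust gyro short-term

true_rate = 0.5                                       # rad/s, constant turn
true_yaw = np.cumsum(np.full(n, true_rate) * dt)      # ground-truth angle
gyro = true_rate + 0.05 + rng.normal(0, 0.02, n)      # biased, noisy rate
vision = true_yaw + rng.normal(0, 0.03, n)            # noisy but drift-free

yaw = 0.0
for k in range(n):
    predicted = yaw + gyro[k] * dt                     # integrate inertial rate
    yaw = alpha * predicted + (1 - alpha) * vision[k]  # correct with vision

print(f"gyro-only drift: {abs(np.sum(gyro) * dt - true_yaw[-1]):.3f} rad")
print(f"fused error:     {abs(yaw - true_yaw[-1]):.3f} rad")
```

Integrating the biased gyro alone drifts by roughly half a radian over ten seconds, while the fused estimate stays within a few hundredths; that gap is the benefit commercial SDKs have yet to fully deliver.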

Like any software, the choice of AR SDK should be based on project requirements but in practical terms, the factors most important for developers today tend (or appear) to be a combination of time to market and support for Unity. I hope that with support for technology transfer with projects like those presented at ISMAR 2016, improved sensor fusion can be implemented in commercial solutions (in the OS or at the hardware level) in 2017.

Unity Dominates Today

A growing number of developers are learning to author AR experiences. Many developers find the Unity 3D game development environment highly flexible and the rich ecosystem of developers valuable. But, there are other options worthy of careful consideration. In early 2016 I identified over 25 publishers of software for enterprise AR authoring, publishing and integration. For an overview of the options, I invite you to read the AREA blog post “When a Developer Needs to Author AR Experiences.”

Products in the AR authoring group are going to slowly mature and improve. With a few mergers and acquisitions (and some complete failures), the number of choices will decline and I believe that by the end of 2017, fewer than 10 will have virtually all the market share.

By 2020 there will be a few open source solutions for general-purpose AR authoring, similar to what is available now for authoring Web content. In parallel with the general purpose options, there will emerge excellent AR authoring platforms optimized for specific industries and use cases.

Keeping Options for Presenting AR Experiences Open

Today the authoring environment defines the syntax for the presentation, so there is really little alternative for the user but to install and run the AR execution engine published by the authoring environment provider.

I hope that we will see a return of the browser model (or the emergence of new Web apps) so that it will be possible to separate the content for experiences from the AR presentation software. To achieve this separation and lower the overhead for developers to maintain dozens of native AR apps, there needs to be consensus on formats, metadata and workflows.

Although not in 2017, I believe some standards (it’s unclear which) will emerge to separate all presentation software from the authoring and content preparation activities. 

Which software are you using in your AR projects and what are the trends you see emerging?

 

Next: Navigating the way to continuous AR delivery