
The Economist Blog Demystifies Enterprise Augmented Reality

Major business publications are writing about enterprise Augmented Reality with increasing frequency. This post on The Economist blog is a great example of how to help readers understand the differences between AR designed for enterprise and AR designed for consumers, as well as the distinctions between Augmented Reality and Virtual Reality.

The post, which demystifies Augmented Reality, features two of the AREA’s large enterprise customer members, Boeing and Newport News Shipbuilding, and mentions the DAQRI Smart Helmet as a promising technology.

Although the promise of hands-free displays is great, the technology’s costs remain high and other shortcomings persist. In the near term, many projects demonstrating the effectiveness of Augmented Reality will use tablets, according to Patrick Ryan of Newport News Shipbuilding.




Defining Augmented Reality Remains a Challenge

Advocates for Augmented Reality continue to find it challenging to communicate the benefits of the enabling technology when so many remain confused about the actual definition of the term. In this post on the ReadWrite.com blog, Kyle Samani, CEO of Pristine.io, explores possible alternatives and strategies to reduce the negative impact of a term that continues to cause confusion.

Samani explains that part of the problem is exacerbated by developers who persist in offering the “gluing a phone to your face” interaction model. It’s not surprising when you think about it: they are limited to what’s available in 2015. According to Samani, “true AR” requires better optics, eye tracking integrated into the hardware, and sophisticated computer vision. These rely on more powerful processors with better heat dissipation and larger batteries.

So, in the end, definitions can keep people interested, but delivering on the full promise remains a challenge as well.




Apple Rumored to be Developing Augmented Reality Windshield

In yet another sign that Apple is developing or planning to introduce systems using Augmented Reality, tech analyst Trip Chowdhry told the Washington Post that the company is developing a 27- to 50-inch heads-up display for automobile windshields.

The new display would also incorporate gesture control by using sensors integrated into the windshield.

Release of the technology is “not imminent,” and the display could conceivably be used in many other applications, such as TVs. The article, recently published in the Autoguide.com news section, speculates that the windshield AR technology could also be used in Apple’s rumored self-driving car. This would make Apple one of a number of companies working on Augmented Reality windscreens, alongside Land Rover and supplier Continental.




Augmented Reality Can Change Manufacturing Processes

Catavolt, a provider of enterprise application mobility solutions, has published a new eBook about optimizing manufacturing processes with real-time data. The eBook summarizes data collected from 111 manufacturing professionals in various disciplines about the trends and technologies driving operational excellence, and it is an excellent resource for manufacturing and AR professionals.

A recent post on the Catavolt blog summarizes some of the study’s important findings in relation to AR-assisted systems. Specifically, Augmented Reality is improving operational efficiency in manufacturing by reducing production downtime, quickly identifying problems, and keeping processes moving. An example provided is the use of AR in the assembly of F-35 Lightning II fighter jets at Lockheed Martin. It is interesting that more companies and projects are not mentioned.

Suggesting the benefits of Augmented Reality is the first step in educating customers. Unfortunately, the Catavolt blog post fails to mention any of the obstacles facing those who are beginning to implement AR in their manufacturing processes. It would also have been valuable to gather insights on how early adopters in manufacturing industries are addressing the major barriers encountered when introducing Augmented Reality.




VR and Augmented Reality Investments Are on the Rise in 2015

According to a post published by CB Insights, VR and Augmented Reality companies are receiving a lot of attention from investors. The second quarter of 2015 saw $131 million raised across 16 deals.

While the amount invested in Q2 is an increase in dollars raised quarter over quarter, up from $117 million in Q1, the number of deals is lower (Q1 saw 25 new investments).

In the first half of 2015, $248 million was invested in 41 deals, a major improvement over the first half of 2014, when $93 million was invested in 23 deals.

Aside from the Magic Leap investment in the fourth quarter of 2014, the startups with the greatest investment in the past year are in three segments: healthcare (LensAR), hardware support (Matterport, Movidius, Leap Motion, Meta), and mobile software (NantMobile).

Google Ventures and Intel Capital were the most active corporate investors in the space, tying with Andreessen Horowitz and TechStars for second place behind Rothenberg Ventures.




Tips for Bringing Digital Experiences Into the Real World

“The simplest things in life are often the most difficult to perfect – especially things that we normally take for granted,” says Jody Medich, consultant and UX designer. In an eloquent original essay published on the Leap Motion blog, Medich explains that until designers learn and practice new methods of interface design, digital content will stay in its own world (while users remain in the physical world). She compares our flat screens to pieces of paper with “magical” qualities that permit users to interact with the digital world, but in very limited ways.

She implores UX designers to begin thinking of their interfaces in three dimensions and explains how to approach designing for 3D experiences, such as those provided in immersive environments like Virtual Reality and in the physical world, using only paper, scissors, markers, some red and blue film, and tape.

The results are very impressive and have permitted Medich and those with whom she works to design compelling interfaces such as Leap Motion’s Unity Widgets.

This blog should be in the bookmarks of every UX designer working on AR interfaces.




SMI Introduces New Mobile Eye Tracking Algorithms and Platform

Many providers of hands-free Augmented Reality-assisted displays are experimenting with voice, gesture, and touch for user interaction. One option that researchers have been studying is the use of gaze detection and tracking for eye gestures. Past implementations of gaze-based user interfaces have not been widely adopted because of the training they require, their slow performance, and their lack of accuracy.
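To make those usability trade-offs concrete, here is a minimal sketch of dwell-based gaze selection, the simplest form of eye-gesture input. Everything in it (function names, thresholds, the sample format) is an assumption made for illustration, not SMI’s API: the dwell timer is what makes such interfaces feel slow, and tracking noise that resets the timer is what makes them feel inaccurate.

# Minimal sketch of dwell-based gaze selection. All names and thresholds are
# assumptions for illustration only; this is not SMI's API.

DWELL_SECONDS = 0.8      # time the gaze must rest on a target before it is "clicked"
TOLERANCE_PX = 40        # how far the gaze may wander and still count as on target

def run_dwell_selection(gaze_samples, target_center):
    """gaze_samples: iterable of (timestamp_s, x_px, y_px); returns True once selected."""
    dwell_start = None
    tx, ty = target_center
    for t, x, y in gaze_samples:
        on_target = abs(x - tx) <= TOLERANCE_PX and abs(y - ty) <= TOLERANCE_PX
        if on_target:
            if dwell_start is None:
                dwell_start = t                      # gaze arrived: start the dwell timer
            elif t - dwell_start >= DWELL_SECONDS:
                return True                          # dwell completed: treat as a selection
        else:
            dwell_start = None                       # noisy or drifting gaze resets the timer
    return False

# Example: a ~60 Hz stream of slightly noisy samples hovering near a target at (400, 300).
samples = [(i / 60.0, 400 + (i % 3) * 5, 300 - (i % 2) * 5) for i in range(120)]
print(run_dwell_selection(samples, target_center=(400, 300)))    # True after ~0.8 s

The dwell threshold is the tension the paragraph above describes: shorten it and accidental selections increase; lengthen it and the interface feels slow.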

In a press release issued by SensoMotoric Instruments (SMI), the company says that visitors to its booth at SIGGRAPH will be able to experience new eye tracking algorithms on the EPSON Moverio BT-200. The company reports that the new algorithms will replace cumbersome and error-prone display calibration routines, provide higher quality results, and continuously adapt the AR experience to the user’s biometric information.

The systems are currently targeted at researchers. The platform has been tested in a project funded by the German Federal Ministry of Education and Research and conducted by researchers of the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University.




3D Interfaces Offer Users Infinite Space

In this essay published on the Singularity Hub blog, Jody Medich, a consultant and 3D user experience designer, compares how differently users experience 2D and 3D interfaces, a difference that deserves deep thought from those who seek to deliver the full value of Virtual and Augmented Reality. The essay suggests that moving from 2D to 3D experiences with digital content is as profound as moving from the command line interface of DOS to the graphical user interfaces of later operating systems such as Macintosh OS and Windows.

The author begins with several observations with which no one can argue. For example, humans rely heavily on their vision for acquiring knowledge and for just about everything else. Humans also use space to think. Medich explains that humans use three-dimensional space to offload a number of cognitively heavy tasks from working memory. For example, spatial proximity helps humans understand the relationships between objects.

The essay concludes with the suggestion that Augmented Reality systems will deliver a new technological revolution in which users work with infinite space. In infinite spatial systems, users create spatial buckets in which to organize their digital belongings, tools, and tasks. These ideas are valuable for everyone to consider, not just trained UX designers, because using infinite spatial interfaces will require everyone to understand these new concepts.




Pelican Imaging Announces Depth Sensing Camera Array Has Been Integrated

Pelican Imaging provides computational camera solutions that capture depth at every pixel, giving users the freedom to refocus after the fact, focus on multiple subjects, segment objects, take linear depth measurements, apply filters, change backgrounds, and create 3D models, from any device. In the past, the hardware has been demonstrated for use with stationary camera arrays as well as tablets. Many millions of dollars have been invested (by Qualcomm among others) to get the components into a form factor that is suitable for mobile platforms.
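As a rough illustration of why per-pixel depth enables those features, here is a minimal sketch showing how a depth map reduces subject segmentation and background replacement to simple per-pixel operations. It is not Pelican Imaging’s API; the array shapes, units, and cutoff value are assumptions made for the example.

import numpy as np

# Minimal sketch (not Pelican Imaging's actual API): with a depth value at every
# pixel, segmenting a subject or swapping the background becomes a per-pixel
# threshold on the depth map. Shapes, units, and the cutoff are assumptions.

def segment_foreground(depth_m, cutoff_m=2.0):
    """Boolean mask of pixels closer than cutoff_m metres (the 'subject')."""
    return depth_m < cutoff_m

def replace_background(image, depth_m, background, cutoff_m=2.0):
    """Keep foreground pixels from `image`; take everything else from `background`."""
    mask = segment_foreground(depth_m, cutoff_m)
    return np.where(mask[..., None], image, background)

# Example with synthetic data: a 480x640 RGB frame and a matching depth map.
h, w = 480, 640
image = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
background = np.zeros_like(image)
depth_m = np.full((h, w), 3.0)          # scene three metres away...
depth_m[100:300, 200:400] = 1.2         # ...with a "subject" at 1.2 m
composited = replace_background(image, depth_m, background)

The same per-pixel depth information is what lets the other advertised operations, such as refocusing after the fact or taking linear measurements, be performed in software rather than optically.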

Prototypes of the patented depth sensing technology have been publicly demonstrated. Chris Pickett, Pelican Imaging’s CEO, presented the latest developments at Augmented World Expo 2015. Using the latest camera array, the component provides the system into which it is integrated with improved image quality, faster focus, and highly accurate depth data.

In a press release, the company announced that providers of hands-free Augmented Reality personal displays or smart glasses are able to integrate the depth-sensing array technology, and that one provider may already have done so. The release does not make clear which provider, if any, has taken this important step, nor when such a system will be commercially available.




Magic Leap’s User Interface for Augmented Reality

Fast Company takes a look at the 106-page patent filing of Magic Leap, an Augmented and Virtual Reality startup that recently closed a $542 million financing round led by Google.

The patent application reveals gesture-controlled smart glasses featuring an innovative user interface based on totem objects such as virtual keychains of menus and commands, allowing users to interact in new ways with information about their environment. The patent filing also shows virtual menus and dials being displayed on a user’s hand in order to transform the hand into an interface “controller.”

Although the majority of scenarios are consumer-oriented (e.g., changing television channels via hand movements), a few use cases describe enterprise scenarios. One example shows a firefighter using a glove as an interface controller to communicate with the dispatcher, while another describes a Magic Leap-equipped doctor using a 3D model of a heart for guidance during surgery.