
AR4 Commercializes New Computer Vision Technologies

Austrian startup AR4 GmbH has announced that Vizar.Track, its computer vision-based 3D tracking technology targeting industrial use cases, is now commercially available. The new markerless tracking technology is SLAM-based and optimized for existing mobile platforms.

The Vizar team, led by TU Graz senior research scientist Clemens Arth, is also working on a mobile camera calibration tool, VIZARIO.Cam. VIZARIO.Cam creates sets of calibration records for multiple resolutions and multiple focal settings, which in turn can be used to improve tracking and recognition results. Finally, the company has released preliminary information about a mobile visual search library, VIZARIO.Find.
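To illustrate why per-resolution, per-focal-setting calibration records matter for tracking, here is a minimal sketch in Python. The record format, field names, and intrinsic values are hypothetical (not VIZARIO.Cam's actual API); the projection itself is the standard pinhole camera model.

```python
# Hypothetical calibration records, one per (width, height, focal setting):
# fx, fy are focal lengths in pixels; cx, cy is the principal point.
CALIBRATION_RECORDS = {
    (1920, 1080, "infinity"): {"fx": 1450.0, "fy": 1450.0, "cx": 960.0, "cy": 540.0},
    (1280, 720,  "infinity"): {"fx": 966.7,  "fy": 966.7,  "cx": 640.0, "cy": 360.0},
}

def project(point_xyz, width, height, focus):
    """Project a 3D point (camera coordinates, z > 0) to pixel coordinates
    using the pinhole model and the matching calibration record."""
    k = CALIBRATION_RECORDS[(width, height, focus)]
    x, y, z = point_xyz
    u = k["fx"] * x / z + k["cx"]
    v = k["fy"] * y / z + k["cy"]
    return u, v

# The same 3D point lands on different pixels at different capture modes,
# which is why a tracker needs the record that matches the active camera mode.
full_hd = project((0.1, 0.0, 1.0), 1920, 1080, "infinity")
hd      = project((0.1, 0.0, 1.0), 1280, 720,  "infinity")
```

Without a record that matches the active resolution and focus, the tracker's reprojection is systematically off, which degrades both tracking and recognition accuracy.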

AR4 is a spin-off of the Institute for Computer Graphics and Vision of the Graz University of Technology in Austria. The institute, led by Dieter Schmalstieg (who is currently an advisor to AR4), is the research powerhouse that developed the technology behind the patents Qualcomm acquired from Imagination Computer Services GmbH in 2010 (technology that became Vuforia).




Microsoft Opens its Windows Holographic Developer Portal

Creating holographic applications is going to be challenging. In preparation for shipment of its HoloLens Development Edition hardware at the end of March, Microsoft has opened its new Windows Holographic developer portal to help those getting started. The portal’s developer overview explains that HoloLens runs on the Universal Windows Platform. Developers build “holographic apps,” all of which are Universal Windows apps. Conversely, all Universal Windows apps can be adapted for presentation on Microsoft HoloLens.

Developing for HoloLens requires more than new software. The paradigm is different, and this requires new terminology. In Augmented Reality, for example, developers describe and develop experiences within the real world: the interactive scene is the real world, and the experience runs within it. In HoloLens, the combination of the real world and holograms is referred to as the “shell” or the “mixed world.” The platform documentation also introduces concepts such as persistence of holograms. To keep some things familiar, Microsoft has extended Cortana, the desktop and mobile phone user’s personal assistant, for use on HoloLens.

A dedicated “Holographic Academy” offers eight planned tutorials covering Unity as well as many other topics; the first focuses on the HoloLens Emulator (only 44 seconds of video had been published at the time of this post). A developer discussion forum is also provided in the portal. It will be interesting to see how much Microsoft will need to add in order to keep the developer ecosystem pushing the envelope once it has the new hardware to test.




Meta Founder Demonstrates Meta Vision at TED 2016

Meta Company, the Portola Valley, California startup behind the Kickstarter-funded Meta Spaceglasses project of summer 2013, which then grew into a well-funded technology provider, was invited to the TED 2016 stage. The company’s founder, Meron Gribetz, used his 20 minutes to demonstrate the next-generation product, the Meta Vision.

The latest device, slated to officially launch on March 2, 2016, builds upon the company’s Meta 1 developer kit. In the 15-month interval between the two generations of hardware, the company has grown rapidly (it has raised $23M from Silicon Valley luminaries) and gathered many insights for the Meta Vision from its 1,000-developer-strong global Meta Pioneers ecosystem.

Judging from the videos the company has released, Meta Vision is neither 100% Augmented Reality nor Virtual Reality. The new product centers on a gesture-based platform for interacting with screens and holographically projected objects. Like Microsoft in its descriptions and demonstrations of HoloLens, Meta focuses on adding 3D holographic objects without obstructing the real world. The form factor is similar to HoloLens as well: the Meta Vision uses a band-and-strap system over the skull to reduce the device’s weight on the user’s ears and nose. Unlike HoloLens, however, the Meta Vision display is currently tethered (i.e., connected by cable) to accessories and is thus unlikely to serve in many use cases where the user must move about freely.

The system, which runs interactive, graphics-rich applications built with the Unity game engine, also permits the user to view information on screens. The screenshots published in this Business Insider blog post show access to Gmail and Spotify. Explaining that this could be valuable for information workers, Gribetz is repeatedly quoted as stating that engineers at Meta will be able to replace their desktop and laptop computers with the Meta Vision glasses for their day-to-day work.




Epson Announces Moverio BT-300 at Mobile World Congress

Smart glasses are getting lighter, more powerful and more likely to meet enterprise customer requirements. According to the press release issued by Epson, followed by coverage on the TechRepublic blog and other sites, the Moverio BT-300 introduces a new silicon-based OLED (organic light-emitting diode) digital display technology and features a 5-megapixel front-facing camera. The silicon OLED, engineered for use in the Moverio platform, replaces the prior model’s glass lenses to reduce weight and increase brightness and resolution.

One of the partnerships highlighted as key to improving the performance of Epson’s new smart glasses is with Intel. However, unlike the DAQRI Smart Helmet, which Intel announced is based on the Intel dual-core m7 processor, Epson chose the quad-core Intel Atom x5 processor. How these systems compare in the finished products remains to be seen. Although pricing has not been announced, more details about the BT-300 specifications are available on the company’s web site.

Unfortunately, the video released by the company to show use cases for the Epson Moverio BT-300 smart glasses focuses entirely on consumers in sports and cultural heritage environments. Other photos show a user wearing the BT-300 while operating and watching a drone.




APX Skylight Available on Intel’s Recon Jet

AREA member APX Labs has announced that Skylight, its software suite for smart glasses, is now available for developers choosing to deploy Recon Jet devices in enterprise use cases. The hardware, originally styled and promoted for sports use cases, combines an Android-based mobile operating system with an LCD display positioned just below the user’s right eye. The device is water resistant and its lenses are made of impact-resistant materials, making it well suited to workers who need protective eyewear on the job.

This partnership with Recon Instruments, an Intel company, opens the door for those looking for alternative monocular smart glasses to deploy for hands-free workers. Running Skylight, Recon Jet users can quickly view work instructions as well as receive live task guidance and trouble resolution from a remote expert over an Internet connection.

A post on the Recon Instruments blog by Recon co-founder Dan Eisenhardt conveys his excitement about the partnership and the potential for Augmented Reality in the enterprise.




Atheer AiR Suite Available on Recon Instruments’ Jet Smart Glasses

AREA member Atheer announced in a press release that its AiR Suite for Enterprise software for smart glasses is available for developers choosing to implement use cases with Recon Instruments’ Jet. Running on Recon Jet, a water-resistant and impact-resistant smart glasses device originally designed for sports, the AiR Suite software can provide enterprise professionals with hands-free capabilities, including remote collaboration for rapid problem solving. This development follows a prior announcement that Atheer had optimized and released the AiR Suite for Enterprise for use with Vuzix M-100 smart glasses.

Through these new relationships and its support for third-party hardware, Atheer is paving the way for enterprises to deploy and manage Augmented Reality-enabled systems through its own binocular hardware devices as well as complementary solutions.

By offering a single enterprise-ready smart glasses information delivery and management software architecture, Atheer integration partners should be able to more easily approach enterprises with diverse use cases for the technology.




Augmented Reality Startups Have New Incubator and Fund

Founders of Augmented Reality startups encounter very significant business and financial challenges. Many of these are directly tied to general confusion among investors and potential partners about the future of Augmented Reality. Then there’s the high level of expectation drummed up by giant tech companies like Google, Samsung and Microsoft (and regular disappointments that follow). The lack of high-quality, vendor-neutral information about the topic and the very immature ecosystem of potential partners for Augmented Reality technology and solution providers are both issues that the AREA is helping to address for companies focusing on enterprise use cases.

The list of other challenges facing startups is too long to cover here. Startup incubators have a track record of addressing the need for coaching, funding and other forms of support in a compact and efficient manner. Incubators that focus on a technology sector can foster collaboration, develop differentiation and help companies reach critical mass.

Super Ventures, the first incubator and seed fund for early-stage Augmented Reality technology startups, has been established in Silicon Valley. In the press release issued by the fund, the founders announced that they will invest $10M in companies they select. The partners of the new venture, Ori Inbar, Matt Miesnieks, Tom Emrich and Professor Mark Billinghurst, will provide hands-on guidance and support to foster the growth of their young portfolio companies.

The first investments announced by the fund are Waygo, a visual translation app (similar to WordLens, which was acquired by Google) specializing in Asian languages, and Fringefy, a visual search engine that helps people discover local content and services.




Google and Movidius Partner to Drive Vision Processing

Everyone’s aware of GPUs, Graphics Processing Units, but most have not heard about (much less begun to use) the next wave in specialized mobile hardware: the Vision Processing Unit (VPU). Movidius, one of Google’s Project Tango partners and the company that put VPU systems-on-chip (SoCs) on the map, announced in a post on its web site that it is partnering with Google’s Machine Learning Group.

Google will begin using Movidius’ latest VPU, the MA2450, in its mobile devices and platforms. The MA2450 is the second iteration of the Myriad family of vision processors, based on a proprietary software-controlled, multi-core, multi-ported memory subsystem and caches that can be configured to support a wide range of workloads. This flexibility will make it possible to run specialized algorithms for recognizing and tracking highly disparate types of objects at lower computational and power cost.

In a post on TechCrunch about the partnership, Movidius CEO Remi El-Ouazzane is quoted saying that “power consumption savings of the new chip equates to a 10-100x power savings over current models on the market.” In addition, the new chip is 20% the size of equivalent options.

The collaboration isn’t limited to hardware licensing. Google is also expected to implement the Movidius software development environment. As part of the deal, Google’s Machine Learning Group, headed by Blaise Agüera y Arcas, is expected to collaborate with Movidius’ engineering team on the company’s next generation neural network technology.

Announcements like this serve to remind the market that computer vision is a highly active field of research and development. It promises to advance the use of Augmented Reality on wearable devices, which must be highly power efficient if they are to recognize and track a vast range of targets in the real world while serving users for eight hours or more.




Apple Acquires Augmented Reality Technology Firm Flyby Media

It’s no secret or surprise that Apple is acquiring technologies for its as-yet-unannounced projects in the domain of Augmented Reality. January 2016 marked several more developments that went by with barely a sound.

Although the author of this post on TechCrunch refers to VR, Flyby Media, the latest small tech company to be acquired by the tight-lipped consumer electronics giant, had been developing capabilities for Augmented Reality. More specifically, the team had been working with NVIDIA and Google on Project Tango, providing large-scale SLAM, indoor navigation, sensor fusion, image recognition and 3D tracking.

While the technology had been demonstrated in a consumer-facing social AR application published by Flyby Media, its potential to provide high-performance AR for enterprise cannot be overlooked.




PTC Vuforia Introduces VuMark and ThingX Platform for Augmented Reality

In a press release issued by PTC, the company announced that it will be introducing a new tracking marker, the VuMark, and that Vuforia will soon support Windows 10.

At an event on the same day, the company shared that it would soon be rolling out a new suite of products as part of a platform called “ThingX,” which stands for Thing Experience. ThingBrowser, ThingServer and ThingBuilder, the three components of the ThingX platform, will take an entirely different “turnkey” approach to authoring AR experiences than Vuforia previously provided.

The ThingX announcement was made during the ThingEvent, a webinar and studio event, produced by the company. A post on the PTC blog summarizes the highlights of the event and the ThingX platform. A video archive of the ThingEvent is also available for playback.