
DMDII Funds Augmented Reality Manufacturing Projects

An article on Engineering.com reveals that the Digital Manufacturing and Design Innovation Institute (DMDII) has awarded $12 million across seven contracts for research into digital manufacturing, including Augmented Reality applications for use on manufacturing shop floors, wearables and mobile devices.

The projects include real-time, data-driven visual decision support systems for the plant floor, and tend to focus on streamlining manufacturing processes.

The Rochester Institute of Technology and Expert Demonstration (led by Iowa State University) aim to explore and exploit the potential of Augmented Reality in the manufacturing industry.

The Executive Director of DMDII, Dean Bartles, commented that with each new project, additional researchers, industry leaders and providers join the consortium to help with research and development for the age of smart manufacturing.

One of our AREA members, iQagent, has developed an Augmented Reality app. iQagent’s manufacturing clients have been using Augmented Reality on the shop floor since 2013, delivering efficiency savings for large manufacturers.




Augmented Reality Diving Helmets Help US Navy

The US Navy is developing a high-tech diving helmet that has the potential to make underwater missions a lot safer. This new heads-up display system is built into the diver’s helmet and is called the Diver Augmented Vision Display (DAVD).

A Tech Insider report shows a lead engineer of the Naval Surface Warfare Center Panama City Division wearing and demonstrating the DAVD during a lab simulation.


DAVD uses Augmented Reality to overlay the diver’s vision with real-time information such as diagrams, images, text messages and videos. This can help with navigation and situational awareness, especially when the diver is in murky, low-visibility water, for example by displaying a topside view of the dive site and the diver’s actual location. Access to real-time operational data helps divers be safer and more effective thanks to improved accuracy when navigating towards underwater objects of interest.

A press release by the US Navy reveals that applications of the DAVD HUD could include underwater construction, salvage operations and ship repairs at sea, as well as assisting first responders in underwater rescue missions.

The development team is working on a second phase: components are being designed for helmet systems and full-face masks. In-water simulation testing is due to be conducted in October 2016, with further field testing in 2017.




Use Cases for Visualizing Data using 3D City Models

In this paper, published in the International Journal of Geospatial Information special issue on “Efficient Capturing of 3D Objects at a National Level: With a Focus on Buildings and Infrastructure,” the authors present a taxonomy of 3D city model use cases based on their review of over 400 published works on the topic.

The taxonomy distinguishes use cases that visualize information from those that lead to other (non-visualization) results. Of the 24 use cases involving visualization of 3D city data, the authors note that many could be adapted to involve one or more real-world places using Augmented Reality-assisted visualization. An even greater number of the use cases describe how a 3D city model can be the basis for enhanced Augmented Reality experiences in urban environments. For example, use cases that involve visualizing human activity, wind fields, and air quality data could feed suggested routes in a navigation use case.

This in-depth catalog could be inspiration for others to document use cases for Augmented Reality-assisted visualization.




4Any Framework for Mobile Augmented Reality Systems

Researchers at Macquarie University in Sydney, Australia have published a scientific paper about 4Any, a framework for developing mobile Augmented Reality systems and experiences. The framework is composed of layers for actors, for the profiles and metadata to be shared, and for visualizing scenarios. In the same paper, the researchers describe ArcHIVE, a pilot system developed to test the framework, and provide a link to the code generated by the implementation.

According to the 4Any framework, the ArcHIVE system architecture provides a metadata layer to retrieve, deploy, create, update or delete customization information, such as custom object definitions and page layouts, which are tailored to the user profile defined by the actor layer. The pilot system was coded in Django, an open source web framework written in the Python language that provides a well-supported, scalable and easy-to-maintain web/database interface.
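The paper links to the actual ArcHIVE code; as a framework-agnostic illustration of what a metadata layer offering profile-tailored CRUD operations might look like, here is a minimal Python sketch (all class and field names here are hypothetical, not taken from the ArcHIVE implementation):

```python
from dataclasses import dataclass, field

@dataclass
class MetadataEntry:
    """One piece of customization information, e.g. a page layout."""
    key: str
    payload: dict

@dataclass
class MetadataLayer:
    """Hypothetical CRUD store keyed by the actor layer's user profile."""
    _store: dict = field(default_factory=dict)

    def create(self, profile: str, entry: MetadataEntry) -> None:
        self._store.setdefault(profile, {})[entry.key] = entry

    def retrieve(self, profile: str, key: str):
        return self._store.get(profile, {}).get(key)

    def update(self, profile: str, key: str, payload: dict) -> None:
        entry = self.retrieve(profile, key)
        if entry is not None:
            entry.payload = payload

    def delete(self, profile: str, key: str) -> None:
        self._store.get(profile, {}).pop(key, None)

# A "technician" profile receives a page layout tailored to shop-floor tasks.
layer = MetadataLayer()
layer.create("technician",
             MetadataEntry("page_layout", {"panels": ["tasks", "alerts"]}))
```

In the real system, Django's object-relational mapper would back such a store with a database rather than an in-memory dictionary.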




3D Vision Topics at the 12th European Conference for Visual Media Production

As computing power increases and costs decrease, many technologies previously confined to motion picture studios, such as capturing temporally consistent 3D models of dynamic scenes from real-world imagery, are becoming suitable for other environments. Consequently, the European Conference for Visual Media Production, which has long been a forum for presenting computer graphics and video research for motion pictures, is rising in relevance among events where research valuable to enterprise Augmented Reality is likely to be presented and published.

One example is the paper Dr. Stefan Rueger of the Knowledge Media Institute (KMi) of the Open University presented on the topic of new methods for monocular markerless motion capture. Another is the keynote given by Dr. Lourdes Agapito, Professor of 3D Vision and member of the Vision and Imaging Science group and the Centre for Inverse Problems in the Department of Computer Science at University College London (UCL). Her research in Computer Vision has consistently focused on the inference of 3D information from the video acquired from a single moving camera.

Agapito’s keynote focused on a “model-free” framework developed by her group at UCL to acquire fully dense “per-pixel” 3D models of deformable objects solely from video. Using a template-based, sequential and direct method of tracking deformable surfaces with only an RGB camera could be highly valuable for Augmented Reality. In addition, Agapito described a unified approach to 3D modelling of dynamic scenes that simultaneously segments the scene into different objects and decomposes these into parts while reconstructing them in 3D. This approach allows the acquisition of more semantically meaningful 3D representations of a scene. She concluded by discussing recent work on correlations in the variation of 3D shapes across objects of the same class that could address the problem of category-based reconstruction.

While these frameworks and approaches are not currently possible in real time, they could increase the ease and lower the cost of real world environment capture, permitting more frequent capture and higher reliability recognition and tracking of static and dynamic objects in real world scenes.




Enterprise Augmented Reality Featured in HBR Webinar

In a recent webinar produced by Harvard Business Review (HBR), co-authors Michael Porter and Jim Heppelmann, CEO of PTC, summarized concepts presented in their article entitled “How Smart, Connected Products Are Transforming Companies” and discussed the implications of these new technologies. The primary emphasis is on embedding sensors and other technologies in products, and on how, by using the data generated by connected products, companies can be better informed about how their customers use them.

The co-authors stress how important it will be for companies to embrace not only technologies but also organizational changes. Without both, the benefits of either are far less significant.

During the webinar (play the archive below), both Porter and Heppelmann repeatedly described how, with Augmented Reality as a UI, the assembly and use of smart connected products will become commonplace, reducing the need for training in complex procedures. Augmented Reality can also be applied to increase end customers’ ability to leverage new features in more complex and intelligent products, thanks to the dynamic interfaces that AR-assisted systems can offer.

HBR provides a link to download the webinar slides.




Benefits of Wearables for Augmented Reality in Logistics

There are many situations in which an employee needs wearables for Augmented Reality because they must use their hands to perform tasks. In this post on Philipp Rauschnabel’s blog, the author summarizes an interview recently conducted for Mobile Business, a German business publication. The focus is on the benefits of hands-free displays such as smart glasses in logistics.

The primary benefits are to reduce distraction, offer navigation within a warehouse, quickly address rare or new situations and unforeseen problems, and make relevant information faster to find and easier to use.

In addition to use cases that provide instant access to orders and other facility information, the possibility of offering remote experts to assist a worker in the warehouse is very attractive to planners.




Personality Type Impacts Wearable Hardware Acceptance

Personality types are known to impact many aspects of our daily lives, including user acceptance of wearing unusual hardware for the purpose of having Augmented Reality experiences. A scientific publication on ResearchGate describes several studies conducted by Philipp Rauschnabel, Alexander Brem and Bjoern Ivens on the direct and moderating effects of human personality on the awareness and adoption of smart glasses.

The study suggests that those with personalities that are open and extroverted are more likely to already be aware of smart glasses such as Google Glass. It follows naturally that those who perceive the potential for benefits and social conformity of smart glasses are more likely to adopt such wearables. According to the authors, the strength of these effects is moderated by an individual’s level of openness to new experiences.
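The paper's exact statistical model is not reproduced in this summary; as an illustrative sketch of what a moderating effect means here, consider a toy linear model in which openness amplifies the effect of perceived benefits on adoption intention (all coefficient values below are made up for illustration):

```python
def adoption_intention(perceived_benefit, openness,
                       b_benefit=0.5, b_open=0.2, b_interact=0.3):
    """Hypothetical linear model: openness to new experiences moderates
    (here, amplifies) the effect of perceived benefits on intention
    to adopt smart glasses."""
    return (b_benefit * perceived_benefit
            + b_open * openness
            + b_interact * perceived_benefit * openness)

# Marginal effect of a one-unit increase in perceived benefit,
# at low versus high openness:
effect_low_openness  = (adoption_intention(1, openness=0.0)
                        - adoption_intention(0, openness=0.0))  # 0.5
effect_high_openness = (adoption_intention(1, openness=1.0)
                        - adoption_intention(0, openness=1.0))  # 0.8
```

The interaction term is what "moderation" refers to: the same perceived benefit moves adoption intention more for highly open individuals than for less open ones.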

Findings like these could help the development and validation of new processes that help AR project managers select the best users with which to work on enterprise Augmented Reality pilots, proofs of concept and introduction projects.




EM-Sense Technology Adds to Sensory Awareness

EM-Sense, a new technology developed by Carnegie Mellon University and Disney Research, was introduced at UIST 2015, the ACM Symposium on User Interface Software and Technology, held November 8-11 in Charlotte, N.C.; information about it was published in a post on the CMU website.

The technology, which has been demonstrated using off-the-shelf consumer electronics components, takes advantage of the body’s natural electrical conductivity. In essence, the body serves as an antenna to detect whether a person is touching an electrical or electromechanical device and, based on the distinctive electromagnetic noise emitted by such devices, automatically identifies the object.

Kitchen appliances, power tools, electronic scales and door handles with electrically triggered locks are among the first items that the researchers have demonstrated can be detected and identified. Further training demonstrated that the sensitivity is sufficient to distinguish between different smartphone models. This technology could help users in an instrumented workplace receive instructions about the tools they touch and use.
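The CMU post does not detail the classification method, but the identification step described above can be sketched as nearest-signature matching over electromagnetic noise spectra. The following Python sketch uses cosine similarity on synthetic, made-up "signatures"; the device names, spectra and matching rule are illustrative assumptions, not the actual EM-Sense pipeline:

```python
import math

# Hypothetical EM noise signatures: coarse spectra in arbitrary frequency bins.
SIGNATURES = {
    "drill":       [0.9, 0.1, 0.0, 0.2],
    "stove":       [0.1, 0.8, 0.3, 0.0],
    "door_handle": [0.0, 0.2, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def identify(spectrum):
    """Return the known device whose stored signature best matches the
    EM noise sensed through the body acting as an antenna."""
    return max(SIGNATURES, key=lambda name: cosine(spectrum, SIGNATURES[name]))

# A noisy reading close to the drill's signature is still matched correctly.
print(identify([0.8, 0.2, 0.1, 0.1]))  # prints "drill"
```

A workplace system could map each identified device to the relevant work instructions, which is the use case the researchers suggest.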




IEEE ISMAR 2016 Details Announced

The IEEE International Symposium on Mixed and Augmented Reality (ISMAR) is the annual research conference at which many important developments in the fields of Mixed and Augmented Reality are made public.

The general chair of ISMAR 2016, Walterio Mayol-Cuevas of the University of Bristol, announced during the closing ceremony of ISMAR 2015 that ISMAR 2016 will take place in Merida, Yucatan, Mexico from September 19 to September 23, 2016.


The IEEE Computer Society and IEEE VGTC have announced that the deadline for ISMAR 2016 paper submissions is March 15, 2016. Other details will be made available via the website in the near future.