
Blippar brings AR content creation and collaboration to Microsoft Teams

LONDON, UK – 14 June 2022 – Blippar, one of the leading technology and content platforms specializing in augmented reality (AR), has announced the integration of Blippbuilder, its no-code AR creation tool, into Microsoft Teams.

Blippbuilder is the first platform of its type to combine drag-and-drop functionality with SLAM (simultaneous localisation and mapping), allowing creators at any level to build realistic, immersive AR experiences. Absolute beginners can drop objects into a project; once published, those objects stay firmly in place thanks to Blippar’s proprietary surface detection. These experiences will serve as the foundation of the interactive content that will make up the metaverse.

Blippbuilder includes access to tutorials and best practice guides to familiarise users with AR creation, taking them from concept to content. Experiences are built to be engaged with via the browser – known as WebAR – removing the friction of, and reliance on, dedicated apps or hardware. WebAR experiences can be accessed through a wide range of platforms, including Facebook, Snapchat, TikTok, WeChat, and WhatsApp, as well as conventional web and mobile browsers.

Teams users can integrate Blippbuilder directly into their existing workflow. Designed with creators and collaborators in mind – whether product managers, designers, creative agencies, clients, or colleagues – it lets organisations unite their approach and implementation, all within Teams. Adaptive cards, single sign-on, and notifications, alongside real-time feedback and approvals, provide immediate transparency and seamless integration from inception to distribution. Tooltips, support features, and starter projects also allow teams to begin creating straight away.

“The existing process for creating and publishing AR for businesses, agencies, and brands is splintered. Companies are forced to use multiple tools and services to support collaboration, feedback, reviews, updates, approvals, and finalization of projects,” said Faisal Galaria, CEO at Blippar. “By introducing Blippbuilder to Microsoft Teams workstreams, including team channels and group chats, we’re making it easier than ever before for people to collaborate, create, and share amazing AR experiences with our partners at Teams.”

Utilizing the powerful storytelling and immersive capabilities of AR, everyday topics, objects, and content, from packaging, virtual products, adverts, and e-commerce, to clothing and artworks, can be ‘digitally activated’ and transformed into creative, engaging, and interactive three-dimensional opportunities.

Real-life examples include:

  •  Bring educational content to life, enabling collaborative, immersive learning
  •  Visualise and discuss architectural models and plans with clients
  •  Allow product try-ons and 3D visualisation in e-commerce stores
  •  Create immersive onboarding and training content
  •  Present and discuss interior design and event ideas
  •  Bring print media and product packaging to life
  •  Enable artists and illustrators to redefine the meaning of three-dimensional artworks

In today’s environment of increasingly sophisticated user experiences, customers are looking to move their technologies forward efficiently and collaboratively. Access to a comprehensive AR creation platform will help keep Microsoft Teams users at the forefront of their industries, and Blippbuilder in Teams is the kind of solution that improves both the quality and the efficiency of the AR-building process.

Blippar also offers a developer creation tool, its WebAR SDK. Blippbuilder for Teams is designed to be an accessible and time-efficient entry point for millions of new users; once AR has proved its value, organisations can progress to building experiences with Blippar’s SDK. The enterprise platform boasts the most advanced implementation of SLAM and marker tracking, alongside integrations with the key 3D frameworks, including A-Frame, PlayCanvas, and Babylon.js.
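For readers curious what a browser-delivered AR experience looks like at the code level, here is a minimal sketch using Babylon.js, one of the frameworks named above. It illustrates only the general shape of a WebXR scene – it is not Blippar’s SDK or output, and the canvas id and scene contents are assumptions.

```typescript
// A minimal browser-based AR ("WebAR") scene with Babylon.js.
// Assumes an HTML <canvas id="renderCanvas"> and a page served over HTTPS
// on a WebXR-capable mobile browser (both hypothetical setup details).
import {
  Engine, Scene, FreeCamera, HemisphericLight, MeshBuilder, Vector3,
} from "@babylonjs/core";

async function createARScene(): Promise<void> {
  const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
  const engine = new Engine(canvas, true); // antialiasing on
  const scene = new Scene(engine);

  // Camera and light; in AR, the camera pose is driven by the device.
  new FreeCamera("camera", new Vector3(0, 1.6, -2), scene);
  new HemisphericLight("light", new Vector3(0, 1, 0), scene);

  // A simple object placed half a metre in front of the user.
  const box = MeshBuilder.CreateBox("box", { size: 0.2 }, scene);
  box.position = new Vector3(0, 1.4, 0.5);

  // Request an immersive AR session (camera passthrough) if available.
  await scene.createDefaultXRExperienceAsync({
    uiOptions: { sessionMode: "immersive-ar" },
  });

  engine.runRenderLoop(() => scene.render());
}

createARScene().catch(console.error);
```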




AREA Research Project on Best Practices for Merging IoT + AI + AR

The AREA is issuing a request for proposals for a funded research project. The project will produce tools to increase dialogue between stakeholders in enterprises and suppliers of IoT, AI, and AR technologies, along with a report that describes the state of the art and proposes potential courses of action to address the challenges facing enterprises seeking to combine IoT, AI, and AR in the workplace.




Augmented Reality Lowers Errors in Automotive Manufacturing

According to mathematical models, manufacturing processes can become highly data-driven, nimble, and responsive: people and equipment are redeployed to optimize resources whenever shifts in demand, the supply of components or materials, or currency exchange rates cross thresholds. In practice, applying such models raises significant challenges.

In complex assembly lines such as for cars, companies seek to strike a balance between customization (made to order vehicles) and mass production. It’s not uncommon for a single assembly line to finish parts for different car models and colors. Dynamic, configurable equipment and workflows are easier to program and manage than in Henry Ford’s day, but they also introduce complexity into operations. Complexity introduces errors. And errors cause production delays, also known as downtime.

Improvements in operator training can reduce downtime, but when employees must memorize complex steps that change frequently, training alone does not guarantee the best use of resources. Delivering precise work instructions to the assembly line worker at the time of task performance, and in the worker’s field of view, has enormous potential to bring the real world closer to the textbook models.

Projection Augmented Reality

Projection Augmented Reality is an alternative to tablets or smart glasses in some environments. The approach is well suited when the workplace is stationary (or on a moving assembly line) and the work tools or items on which a task is performed can be brought to the AR-enabled workstation.

One company providing projection AR capabilities is OPS Solutions. OPS Solutions works with automobile manufacturers such as Fiat Chrysler Automobiles to increase workplace efficiency and productivity. This article describes a study conducted with OPS’ flagship product Light Guide Systems (LGS), a projection AR solution that has been implemented within Chrysler’s manufacturing operations.

Manufacturing Study

LGS projects instructions (visual and audio) for guidance, pacing, and direction onto work surfaces, and provides feedback for improving industrial processes, such as capturing cycle times for each step within manual processes including assembly, part kitting, and training. In trials conducted with Chrysler in 2014, the use of standard paper work instructions for complex tasks was compared with the use of LGS for the same tasks when training new operators. Ten operators were tasked with assembling gears and chains, a process totaling 10 steps per operator. For each step, they had to select the correct gear for a given location and diameter within a standard cycle time, install the corresponding chains correctly, and finally verify that the installation was done correctly.

Each operator performed each task twice: once using paper-based work instructions, and once with LGS. Five operators started with paper and then switched to LGS; the other five started with LGS and then switched to paper.

The results of the study are conclusive, and the table below shows the efficiency of LGS in comparison to standard paper work instructions.

The 80% reduction in errors shows a marked improvement in quality. Reducing errors at one stage of an assembly line has a large impact on costs, since faulty articles must be withdrawn further down the line and corrected before later stages of assembly can proceed.
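As a quick illustration of how a figure like that 80% is computed – the error counts below are hypothetical placeholders, not Chrysler’s data:

```typescript
// Percentage reduction relative to the paper-instruction baseline.
function percentReduction(baseline: number, improved: number): number {
  return ((baseline - improved) / baseline) * 100;
}

const errorsPaper = 10; // hypothetical error count with paper instructions
const errorsLGS = 2;    // hypothetical error count with LGS

// (10 - 2) / 10 = 0.8, i.e. an 80% reduction in errors.
console.log(`${percentReduction(errorsPaper, errorsLGS).toFixed(0)}% fewer errors`);
```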

The reduction in cycle times and increased throughput reflect efficiency and speed, as articles completed faster increase overall manufacturing productivity.

A similar study was done for logistics tasks – the kitting and sequencing of parts before they reach the assembly line. Associates had to select the correct subparts and place them in the appropriate bins on a cart before wheeling the cart to the next stage of production. LGS projected guidance for selecting the correct quantities of parts and for placing them in the correct bins, following a highly variable sequence that changes constantly in production.
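A minimal sketch of the kind of sequence-driven pick validation such a system performs – the part/bin data model below is a hypothetical simplification, not LGS’s actual interface:

```typescript
// One step of a kitting sequence: which part, how many, and which bin.
interface PickStep {
  partId: string;   // which part to pick
  quantity: number; // how many (shown to the associate by the projection)
  bin: number;      // which bin on the cart to place them in
}

// Check a scanned pick against the current step of the sequence.
function validatePick(step: PickStep, scannedPart: string, placedBin: number): string {
  if (scannedPart !== step.partId) return `Wrong part: expected ${step.partId}`;
  if (placedBin !== step.bin) return `Wrong bin: expected bin ${step.bin}`;
  return "OK";
}

// The sequence can change from cart to cart; the system simply walks it.
const sequence: PickStep[] = [
  { partId: "GEAR-12", quantity: 2, bin: 1 },
  { partId: "CHAIN-07", quantity: 1, bin: 3 },
];
console.log(validatePick(sequence[0], "GEAR-12", 1)); // "OK"
```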

The study similarly compared the efficiency of projection Augmented Reality with that of paper work instructions. As shown in the table below, projection AR greatly increased the associates’ efficiency and reduced errors.

In both the assembly and logistics studies, a further advantage of displaying instructions directly in the field of view is that operators no longer need to switch attention to other sources of information, such as paper or computer-based work instructions, which reduces cognitive load and speeds up task execution.

A Leap in Productivity

Chrysler’s experience with projection Augmented Reality supplied by OPS reveals the potential this technology offers to boost employee productivity through:

  • Increased quality and standardization of processes
  • Training efficiency by enabling operators to self-train on the job
  • Greater accountability through confirmations of successfully completed steps
  • Feedback on completed tasks and cycle times

One byproduct of designing, installing and using projection AR is that the plant floor workflows and stations are reviewed and improved. Reconfiguring stations can increase efficiency with respect to material staging and ergonomics.




Crunchfish CEO on Gesture Technology and the Future of AR

How would you describe the state of the AR ecosystem today?

It’s a very exciting period within AR. We have big actors like Google and Apple pushing AR in their new devices and new tools like ARCore and ARKit. And from an industrial perspective, we see a lot of companies starting to see the potential AR can bring. But it’s still a challenge to connect these industry actors with providers of software and hardware to create a total solution.

As a technology provider, we play a role in several segments of this AR ecosystem, including device vendors, software vendors and system integrators, where we utilize our gesture control and proximity technology to enable features. It’s an exciting ecosystem, but also one that’s at an early stage and that needs groups like the AREA. The AREA provides a meeting place where the different creators can share and jointly develop these new solutions.

What do you think are the major obstacles to widespread AR adoption now?

There are several things. It is largely a matter of getting the industry know-how to where this new technology can make a difference. From a technology solution perspective, we not only need to provide the hardware or software solutions, but also to map them to the needs of the industry, which is a very complex environment. We need to get these two worlds to meet.

Second, it’s still early days from a hardware perspective. We are building these new devices from components designed for the mobile world or other electronics areas, rather than designing them from scratch. We need to go further in terms of battery life, design, performance, and the comfort of wearing these devices. A lot of things need further improvement for AR to really take off and meet the demands of the industry.

From a software perspective, of course, there are improvements needed as well. We are trying to contribute from our end on the interaction part, which I also think is very important, so that you can interact with these new wearable solutions in the way that’s needed. The methods you use, and the way it is done, are very important for the overall uptake on the end user side. At the end of the day, AR will really take off when we can get people to use wearables as part of their working environment and help them to get the “superpowers” these products can provide.

How important is gesture technology to the development and adoption of AR?

It is crucial, because we are providing the user with a new dimension. Designing for immersive environments is fundamentally different than designing for 2D flat screens. We’ve done a lot of studies into the development of user interfaces for AR solutions. To interact in three dimensions, you need a method that provides the capabilities you expect as a user, like interacting with objects and moving them around, and gestures can do exactly that.

Our mobile proximity technology provides another important part of the user experience: contextual awareness, a key technology for securing information relevance and efficient information exchange when performing tasks. We’re looking at a paradigm shift in AR user interfaces within the next two years. Our contribution is to provide the means for touchless interaction and contextual awareness, and to make that possible in AR.

What can you tell us about the future of gesture technology in augmented reality?

The limitations right now are in hardware. We can use a number of different sensors to enable gesture control, but most AR glasses and mobile solutions are based on standard 2D camera sensors. That limits the way in which we interact with gestures, especially in three dimensions. So, looking forward, I expect more advanced sensors will come to these devices, providing the depth map – the third dimension of information – needed to interpret gestures in all three dimensions. With that in place, we can go from having gestures as a menu-driven, pick-and-choose interaction to manipulating the environment you are working in with AR. That will be a huge change. Back to this paradigm shift: AR is one part of that shift, but the modes of interaction and how you build the user experience and user interface will also be completely different in a few years, which will totally change the appearance of these solutions.
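To make the depth-map point concrete: with per-pixel depth and the camera intrinsics, a 2D fingertip detection can be lifted into a 3D point using the standard pinhole camera model. A minimal sketch – the intrinsic values and pixel coordinates below are illustrative assumptions:

```typescript
// Pinhole camera intrinsics: focal lengths and principal point, in pixels.
interface Intrinsics { fx: number; fy: number; cx: number; cy: number; }

// Lift a pixel (u, v) with measured depth into a 3D point in camera space:
// X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
function deproject(u: number, v: number, depthMeters: number, k: Intrinsics) {
  return {
    x: ((u - k.cx) * depthMeters) / k.fx,
    y: ((v - k.cy) * depthMeters) / k.fy,
    z: depthMeters,
  };
}

// A fingertip detected at pixel (400, 260), 0.45 m from the sensor:
const k: Intrinsics = { fx: 600, fy: 600, cx: 320, cy: 240 };
console.log(deproject(400, 260, 0.45, k)); // 3D point in camera space
```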

How do you expect to benefit from being a member of the AREA, and which AREA activities might Crunchfish be involved in?

We are very much looking forward to being an active member in driving the user experience aspect of AR. I think we can contribute quite a lot in this space. For the last seven years, we have been working with gesture interaction and user experiences in mobile devices, and lately in virtual reality headsets and augmented reality glasses. Since 2014, we have been working on our mobile proximity technology that provides contextual awareness between entities such as smartphones, wearables, machines, physical areas and vehicles. In a defined “proximity bubble,” our technology enables these entities to seamlessly discover, connect and share information with each other. Besides contributing our user experience expertise, we will certainly gain valuable insights about enterprise challenges and barriers for AR adoption from our fellow AREA members. We’re excited about getting to know and work with pioneers and innovators in the industry.
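As a toy illustration of the “proximity bubble” idea described above – Crunchfish’s actual protocol is proprietary, so the entity model and radius here are assumptions:

```typescript
// Any discoverable thing: a smartphone, wearable, machine, or vehicle.
interface Entity { id: string; x: number; y: number; }

// Entities "discover" one another when they fall inside the bubble radius.
function discover(self: Entity, others: Entity[], bubbleRadius: number): Entity[] {
  return others.filter(
    (o) => o.id !== self.id && Math.hypot(o.x - self.x, o.y - self.y) <= bubbleRadius
  );
}

const me: Entity = { id: "phone-A", x: 0, y: 0 };
const nearby = discover(me, [
  { id: "machine-1", x: 2, y: 1 },   // inside a 5 m bubble
  { id: "vehicle-9", x: 40, y: 12 }, // outside
], 5);
console.log(nearby.map((e) => e.id)); // ["machine-1"]
```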

 




Three Lessons We’ve Learned Developing AR Solutions

Industries and enterprises are adopting AR solutions to strengthen their competitive advantage and get customers engaged in business activities. As an organization that has faced AR development challenges every day, we at Program-Ace have learned three essential lessons that could be handy for those seeking to create powerful and engaging augmented reality experiences.

1. AR helps tell a product’s story, so make it important for users.

Augmented reality technology enables storytelling. It makes us see everyday objects in a different light by making visible what was invisible, letting us visualize flat 2D images in three dimensions, and bringing inanimate objects to life. In other words, it has the capacity to humanize technology. This, in turn, dramatically increases the value and recognition of the product (or any other object of your choice). A good story not only heightens the sense of presence but also brings users closer to the product and engages them in the tech community.

To deliver valuable applications for the business world, Program-Ace conducts extensive market research covering existing products, potential competitors, and consumer behavior in both B2B and B2C markets, in order to uncover weaknesses and identify the most profitable opportunities. In our development adventures, the Program-Ace team has drawn one important conclusion: AR development is not just about the smooth integration of CG content with the physical environment; it is about connecting consumers to the virtual realm. Moreover, the app ideation process – the phase in which you create the concept, define the technological feasibility, and understand the time constraints – can also be supported with product usage data and with information about solutions already on the market, along with their strengths and weaknesses.

2. Gamification can be a successful way to drive user acceptance and productivity.

Augmented reality technology has had a significant influence on the development of various wearables, headsets, and head-mounted devices. And, of course, gamers are among the early adopters of these advanced accessories. For that reason, many people hold the opinion that it is necessary to develop games in order to be noticeable in the market. While that might be true for some industries, such as education and defense, when it comes to retail, government, or banking, you need a serious approach to the business. Still, gamification can be an effective approach for you.

Even though it originated in the gaming world, gamification has proved to be an extremely effective tool for user acquisition, virality, and customer conversion. At Program-Ace, we have long realized that companies should focus on what the gaming experience can bring to the AR application, instead of creating games. When you deliver proofs of concept to your clients using basic and advanced gamification features, such as multi-layered storytelling, competition, rewards, lifelike avatars, etc., you can drive user engagement and increase productivity.

3. Platform-specific apps are an endangered species.

Contrary to the conventional wisdom that one winning platform will soon monopolize the AR market, we see no indication of this yet. Instead, the market is full of products designed for different user needs and demands, and it is highly unlikely that the diversity of platforms will disappear in the next five years. Accordingly, our experience has taught us to build platform-agnostic applications. This cross-platform approach has worked well for our customers for more than 20 years, helping them pursue market leadership while remaining platform-independent and responsive to user requirements.

In multi-platform (or cross-platform) AR development, a single application is built so that it can be deployed to any platform, customized in advance to respect the features of each particular platform or device. In some cases, however, this approach is ineffective – especially when the target audience is used to native apps. In those situations, our team creates experiences aimed at a specific type of device. For instance, one of our mini-games, Archy the Rabbit, was initially designed cross-platform for iOS and Android. With the introduction of HoloLens, we ported it to that platform by changing the game UI, adding new features, and programming the app to recognize gestures, voice, and gaze. A combination of the Unity game engine and HoloToolkit helped our team develop important app functionality, such as spatial sound, voice recognition, and spatial mapping, with minimal effort and improved human-computer interaction (HCI).

Shaping the future

As the next phase of computing, augmented reality offers an opportunity to shape the future of HCI and technology itself. In order to be creative and deliver compelling AR experiences, we have begun to focus on the principles above. These lessons have enabled us to design applications that maximize the value of the technology. By remembering these AR development lessons, you can crystallize your thinking and focus your efforts on developing successful and engaging AR applications.

 

Anastasiia Bobeshko is the Chief Editor at Program-Ace.




Features Worth Seeking in an Augmented Reality SDK

Interest in AR SDKs has intensified since last year, when one of the leading solutions, Metaio, was sold to Apple, leaving an estimated 150,000+ developers in search of a replacement. Vuforia remains the market leader, but there are many good alternatives in the marketplace, some of which are already quite well known, such as EasyAR, Blippar, and Wikitude.

So, what criteria should a developer apply in evaluating AR SDKs? The answer to that question will vary. There are many factors developers need to consider in choosing an SDK, including key features and cost. Portability is another issue, since some SDKs only work on certain hardware.

However, there are a handful of key features and capabilities that all developers should look for when evaluating their next AR SDK:

  •  Cloud-based storage to support a greater number of 2D markers. 2D object tracking is the most basic form of mapping; it allows an application to recognize a flat surface, which can then be used to trigger a response, such as making a 3D image or effect appear on top of it, or playing a movie trailer where a poster used to be. This is simple to do and all SDKs support it; however, a key difference among SDKs is the number of markers that can be recognized. Many SDKs support around 100 markers as standard, but others allow a nearly unlimited number by using very fast cloud storage to hold a much larger database of markers. When an AR application can recognize more 2D objects, developers can create more robust applications that trigger more AR effects.
  • 3D object tracking. 3D object tracking expands the opportunities for AR developers by allowing 3D objects, such as a cup or a ball, to be used as AR markers that can then be recognized by the app to trigger an AR effect. This can be useful for advertising-related applications, and also for use in games. For example, toys can come alive and talk in AR because they can be recognized as unique entities by this type of tracking. While 3D tracking is not yet a universal capability among SDKs, it is becoming more common and affords a developer greater latitude in creating compelling, lifelike AR applications.
  •  SLAM support. Simultaneous Localization And Mapping has become an increasingly desirable feature in an AR SDK because it allows for the development of much more sophisticated applications. In layman’s terms, SLAM allows the application to build a map of the environment while simultaneously tracking its own movement through that environment. When done right, it gives the application depth information about where things are in a room. For example, if an AR image is appearing over a table, SLAM allows the application to remember where the table is and keep the AR image over it (see the sketch after this list). SLAM also allows users to look around a 3D image and move closer to or farther from it. It combines several different input formats and is very hard to do accurately. Some SDKs offer this functionality, but making it work smoothly is challenging and processor-intensive, particularly with a single camera. Look for an SDK that can handle SLAM effectively with a single camera.
  •  Unity support + native engine. For some applications, it is important that an SDK supports the Unity cross-platform game engine. Unity is one of the most accessible ways to produce games and other entertainment media, and it simplifies the development process, since Unity applications run on almost all hardware. Most SDKs operate through Unity to allow for some very sophisticated AR experiences. However, using Unity as a framework can be disadvantageous in certain applications because it is highly resource-intensive and can slow down AR experiences. As a result, some SDKs offer their own engines that run natively on iOS or Android, without the need for Unity. Native engines can deliver much smoother experiences with robust tracking on each platform, though they do require a separate coding team per platform. That is not an issue if a developer only plans to release on one platform; in that case, an application may run substantially faster when coded natively rather than through a Unity plug-in.
  • Wearables support. Smart glasses and other wearables allow AR experiences to be overlaid on the world we see before us, while offering a hands-free experience. As the use of wearables grows, developers producing content for future devices need to ensure that the software they are working with will support the devices they are building for.
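To make the SLAM bullet concrete, here is a minimal sketch of what “keeping the AR image over the table” means mechanically: the object is stored once in world coordinates, and each frame the SLAM-estimated camera pose re-expresses it in camera space. The pose values are illustrative; a real SDK supplies them per frame.

```typescript
type Vec3 = [number, number, number];
type Mat3 = [Vec3, Vec3, Vec3]; // rows of a rotation matrix

function sub(a: Vec3, b: Vec3): Vec3 {
  return [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
}

// R^T * v: dot each column of R with v (inverse of a rotation).
function mulTranspose(r: Mat3, v: Vec3): Vec3 {
  return [
    r[0][0] * v[0] + r[1][0] * v[1] + r[2][0] * v[2],
    r[0][1] * v[0] + r[1][1] * v[1] + r[2][1] * v[2],
    r[0][2] * v[0] + r[1][2] * v[1] + r[2][2] * v[2],
  ];
}

// Anchor placed on the table once, in world coordinates.
const anchorWorld: Vec3 = [1.0, 0.8, 2.0];

// Each frame, the SLAM tracker supplies the camera pose (rotation + position);
// the anchor is re-expressed in camera space so it stays put in the world.
function anchorInCameraSpace(cameraRotation: Mat3, cameraPosition: Vec3): Vec3 {
  return mulTranspose(cameraRotation, sub(anchorWorld, cameraPosition));
}

// Identity rotation, camera 1 m behind the anchor: object appears 1 m ahead.
const identity: Mat3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
console.log(anchorInCameraSpace(identity, [1.0, 0.8, 1.0])); // [0, 0, 1]
```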

When you have narrowed down your candidate SDKs based on these and other evaluation criteria, I recommend that you try them out. Many providers offer free trial versions that may include a subset of the features found in their professional versions. Trying an SDK will show you whether its interface suits your style of working and the type of application you are developing.

My final piece of advice is to examine the costs of SDKs carefully. Some have licensing models that are priced on the number of applications downloaded or AR toys sold. This may be the most critical purchase criterion, particularly for independent developers.

Albert Wang is CTO of Visionstar Information Technology (Shanghai) Co., Ltd., an AREA member and developer of the EasyAR SDK.




Global Smart Glass Market 2014-2021

This report provides value chain analysis, market attractiveness analysis, and company share analysis, along with complete profiles of the key players.

Information about smart glass:

Smart glass, also known as switchable glass, is glass that can alter its light transmission properties when voltage, light, or heat is applied. It is used in windows, skylights, doors, and partitions, and its applications have extended into the automotive, aircraft, and marine industries.

The smart glass market is segmented by type into architectural, electronics, solar power generation, and transportation, with architectural being the largest segment.

According to the report, the market is estimated to grow at a significant rate over the next few years. The major drivers of this growth are expected to be the architectural and transportation sectors, though the report states that energy-efficient building technologies will also contribute.

Some key facts about this Market Report:

  • The electronics segment is expected to be a promising market, owing to innovation and research in highly advanced devices such as digital eyeglasses and screens.
  • Certain factors are inhibiting the growth of the global smart glass market: costs comparable to those of substitute products, and a lack of awareness of its benefits.
  • North America accounts for the major share of the global smart glass market.
  • The European market is expected to overtake the North American market during the forecast period, as a result of increasing demand for large, advanced windows in residential and commercial architectural structures.
  • Further, the market is distributed across Latin America, Asia-Pacific, Western Europe, Eastern Europe, and the Middle East & Africa.

 




Mixed Reality: Just One Click Away

Author: Aviad Almagor, Director of the Mixed Reality Program, Trimble, Inc.

Though best known for GPS technology, Trimble is a company that integrates a wide range of positioning technologies with application software, wireless communications, and services to provide complete commercial solutions. In recent years, Trimble has expanded its business in building information modeling, architecture and construction, particularly since the company’s 2012 acquisition of SketchUp 3D modeling software from Google. Mixed Reality is becoming a growing component of that business. This guest blog post by Trimble’s Aviad Almagor discusses how Trimble is delivering mixed reality solutions to its customers.

Many industries – from aerospace to architecture/engineering/construction (AEC) to mining – work almost entirely in a 3D digital environment. They harness 3D CAD packages to improve communication, performance, and the quality of their work. Their use of 3D models spans the full project lifecycle, from ideation to conceptual design and on to marketing, production, and maintenance.

Take AEC, for example. Architects design and communicate in 3D. Engineers design buildings’ structures and systems in 3D. Owners use 3D for marketing and sales. Facility managers use 3D for operation and maintenance.

And yet, we still consume digital content the same way we have for the last 50 years: behind a 2D screen. For people working in a 3D world, the display technology has become a limiting factor. Most users of 3D content have been unable to visualize the content their jobs depend on in full 3D in the real world.

However, mixed reality promises to change that. Mixed reality brings digital content into the real world and supports “real 3D” visualization.

The challenge

There are several reasons why mixed-reality 3D visualization has not yet become an everyday reality. Two of the primary reasons are the user experience and the processing requirements.

For any solution to work, it needs to let engineers, architects, and designers focus on their core expertise and tasks, following their existing workflow. Any technology that requires a heavy investment in training or major changes to the existing workflow faces an uphill battle.

Meanwhile, 3D models have become increasingly detailed and complex. It is a significant challenge – even for beefy desktop workstations – to process large models and support visualization at 60 fps.

One way around that problem is to use coding and specialized applications and workflows, but that approach is only acceptable to early adopters and innovation teams within large organizations – not the majority of everyday users.

To support real projects and daily activities – and be adopted by project engineers — mixed reality needs to be easily and fully integrated into the workflow. At Trimble, we call this “one-click mixed reality” – getting data condensed into a form headsets can handle, while requiring as little effort from users as possible.

Making one-click mixed reality possible

The lure of “one-click” solutions is strong. Amazon has its one-click ordering. Many software products can be downloaded and installed with a single click. The idea of one-click mixed reality is to bring that ease and power to 3D visualization.

Delivering one-click mixed reality requires a solution that extends the capabilities of existing tools by adding mixed reality functionality without changing the current workflow. It must be a solution that requires little or no training. And any heavy-lifting processing that’s required should be done in the background. From a technical standpoint, that means any model optimization – including polygon count reduction, occlusion culling, and texture handling – is performed automatically, without the need for manual, time-consuming specialized processes.
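As a small sketch of what such background optimization involves – the triangle budget and data model below are illustrative assumptions, not Trimble’s actual pipeline:

```typescript
// A model as the optimizer sees it: a name and a polygon (triangle) count.
interface Model { name: string; triangleCount: number; }

// Illustrative rendering budget for a mobile headset (an assumption).
const HEADSET_TRIANGLE_BUDGET = 100_000;

// Decide how aggressively to decimate a model to fit the budget:
// 1.0 means "keep as-is"; 0.25 means "reduce to a quarter of the triangles".
function decimationRatio(model: Model): number {
  return Math.min(1, HEADSET_TRIANGLE_BUDGET / model.triangleCount);
}

// A large architectural model needs heavy reduction before publishing.
const plant: Model = { name: "PackardPlant", triangleCount: 2_400_000 };
console.log(decimationRatio(plant).toFixed(3)); // "0.042"
```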

At Trimble, we’re working to deliver one-click mixed reality by building on top of existing solutions. Take SketchUp for example, one of the most popular 3D packages in the world. We want to make it possible for users to design a 3D model in SketchUp, click to publish it, and instantly be able to visualize and share their work in mixed reality.

We’re making sure that we support users’ existing workflow in the mixed reality environment. For example, we want to enable users to use scenes from SketchUp, maintain layer control, and collaborate with other project stakeholders in the way they’re accustomed to.

And we’re taking it one step further by making it possible to consume models directly from SketchUp or from cloud-based environments, such as SketchUp 3D Warehouse or Trimble Connect. This will eliminate the need to install SketchUp on the user’s device in order to visualize the content in mixed reality. As a next step, we are exploring with our pilot customers a cloud-based pre-processing solution which will optimize models for 3D visualization.

We’re making good progress. For example, in his Packard Plant project (which was selected to represent the US at the Venice Architecture Biennale), architect Greg Lynn used SketchUp and SketchUp Viewer for Microsoft HoloLens to explore and communicate his design ideas. In this complex project, a pre-processing solution was required to support mixed reality visualization.

“Mixed reality bridges the gap between the digital and the physical. Using this technology, I can make decisions at the moment of inception, shorten the design cycle, and improve communication with my clients.”

– Architect Greg Lynn

One-click mixed reality is coming to fruition. For project teams, that means having the ability to embed mixed reality as part of their daily workflow. This will enable users to become immediately productive with the technology, gain a richer and more complete visualization of their projects, and build on their existing processes and tools.

The advent of one-click mixed reality indicates that the world of AR/VR is rapidly approaching the time when processing requirements, latency, and user experience issues will no longer be barriers.

Aviad Almagor is Director of the Mixed Reality Program at Trimble, Inc.




The 1st AREA Ecosystem Survey is Here!

The Augmented Reality (AR) marketplace is evolving so rapidly, it’s a challenge to gauge the current state of market education, enterprise adoption, provider investment, and more. What are the greatest barriers to growth? How quickly are companies taking pilots into production? Where should the industry be focusing its efforts? To answer these and other questions and create a baseline to measure trends and momentum, we at the AREA are pleased to announce the launch of our first annual ecosystem survey.

Please click here to take the survey. It won’t take more than five minutes to complete. Submissions will be accepted through February 8, 2017. We’ll compile the responses and share the results as soon as they’re available.

Make sure your thoughts and observations are captured so our survey will be as comprehensive and meaningful as possible. Thank you!




The AREA Issues Call for Proposals for an AR Research Project

The AREA has issued a request for proposals for a funded research project that its members will use to better understand relevant data security risks associated with wearable enterprise AR and mitigation approaches.

Organizations with expertise in the field of data security risks and mitigation and adjacent topics are invited to respond to the invitation by January 30, 2017.

The goals of the AREA-directed research project are:

  • To clarify questions about enterprise data security risks when introducing enterprise AR using wearables
  • To define and perform preliminary validation of protocols that companies can use to conduct tests and assess risks to data security when introducing wearable enterprise AR systems

The research project will produce:

  • An AREA-branded in-depth report that: details the types of data security risks that may be of concern to IT managers managing AR delivery devices and assets; classifies the known and potential threats to data security by severity level; and proposes risk mitigation measures
  • An AREA-branded protocol for testing wearable enterprise AR devices for their hackability or data exposure threat levels
  • An AREA-branded report documenting the use of the proposed protocol to test devices for their security exposure threat levels.

All proposals will be evaluated by the AREA research committee co-chairs on the following criteria:

  • Demonstrated knowledge and use of industry best practices for research methodology
  • Clear qualifications of the research organization and any partners in the domain of data security threats and mitigation and, where possible, AR
  • Review of prior research report and testing protocol samples
  • Feedback from references

The AREA will provide detailed replies to submitters on or before February 13, 2017. The research project is expected to be completed and finished deliverables produced by May 1, 2017.

Full information on the request for proposals, including a submission form, can be found here.