Crunchfish CEO on Gesture Technology and the Future of AR
Gesture interaction and proximity software developer Crunchfish recently joined the AREA. We spoke with Crunchfish CEO Joakim Nydemark to get his perspective on AR adoption and the role of gesture interaction and contextual awareness in its future.
How would you describe the state of the AR ecosystem today?
It’s a very exciting period within AR. We have big actors like Google and Apple pushing AR in their new devices and new tools like ARCore and ARKit. And from an industrial perspective, we see a lot of companies starting to see the potential AR can bring. But it’s still a challenge to connect these industry actors with providers of software and hardware to create a total solution.
As a technology provider, we play a role in several segments of this AR ecosystem, including device vendors, software vendors and system integrators, where we utilize our gesture control and proximity technology to enable features. It’s an exciting ecosystem, but also one that’s at an early stage and that needs groups like the AREA. The AREA provides a meeting place where the different creators can share and jointly develop these new solutions.
What do you think are the major obstacles to widespread AR adoption now?
There are several things. It is largely a matter of bringing industry know-how to where this new technology can make a difference. From a technology solution perspective, we need not only to provide the hardware or software solutions, but also to map them to the needs of the industry, which is a very complex environment. We need to get these two worlds to meet.
Second, it’s still early days from a hardware perspective. We are building these new devices from components borrowed from the mobile world or other electronics areas, rather than designing them from scratch. We will need to make further progress on battery life, design, performance, and the comfort of wearing these devices. A lot of things need further improvement before these devices can really take off and meet the demands of the industry.
From a software perspective, of course, there are improvements needed as well. We are trying to contribute from our end on the interaction part, which I also think is very important, so that you can interact with these new wearable solutions in the way that’s needed. The methods you use, and the way it is done, are very important for the overall uptake on the end user side. At the end of the day, AR will really take off when we can get people to use wearables as part of their working environment and help them to get the “superpowers” these products can provide.
How important is gesture technology to the development and adoption of AR?
It is crucial, because we are providing the user with a new dimension. Designing for immersive environments is fundamentally different from designing for flat 2D screens. We’ve done a lot of studies on the development of user interfaces for AR solutions. To interact in three dimensions, you need a method that provides the capabilities you expect as a user, like interacting with objects and moving them around, and gestures can do exactly that.
Our mobile proximity technology provides another important part of the user experience: contextual awareness, a key technology for securing information relevance and efficient information exchange when performing tasks. We’re looking at a paradigm shift in the AR user interface within the next two years. Our contribution is to provide the means for touchless interaction and contextual awareness, and to make that possible in AR.
What can you tell us about the future of gesture technology in augmented reality?
The limitations are currently in hardware. We can use a number of different sensors to enable gesture control, but most AR glasses and mobile solutions are based on standard 2D camera sensors. That limits the ways in which we can interact with gestures, especially in three dimensions. So, looking forward, I expect more advanced sensors will come to these devices, providing the depth map – the third dimension of information – that is needed to interpret gestures in all three dimensions. With that in place, we can go from gestures as a menu-driven, pick-and-choose interaction to directly manipulating the environment you are working in with AR. That will be a huge change. To come back to the paradigm shift: AR is one part of it, but the modes of interaction and the way you build the user experience and user interface will also be completely different in a few years, which will totally change the appearance of these solutions.
How do you expect to benefit from being a member of the AREA, and which AREA activities might Crunchfish be involved in?
We are very much looking forward to being an active member in driving the user experience aspect of AR. I think we can contribute quite a lot in this space. For the last seven years, we have been working with gesture interaction and user experiences in mobile devices, and lately in virtual reality headsets and augmented reality glasses. Since 2014, we have been working on our mobile proximity technology that provides contextual awareness between entities such as smartphones, wearables, machines, physical areas and vehicles. In a defined “proximity bubble,” our technology enables these entities to seamlessly discover, connect and share information with each other. Besides contributing our user experience expertise, we will certainly gain valuable insights about enterprise challenges and barriers for AR adoption from our fellow AREA members. We’re excited about getting to know and work with pioneers and innovators in the industry.