3D Vision Topics at the 12th European Conference for Visual Media Production

As computing power increases and costs decrease, many technologies previously confined to motion picture studios, such as capturing temporally consistent 3D models of dynamic scenes from real-world imagery, are becoming practical in other settings. Consequently, the European Conference for Visual Media Production, long a forum for computer graphics and video research for motion pictures, is becoming increasingly relevant as a venue where research valuable to enterprise Augmented Reality is presented and published.

One example is the paper presented by Dr. Stefan Rueger of the Knowledge Media Institute (KMi) of the Open University on new methods for monocular markerless motion capture. Another is the keynote given by Dr. Lourdes Agapito, Professor of 3D Vision and a member of the Vision and Imaging Science group and the Centre for Inverse Problems in the Department of Computer Science at University College London (UCL). Her research in Computer Vision has consistently focused on inferring 3D information from video acquired with a single moving camera.

Agapito’s keynote focused on a “model-free” framework developed by her group at UCL to acquire fully dense, “per-pixel” 3D models of deformable objects solely from video. A template-based, sequential, and direct method of tracking deformable surfaces with only an RGB camera could be highly valuable for Augmented Reality. In addition, Agapito described a unified approach to 3D modelling of dynamic scenes that simultaneously segments the scene into different objects and decomposes those objects into parts while reconstructing them in 3D, yielding more semantically meaningful 3D representations of a scene. She concluded by discussing recent work that exploits correlations in 3D shape variation across objects of the same class to address the problem of category-based reconstruction.
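To give a sense of why a single moving RGB camera can recover deforming 3D shape at all, the sketch below illustrates the classic low-rank shape-basis model that underlies much monocular non-rigid reconstruction research. It is a hypothetical, self-contained NumPy example on synthetic data with an orthographic camera, not the method presented in the keynote; all names and parameter values are illustrative.

```python
import numpy as np

# Illustrative sketch (not the keynote's pipeline): each frame's 3D shape is a
# linear combination of K basis shapes. Under an orthographic camera the stacked
# 2D point tracks then form a matrix whose rank is at most 3K, which is the
# structure that makes recovery from a single moving camera tractable.

rng = np.random.default_rng(0)
F, P, K = 60, 200, 3                        # frames, tracked points, shape bases

bases = rng.normal(size=(K, 3, P))          # K basis shapes, each 3 x P
coeffs = rng.normal(size=(F, K))            # per-frame deformation weights

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q

W = np.zeros((2 * F, P))                    # stacked 2D measurements (2 rows per frame)
for f in range(F):
    S_f = np.tensordot(coeffs[f], bases, axes=1)   # 3 x P shape at frame f
    R_f = random_rotation(rng)[:2]                 # first two rows: orthographic projection
    W[2 * f:2 * f + 2] = R_f @ S_f

# The noise-free measurement matrix has rank bounded by 3K
singular_values = np.linalg.svd(W, compute_uv=False)
numerical_rank = int(np.sum(singular_values > 1e-8 * singular_values[0]))
print(f"numerical rank = {numerical_rank}, bound 3K = {3 * K}")
```

Running the sketch prints a numerical rank of 9 for 120 x 200 measurements, matching the 3K bound; real methods exploit this low-rank structure (together with priors and dense photometric terms) to factor observed tracks back into camera motion and deforming shape.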

While these frameworks and approaches do not currently run in real time, they could increase the ease and lower the cost of capturing real-world environments, permitting more frequent capture and more reliable recognition and tracking of static and dynamic objects in real-world scenes.
