In our research project, we will develop new data structures and real-time methods that allow users to
intuitively feel and manipulate 3D scan data while it is still being acquired.
Combining virtual and augmented reality displays with portable range sensors makes it possible to
immerse users in an experience of 3D data that has just been captured live. One challenge is to design
operations that clean, transform and structure the raw data fast enough to provide a lag-free user
experience. The other is to structure the data so as to enable new ways of interacting with the scene,
decoupled from physics-based metaphors such as walking or flying.
Through this project, we introduce a paradigm shift for navigation in virtual and mixed
environments. Furthermore, we expect the proposed data structure and the implemented methods to
increase the speed of human-computer interaction in such environments. The expected advances
in conducting virtual experiences directly contribute to several other basic and applied research
efforts. Applications include, but are not limited to, medical healthcare via 3D visualization of
2D CT scans, geology and geophysics via structure measurement and analysis of LIDAR surface
data, engineering and prototype design (e.g., cars and aircraft), as well as physics, biology and
astronomy. Other possible applications loosely related to research include military training,
crime-scene reconstruction, and tourism.
We propose a new view-dependent data structure that permits efficient connectivity
creation and traversal of unstructured data, while classifying occlusions at no extra cost.
Based on this data structure, we will develop new methods for fast surface recovery, collision
detection, as well as browsing and interactive manipulation of dynamic environments.
The new data structure will also allow quick access to occluded layers in the current view. This
enables new methods to explore, manipulate and edit 3D scenes that go beyond interaction methods
relying on physics-based metaphors such as walking or flying. In a way, this lifts interaction with
3D environments to a superhuman level.
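To make the idea of view-dependent layer access more tangible, the following C++ sketch shows one
possible realization under our own simplifying assumptions: a screen-space grid of depth-sorted sample
buckets. The names ViewGrid, Sample, insert and depthLayer are purely illustrative and not part of the
proposed data structure; the sketch only demonstrates how keeping samples ordered along each viewing
ray exposes the visible surface as layer 0 and the occluded layers behind it, so that occlusion
classification falls out of the insertion order.

    // Illustrative sketch only (hypothetical names); not the project's actual data structure.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Sample {
        float x, y, z;   // world-space position of a scanned point
        float depth;     // distance along the viewing ray
    };

    class ViewGrid {
    public:
        ViewGrid(int width, int height)
            : width_(width), height_(height),
              cells_(static_cast<std::size_t>(width) * height) {}

        // Project a point into cell (px, py) and keep the cell sorted by depth.
        // Everything behind the front-most sample of a cell is, by construction,
        // occluded in the current view.
        void insert(int px, int py, const Sample& s) {
            auto& cell = cells_[static_cast<std::size_t>(py) * width_ + px];
            auto it = std::lower_bound(cell.begin(), cell.end(), s,
                [](const Sample& a, const Sample& b) { return a.depth < b.depth; });
            cell.insert(it, s);
        }

        // Access the k-th depth layer of a cell: k == 0 is the visible surface,
        // k >= 1 are the occluded layers behind it.
        const Sample* depthLayer(int px, int py, std::size_t k) const {
            const auto& cell = cells_[static_cast<std::size_t>(py) * width_ + px];
            return k < cell.size() ? &cell[k] : nullptr;
        }

    private:
        int width_, height_;
        std::vector<std::vector<Sample>> cells_;
    };

Connectivity creation and traversal would, in such a layout, amount to visiting neighboring grid cells
within the same depth layer; we omit that part here, and the structure actually developed in the
project may differ substantially from this sketch.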
The distinguishing contribution of our project is that we cut short the time required to transform
scanned 3D data into a structured form that permits browsing through the scene, as well as touching
and editing the reconstructed surfaces.
Post-doc Dr. Stefan Ohrhallinger will be the principal investigator, together with Prof. Dr. Michael
Wimmer, head of the Rendering and Modeling group at the Institute of Computer Graphics and
Algorithms. PhD student Mohamed Radwan will also work on this project.