QU3ST - 2.5D Sensing Technologies in Motion: The Quest for 3D
11 May 2012
Category: International conference
Many workshops and seminars already exist on SfM/SfX, non-classical imaging, or active sensors. This one is, in our view, unique insofar as it takes into account the very recent advances in sensor technologies and intends to bring together different communities: different fields or sub-fields of research (machine vision, computer vision, robotics, multiple-view geometry, active sensing), different levels of interest and involvement (manufacturers, researchers, users), and different scientific heritages and backgrounds. We all share common points of interest: how can we build a 3D model of the environment, and how can a given application benefit from it? Our intention is to identify a common language and a generic, sensor-independent methodology to best describe, analyse, and handle these points of interest. The workshop will help stimulate discussion and reveal new challenges and prospects in SfM/SfX.
A selection of the best papers presented at the workshop will be published in a Special Issue of IET Computer Vision.
Deadline: June 27, 2012
Structure-from-Motion (SfM) is a passive and now mature methodology that facilitates the 3D reconstruction of sparse features while estimating the camera’s displacement over time. SfM has recently been given a unifying framework based on the concept of generic camera models.
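As a toy illustration of the "structure" part of SfM, the sketch below triangulates a single 3D point from its projections in two views with known camera matrices, using the standard linear (DLT) method. This is a generic textbook construction, not a method specific to the workshop; the function name and numpy-only setup are our own.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image points (in normalized or pixel coordinates,
            consistent with the projection matrices).
    Returns the 3D point in Euclidean coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

In a full SfM pipeline this step runs after feature matching and relative-pose estimation, once the projection matrices of the two views are known.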
Recent advances in active sensor technologies (Time-of-Flight cameras, structured-light systems such as the COMET, RGB-D cameras such as the Kinect, LIDARs such as the Velodyne, and so on) make it possible to capture a 2.5D view of the scene in a single shot. These sensors are gaining popularity in robotics, cultural heritage, and computer-aided surgery, among other fields. However, they usually do not solve the temporal registration problem, which is required to produce a 3D model by stitching together the multiple 2.5D views.
The scientific community has recently tackled the problem of 2.5D sensor motion tracking, but a unifying model has not yet been proposed.
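In the rigid case with known point correspondences, the temporal registration of two 2.5D views reduces to a least-squares pose estimate, classically solved in closed form with the Kabsch/SVD method. The sketch below is a minimal, self-contained version of that solution; the function name and setup are illustrative, not from any specific system mentioned above.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid alignment of corresponding 3D point sets.

    P, Q: (N, 3) arrays with P[i] corresponding to Q[i].
    Returns (R, t) such that Q ~= P @ R.T + t.
    """
    # Center both point sets.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered sets.
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice correspondences between consecutive 2.5D views are unknown, which is why iterative schemes such as ICP alternate between matching points and applying a closed-form step like this one.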
This workshop intends to gather the two communities working on active and passive 3D vision with the goal of integrating active and passive technologies in a unifying framework. More specifically, this workshop will address, but will not be limited to, the following questions and issues:
- Pose estimation of active sensors;
- Active sensors in dynamic environments;
- Rigid and non-rigid scenarios;
- Optical flow computation from 3D active sensing;
- 2D/3D fusion;
- Structure-from-motion vs motion-from-structure (registration);
- 3D models from multiple 2.5D views;
- Optimization of the 3D reconstruction from multiple shots;
- Data structures for large-scale/long-sequence reconstruction.
Our objective is to identify algorithms and methodologies that would advance the field towards a generic structure-and-motion from structure-and-motion scheme suited to both passive and active sensors and to different fields of application (robotics, medical imaging, and industrial inspection). The workshop will also be an opportunity to bring together manufacturers of 3D capture devices, scientists from computer vision and robotics, and end users.
The key outcomes of the workshop will be to establish an overview of i) the recent advances in sensor technologies that enable 3D active sensing; ii) the applications that can benefit from these advances; and iii) the state of the art in motion recovery from live 3D sensing; and iv) to foster a discussion of the key challenges and opportunities of SfM/SfX based on these new sensors.
Prospective authors are invited to submit a ten-page paper (ECCV format) via the workshop web page. Papers will be reviewed by an international Program Committee. Accepted papers will be presented in single-track oral sessions as well as a poster session (all papers are allocated the same number of pages), and will be published on the DVD included in the main conference proceedings.