Some of my latest research has focused on reconstructing the 3D shape of objects in real time from noisy, sparse depth data (from, say, a Kinect), using efficient algorithms based on Martin Hermann et al.'s Voxel Depth Carving.
A key insight we're pushing is that the negative data in a depth measurement (that is, information about what is not part of the object) is often much more informative than the positive data. This has led us to avoid point clouds altogether in favor of much richer ray clouds for representing depth data. One advantage of this representation is that it tends to be much more robust to noise, and it naturally handles missing data.
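To make the negative-data idea concrete, here is a minimal sketch of carving free space along depth rays through a voxel grid. The function name, grid layout, and fixed-step ray marching are my own illustrative choices, not the paper's actual algorithm: every voxel a ray passes through before its measured depth is provably empty, which is exactly the information a point cloud throws away.

```python
import numpy as np

def carve_free_space(grid_shape, origin, rays, depths, voxel_size=1.0, step=0.5):
    """Mark voxels as known-free along each depth ray (illustrative sketch).

    Every voxel starts as 'unknown' (0); each voxel a ray traverses
    before reaching its measured depth is carved to 'free' (1).
    """
    free = np.zeros(grid_shape, dtype=np.uint8)
    for ray, depth in zip(rays, depths):
        direction = ray / np.linalg.norm(ray)
        # Step along the ray, stopping just short of the measured surface.
        for t in np.arange(0.0, depth, step * voxel_size):
            idx = np.floor((origin + t * direction) / voxel_size).astype(int)
            if np.all(idx >= 0) and np.all(idx < np.array(grid_shape)):
                free[tuple(idx)] = 1
    return free

# Toy example: one ray from the grid corner along +x with measured depth 4,
# so voxels 0..3 along x are carved free and voxel 4 (the surface) is not.
free = carve_free_space((8, 8, 8), np.zeros(3), [np.array([1.0, 0.0, 0.0])], [4.0])
```

A real implementation would use an exact voxel traversal (e.g. Amanatides-Woo DDA) instead of fixed-step sampling, but the sketch shows the core idea: the carved region is evidence in its own right.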
Another advantage of the technique is that it allows us to reason about the unknown regions of space and make probabilistic statements about the object at those locations, something which is not possible when one considers only the point cloud. The robot can construct strongly principled priors about the shapes of objects and make reasoned inferences about the unknown parts of an object, rather than throwing away data or fitting a model beforehand.
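The distinction can be sketched as a three-way per-voxel belief. This is a hypothetical toy, not our actual model: carved voxels are known-empty, voxels at measured surfaces are known-occupied, and everything else keeps a prior probability that downstream inference can update.

```python
import numpy as np

def occupancy_belief(free_mask, surface_mask, prior=0.3):
    """Per-voxel occupancy probability (toy illustration).

    Unknown voxels keep the prior; carved (seen-through) voxels are
    empty with certainty; observed surface voxels are occupied.
    """
    p = np.full(free_mask.shape, prior)
    p[free_mask.astype(bool)] = 0.0      # ray passed through: known empty
    p[surface_mask.astype(bool)] = 1.0   # depth return here: known occupied
    return p

# Toy 2x2x2 grid: one carved voxel, one observed surface voxel.
free_mask = np.zeros((2, 2, 2)); free_mask[0, 0, 0] = 1
surface_mask = np.zeros((2, 2, 2)); surface_mask[1, 1, 1] = 1
p = occupancy_belief(free_mask, surface_mask)
```

A point cloud alone would collapse the "known empty" and "unknown" cases into a single absence of points, which is exactly the information this representation preserves.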
In an ongoing research effort, we are using the Voxel Depth Carving technique along with kernel regression to learn the three-dimensional distance field of the object, using passthroughs of unoccupied space as constraints.
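As a rough sketch of the regression step, consider Nadaraya-Watson kernel regression over distance samples, where surface hits contribute distance zero and points sampled along unoccluded rays contribute positive distances as free-space constraints. The kernel choice, bandwidth, and sample labeling below are assumptions for illustration, not the method we actually use.

```python
import numpy as np

def kernel_distance_field(samples, values, query, bandwidth=0.5):
    """Nadaraya-Watson estimate of a distance field at `query`.

    `samples` are 3D points with known distances `values`: surface hits
    get distance 0, and free-space points along rays get positive
    distances, acting as soft constraints on the field.
    """
    diffs = query[None, :] - samples                # (n, 3) offsets
    sq_dist = np.sum(diffs ** 2, axis=1)
    w = np.exp(-sq_dist / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
    return float(np.sum(w * values) / (np.sum(w) + 1e-12))

# Toy example: surface samples on a unit sphere (distance 0) plus
# free-space samples half a unit further out along each ray (distance 0.5).
rng = np.random.default_rng(0)
surf = rng.normal(size=(200, 3))
surf /= np.linalg.norm(surf, axis=1, keepdims=True)
free_pts = surf * 1.5
samples = np.vstack([surf, free_pts])
values = np.concatenate([np.zeros(200), np.full(200, 0.5)])

# Query midway between the two shells: the estimate interpolates between them.
d_est = kernel_distance_field(samples, values, np.array([0.0, 0.0, 1.25]))
```

In practice one would fit the field at many query points over the carved grid; the sketch only shows how free-space passthroughs enter the regression alongside surface observations.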
This research will likely be published early in the next academic year.