ReconstructMe Big Picture

ReconstructMe enables you to create a three-dimensional virtual model of real-world objects or scenes. To do so, you simply walk around with a 3D video capture device – such as Microsoft's Kinect sensor – and film the object(s) you are interested in. Filming the objects from as many different views as possible gives you a detailed model of the real-world object or scene. If you film the scene from 360 degrees, you also get the virtual scene in 360 degrees. Once the virtual environment model is created, a surface model can be generated.
ReconstructMe consists of three core components: data representation, camera tracking, and 3D surface generation. The scene data is represented in a volume described by voxels. The depth images of the 3D video capture device are integrated into this volume: the values of the voxels that are hit are recalculated, depending on the camera position relative to the volume. To calculate the camera position, frame t-1 is registered to frame t, which yields the transformation between them. Doing this for every frame, you always know the current camera position relative to the previous frame (and, by composing the transformations, relative to the start position). Thanks to the volume representation, you do not have to store each depth image together with its transformation, which would amount to a huge quantity of data. Instead, a voxel can be hit multiple times by different depth images, which also smooths out the noise of the 3D video capture device.
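The two ideas above – chaining per-frame transformations into a global pose, and letting repeated voxel hits average out sensor noise – can be sketched in a few lines. This is an illustrative simplification, not ReconstructMe's actual implementation; the function names and the simple weighted average are my own:

```python
import numpy as np

def compose_pose(global_pose, frame_to_frame):
    """Chain the transformation registered between frame t-1 and frame t
    onto the accumulated pose, giving the camera pose relative to the
    start position. Poses are 4x4 homogeneous matrices."""
    return global_pose @ frame_to_frame

def integrate_depth(voxel_value, voxel_weight, measurement, new_weight=1.0):
    """Weighted running average for one voxel: each new depth image that
    hits the voxel is blended with all previous measurements, which
    smooths out the sensor noise."""
    total = voxel_weight + new_weight
    value = (voxel_value * voxel_weight + measurement * new_weight) / total
    return value, total

# Camera tracking: three identical 10 cm steps along x compose to 30 cm.
pose = np.eye(4)
step = np.eye(4)
step[0, 3] = 0.1
for _ in range(3):
    pose = compose_pose(pose, step)
print(pose[0, 3])  # ~0.3

# Voxel integration: three noisy readings of the same surface converge
# toward the true value.
v, w = 0.0, 0.0
for m in (0.52, 0.48, 0.50):
    v, w = integrate_depth(v, w, m)
print(v)  # ~0.50
```

Because only the averaged voxel values are kept, the memory cost is independent of how many frames were filmed.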
A current CPU (such as an Intel i7) cannot process the enormous amount of data received from a 3D capture device in real time. Due to the highly parallel hardware design of GPUs, the data can be processed efficiently on current graphics cards (such as an AMD Radeon HD 6870 or an NVIDIA GTX 560).

6 thoughts on “ReconstructMe Big Picture”

  1. Martin

    Hi,
    just one question: what about the precision?
    How exact will the scanned object be?
    Is it possible to scan an object with an accuracy of about 1 or 2 mm?

    Thank you Martin

    1. Christoph Heindl

      It depends on the size and resolution of the volume, the accuracy of the sensor, and its calibration. I'd say that you will come close to 2-5mm accuracy for smaller volumes.
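      A back-of-envelope way to see how volume size and resolution interact: the edge length of a single voxel is the volume extent divided by the grid resolution. The numbers and the function name below are illustrative, not ReconstructMe settings:

```python
def voxel_size_mm(volume_extent_m, resolution):
    """Edge length of one voxel, in millimetres, for a cubic volume
    of the given extent (metres) sampled at the given grid resolution."""
    return volume_extent_m / resolution * 1000.0

print(voxel_size_mm(1.0, 256))  # 3.90625 -> ~4 mm voxels
print(voxel_size_mm(0.5, 256))  # 1.953125 -> ~2 mm voxels
```

      So shrinking the volume (or raising the resolution) directly shrinks the voxel size, and sub-voxel interpolation can push the effective accuracy somewhat below the raw voxel size.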

  2. csk varma

    How would the available memory on the GPU impact the final resolution of the 3D surface? Since you are using a voxel-grid-like structure, the GPU memory would limit the resolution of the grid and hence the final model. Please correct me if I am wrong. What is the typical GPU memory size needed for a good resolution of the final 3D model? One more question regards lower-end GPUs. Since lower-end GPUs may not have as many cores as the GTX 560 (384 cores) or the Radeon HD 6870 (1120 stream processors), could ReconstructMe be run on lower-end GPUs such as the GT 540M (96 cores), which are found in typical laptops?

    1. Christoph Heindl

      Yes, the available GPU memory limits the resolution of the final model. However, as we apply sub-voxel interpolation we are able to reduce the effect of quantization a lot.
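      To get a feeling for the numbers, a cubic grid of resolution³ voxels needs resolution³ times the per-voxel storage. The assumption of 4 bytes per voxel (e.g. a distance value plus a weight) is mine, for illustration:

```python
def voxel_grid_bytes(resolution, bytes_per_voxel=4):
    """Memory footprint of a cubic voxel grid: resolution^3 voxels,
    each storing bytes_per_voxel bytes (assumed: value + weight)."""
    return resolution ** 3 * bytes_per_voxel

print(voxel_grid_bytes(256) / 2**20)  # 64.0 MiB
print(voxel_grid_bytes(512) / 2**20)  # 512.0 MiB
```

      Doubling the resolution multiplies the memory by eight, which is why GPU memory quickly becomes the limiting factor.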

      Larger volumes, both in size and resolution, will have a negative impact on the performance of ReconstructMe. So yes, lower-end GPUs will perform worse than high-end GPUs.

      However, ReconstructMe offers the ability to record data streams and process them later on. This way you won't miss a frame and can reconstruct your previously recorded stream overnight, even on a CPU (though we currently have some problems with OpenCL on CPUs).

