Did you know that ReconstructMe adapts itself to changing environments? We’ve put a video online to demonstrate the effect.
This video shows how ReconstructMe handles dynamic environments: it smoothly updates its state as changes occur, which lets us use ReconstructMe where adaptiveness is required. Of course, the changes have to be confined to a subset of the volume. If the content of the entire volume changes at once, tracking failure detection will kick in.
The following video shows a brand new feature we’ve just added to our main development branch: Automatic Volume Stitching. In this video, a new volume is created whenever the sensor leaves the current one. Transformations between volumes are tracked automatically and used to reconstruct a complete surface model. This allows us to keep the volumes small and the resolution per volume high, yielding an accurate surface.
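The idea behind stitching can be sketched in a few lines: each volume carries a tracked 4×4 pose relative to the first one, and merging just means mapping every volume’s vertices into that common frame. The function names and data layout below are illustrative assumptions, not the ReconstructMe API.

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    t = np.eye(4)
    t[:3, :3] = rotation
    t[:3, 3] = translation
    return t

def merge_volumes(volumes):
    """Map each volume's vertices into the first volume's coordinate frame.

    `volumes` is a list of (vertices, transform) pairs, where `transform`
    is the 4x4 pose of that volume relative to the first one, as tracked
    when the sensor left the previous volume.
    """
    merged = []
    for vertices, transform in volumes:
        # Append a homogeneous coordinate, apply the pose, drop it again.
        homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
        merged.append((homogeneous @ transform.T)[:, :3])
    return np.vstack(merged)
```

Because the poses are composed from tracking alone, small errors accumulate at the seams, which is exactly why an ICP refinement step (mentioned below) could help.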
Note that we haven’t applied any post-processing in Meshlab. We might add an Iterative Closest Point step to increase the accuracy at volume seams, but for now we are pretty happy with the results.
We haven’t decided when to add this feature to the public version, because more testing needs to be done first.
Here are a few words on a feature we’ve recently added to our main development trunk: tracking failure detection and recovery. With this feature, the system can detect various tracking failures and recover from them. Tracking failures occur, for example, when the user points the sensor in a direction not covered by the volume currently being reconstructed, or when the sensor is accelerated too fast. Once a tracking error is detected, the system switches to a safety position and requires the user to roughly align viewpoints. It automatically continues from the safety position once the viewpoints are aligned.
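A minimal sketch of such a detect-and-recover loop might look as follows. The inlier-ratio criterion, the state dictionary, and all names here are assumptions for illustration, not ReconstructMe internals.

```python
import numpy as np

def poses_close(a, b, tol=0.1):
    """Rough viewpoint agreement: translation parts within `tol` metres."""
    return np.linalg.norm(np.asarray(a)[:3, 3] - np.asarray(b)[:3, 3]) < tol

def track_frame(inlier_ratio, pose, state, min_inliers=0.6):
    """One step of a simplified tracking loop with failure detection.

    `inlier_ratio` is the fraction of depth pixels the frame-to-model
    alignment could explain; `pose` is the estimated camera pose for this
    frame. `state` carries the last known-good pose and whether tracking
    is currently lost. Returns True while tracking is lost.
    """
    if not state["lost"]:
        if inlier_ratio < min_inliers:
            # Alignment explains too little of the frame: declare tracking
            # lost; the last good pose becomes the safety position.
            state["lost"] = True
        else:
            state["safe_pose"] = pose
    else:
        # While lost, only resume once the user has roughly re-aligned
        # the sensor with the safety position.
        if inlier_ratio >= min_inliers and poses_close(pose, state["safe_pose"]):
            state["lost"] = False
    return state["lost"]
```

The key design point is that recovery is gated on both a good alignment score and a viewpoint close to the safety position, so a coincidentally good fit from the wrong side of the scene does not resume reconstruction.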
Here’s a short video demonstrating the feature at work.
We are considering integrating this feature into the final beta phase, since it increases both the stability and the usability of the system. Be warned, however: there are still cases in which the system fails to track and also fails to detect that tracking was lost, causing the reconstruction to become messy.
Here’s a list of 3D replications printed by our keen beta testers. The sources are the 360° depth streams of team members we posted here. In case you are about to print one of our team members, let me know so I can update this post.
Derek, one of our enthusiastic users, has taken the time to replicate Martin’s stream on a 3D printer. There is a write-up about the making-of on Derek’s blog. Check it out, it’s worth reading!
Here is his video capturing the printing sequence.
Thanks a lot Derek!
Tony just dropped us a note that he successfully printed Christoph and uploaded the results to Thingiverse. Here is an image of the result.
Thanks a lot Tony!
Bruce (3D Printing Systems) replicated all three of them. His setup reminds me a bit of Mount Rushmore. Here’s the image.
Our (PROFACTOR) interest in ReconstructMe goes beyond reconstructing 3D models from raw sensor data. The video below shows how to use ReconstructMe for stable foreground/background segmentation in 3D, a technique often required for pre-processing in 3D vision.
With ReconstructMe you can generate background knowledge by moving the sensor around and thus building a 3D model of the environment. Once you switch from build mode to track-only mode, ReconstructMe can robustly detect foreground by comparing the environment with the current frame.
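In its simplest form, such a comparison works per pixel on depth maps: anything the sensor sees clearly in front of the reconstructed environment is foreground. The sketch below assumes plain depth maps in metres with zeros marking missing measurements; the function name and threshold are illustrative, not part of ReconstructMe.

```python
import numpy as np

def segment_foreground(background_depth, frame_depth, threshold=0.05):
    """Mark pixels whose depth is noticeably in front of the known background.

    Both inputs are depth maps in metres; zeros mean 'no measurement'.
    Returns a boolean mask that is True for foreground pixels.
    """
    valid = (frame_depth > 0) & (background_depth > 0)
    # A pixel is foreground when the current frame measures a surface
    # clearly closer than the reconstructed environment at that pixel.
    return valid & (background_depth - frame_depth > threshold)
```

Using the reconstructed model instead of a single reference frame is what makes this stable: the background depth is denoised by the volumetric fusion, so a small fixed threshold suffices.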
This technique can be used for various applications:
Monitoring robotic workcells for human safety aspects.
Intelligent reconstruction of process-relevant objects only. We will definitely do a video on this one.
We have just recorded three of our colleagues and created STL models from them. This time we made full models by rotating in front of the camera: one person sat on a chair and rotated, while another moved the Kinect up and down, so we could capture the front, the back, and also the top. If you own a 3D printer or 3D printing software, we would very much like to know whether these models are good enough for 3D printing! Please post any comments here. For everyone who made it into the BETA program, there is also the data stream, so you can create the STL model yourself. We have post-processed the STLs with Meshlab by re-normalising the normals and converting them to binary STL.
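For the curious, re-normalising amounts to recomputing each facet normal from its triangle vertices via a cross product. The sketch below shows roughly what such a step does for STL facets; the function name and the `(n, 3, 3)` array layout are assumptions for illustration, not Meshlab’s actual code.

```python
import numpy as np

def recompute_normals(triangles):
    """Recompute unit facet normals from triangle vertices.

    `triangles` has shape (n, 3, 3): n facets, 3 vertices each, xyz order.
    The normal of a facet is the normalised cross product of two edges.
    """
    edge1 = triangles[:, 1] - triangles[:, 0]
    edge2 = triangles[:, 2] - triangles[:, 0]
    normals = np.cross(edge1, edge2)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    # Guard against degenerate (zero-area) facets to avoid division by zero.
    return normals / np.where(lengths > 0, lengths, 1.0)
```

Note that the sign of each normal depends on the vertex winding order, which is why consistent winding matters before printing.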
We recorded a new video showing the really fast reconstruction of different people in just a few seconds. You will also see the stable camera tracking, even though a person walks through the recorded scene. Check out the video!
You can also download the generated model of the last person in the video.