I am a software engineer and I love developing cool algorithms. At PROFACTOR I work on multiple research projects, mostly related to vision and robotics. One major example is the IRobFeeder, an automatic bin-picking system, where I work mostly on object recognition algorithms. I also have a private blog.
Take the opportunity to discuss with numerous international experts from 10 countries and 3 continents, from both science and industry, what 3D printing technologies offer today and what they can be expected to offer in the future.
What is special about Add+it 2015?
Workshops provide the opportunity to interact with participants and experts, discuss relevant 3D printing issues, and initiate possible further business cooperation.
The mod looks really cool, and I am amazed that this actually works. It definitely shows that adding a lens to the Kinect might be a good solution for scanning small parts. The difficult part will be providing a calibration method that can successfully undistort the data.
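Lens distortion of this kind is commonly described by a radial (Brown-Conrady) model. As a minimal sketch, and assuming the camera intrinsics (fx, fy, cx, cy) and radial coefficients k1, k2 have already been estimated, e.g. from a checkerboard calibration, the correction could look like this (all names here are illustrative, not part of any Kinect SDK):

```python
import numpy as np

def undistort_points(points, k1, k2, fx, fy, cx, cy):
    """Approximately remove radial lens distortion (Brown-Conrady, k1/k2 only).

    points: (N, 2) array of distorted pixel coordinates.
    Returns an (N, 2) array of corrected pixel coordinates.
    """
    # Normalize pixel coordinates to the camera plane
    x = (points[:, 0] - cx) / fx
    y = (points[:, 1] - cy) / fy
    r2 = x**2 + y**2
    # One-step inverse approximation: divide by the forward distortion factor
    # evaluated at the distorted radius (good enough for mild distortion)
    scale = 1.0 + k1 * r2 + k2 * r2**2
    xu = x / scale
    yu = y / scale
    # Map back to pixel coordinates
    return np.stack([xu * fx + cx, yu * fy + cy], axis=1)
```

A proper calibration pipeline would estimate the coefficients first and iterate the inversion, but the sketch shows the core of what "undistorting the data" would involve.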
Here are the settings Tony Buser used for this scan:
We created a full body scan of one of our coworkers using a bigger volume, and he used this as the basis for a character animation.
The most difficult part of the scanning process was to stand still and not move the arms. We solved this problem by letting the model hold a broomstick in each hand :) This data was later removed from the CAD scan.
To animate the skin of the character, a biped system from 3ds Max was used.
Finally, with the Motion Mixer in 3ds Max, several BIP files were loaded to drive the biped motion.
The result looks quite stunning! Here is a video with the result:
On Monday everybody, and that probably includes you, will be able to scan and reconstruct the world. ReconstructMe will be free for non-commercial use. Contact us for commercial interests.
Big thanks go out to our BETA testers who made releasing on time possible! They provided valuable feedback throughout the entire BETA program, and without them we wouldn't have reached the robustness and usability we have now.
We have just recorded three of our colleagues and created STL models from them. This time we made full models of ourselves by rotating in front of the camera: one person sat on a chair and rotated in place, while another moved the Kinect up and down so we could capture the front, back, and also the top. If you own a 3D printer or 3D printing software, we would very much like to know if these models are good enough for 3D printing! Please post any comments here. For everyone who made it into the BETA program, the data stream is also available so you can create the STL model yourself. We post-processed the STLs with Meshlab by re-normalising the normals, and converted them to binary STL.
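For reference, a binary STL file is just an 80-byte header, a 32-bit triangle count, and 50 bytes per triangle (normal, three vertices, and an attribute word). A minimal sketch of the kind of post-processing described above, recomputing each facet normal from the vertex winding and writing the result as binary STL, could look like this in Python (this is an illustration, not the actual Meshlab pipeline we used):

```python
import struct

def facet_normal(v0, v1, v2):
    """Unit normal of a triangle via the right-hand rule (cross product of edges)."""
    ux, uy, uz = (v1[i] - v0[i] for i in range(3))
    wx, wy, wz = (v2[i] - v0[i] for i in range(3))
    n = (uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx)
    length = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
    # Degenerate triangles get a zero normal instead of dividing by zero
    return tuple(c / length for c in n) if length > 0 else (0.0, 0.0, 0.0)

def write_binary_stl(path, triangles):
    """triangles: list of (v0, v1, v2) vertex triples; normals are recomputed."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                         # 80-byte header (unused)
        f.write(struct.pack("<I", len(triangles)))  # little-endian triangle count
        for v0, v1, v2 in triangles:
            n = facet_normal(v0, v1, v2)
            # 12 floats (normal + 3 vertices) plus a 16-bit attribute word
            f.write(struct.pack("<12fH", *n, *v0, *v1, *v2, 0))
```

Binary STL is much smaller than the ASCII variant for dense scan meshes, which is why converting the exported models makes sense before sharing them for printing.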