Recently idea_beans shared a Thing to print a professional Xbox Kinect lens holder. The holder can be used to increase the resolution of a Kinect/PrimeSense sensor by attaching common +2.5 reading glass lenses to it.
Unfortunately, the Kinect does not have an auto-calibration feature, which means you won't be able to scan 360° with this upgrade. However, the PrimeSense Carmine 1.09 is reported (and [un]officially confirmed) to auto-compensate for the lens distortion.
As you might know, Fredini's Coney Island Scan-A-Rama Kickstarter project is a success. With 188+ backers, the project raised more than 15,000 USD. Now Fred Kahl has released initial hardware designs for the Scan-O-Tron 3000. Below is a video showing the Scan-O-Tron and ReconstructMe in action.
Coney Island Scan-A-Rama is an art project to scan and produce 3D printed portraits of the masses of people who visit America’s playground: Coney Island. Visitors to the portrait studio will come in to get a 3D portrait taken and then a full body 3D figurine of them will be included in a 2014 installation recreating a fully populated model of Coney Island, New York’s Luna Park as it stood 100 years ago!
Fred is using ReconstructMe technology for capturing 3D models of visitors and is probably one of the most experienced users out there. At the time of writing the project still needs some funding, so please back the project if you can.
Thanks to the success of many of our projects like ReconstructMe and CANDELOR, we are now looking to expand our team! If you are an ambitious software engineer who knows how to develop and structure complex software and is interested in developing cutting-edge technology, apply here (in German):
We’ve started our midsummer deal: you can order a PRO version of ReconstructMe for only EUR 135 (limited-time offer, including taxes). The PRO version removes all limitations of the LITE version and includes updates for one year. Click the image below to shop.
MagWeb just dropped us the following high-detail self-scan using ReconstructMe 1.2 and a PrimeSense Carmine 1.09. He writes:
Just to give some idea of what you can get with the new version and its Carmine 1.09 support: the attached results were done using a Carmine 1.09 with glasses. It seems the Carmine firmware (or OpenNI 2) performs some self-calibration, so adding glasses works fine without doing any external calibration.
We’ve released new versions of ReconstructMe (called ReconstructMeQt in previous versions) and the ReconstructMe SDK. We’re happy to introduce OpenNI 2 with this release. We decided to remove the sensor driver option from the installers of ReconstructMe and the ReconstructMe SDK, since only either the ASUS sensor driver or the Microsoft SDK v1.6 is required.
The new SDK provides a faster and more memory efficient polygonization routine. ReconstructMe generally got faster and easier to handle. Feel free to check out the new version and test it.
We are proud to announce the newest release of ReconstructMe SDK and ReconstructMeQt. We’ve put a lot of efforts into both releases in the hope of making 3D reconstruction easier, more robust and versatile. Here is a video of ReconstructMeQt in action!
One of the major changes introduced in ReconstructMe SDK 1.4 is the addition of global tracking based on CANDELOR as reported in an earlier blog post. We find reconstruction much more robust with these algorithms in place. It also allows us to advance into new workflows such as point-and-shoot based reconstruction.
Secondly, we think you will be happy to hear that we have removed the forced tracking-loss limitation in the unlicensed version. The limitation is replaced by artificial spheres generated in the output mesh.
Our graphical user frontend ReconstructMeQt 1.1 has received a couple of major usability improvements. You are now able to preview the surface directly, decimate the mesh before saving, and choose among different rendering options.
Until now, ReconstructMe assumed that the camera movement between subsequent frames is rather small. Violating this constraint threw ReconstructMe off track, and a manual re-location required the user to position the camera very close to the last recovery position. While this mode works most of the time, it can be tedious to find the correct position manually.
With CANDELOR we can weaken this requirement, as its algorithms allow us to determine the correct camera movement even for large displacements. This is possible because CANDELOR searches for similar features in the 3D data of the recovery position and the current sensor frame. Given a set of corresponding features, a transform can be estimated that reflects the sought camera movement.
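The last step, estimating a rigid transform from a set of corresponding 3D features, can be sketched with the standard Kabsch/SVD method. This is a generic illustration, not CANDELOR's actual implementation, and all names below are our own:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ≈ R @ src + t.

    src, dst: (n, 3) arrays of corresponding 3D feature points.
    Uses the Kabsch algorithm: center both point sets, build the
    cross-covariance matrix, and recover the rotation via SVD.
    """
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centered correspondences.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In practice the correspondences found by feature matching contain outliers, so such an estimate is typically wrapped in a robust scheme like RANSAC.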
The video below shows tracking with CANDELOR enabled and directly compares tracking performance with and without it.
As one can see in the video, data recording is paused multiple times and the resume position is far off from the pause position. Despite the displacement of the camera, tracking is successfully recovered by CANDELOR.
Automated extrinsic calibration of multiple sensors
A nice benefit of using CANDELOR is that it has now become very easy to use multiple sensors working on the same volume. Traditional multi-sensor applications require a good estimate of the so-called extrinsic calibration, that is, the transformation between two cameras. This extrinsic calibration is often assumed to be fixed and not allowed to change.
ReconstructMe works differently. The initial extrinsic calibration of multiple cameras is automatically calculated by CANDELOR. Once both sensors have registered, you can freely move the cameras to different locations. They remain calibrated via the data you record. The unique advantage is that multiple sensors can scan more quickly (divide and conquer).
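The idea behind the automatic extrinsic calibration can be sketched as follows: once each camera's pose relative to the shared volume is known (e.g. recovered by global tracking), the extrinsic calibration is simply the relative transform between the two poses. A minimal sketch, with function names of our own choosing:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def extrinsic(T_volume_cam_a, T_volume_cam_b):
    """Transform mapping points from camera B's frame into camera A's frame.

    Both inputs are camera-to-volume poses; chaining B-to-volume with
    volume-to-A yields the B-to-A extrinsic calibration.
    """
    return np.linalg.inv(T_volume_cam_a) @ T_volume_cam_b
```

Because each camera's pose in the volume is re-estimated from the recorded data, moving a camera simply produces a new pose, which is why the cameras stay calibrated without a fixed rig.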
In the following video you can see Christoph and me scanning a person using multiple cameras that work on the same volume.
Point-and-shoot 3D object reconstruction
We’ve added a new reconstruction mode based on global tracking: point-and-shoot. It means that reconstruction of objects is performed only at specific, manually picked locations. Users with low-powered GPUs/CPUs in particular will benefit from this mode, as it allows you to capture an object from very few positions. The video below shows how it works.
Despite the fact that only a few positions are used for data generation, the model looks quite smooth and closed.
We are confident that the new features will ship with the upcoming SDK and Qt within the next two weeks.
The ReconstructMe team wishes everyone a Merry Christmas and a Happy New Year! Together, we’ve achieved a lot this year, pushing forward the state of the art in 3D reconstruction. Although not everything we intended to do made it in time, we hope we can catch up on that in the new year. Here’s an outlook for 2013 and a summary of the past couple of weeks:
Area-based Stereo Vision
We’ve teamed up with an Austria-based company that provides real-time, dense, area-based stereo vision systems for arbitrary dimensions. These systems can be operated in active and passive mode and scaled according to the scanning requirements. The image below shows one of the first scans of a peanut.
This is the first evidence that ReconstructMe can scale to arbitrary dimensions. The sensor input dimension is micrometers (0.001 mm).
We received word that we have been accepted into the LEAP developer program. From a first glimpse of the SDK it seems that the API does not yet provide the necessary data to perform real-time reconstruction, but we hope the missing features will be added within the first quarter of 2013, so we can start working on an integration.
ReconstructMe is a versatile tool. Besides scanning people for fun, ReconstructMe has been put into action in industrial applications. The video below shows how to digitize existing machinery and pipes to generate complete 3D models.
If you are using ReconstructMeQt on a Windows 7 PC, you will be happy to hear that our application can be controlled by speech commands, essentially freeing your fingers from the keyboard while scanning. Here is how it works.
Most sensors carry a single microphone or an entire microphone array. Using Windows speech recognition, one can translate spoken commands into key-press events. We’ve created a speech recognition macro file that maps the following voice commands to keystrokes:
ReconstructMe Start – CTRL+P
ReconstructMe Stop – CTRL+P
ReconstructMe Reset – CTRL+R
ReconstructMe Save – CTRL+S
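The mapping above amounts to a simple lookup table from recognized phrases to keystrokes. The real setup uses a Windows Speech Recognition macro file, not Python; the sketch below only illustrates the mapping itself, with names of our own choosing:

```python
# Illustrative lookup table mirroring the voice-command-to-keystroke
# mapping listed above. Note that Start and Stop intentionally share
# CTRL+P, which toggles reconstruction in ReconstructMeQt.
VOICE_COMMANDS = {
    "ReconstructMe Start": "CTRL+P",
    "ReconstructMe Stop": "CTRL+P",
    "ReconstructMe Reset": "CTRL+R",
    "ReconstructMe Save": "CTRL+S",
}

def keystroke_for(phrase):
    """Return the keystroke for a recognized phrase, or None if unknown."""
    return VOICE_COMMANDS.get(phrase)
```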
The following PDF file contains the required instructions for setting this up.
We’ve just released ReconstructMe SDK 1.3 bringing a huge list of improvements. Especially performance for older graphics cards and high resolution volumes has increased, support for tilt motors was added and the scanning time in non-commercial mode is now significantly longer. This is just the tip of the iceberg. For an in-depth log of changes visit the release page.
We’ve just released ReconstructMe SDK 1.1 which brings you improved image support, calibration support as well as an increased scanning time for non-commercial projects. This release also greatly improves the documentation of the API with additional examples and sections on critical API elements.
Today ASUS sent along an RD version of a patch that should resolve the Xtion USB 3.0 issue. I’ve been given permission to distribute this file on a no-warranty-of-any-kind basis. To apply the patch, download it,
extract it to a folder of your choice, attach the sensor to a USB 2.0 port, and execute UsbUpdate!Update-RD108x!.bat. Once that step has completed successfully, your sensor should work on USB 3.0.
According to ASUS there was an issue with the ASMedia USB 3.0 controllers (ASM1041/ASM1042), which can be fixed by ASMedia’s new driver (18.104.22.168) and firmware (12220E). I assume the content of the .zip file updates both the driver and the firmware.
I’m very grateful to ASUS that ReconstructMe users are among the first to receive this patch! Please report back your results, so I can give feedback to ASUS.