Author Archives: Christoph Heindl

About Christoph Heindl

I'm a professional software engineer working for PROFACTOR GmbH in Austria/Europe, the initiator of ReconstructMe and one of its main contributors. Reach me on LinkedIn.

A Robust Low-Cost 3D Scanning Device

Today, it’s our pleasure to announce a completely new scanning strategy that improves hand-held scanning in many ways. Although we are still at the prototype stage, we wanted to share the latest achievements with our readers.

Limitations of hand-held scanning

One of the major issues with hand-held scanning is that the set of tolerated scan motions does not match your natural sequence of movements. This means that you are often constrained to move more slowly than intended; in addition, you have to make sure that the scanner points at areas of interest and keeps a certain distance to the object being scanned. Violating any of these constraints leads to ‘tracking lost’ and corrupted-data scenarios. We’ve seen inexperienced users frustrated by these implicit scanning assumptions more than once. Moreover, this frustration quickly turned into refusal of 3D scanning technology altogether.

Improving usability

So, we thought about ways to improve the usability of the system and came up with the following. In the video linked below you can see a new low-cost 3D scanning device that does not lose track no matter how jerky the movements are.



Features at a glance

Robustness
The new system is robust to any kind of jerky movements. Move naturally and never lose track again. In case you put the scanner aside for a pause, you can immediately pick up scanning from any location within the scanning area.

High accuracy
The system offers a constant error behaviour across the scanning area. Accumulation of errors due to drift is suppressed. The tracking accuracy is mostly independent of surface material and geometric structure of the scene.

Low-cost components
We’ve put considerable effort into cutting costs by using commodity hardware components.

Scale
The supported scanning area is flexible – from desktop up to areas that easily fit an entire car.

We plan to release more material soon and hope we’ve piqued your interest. Stay tuned!

ReconstructMe SDK support for Intel RealSense

We’ve just released a new version of ReconstructMe SDK that supports all currently available Intel RealSense models (F200, R200, SR300). reme_sensor_create now accepts a librealsense driver argument that will try to open the default Intel RealSense camera. More options can be set via sensor configuration files. Multiple-camera support is also available. See reme_sensor_bind_camera_options for a list of available options.

Download ReconstructMe SDK 2.6.43 x64 for Windows 7/8/10.

Long Night of Research Models Uploaded

Good morning everyone! We had a great Long Night of Research in April this year with more than 300 people visiting us at PROFACTOR. We scanned more than 80 people with ReconstructMe, using a turntable and a single Intel SR300 camera. Here’s a good example of what those scans look like.

LNF2016

All models have been generated automatically, without any manual interaction whatsoever! Please note that the models are uncompressed and take a while to load in your browser. Head over to the entire scan collection.

License

Unless otherwise stated, all 3D model files are licensed under CC BY-NC 4.0. This means you can share or adapt them as long as you give appropriate credit and don’t use the material for commercial purposes.

Advances in Reconstruction and Texturing

Our team has been working hard over the past couple of months to improve the overall reconstruction quality of ReconstructMe. We’ve put a lot of attention towards generating photo-realistic 3D scans using low-cost consumer sensors.

What we’ve come up with is a unique texturing pipeline that runs fully automatically and is able to compensate for most of the artifacts caused by illumination, motion and other sources of error. The interactive 3D viewer below shows a 3D bust generated using this work-in-progress technology.

The setup consists of a single Intel sensor and a standard desktop PC running ReconstructMe. The bust was generated automatically, with no manual interaction whatsoever.

Photo realistic texturing

Please be patient while loading as the geometry and textures are uncompressed.

We’d be happy to receive your feedback.

Kicking off Camera Review Series


As you know, ReconstructMe already supports a variety of commodity 3D cameras, and we are working hard on integrating new and exotic ones as soon as we take notice of them. We felt it was about time to put the details into perspective. Therefore we are kicking off a camera review series covering sensor specifications, installation instructions and more.

Starting with the Intel RealSense R200 review, we plan to have an in-depth review of each sensor supported in ReconstructMe. The list of supported sensors and reviews can be found on our supported sensors page.

Free ReconstructMe 2.4.1016 released

On behalf of the ReconstructMe team, I’m proud to announce ReconstructMe v2.4.1016. This update improves support for the following sensors:

  • Intel RealSense F200
  • Intel RealSense R200

You can grab the latest version from our download page. We are releasing this version free of charge for non-commercial projects as announced recently.

Usage

To use Intel RealSense cameras on your computer, you will need to install the Intel RealSense camera drivers and use the correct ReconstructMe sensor configuration files. For your convenience, you can download both below.

Once you have installed the necessary components, open ReconstructMe and set the path to the configuration file as shown in the screenshot below.

[Screenshot: setting the sensor configuration file path in ReconstructMe]

Troubleshooting

Please note that Intel recommends connecting the sensors directly to a dedicated USB 3 port. Avoid using hubs or extension cables. If your sensor does not respond for a longer period of time, restarting the Intel depth camera services might help. You can easily find these services in the local services management console, as shown below.

[Screenshot: restarting the Intel RealSense depth camera service]

If You Love Something, Set It Free



From now on, ReconstructMe – our user interface for digitizing the world in 3D – is available to everyone for free!

We offer ReconstructMe free of cost and without limitations for private and non-commercial projects. This means you can download ReconstructMe and use it for everything from scanning for 3D printing to architecture, documentation and animation. For commercial purposes, we continue to offer royalty-based licenses of ReconstructMe and ReconstructMe SDK.

Head over to the download area and grab the latest version in order to set it free. If you already have ReconstructMe licensed but your license has expired, simply re-open ReconstructMe and it will run in non-commercial mode instead of unlicensed mode.

ReconstructMe 2.4 brings color tracking

As announced in our previous post, we added a new color tracking feature to the SDK and promised to release a new UI frontend version supporting it. Today it is my pleasure, on behalf of the ReconstructMe team, to announce this new frontend release.

In the video below you can see ReconstructMe UI in action. Both scenes are tracked mainly through color information, as the geometric information alone (a planar shape in the first scene and a cylindrical shape in the second) does not suffice to estimate the camera position accurately.



Color tracking is currently enabled for all sensors that provide an RGB color stream. Algorithm settings are chosen automatically, so you don’t have to configure anything. In case your sensor does not support RGB, the algorithm gracefully falls back to geometric tracking only. Note that scanning in color is not a requirement for the color tracking algorithm to work properly.

Here are some tips for best results:

  • Ensure that the scene you observe is texturally and/or geometrically rich. Although we’ve tuned the algorithm to cope with lack of information in both streams, we need at least some information to be present in the scene.
  • Try to get around 25-30 frames per second. Color tracking requires small increments in the transformation of the camera, otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly increased runtime footprint.
  • Try to avoid fast camera motions that potentially blur color images.
  • Try to avoid reflective materials. Although a reflection appears as texture, it visually changes when moving the camera.
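To put the frame-rate tip above into numbers, here is a quick back-of-the-envelope sketch. The camera speeds are illustrative assumptions, not measured values from ReconstructMe:

```python
# Rough estimate of how far the camera moves between two consecutive
# frames, to illustrate why a high frame rate keeps the per-frame
# increments small enough for color tracking to converge.
# The speed below is an illustrative assumption, not a measured value.

def inter_frame_motion_mm(camera_speed_m_per_s: float, fps: float) -> float:
    """Camera translation between two consecutive frames, in millimeters."""
    return camera_speed_m_per_s * 1000.0 / fps

# A calm hand-held sweep of ~0.3 m/s:
print(inter_frame_motion_mm(0.3, 30))  # 10 mm of motion per frame at 30 fps
print(inter_frame_motion_mm(0.3, 10))  # 30 mm per frame at only 10 fps
```

Tripling the per-frame increment, as in the second case, is exactly the situation where an incremental tracker can fail to converge.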

ReconstructMe SDK – Color Tracking Announcement

After weeks of hard work, we are proud to announce a new upcoming feature called color tracking. Color tracking incorporates color information into camera motion estimation. This allows ReconstructMe to keep tracking over planar regions, cylindrical shapes and other primitive shapes. The following video shows some challenging reconstructions that succeed with the help of color tracking.



The new tracking algorithm seamlessly blends geometric and color information together, leading to an overall improved tracking performance in almost all situations. During development we’ve paid attention to robustness and runtime. As far as robustness is concerned, we’ve made sure that fast variations in illumination or camera auto exposure do not affect the tracking performance.

From a developer and user point of view, you should be aware of the following points to maximize tracking stability:

  • Ensure that the scene you observe is texturally and/or geometrically rich. Although we’ve tuned the algorithm to cope with lack of information in both streams, we need at least some information to be present in the scene.
  • Try to get around 25-30 frames per second. Color tracking requires small increments in the transformation of the camera, otherwise it will not converge. Please note that color tracking does more work than geometric tracking alone, so it has a slightly increased runtime footprint.
  • Try to avoid fast camera motions that potentially blur color images.
  • Discard the first few camera frames, as we have observed cameras varying their exposure vastly in these frames.
  • Make sure that the color camera is aligned to the depth camera in space and time.

In case tracking fails, we’ve also added a recovery strategy that takes color information into account. This global color tracking allows you to recover by bringing the sensor to a position close to the recovery position, as shown in the following video.



Our roadmap foresees that we first release a new end-user UI version supporting color tracking in the coming days. This will allow many people to test the current state of the algorithm and provide us with valuable feedback.

ReconstructMe 2.3.958 with Intel HD support

We continue to update ReconstructMe and are happy to announce our newest release, supporting Intel HD 4000/4600 graphics and Intel Core i5/i7 CPUs.

In case you intend to run ReconstructMe on an Intel HD graphics card, please update the graphics driver. If you prefer running ReconstructMe on your Intel CPU, install the latest OpenCL runtime.

Have fun reconstructing and let us know what you think!

ReconstructMe 2.3.954 released

We have just released a new version of ReconstructMe. This is a bug-fix release that resolves immediate tracking-lost issues when starting a scan, as reported by our users. The issue seems to occur on NVIDIA cards, particularly the following models: GTX750, GTX970, GTX960, GTX840M, GTX850M. In case you are affected, please try the latest version.

ReconstructMe 2.3.952 released

We are happy to announce the release of ReconstructMe 2.3.952 today. The latest version can be downloaded here.

I’d like to briefly introduce the new SDK / UI features here and provide in-depth information in upcoming blog posts. The SDK / UI now supports the Intel RealSense F200 camera, and we’ve reworked the sensor positioning API to allow more fine-grained control over the scan start position of the sensor with respect to the volume.

The UI now supports a rich set of sensor position options which include positioning the sensor based on a special marker seen by the sensor. This feature allows you to easily position the volume in world-space. The following video shows a turn-table reconstruction of a toy horse using marker positioning and the Intel RealSense F200 camera.



If you would like to try out the new Intel RealSense F200 camera, please download this sensor configuration file. You will need to specify the path to this file in the UI at Device / Sensor.

In case you want to give marker positioning a try, please download this marker image, print it and measure the printed size in millimeters. Make sure to leave a big white border around the marker. You will need to set the correct marker size in the UI at Volume / Volume Position. We usually print the marker at a size of 90 millimeters. When using marker positioning, make sure the sensor captures the entire marker.

When you notice that the sensor position starts to vary as you move the marker, you know that ReconstructMe has detected it. Once ReconstructMe has found the marker, you can use the Offset slider to adjust where the volume starts.

Enjoy and let us know what you think.

ReconstructMe selfies displayed on 3D screen

by Stefan Speiser

Hello everyone, my name is Stefan Speiser. I am a bachelor graduate of the University of Applied Sciences (UAS) Technikum Wien in Vienna, Austria. Today I will present my bachelor thesis, in which I worked with ReconstructMe.

The goal of the thesis was to create a booth for trade fairs and open days at the UAS Technikum Wien. At the booth, a 3D-Scan of any willing visitor is created, modified and then shown on a 3D-Monitor, so that visitors can view themselves in 3D. To reduce the time needed for the whole scanning, modifying and output process, a script was created that automates these steps.
The booth was built as a marketing tool for the UAS, intended to attract even more students than before by demonstrating how interesting technology can be.

This picture shows an early development stage of the booth with a functioning version of the script and all programs working as they should. On the left you can see the 3D-Monitor, next to it the control monitor where ReconstructMe runs and on the right is the Kinect System. Just barely visible on the bottom is a rotating chair.

Early development stage of the booth


To explain how I achieved this result, I would like to first describe the hardware and software used, and afterwards explain the automation script in detail.

Microsoft Kinect

To get the visual information needed for the 3D-Scan, a Microsoft Kinect System was used. The person to be scanned sits in front of the Kinect System on a rotating chair. The built-in infrared projector emits a pattern of dots which covers the person in front of the Kinect Sensor and the rest of the room. These dots are recorded by the infrared camera, and the Kinect System calculates a depth image from this information.

An RGB camera recording at a resolution of 640×480 pixels and a frame rate of 30 Hz captures the color information of the scene in front of the Kinect System.

IR-Pattern from infrared projector (Source: MSDN, 2011)

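The depth calculation behind this kind of structured-light sensor can be illustrated with the classic triangulation relation Z = f·b/d: the sensor measures how far each projected dot shifts (the disparity) between the observed IR image and a stored reference pattern. The focal length and baseline below are assumed round numbers for illustration, not the Kinect’s actual calibration data:

```python
# Illustrative structured-light triangulation: the sensor measures how far
# each projected dot shifts (disparity, in pixels) between the observed IR
# image and a stored reference pattern, and converts that shift to depth.
# Classic pinhole relation: Z = f * b / d
#   f = focal length in pixels, b = projector-to-camera baseline in meters,
#   d = disparity in pixels.
# The constants below are illustrative assumptions, not Kinect calibration.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters for a dot shifted by `disparity_px` pixels."""
    return f_px * baseline_m / disparity_px

f_px = 580.0        # assumed IR camera focal length in pixels
baseline_m = 0.075  # assumed 7.5 cm projector-to-camera baseline

# Closer surfaces shift the dots more, so disparity falls with distance:
print(depth_from_disparity(f_px, baseline_m, 43.5))   # 1.0 m
print(depth_from_disparity(f_px, baseline_m, 21.75))  # 2.0 m
```

Halving the disparity doubles the estimated depth, which is also why depth resolution degrades with distance on such sensors.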

ReconstructMe

Both the depth image and the color image information from the Kinect System are used by ReconstructMe. Since ReconstructMe offers native plug-and-play compatibility with the Kinect System, making scans was a breeze. The built-in 3D-Selfie function of ReconstructMe was the perfect fit for my project. It automatically detects when the person in front of the Kinect System has rotated a full 360 degrees and stops recording the scan. During the processing phase, ReconstructMe fills all holes in the mesh, shrinks the 3D-Scan and slices the upper body, so that if you would like to 3D-Print your scan, you can simply save it and it is ready to print. (More information about the 3D-Selfie function can be found here: http://reconstructme.net/2014/04/24/reconstructme-2-1-introduces-selfie-3d/)

3D-Selfie scan after ReconstructMe processing


Meshlab/MeshlabServer

Meshlab is an open source program which allows you to process and edit meshes. A mesh is a collection of polygons, for example triangles, which build up a three-dimensional structure. It is available either as a program version with a GUI or as MeshlabServer. The special thing about MeshlabServer is the possibility to create a script with all the filters you would like to apply to your mesh and then run this script from the command console. For the sake of automation, this approach is of course the favorable one.

The filters used in my script rotate and enlarge the 3D-Scan from ReconstructMe to make it more visible on the 3D-Monitor; another filter reduces the number of triangles in the mesh by half. The quality reduction is barely visible, but the file size, and therefore the time needed to save the modified mesh, is also cut in half.
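As a rough sketch of what such a MeshlabServer batch step can look like, the snippet below generates an .mlx filter script that halves the triangle count and builds the corresponding command line. The filter and parameter names follow MeshLab’s .mlx format but may differ between MeshLab versions, and the file names are hypothetical stand-ins, so treat all of them as assumptions:

```python
# Sketch of driving MeshlabServer from a script, as described above.
# It writes an .mlx filter script that halves the triangle count via
# quadric edge collapse, then builds the command line MeshlabServer
# expects. Filter/parameter names and file names are assumptions that
# may differ between MeshLab versions.
import shutil
import subprocess

MLX = """<!DOCTYPE FilterScript>
<FilterScript>
 <filter name="Quadric Edge Collapse Decimation">
  <Param type="RichFloat" name="TargetPerc" value="0.5"/>
 </filter>
</FilterScript>
"""

with open("halve.mlx", "w") as f:
    f.write(MLX)

# meshlabserver -i <input> -o <output> -s <filter script>
# "scan.ply" stands in for the mesh saved by ReconstructMe.
cmd = ["meshlabserver", "-i", "scan.ply", "-o", "scan_small.obj", "-s", "halve.mlx"]
print(" ".join(cmd))

# Only invoke MeshlabServer if it is actually installed on this machine.
if shutil.which("meshlabserver"):
    subprocess.run(cmd, check=True)
```

Because everything is driven by plain files and a command line, this step slots naturally into an automated pipeline between scanning and display.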

Meshlab


Meshlabserver


Tridelity MV5500 3D-Monitor

The modified 3D-Scan is shown to the visitor on an autostereoscopic 3D-Monitor. What is autostereoscopy, you might ask? It’s a technology which enables the viewer to see a three-dimensional picture without the need for 3D glasses or similar equipment.

This effect is achieved with a parallax barrier, a barrier mounted in front of the LCD panel which lets each eye see only its own picture – one for the left eye and one for the right eye. By chopping the 3D-Scan into small, slightly shifted vertical lines, the brain stitches these pictures together and you perceive a 3D view. The effect works best at a specified distance, and since the monitor supports MultiView, up to 5 people can view the 3D-Scan at the same time from different angles. Depending on the angle to the monitor, the viewer sees the 3D-Scan more from the front or from the side.

Parallax Barrier schematic (Source: Muchadoaboutstuff, 2013)

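The slit geometry can be worked out with similar triangles: two eyes separated by e, at viewing distance D, must see adjacent pixel columns (pitch p) through the same slit, which fixes the barrier-to-panel gap at g = p·D/(e + p). The numbers below are purely illustrative and are not Tridelity MV5500 specifications:

```python
# Illustrative two-view parallax barrier geometry (similar triangles):
# two eyes separated by e, at distance D from the panel, look through the
# same barrier slit mounted a gap g in front of the panel. For each eye
# to see the adjacent pixel column (pitch p), the gap must satisfy
#   g = p * D / (e + p)
# All numbers are illustrative assumptions, not Tridelity MV5500 specs.

def barrier_gap_mm(pixel_pitch_mm: float, eye_sep_mm: float, view_dist_mm: float) -> float:
    """Barrier-to-panel gap that maps adjacent pixel columns to the two eyes."""
    return pixel_pitch_mm * view_dist_mm / (eye_sep_mm + pixel_pitch_mm)

p = 0.25    # assumed pixel pitch in mm
e = 65.0    # average human eye separation in mm
D = 2600.0  # assumed design viewing distance in mm (2.6 m)

print(round(barrier_gap_mm(p, e, D), 2))  # roughly a 10 mm gap
```

Because the gap is fixed once the monitor is built, the 3D effect is sharpest near the design viewing distance D, which matches the observation above that the effect works best at a specified distance.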

AutoHotKey & Gulover’s Macro Creator

To automate keyboard entries and mouse clicks in the programs used, AutoHotKey was the tool of choice. It lets you automate virtually any action in the Windows OS and in every program running on it. It features IF/ELSE statements, loops, a PictureSearch function which searches for a specific detail on the screen and fires another function when it is found, and many other capabilities.

Gulover’s Macro Creator is a freeware program which offers a GUI for all functions and tasks of AutoHotKey. This makes working with and programming scripts in AutoHotKey much easier and more time-efficient.
That covers the hardware and software used; now let me explain the automation script in detail.

The automation script

The script first starts ReconstructMe and the Tridelity software, which loads the last saved 3D-Scan and outputs it on the 3D-Monitor. Then, via PictureSearch, ReconstructMe is scanned for an error message which appears when the Kinect Sensor is not recognized. If the error appears, the user is prompted with instructions on how to solve the problem.

Next, a 3D-Selfie scan is started, and when it is finished, the user and the visitor are asked whether they like the result or whether another scan should be started. If they are content, the scan is saved and overwrites the last saved scan.
The visitor is then asked whether he would like to save the scan in an extra folder. If so, the visitor enters his name, which is saved together with the current date into a dedicated folder. The visitor can then copy the scan to a USB stick, take it home and modify it, or print it directly with a 3D-Printer.

The scan now gets modified by the Meshlab script running in MeshlabServer. After the filters are applied, the modified scan is saved as an .obj file, which in the next step can be opened by the Tridelity 3D-Monitor software. This software outputs the 3D-Scan and rotates it, so that visitors can view themselves from all angles. As you can see in the pictures, the 3D-Scan created in ReconstructMe is in color, while the output on the 3D-Monitor is only in shades of grey. This is because the 3D-Monitor can only play back plain .obj files, which carry no color information in this setup. ReconstructMe, on the other hand, offers four possible output file formats (.ply, .stl, .obj, .mtl).

After a defined time, the user is asked whether he would like to stop the script, which then closes all running programs and exits, or whether another scan is wanted, in which case the script jumps back to the 3D-Scan routine and starts all over.
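The control flow described above can be sketched as follows. The actual booth used an AutoHotKey script; every helper name below is a hypothetical stand-in, passed in as a callable so the loop itself stays self-contained:

```python
# Python sketch of the booth's control flow described above. The real
# booth used an AutoHotKey script; every helper below is a hypothetical
# stand-in, injected as a callable so the loop stays self-contained.

def run_booth(ui, scanner, mesh_tool, monitor):
    ui.start_programs()                        # ReconstructMe + Tridelity viewer
    if scanner.kinect_error():                 # PictureSearch for the error dialog
        ui.prompt_fix_instructions()           # ask the user to fix the sensor

    while True:
        scan = scanner.selfie_scan()           # 3D-Selfie: auto-stops after 360 deg
        if not ui.is_result_ok(scan):
            continue                           # visitor wants a retake
        scanner.save_as_latest(scan)           # overwrite the last saved scan
        if ui.wants_personal_copy():
            scanner.save_named_copy(scan)      # name + date in a dedicated folder
        small = mesh_tool.apply_filters(scan)  # MeshlabServer: rotate, halve mesh
        monitor.show_rotating(small)           # grey .obj on the 3D-Monitor
        if ui.wants_to_stop():
            ui.close_programs()                # stop all programs, then exit
            break
```

Keeping each step behind its own helper mirrors how the AutoHotKey script chains the individual programs, and makes it easy to swap out any single stage.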

Thanks to a sponsored license for ReconstructMe, which lowers the processing time after completion of the scan by forty seconds, one complete pass of the script, from the start of the 3D-Scan to the final output on the 3D-Monitor, takes two minutes and twenty seconds. That is definitely a time visitors are willing to wait, asking questions while their scan is being prepared for their viewing pleasure.

I would like to take this opportunity to once more thank the whole ReconstructMe team, especially Mr. Rooker, and my UAS bachelor thesis supervisor, FH-Prof. Dr. Kubinger, for their support and valuable input.