Category Archives: 3D Scanning

A 3D scanner for Hunt Library (Part 2)

by William Galliher

Hello everyone! We are the 3D Body Scanner team at North Carolina State University, and we are here with another blog update to show two important things. Pictures and progress! That’s right, we have a mid-project update for you all and a bunch of pictures of the team, our work, and where our project will be once we have completed it. The sponsor for our project, and the eventual home for our booth, is the Makerspace team at the James B. Hunt Jr. Library.

The Hunt Library is a showcase for engineering and technology, boasting large, open spaces made possible by the bookBot system, which houses all of the library's books in underground storage. The Makerspace within the library hosts multiple 3D printers and is dedicated to educating the patrons of the library in 3D technology. In comes our team. We told you about our purpose, to make 3D scanning fun and educational, in our last post.

Here are three members of our four-person team.

From left to right: Dennis Penn, William Galliher, Austin Carpenter

The three of us are standing in our prototype scanning booth, which can rotate around the user. The fourth member of the team is pictured below, being scanned with the alpha prototype of our software and station.

Jonathan Gregory, standing in the station

But don’t worry, we have more than pictures of our team and the library; we also have progress. Our prototype station successfully scanned Jonathan, and the output mesh is below.

The scan of Jonathan Gregory

Pretty good for our alpha demo. We even managed to scan the chancellor of our school, Chancellor Randy Woodson, and we printed a small figure from his scan!

The 3D print of Chancellor Randy Woodson

So that concludes this mid-project blog post. You got to meet the team and even saw a sample of what we are able to do so far. We cannot wait to finish this project and be ready for our Design Day near the end of April. We will be back then with a final post on our project. Thank you for reading!

All images courtesy of William Galliher and http://lib.ncsu.edu/huntlibrary

Idea Contest – Win a 3D Printer!

Hello everyone!

We are proud to announce the first ReconstructMe idea contest, where your idea can win a 3D printer and other great prizes. To participate, all you need to have is a good idea. We’ve put together a short document describing the contest, the evaluation criteria and other things you need to know to get started.

Please note that the closing date has been extended from the 15th of June 2014 to the 19th of June 2014. In case your submission contains larger files, please upload them to a third-party service and refer to the material by linking to it.

We are looking forward to seeing your submission!

ReconstructMe Large Scale Reconstruction – Development Insights

Recently we kicked off the development of a new feature: large-scale reconstruction. Our vision is to enable users to reconstruct large areas with low-cost sensors on mobile devices. This post shows the initial developments in boundless reconstruction.

Many approaches could be applied to enable this feature. After an evaluation, we decided on a solution that integrates nicely into ReconstructMe: we translate the reconstruction volume along canonical directions and keep track of the camera position in world space. Having determined how to shift, we next needed to figure out when to shift.
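To make the shifting part concrete, here is a minimal sketch in C++. All names here (Volume, shift_volume) are hypothetical illustrations, not the ReconstructMe SDK; the idea is only that the volume origin lives in world space and shifts happen along one canonical axis at a time, snapped to whole voxels.

```cpp
// Minimal sketch of a movable reconstruction volume (hypothetical names,
// not the ReconstructMe SDK). The volume's origin is tracked in world
// space; shifts happen along one canonical axis at a time.
#include <array>
#include <cmath>

struct Volume {
    std::array<double, 3> origin{};  // world-space position of the minimum corner
    double side_length = 4.0;        // cubic edge length in meters (assumed)
    double voxel_size = 0.01;        // edge length of one voxel (assumed)
};

// Shift by a whole number of voxels so existing data can be copied rather
// than resampled. Voxels falling outside the new bounds would be streamed
// out (e.g. meshed and saved); newly exposed voxels are reset to "unseen".
inline void shift_volume(Volume& v, int axis, double distance) {
    const double snapped = std::round(distance / v.voxel_size) * v.voxel_size;
    v.origin[axis] += snapped;
}
```

Snapping to whole voxels keeps the shift a pure memory move of the voxel grid, which is much cheaper than resampling the signed distance field.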

We decided to go with a concept we call trigger boundaries, which are defined relative to the volume. When a specific trigger point crosses such a boundary, the volume is shifted. Our first approach was to use the camera position as the trigger point, placing the camera at the center of the volume; once the camera position crossed the boundary, the volume was shifted. We found that this concept did not perform ideally: the part of the volume behind the camera is allocated but most likely never captured, and is thus wasteful. After evaluating different options, we settled on specifying the trigger point as the center of the view frustum in camera space. Again, when the trigger point crosses a trigger boundary, the volume is shifted along the dimension in which the crossing occurred.
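Continuing the sketch above (again with hypothetical names, not the actual ReconstructMe API), the trigger test might look as follows, with the trigger boundaries pulled in from the volume faces by a margin:

```cpp
// Sketch of the trigger test, continuing the Volume/shift_volume sketch above.
// The trigger point is the view-frustum center in camera space, mapped into
// world space through the current camera pose.
#include <array>

// Camera-to-world transform for the current pose; assumed to be provided
// by the tracking component.
std::array<double, 3> camera_to_world(const std::array<double, 3>& pt_cam);

void check_and_shift(Volume& v, double near_z, double far_z, double margin) {
    // Frustum center: halfway between the near and far planes on the optical axis.
    const std::array<double, 3> trigger_cam{0.0, 0.0, 0.5 * (near_z + far_z)};
    const std::array<double, 3> trigger = camera_to_world(trigger_cam);

    for (int axis = 0; axis < 3; ++axis) {
        const double lo = v.origin[axis] + margin;                  // low trigger boundary
        const double hi = v.origin[axis] + v.side_length - margin;  // high trigger boundary
        // Shift along exactly the dimension in which the crossing occurred.
        if (trigger[axis] < lo) {
            shift_volume(v, axis, trigger[axis] - lo);
        } else if (trigger[axis] > hi) {
            shift_volume(v, axis, trigger[axis] - hi);
        }
    }
}
```

Using the frustum center instead of the camera position keeps the allocated volume centered on what the sensor actually sees, so far less memory is spent on the unseen region behind the camera.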

Illustration: the volume shift during large-scale reconstruction

In testing we faced the issue that ReconstructMe requires decent computation hardware, and it is rather tedious to move around with a full-blown desktop PC or gaming notebook. Luckily, an old feature called the file sensor helped us speed up testing and data acquisition: recording the stream of a depth camera does not require a lot of hardware resources and can be performed on a Windows 8 tablet (an Asus Transformer T100 in this case).
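The reason recording is so cheap is that each depth frame is just a small block of raw samples written sequentially to disk. A minimal illustration in the same vein (this is not ReconstructMe's actual file-sensor format; the resolution and 16-bit millimeter encoding are assumptions) could look like this:

```cpp
// Minimal sketch of recording and replaying raw depth frames, assuming a
// 640x480 16-bit depth stream. Illustration only; not ReconstructMe's
// actual file-sensor format.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kWidth = 640, kHeight = 480;
using DepthFrame = std::vector<uint16_t>;  // kWidth * kHeight depth values in mm

// Recording is a single sequential write per frame: cheap enough for a tablet.
void record_frame(std::FILE* out, const DepthFrame& frame) {
    std::fwrite(frame.data(), sizeof(uint16_t), frame.size(), out);
}

// Replaying reads frames back in order, so the heavy reconstruction can run
// later on a desktop GPU machine. Returns false at end of stream.
bool replay_frame(std::FILE* in, DepthFrame& frame) {
    frame.resize(kWidth * kHeight);
    return std::fread(frame.data(), sizeof(uint16_t), frame.size(), in)
           == frame.size();
}
```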

The recorded streams were used to test the approach and to extract a colorized point cloud of the entire scanned area. The tests showed a drift of the camera, which was expected. Nonetheless, ReconstructMe is able to reconstruct larger areas without any problems as long as enough geometric information is available.

Based on our initial experiences, we plan to invest in more research on robust camera tracking algorithms, loop closure detection, and mobility. Additionally, we will need to settle on a workflow both in the user interface and at the SDK level.