I’ve been amazed for some time by what can be achieved with LIDAR, a game engine, and a bit of processing power. A few years ago, Digital Urban posted a series of tutorials for getting architectural models from Sketchup or 3ds Max into the Oblivion game engine, as a way of exploring built space. I was always blown away by that. The issue I had was getting the 3d model created in the first place.
Enter PhotoFly, from Autodesk. PhotoFly is currently free, and it works magic. It transforms your computer and digital camera into a 3d scanner. You take a series of overlapping photographs, upload them to the program, and the program sends them to Autodesk for processing. The system works out three-dimensional points from the photographs and uses these to stitch your images into a wire-mesh model (with your photos as the overlay). The results are impressive – once you figure out the trick of taking sufficiently redundant photographs to provide the necessary information.
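Under the hood, this is classic photogrammetry: the first step in any such pipeline is finding the same physical points in overlapping photos. Autodesk hasn’t published PhotoFly’s internals, so take this as a minimal sketch of the general idea rather than what PhotoFly actually does – here using OpenCV’s ORB detector (my choice for illustration; the filenames are placeholders):

```python
# Toy illustration of the feature matching at the heart of photogrammetry:
# find keypoints in two overlapping photos and match them up.
# Requires opencv-python; "photo1.jpg"/"photo2.jpg" are placeholder files.
import cv2

img1 = cv2.imread("photo1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # detect up to 2000 keypoints
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matcher with Hamming distance (suits ORB's binary
# descriptors); crossCheck keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate point correspondences")
# A full pipeline would triangulate 3d positions from thousands of such
# correspondences across many photographs.
```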
I used a Kodak EasyShare Z612 camera, and tried four different scenes before I finally started to get the hang of it. My first was a ceramic coffee mug with a glossy white finish. The shininess of the mug confused the processor. I then tried a Campbell’s Soup Can (my nod to Mr. Warhol). I put it on a lazy susan, set my camera up, and rotated the lazy susan through 5 degree increments, thinking I would get good coverage. This did not work (which I would’ve known had I watched the tutorial videos, but really, who has time for that?). The reason it did not work is that the algorithms that stitch everything together rely on differences in perspective, focal depth, and so on to work out the relative placement of the camera for each shot, and hence the distance to the overlapping points that can be identified across shots – rotating the object in front of a stationary camera gives them nothing to go on. At least, I believe that’s the reason. I tried again, this time moving around the soup can myself, but I didn’t get enough overlap.
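To see why moving the camera matters, consider the simplest two-view case: depth is recovered from parallax, and parallax only exists when the camera positions differ by some baseline. A back-of-the-envelope sketch of the standard relation depth = focal length × baseline / disparity, with numbers invented purely for illustration:

```python
# Why moving the camera (not the object) matters: in the idealized
# two-view case, depth comes from parallax via
#     depth = focal_length * baseline / disparity.
# All numbers below are invented for illustration.
focal_length_px = 1200.0   # focal length expressed in pixels
baseline_m = 0.5           # how far the camera moved between shots

for disparity_px in (60.0, 30.0, 1.0):
    depth_m = focal_length_px * baseline_m / disparity_px
    print(f"disparity {disparity_px:5.1f} px -> depth {depth_m:7.1f} m")

# As the baseline shrinks to zero, so does the disparity of every point,
# and depth becomes indeterminate: a fixed camera pointed at a turntable
# gives the solver nothing to triangulate from.
```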
My next attempt was to do my office (imagine, scanning an interior space!). I had more success this time, but insufficient overlap and the sheer clutter in here defeated me again. Finally, I put a small toy car (about 5 inches long by 2 inches wide) on a chair, and proceeded to take about 20 photographs of it from every angle, varying the depth and distance. By this point I was starting to get the hang of it, and the upload processed quite well. The ‘draft’ model came back more or less complete, and I saved it and sent it to YouTube:
In the ‘draft’ mode, you can select triangles to clip, provide real-world coordinates and measurements (especially useful for creating scans of buildings and interior spaces), and generally do enough pre-processing that the model looks just about complete. One then switches to ‘mobile’, ‘standard’, or ‘high’ quality, and the model is sent back to Autodesk for more processing. At this point, the model can be exported in a variety of formats, especially CAD formats like DWG. This is where I get very excited. Sketchup Pro, for instance, can import DWG. And Sketchup can be used to create AR. I could imagine scanning an interior space, sending the resultant model to Sketchup and then into AR, tying the model to a QR code. Since my model can be sized 1:1, it should be possible to scan, say, a cave-shelter, and then step into it somewhere else (the middle of a playing field, for example). I’m very excited about the possibilities; more on these as I explore.
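That real-world measurement step, incidentally, amounts to a uniform rescale: tell the software that one edge of the model is some known length, and every vertex gets multiplied by the same factor. A hypothetical sketch of the arithmetic (the vertex coordinates and the 2-metre reference distance are invented; PhotoFly handles this internally):

```python
import numpy as np

# Hypothetical illustration of sizing a model 1:1. Suppose two vertices
# of the mesh correspond to points a known real-world distance apart.
vertices = np.array([[0.00, 0.00, 0.00],   # invented model-space coordinates
                     [0.31, 0.00, 0.00],
                     [0.31, 0.12, 0.05]])

model_dist = np.linalg.norm(vertices[1] - vertices[0])
real_dist_m = 2.0                  # e.g. a doorway width measured on site

scale = real_dist_m / model_dist   # one uniform scale factor
vertices_m = vertices * scale      # every coordinate is now in metres

check = np.linalg.norm(vertices_m[1] - vertices_m[0])
print(f"scale factor {scale:.3f}; reference edge now spans {check:.2f} m")
```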
My final model of the toy car is below. All in all, it took approximately 30 minutes to get to this stage. Obviously, my model has some flaws in it, but for 30 minutes’ work… not bad.
Problems with the Software
From time to time, the software just mysteriously died during the upload process. Save early, and save often. Also, exporting to YouTube was often fraught: one enters the mysterious world of picking the right quality and codec to make it work. Once I selected ‘mobile’ for everything, things seemed to work better.
9/10 stars from the electric archaeologist. This is the most exciting piece of software I’ve played with in ages.