Putting Pompeii on Your Coffee Table

(cross-posted from my course blog, #hist5702x digital/public history. If you’re interested in public history and augmented reality, check out my students’ posts!)

Creating three dimensional models from photographs has its ups and downs. But what if we could do it from video? I decided to find out.

First, I found this tourist’s film of a house at Pompeii (house of the tragic poet, he says):

I saved a copy of the film locally; there are a variety of ways of doing this and two seconds with google will show you how. I then watched it carefully, and took note of a sequence of clearly lit pans at various points, marking down when they started and stopped, in seconds.


Then, I searched for a way to extract still images from that clip. This blog post describes several methods; I went with option 3, a command-line approach using VLC, which created around 600 images. I then batch converted them from png to jpg (Google around again; the solution I found from download.com was filled with extraneous crapware that cost me 30 minutes to delete).
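If you'd rather script the extraction, ffmpeg can pull jpg stills directly (skipping the png-to-jpg conversion step). Here's a rough sketch, assuming ffmpeg is installed; the clip name, pan timings, and the 2-frames-per-second rate are placeholders, not the values I actually used:

```python
def ffmpeg_commands(video, pans, fps=2):
    """Build one ffmpeg call per clearly lit pan.

    pans: (start, stop) pairs in seconds, noted down while watching the film.
    Each call grabs `fps` jpg stills per second of that pan.
    """
    cmds = []
    for i, (start, stop) in enumerate(pans, 1):
        cmds.append(
            f"ffmpeg -ss {start} -t {stop - start} -i {video} "
            f"-vf fps={fps} pan{i:02d}_%04d.jpg"
        )
    return cmds

# Hypothetical timings for two pans:
for cmd in ffmpeg_commands("pompeii.mp4", [(12, 25), (40, 58)]):
    print(cmd)
```

Running the printed commands gives numbered stills per pan, ready for selection and stitching.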

I then selected around 40 images that seemed to cover things well. It would’ve been better if the cameraman had moved around rather than panned, as that would’ve provided better viewpoints (I’ll search for a better video clip). These I stitched together using 123D Catch. I have the Python Photogrammetry Toolbox on my other computer, so I’ll try doing it again on that machine; 123D Catch is all well and good but it is quite black-box; with PPT I can perhaps achieve better results.

The resulting model from 123D Catch shows the inside of the atrium far better than I expected (and again, a better starting film would probably give better results). I exported the .obj, .mtl, and jpg textures for the resulting model to my computer, then uploaded them to augmentedev.com.

The result? A pompeian house, on my desktop!

The Atrium of the House of the Tragic Poet, Pompeii-on-the-Rideau

Now imagine *all* of the video that exists out there of Pompeii. It should be possible to create a 3d model of nearly the whole city (or at least, the parts they let tourists into), harvesting videos from youtube. One could then 3d print the city, export to AR, or import into a game engine….

As far as the #hist5702x project is concerned, we could do this in the workspace they’ve set up for us in the warehouse building, or at the airport, or from historical footage from inside a plane, or….

Historical Friction

edit June 6 – following on from collaboration with Stu Eve, we’ve got a version of this at http://graeworks.net/historicalfriction/

I want to develop an app that makes it difficult to move through the historically ‘thick’ places – think Zombie Run, but with a lot of noise when you are in a place that is historically dense with information. I want to ‘visualize’ history, but not bother with the usual ‘augmented reality’ malarky where we hold up a screen in front of our face. I want to hear the thickness, the discords, of history. I want to be arrested by the noise, and to stop still in my tracks, be forced to take my headphones off, and to really pay attention to my surroundings.

So here’s how that might work.

1. Find wikipedia articles about the place where you’re at. Happily, inkdroid.org has some code that does that, called ‘Ici’. Here’s the output from that for my office (on the Carleton campus):

http://inkdroid.org/ici/#lat=45.382&lon=-75.6984

2. Copy that page (so not the full wikipedia articles, just the opening bits displayed by Ici). Convert these wikipedia snippets into numbers: let A=1, B=2, and so on. This site will do that:

http://rumkin.com/tools/cipher/numbers.php

3. Replace dashes with commas. Convert those numbers into music. Musical Algorithms is your friend for that. I used the default settings, though I sped it up to 220 beats per minute. Listen for yourself here. There are a lot of wikipedia articles about the places around here; presumably if I did this on, say, my home village, the resulting music would be much less complex: sparse, quiet, slow. So if we increased the granularity, you’d start to get an acoustic soundscape of quiet/loud, pleasant/harsh sounds as you moved through space – a cost surface, a slope. Would it push you from the noisy areas to the quiet? Would you discover places you hadn’t known about? Would the quiet places begin to fill up as people discovered them?
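The letter-to-number step is trivial to script, so you don't actually need the cipher website. A minimal sketch (the function name is mine):

```python
def text_to_numbers(text):
    """A=1, B=2, ... Z=26; everything else (spaces, punctuation) is dropped."""
    return [ord(c) - 64 for c in text.upper() if 'A' <= c <= 'Z']

print(text_to_numbers("Carleton"))  # [3, 1, 18, 12, 5, 20, 15, 14]

# Comma-joined, ready to paste into Musical Algorithms:
print(",".join(map(str, text_to_numbers("Carleton University"))))
```

Feed it each wikipedia snippet and you get the comma-separated number string in one go.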

Right now, each wikipedia article is played in succession. What I really need to do is feed the entirety of each article through the musical algorithm, and play them all at once. And I need a way to do all this automatically, and feed it to my smartphone. Maybe by building upon this tutorial from MIT’s App Inventor. Perhaps there’s someone out there who’d enjoy the challenge?

I mooted all this at the NCPH THATCamp last week – which prompted a great discussion about haptics, other ways of engaging the senses, for communicating public history. I hope to play at this over the summer, but it’s looking to be a very long summer of writing new courses, applying for tenure, y’know, stuff like that.

Edit April 26th – Stuart and I have been playing around with this idea this morning, and have been making some headway per his idea in the comments. Here’s a quick screengrab of it in action: http://www.screencast.com/t/DyN91yZ0

p3d.in for hosting your 3d scans

I’m playing with p3d.in to host some three dimensional models I’ve been making with 123D Catch. These are models that I have been using in conjunction with Junaio to create augmented reality pop-up books (and other things; more on that anon). Putting these 3d objects onto a webpage (or heaven forbid, a pdf) has been surprisingly complicated and time-consuming. P3d.in serves a very useful purpose, then!

Below are two models that I made using 123D Catch. The first is the end of a log recovered from anaerobic conditions at the bottom of the Ottawa River (which is very, very deep in places). The Ottawa was used as a conduit for floating timber from its enormous watershed to markets in the US and the UK for nearly two hundred years. Millions of logs floated down annually… so there’s a lot of money sitting down there. A local company, Log’s End, has been recovering these old growth logs and turning them into high-end wide plank flooring. They can’t use the ends of the logs as they are usually quite damaged, so my father picked some up and gave them to me, knowing my interest in all things stamped. This one carries an S within a V, which dates it to the time and timber limits of J.R. Booth, I believe.

logend-edit2 (Click to view in 3D)

And here we have one of the models that my students made last year from the Mesoamerican materials conserved at the Canadian Museum of Civilization (soon-to-be-repurposed as the Museum of Canadian History; what will happen to these awkward materials that no longer fit the new mandate?)

mesoamerican (Click to view in 3D)

PS
Incidentally, I’ve now embedded these in a Neatline exhibition I am building:

3d manipulable objects in time and space

123D Catch iPhone app

I’ve just been playing with the 123D Catch iPhone app. Aside from some annoying login business, it’s actually quite a nice little app for creating 3d volumetric models. I have a little toy car in my office. I opened the app, took 15 pictures of the car sitting on my desk, and sent it off for processing. The resulting model is viewable here. Not too bad for 5 minutes of futzing about with the iPhone.

Since I’m interested in 3d models as fodder for augmented reality, this is a great workflow. No fooling around trying to reduce model & mesh size to fit into the AR pipeline. By making the model with the iPhone’s camera in the first place, the resulting model files are sufficiently small (more or less; more complicated objects will probably need a bit of massaging) to swap into my Junaio xml and pump out to my channel with little difficulty.

 

How to make an augmented reality pop-up book

We made an augmented reality pop-up book in my first year seminar last spring. Perhaps you’d like to make your own?

1. Go to Junaio and register as a developer.

2. Get some server space that meets Junaio’s requirements.

3. On the right hand side of the screen, when you’re logged into Junaio, you can find out what your API key is by clicking on ‘show my api’. You’ll need this info.

4. Create a channel (which you do once you’re logged in over at Junaio; click on ‘new channel’ on the right hand side of the screen). You will need to fill in some information. For the purposes of creating a pop-up book, you select a ‘GLUE channel’. The ‘callback’ is the URL of the folder on your server where you’re going to put all of your assets. Make this ‘YOURDOMAIN/html/index.php’. Don’t worry that the folder ‘html’ or the file ‘index.php’ doesn’t exist yet. You’ll create those in step 6.

Now the fun begins. I’m going to assume for the moment that you’ve got some 3d models available that you’ll be using to augment the printed page. These need to be in .obj format, and they need to be smaller than 750 kb. I’ve used 123D Catch to make my models, and then Meshlab to reduce the size of the models (Quadric Edge Collapse Decimation is the filter I use for that). Make sure you keep a copy of the original texture somewhere so that it doesn’t get reduced when you reduce polygons.

5. Create your tracking images. You use Junaio’s tool to do this. At the bottom of that page, where it says, ‘how many patterns do you want to generate?’, select however many images you’re going to augment. PNGs should work; make sure that they are around 100 kb or smaller. If your images don’t load into the tool – if it seems to get stuck in a neverending loading cycle – your images may be too large or in the wrong format. Once this process is done, you’ll download a file called ‘tracking.xml_enc’. Keep this.

6. Now, at this point, things are a bit different than they were in May, as Junaio changed their API somewhat. The old code still works, and that’s what I’m working with here. Here’s the original tutorial from Junaio. Download the ‘Getting Started’ php package. Unzip it, and put its contents into your server space.

7. Navigate to config/config.php, open it, and put your Junaio developer key in there.

8. I created a folder called ‘resources’; this is where you’re going to put your assets. Put the tracking.xml_enc in there.

9. Navigate to src/search.php. Open this. This is the file that does all the magic, that ties your tracking images to your resources. Here’s mine, for our book. Note how there’s a mixture of movies and models in there:

<?php

/**
 * @copyright  Copyright 2010 metaio GmbH. All rights reserved.
 * @link       http://www.metaio.com
 * @author     Frank Angermann
 **/

require_once '../library/poibuilder.class.php';

/**
 * When the channel is being viewed, a poi request will be sent
 * $_GET['l']... (optional) Position of the user when requesting poi search information
 * $_GET['o']... (optional) Orientation of the user when requesting poi search information
 * $_GET['p']... (optional) Perimeter of the data requested, in meters
 * $_GET['uid']... Unique user identifier
 * $_GET['m']... (optional) Limit of values to be returned
 * $_GET['page']... Page number of result. e.g. m = 10: page 1: 1-10; page 2: 11-20, etc.
 **/

//use the poiBuilder class
$jPoiBuilder = new JunaioBuilder();

//create the xml start
$jPoiBuilder->start("http://YOURDOMAIN/resources/tracking.xml_enc");

//bookcover - trackingimage1
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/movie-reel.mp4", //texture
    95, //scale
    1, //cosID
    "Universal Newspaper Newsreel November 6, 1933, uploaded to youtube by publicdomain101", //description
    "", //thumbnail
    "movie1", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage2 pg9 xxi-a:55 - movie
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/edited-museum-1.mp4", //texture
    90, //scale
    2, //cosID
    "Faces of Mexico – Museo Nacional de Antropologia", //description
    "", //thumbnail
    "movie2", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

$cust = new Customization();
$cust->setName("Website");
$cust->setNodeID("click");
$cust->setType("url");
$cust->setValue("http://www.youtube.com/watch?v=Dfc257xI0eA");

$poi->addCustomization($cust);
//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage3 pg11 xxi-a:347 big head - 3d model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Effigy", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/id3-big-head.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/big-head-statue_tex_0.jpg", //resource (texture)
    5, //scale
    3, //cos ID -> which reference the POI is assigned to
    "XXI-A:347", //description
    "", //thumbnail
    "Zapotec Effigy", //id
    "0,3.14,1.57" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage4 pg13 xxi-a:51 from shaft tomb - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Shaft Tomb Figurine", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/id4-shaft-grave.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/april25-statue.jpg", //resource (texture)
    5, //scale
    4, //cos ID -> which reference the POI is assigned to
    "XXI-A:51", //description
    "", //thumbnail
    "Shaft Tomb Figurine", //id
    "0,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage5 pg15 xxi-a:28 - movie
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane4_3.md2_enc", //model
    "http://YOURDOMAIN/resources/pg15-movie.mp4", //texture
    90, //scale
    5, //cosID
    "Showing the finished model in Meshlab", //description
    "", //thumbnail
    "movie3", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage6 pg17 xxi-a:139 man with club - movie
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane4_3.md2_enc", //model
    "http://YOURDOMAIN/resources/archaeologicalsites-1.mp4", //texture
    90, //scale
    6, //cosID
    "", //description
    "", //thumbnail
    "movie4", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage7 pg19 xxi-a:27 fat dog - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Fat Dog", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/id5-dog-model.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/april25-dog_tex_0-small.png", //resource (texture)
    5, //scale
    7, //cos ID -> which reference the POI is assigned to
    "XXI-A:27, Created using 123D Catch", //description
    "", //thumbnail
    "Fat Dog", //id
    "0,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage8 pg21 xxi-a:373 ring of people - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Ring of People", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/ring.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/ring-2_tex_0.jpg", //resource (texture)
    5, //scale
    8, //cos ID -> which reference the POI is assigned to
    "XXI-A:373, Ring of people. Created using 123D Catch", //description
    "", //thumbnail
    "Ring of People", //id
    "1.57,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage9 pg23 xxi-a:29 old woman with head on knee - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Old Woman", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/statue2.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/Statue_try_1_tex_0.png", //resource (texture)
    5, //scale
    9, //cos ID -> which reference the POI is assigned to
    "XXI-A:29, Old woman seated with head on knee. Created using 123D Catch", //description
    "", //thumbnail
    "Old Woman", //id
    "0,3.14,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage10 pg29 Anything but Textbook - movie
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/carleton-promo.mp4", //texture
    100, //scale
    10, //cosID
    "Carleton University – Anything but textbook!", //description
    "", //thumbnail
    "movie5", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

$cust = new Customization();
$cust->setName("Website");
$cust->setNodeID("click");
$cust->setType("url");
$cust->setValue("http://carleton.ca");

$poi->addCustomization($cust);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//end of tracking images
$jPoiBuilder->end();

exit;

And that does it, basically. Each element, each augment, is delivered with ‘//deliver the POI’. After ‘createBasicGluePOI’ come the parameters. You provide the direct URL to your ‘mainresource’ when it’s a 3d model; the next line is the direct URL to the texture. You can make the model bigger or smaller by adjusting ‘scale’; ‘cos ID’ is next. Make sure these correspond with the images you uploaded when you created the tracking file, otherwise the wrong model or movie will play in the wrong spot. The ‘description’ is what will appear on the smartphone if somebody touches the screen at that point. ‘Orientation’ is a bugger to sort out, as it is in radians rather than degrees: multiply degrees by π/180, so 3.14 is a half turn (180°) and 1.57 a quarter turn (90°). I believe that 0,0,0 would put your model flat against your tracking image, but I could be wrong.
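Rather than eyeballing the radians, you can generate the orientation string. A small helper (the rx, ry, rz ordering is my reading of the examples above, so treat it as an assumption):

```python
import math

def orientation(rx_deg, ry_deg, rz_deg):
    """Convert rotations in degrees to Junaio's comma-separated radians string."""
    return ",".join(f"{math.radians(d):.2f}" for d in (rx_deg, ry_deg, rz_deg))

print(orientation(90, 90, 180))  # 1.57,1.57,3.14 -- the movie-plane value used above
```

Handy when you're iterating on a model that keeps appearing on its side.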

(If you’ll notice, some of the POIs are movies. To display these, you map a ‘movie plane’ over your tracking image, and then play the movie on top of it. Use Junaio’s movie plane – the URL is http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc – and put that in the line after ‘translation’. The next line will be the direct URL to your movie. Movies need to be packaged a bit for iphone/android delivery; I used Handbrake to do this, with its presets. Load movie, export for iPhone, and voila.)

Regarding packaging your models: you have to zip together the texture, the obj file, and the .mtl file that Meshlab creates. An MTL file looks like this inside:

#
# Wavefront material file
# Converted by Meshlab Group
#

newmtl material_0
Ka 0.200000 0.200000 0.200000
Kd 1.000000 1.000000 1.000000
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000
map_Kd big-head-statue_tex_0.jpg

newmtl material_1
Ka 0.200000 0.200000 0.200000
Kd 0.501961 0.501961 0.501961
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000

Make sure the texture file named in this file (here, ‘big-head-statue_tex_0.jpg’) is the same as the one in the zip file, and the same as the one called in the search.php. I confess a bit of ignorance here: I found that I also had to have the texture file in the main resources folder, unzipped. This is the one the search.php points to; but if you don’t also have it in the zipped file, you get a 3d object without texture. I do not know why this is.
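One way to avoid the name-mismatch problem is to script the packaging: read the map_Kd line out of the .mtl and refuse to zip if it doesn't match the texture you supplied, and warn if the result busts the 750 kb limit. A sketch (function names are mine; it enforces consistency but doesn't explain the unzipped-copy quirk):

```python
import os
import re
import zipfile

JUNAIO_LIMIT = 750 * 1024  # Junaio's model-size cap, in bytes


def mtl_texture(mtl_text):
    """Return the texture filename named by the first map_Kd line, or None."""
    m = re.search(r"^map_Kd\s+(\S+)", mtl_text, re.MULTILINE)
    return m.group(1) if m else None


def package_model(obj_path, mtl_path, tex_path, zip_path):
    """Zip obj + mtl + texture, checking the names line up and the size fits."""
    with open(mtl_path) as f:
        named = mtl_texture(f.read())
    if named != os.path.basename(tex_path):
        raise ValueError(f"mtl names {named!r}, not {os.path.basename(tex_path)!r}")
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as z:
        for p in (obj_path, mtl_path, tex_path):
            z.write(p, os.path.basename(p))
    if os.path.getsize(zip_path) > JUNAIO_LIMIT:
        raise ValueError("zip exceeds Junaio's 750 kb limit; reduce in Meshlab")

# usage (hypothetical filenames):
# package_model("big-head.obj", "big-head.mtl",
#               "big-head-statue_tex_0.jpg", "id3-big-head.zip")
```

Remember to also drop an unzipped copy of the texture into the resources folder, per the quirk above.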

10. Go back to ‘my channels’ on Junaio. Click ‘validate’ beside your channel. This will tell you if everything is ok. Note – sometimes things come back as an error when they aren’t really a problem. The only way to know the difference is to click on ‘get QR code’ and then go to step 11:

11. With your smartphone, having already downloaded the Junaio app, click ‘scan’ and scan the QR code for your channel. Your content – if all is well – will load. Aim your phone at one of your tracking images, and your augmentation should appear. Voila!

So that should do it. Good luck, and enjoy. Keep in mind that with Junaio’s new api, a lot of this has been streamlined. I’ll get around to learning that, soon.

Mesoamerica in Gatineau: Augmented Reality Museum Catalogue Pop-Up Book

Would you like to take a look at the term project of my first year seminar course in digital antiquity at Carleton University? Now’s your chance!

Last winter, Terence Clark and Matt Betts, curators at the Museum of Civilization in Gatineau, Quebec, saw on this blog that we were experimenting with 123D Catch (then called ‘Photofly’) to make volumetric models of objects from digital photographs. Terence and Matt were also experimenting with the same software. They invited us to the museum to select objects from the collection. The students were enchanted with materials from Mesoamerica, and our term project was born: what if we used augmented reality to create a pop-up museum catalogue? The students researched the artefacts, designed and produced a catalogue, photographed artefacts, used 123D Catch to turn them into 3d models, Meshlab to clean the models up, and Junaio to do the augmentation. (I helped a bit on the augmentation. But now that I know, roughly, what I’m doing, I think I can teach the next round of students how to do this step for themselves, too.) The hardest part was reducing the models to less than 750 kb (per the Junaio specs) while retaining something of their visual complexity.

The results were stunning. We owe an enormous debt of gratitude to Drs. Clark and Betts, and the Museum of Civilization for this opportunity. Also, the folks at Junaio were always very quick to respond to cries for help, and we thank them for their patience!

Below, you’ll find the QR code to scan with Junaio, to load the augmentations into your phone. Then, scan the images to reveal the augmentation (you can just point your phone at the screen). Try to focus on a single image at a time.

Also, you may download the pdf of the book, and try it out. (Warning: large download).

Artefact images taken by Jenna & Tessa; courtesy of the Canadian Museum of Civilization

3d Models & Augmented Reality

A longer post will follow with details, but I’m so pleased with the results I’m putting some stuff up right now. In my first year seminar class on digital antiquity which just ended, we’ve been experimenting with 123D Catch to make models of materials conserved at the Canadian Museum of Civilization (thanks Terry & Matt!). Our end of term project was to take these models, and think through ways of using them to open up the hidden museum to a wider public. We wondered if we could get these models onto people’s smartphones, as a kind of augmented reality (we settled on Junaio).

The students researched the artefacts, wrote up a booklet, and had it printed. They made the models, taking the photos, cleaning up in Meshlab, making videos and all the other sundry tasks necessary to the project. We ran out of time though with regard to the augmented reality part. By the end of term, we only had one model that was viewable on a smartphone. Today I added the rest of the materials to our ‘channel’ on Junaio, and tested it on the booklet.

It was magic. I was so excited, I ran around campus, trying to find people who I could show it to, who would appreciate it (nothing kills a buzz like showing off work to people who don’t really appreciate it, yet smile politely as you trail off…)

More about our workflow and the tacit knowledge necessary to make this all work will follow. In the image below, a model sits on the booklet on my desk. Handsome devil, eh?

Simple Omeka to Wikitude Hack

I’m working on some projects at the moment, aiming to make augmented reality and cultural heritage discovery easier and gentler for the small scale historical society, student groups, etc: folks with a basic level of web literacy, but no real great level of programming skills.

To that end, here’s something one can do with Omeka, to push items from its database into the Wikitude augmented reality platform.

  1. In Omeka, have the Geolocation plugin installed and working.
  2. Navigate to http://[your omeka site.com]/geolocation/map.kml
  3. You should see the xml structure of your geolocated items.
  4. In a new tab, go to wikitude.me, and sign up for a developer account (it’s free).
  5. Click ‘add new world’.
  6. Click ‘upload KML file’.
  7. Fill in all required fields (you’ll have to create a 32 by 32 pixel icon to serve as a dot-on-the-map, and upload that too).
  8. Under ‘KML/KMZ’ file, click on ‘Enter KML URL’. This will give you a box into which you may paste the URL from #2.
  9. Hit save.

If you’re successful, the next screen will tell you how many points have been uploaded. If, at some later point you’ve added many more items to Omeka, you’ll have to go back to your World in Wikitude and hit save again, to upload the most recent stuff.

Now, with Wikitude on your phone, you might not be able to find your world right away. There’s a solution: log back into the Wikitude developer zone and click on the world you just created, and you’ll find a string of letters under ‘developer key’. On your iPhone, go to ‘Settings’, select ‘Wikitude’, and under ‘Developer Settings’ enter that developer key. Start Wikitude up, refresh the display, and your items from Omeka will be under ‘Around Me’.

…And there you have it. Right now, this just does the basic text descriptions, and the location. By fiddling with the Geolocation plugin code, one might be able to add the other information that Wikitude can display, like images, video, audio, etc.
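If you do want to fiddle, the map.kml feed from step 2 is easy to inspect before pushing it anywhere. A sketch that pulls each item's name and coordinates out of the feed (the namespace URI is the standard KML one; the sample data is illustrative):

```python
import xml.etree.ElementTree as ET

KML = "{http://www.opengis.net/kml/2.2}"  # standard KML 2.2 namespace


def placemarks(kml_text):
    """Return (name, 'lon,lat') pairs from an Omeka geolocation KML feed."""
    root = ET.fromstring(kml_text)
    out = []
    for pm in root.iter(KML + "Placemark"):
        name = pm.findtext(KML + "name", default="").strip()
        coords = pm.findtext(KML + "Point/" + KML + "coordinates", default="").strip()
        out.append((name, coords))
    return out


# A made-up single-item feed, for illustration:
sample = """<kml xmlns="http://www.opengis.net/kml/2.2"><Document>
<Placemark><name>Log Driving Site</name>
<Point><coordinates>-75.6984,45.382</coordinates></Point></Placemark>
</Document></kml>"""

print(placemarks(sample))  # [('Log Driving Site', '-75.6984,45.382')]
```

From here it would be a short step to rewrite descriptions or add media links before handing the file to Wikitude.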

For a similar approach, but directly from Google Maps, see this video by drmonkeyjcg: