Problems with low-friction AR

Image credit: Camille Rose, Flickr

Ok. I’ve had a bit of feedback from folks. The issues seem to be:

  • audio doesn’t always load
  • locations don’t always trigger

Those are two big issues. I’m not entirely sure what to do about them. I just spun up a story that takes place around the quad here; I took the Ottawa Anomaly code and plugged in different coordinates. When I playtested from my computer, audio loaded up, which was good. But when I went downstairs and outside, to where I knew the first trigger to be: no audio. The ‘play audio’ function reported, ‘what audio?’ so I know the <> macro in the initialization passage didn’t load up.

I went to a second location; the geotrigger didn’t trigger. It kept reloading the previous geotrigger. Except – if I reloaded the entire story, then the new trigger’d trig. So am I dealing with a caching issue? Do I need to clear the latitude/longitude variables when the player moves on?

You can download the html and import it into the Twine2 interface. Answers on a postcard…

[update a few hours later] So I went back downstairs, and outside, and lo! everything loaded up as desired. Must be a loading issue. In which case, I should shorten the clips, and also have a ‘turn off’ whenever the user rescans for geotriggers – I was getting overlapping sounds.
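For what it’s worth, the ‘turn off’ could be as simple as routing all playback through one function that stops whatever is already playing. Here’s a rough sketch to drop into the story JavaScript – it assumes the clips are ordinary HTML5 Audio objects, and the function name and file name are mine, not anything built into Twine or SugarCube:

// a sketch of the 'turn off' idea: keep track of whatever clip is playing,
// and stop it before the next geotrigger (or a rescan) starts a new one.
var currentClip = null;

window.playClip = function (url) {
    if (currentClip) {
        currentClip.pause();         // stop the previous clip
        currentClip.currentTime = 0; // and rewind it, in case it gets reused
    }
    currentClip = new Audio(url);
    currentClip.play();
};

// e.g., from a triggered passage: playClip("audio/gunshot.mp3");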

 

On haunts & low-friction AR – thinking out loud

The frightening news is that we are living in a story. The reassuring part is that it’s a story we’re writing ourselves. Alas, though, most of us don’t even know it – or are afraid to accept it. Never before did we have so much access to the tools of storytelling – yet few of us are willing to participate in their creation.

– Douglas Rushkoff, ‘Renaissance Now! The Gamers’ Perspective’ in Handbook of Computer Game Studies, MIT Press, 2005: 415.

Haunts is about the secret stories of spaces.

Haunts is about locative trauma.

Haunts is about the production of what Foucault calls “heterotopias”—a single real place in which incompatible counter-sites are layered upon or juxtaposed against one another.

The general idea behind Haunts is this: students work in teams, visiting various public places and tagging them with fragments of either a real life-inspired or fictional trauma story. Each team will work from an overarching traumatic narrative that they’ve created, but because the place-based tips are limited to text-message-sized bits, the story will emerge only in glimpses and traces, across a series of spaces.

– Mark Sample, “Haunts: Place, Play, and Trauma” Sample Reality http://www.samplereality.com/2010/06/01/haunts-place-play-and-trauma/

It’s been a while since I’ve delved into the literature surrounding locative place-based games. I’ve been doing so as I try to get my head in gear for this summer’s Digital Archaeology Institute where I’ll be teaching augmented reality for archaeology.

Archaeology and archaeological practice are so damned broad though; in order to do justice to the time spent, I feel like I have to cover lots of different possibilities for how AR could be used in archaeological practice, from several different perspectives. I know that I do want to spend a lot of time looking at AR from a game/playful perspective though.  A lot of what I do is a kind of digital bricolage, as I use whatever I have to hand to do whatever it is I do. I make no pretense that what I’m doing/using is the best method for x, only that it is a method, and one that works for me. So for augmented reality in archaeology, I’m thinking that what I need to teach are ways to get the maximum amount of storytelling/reality making into the greatest number of hands. (Which makes me think of this tweet from Colleen Morgan this am:

…but I digress.)

So much about what we find in archaeology is about trauma. Houses burn down: archaeology is created. Things are deliberately buried: archaeology is created. Materials are broken: archaeology is created.

Sample’s Haunts then provides a potential framework for doing archaeological AR. He goes on to write:

The narrative and geographic path of a single team’s story should alone be engaging enough to follow, but even more promising is a kind of cross-pollination between haunts, in which each team builds upon one or two shared narrative events, exquisite corpse style. Imagine the same traumatic kernel, being told again and again, from different points of views. Different narrative and geographic points of views. Eventually these multiple paths could be aggregated onto a master narrative—or more likely, a master database—so that Haunts could be seen (if not experienced) in its totality.

It was more of a proof of concept than anything else, but my ‘low-friction AR‘ piece ‘The Ottawa Anomaly‘ tries not so much to tell a story as to provide echoes of events in key areas around Ottawa’s downtown, such that each player’s experience of the story would be different – the sequence of geotriggers encountered would colour each subsequent trigger’s emotional content. If you hear the gunshot first, and then the crying, that implies a different story than if you hear them the other way around. The opening tries to frame a storyworld where it makes sense to hear these echoes of the past in the present, so that the technological mediation of the smartphone fits the world. It also tries to make the player stop and look at the world around them with new eyes (something ‘Historical Friction‘ tries to do as well).

I once set a treasure hunt around campus for my first year students. One group, however, interpreted a clue as pointing to a particular statue in downtown Ottawa; they returned to campus much later and told me a stunning tale of illuminati and the secret history of Official Ottawa that they had crafted to make sense of the clues. Same clues, different geographical setting (by mistake) = oddly compelling story. What I’m getting at: my audio fragments could evoke very different experiences depending not just on their order of encounter but also on the background of the person listening. I suggested as much in a tweet – creating, in effect, another level of storytelling on top of my own.

I imagine my low-friction AR as a way of layering multiple stories within the same geographic frame, with ‘rechoes’ or ‘fieldnotes’ as ways of cross-connecting different stories. I once toyed with the idea of printing out QR codes such that they could be pasted overtop of ‘official Ottawa‘ for similar purposes…

Low Friction Augmented Reality

But my arms get tired.

Maybe you’ve thought, ‘Augmented reality – meh’. I’ve thought that too. Peeping through my tablet or phone’s screen at a 3d model displayed on top of the viewfinder… it can be neat, but as Stu wrote years ago,

[with regard to ‘Streetmuseum’, a lauded AR app overlaying historic London on modern London] …it is really the equivalent of using your GPS to query a database and get back a picture of where you are. Or indeed going to the local postcard kiosk buying an old paper postcard of, say, St. Paul’s Cathedral and then holding it up as you walk around the cathedral grounds.

I’ve said before that, as historians and archaeologists, we’re maybe missing a trick by messing around with visual augmented reality. The past is aural. (If you want an example of how affecting an aural experience can be, try Blindside).

Maybe you’ve seen ‘Ghosts in the Garden‘. This is a good model. But what if you’re just one person at your organization? It’s hard to put together a website, let alone voice actors, custom cases and devices, and so on. I’ve been experimenting these last few days with trying to use the Twine interactive fiction platform as a low-friction AR environment. Normally, one uses Twine to create choose-your-own-adventure texts. A chunk of text, a few choices, those choices lead to new texts… and so on. Twine uses an editor that is rather like having little index cards that you move around, automatically creating new cards as you create new choices. When you’re finished, Twine exports everything you’ve done into a single html file that can live online somewhere.

That doesn’t even begin to touch on the clever things folks can do with Twine; the platform is far more capable than it first appears. For one thing, as we’ll see below, it’s possible to arrange things so that passages of text are triggered not by clicking, but by your position in geographical space.

You can augment reality with Twine. You don’t need to buy the fancy software package, or the monthly SDK license. You can do it yourself, and keep control over your materials, working with this fantastic open-source platform.

When the idea occurred to me, I had no idea how to make it happen. I posed the question on the Twine forums, and several folks chimed in with suggestions about how to make this work. I now have a platform for delivering an augmented reality experience. Right now, when you pass through an area where I’ve put a geotrigger, it plays various audio files (I’m going for a horror-schlock vibe. Lots of backwards talking. Very Twin Peaks). What I have in mind is that you would have to listen carefully to figure out where other geotriggers might be (or it could be straight-up tour-guide type audio or video). I’ve also played with embedding 3d models (both with and without Oculus Rift enabled), another approach that is full of potential – perhaps the player/reader has to carefully examine the annotations on the 3d model to figure out what happens next.

Getting it to work on my device was a bit awkward, as I had to turn on geolocation for apps, for Google, for everything that wanted it (I’ve since turned geolocation off again).

If you’re on Carleton’s campus, you can play the proof-of-concept now: http://philome.la/electricarchaeo/test-of-geolocation-triggers/play  But if you’re not on Carleton’s campus, well, that’s not all that useful.

To get this working for you, you need to start a new project in Twine 2. Under story format (click the up arrow beside your story title, bottom left of the editor), make sure you’ve selected SugarCube (this is important; the different formats have different abilities, and we’re using a lot of JavaScript here). Then, in the same place, find ‘edit story javascript’, because you need to add a whole bunch of JavaScript:


(function () {
    if ("geolocation" in navigator && typeof navigator.geolocation.getCurrentPosition === "function") {
        // set up the success and error callbacks as well as the options object
        var positionSuccess = function (position) {
                // you could simply assign the `coords` object to `$Location`,
                // but we only need the latitude and longitude here
                state.active.variables["Location"] = {
                    latitude  : position.coords.latitude,
                    longitude : position.coords.longitude
                };
                // access is then via $Location.latitude and $Location.longitude
            },
            positionError = function (error) {
                /* currently a no-op; code that handles errors would go here */
            },
            positionOptions = {
                timeout: 31000,
                enableHighAccuracy: true,
                maximumAge : 120000 // (in ms) cached results may not be older than 2 minutes;
                                    // this can probably be tweaked upwards a bit
            };

        // since the API is asynchronous, we give `$Location` an initial value, so
        // trying to access it immediately causes no issues if the first callback
        // takes a while
        state.active.variables["Location"] = { latitude : 0, longitude : 0 };

        // make an initial call for a position while the system is still starting
        // up, so we can get real data ASAP (probably not strictly necessary, as the
        // first call via the `predisplay` task [below] should happen soon enough)
        navigator.geolocation.getCurrentPosition(
            positionSuccess,
            positionError,
            positionOptions
        );

        // register a `predisplay` task which attempts to update the `$Location`
        // variable whenever passage navigation occurs
        predisplay["geoGetCurrentPosition"] = function () {
            navigator.geolocation.getCurrentPosition(
                positionSuccess,
                positionError,
                positionOptions
            );
        };
    } else {
        /* currently a no-op; code that handles a missing/disabled geolocation API would go here */
    }
}());

(function () {
    window.approxEqual = function (a, b, allowedDiff) { // allowedDiff must always be > 0
        if (a === b) { // handles various "exact" edge cases
            return true;
        }
        allowedDiff = allowedDiff || 0.0005;
        return Math.abs(a - b) < allowedDiff;
    };
}());

The first function lets your Twine story get geocoordinates. The second lets us put a buffer around the points of interest. Then, in the story itself, you call that code and compare the result against your points of interest so that Twine knows which passage to display. So in a new passage – call it ‘Search for Geotriggers’ – you have this:

<<if approxEqual($Location.latitude, $Torontolat) and approxEqual($Location.longitude, $Torontolong)>>
<<display "Downtown Toronto">>
<<else>>
<<display "I don't know anything about where you are">>
<</if>>

So that bit above says: if the location is more or less equal to the point of interest defined by $Torontolat and $Torontolong, then display the passage called “Downtown Toronto”. If you’re not within the buffer around the Toronto point, display the passage called “I don’t know anything about where you are”. (With the default allowedDiff of 0.0005 degrees, ‘more or less’ works out to very roughly fifty metres; pass a third argument to approxEqual if you want a wider or tighter buffer.)
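If you want a feel for what approxEqual treats as ‘more or less equal’, you can poke at it from your browser’s developer console while the story is running (the coordinates here are just made-up nearby values):

approxEqual(45.3821, 45.3825);        // true  – difference of 0.0004 is inside the default 0.0005 buffer
approxEqual(45.3821, 45.3830);        // false – difference of 0.0009 is outside it
approxEqual(45.3821, 45.3830, 0.001); // true  – a wider buffer passed explicitly as the third argument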

Back at the beginning of your story, you have an initialization passage (where your story starts) and you set some of those variables:

<<set $Torontolat = 43.653226>>
<<set $Torontolong = -79.3831843>>

[[Search for Geotriggers]]

And that’s the basics of building a DIY augmented reality. Augmented? Sure it’s augmented. You’re bringing digital ephemera into play (and I use the word play deliberately) in the real world. Whether you build a story around that, or go for more of the tour guide approach, or devise fiendish puzzles, is up to you.

I’m grateful to ‘Greyelf’ and ‘TheMadExile’ for their help and guidance as I futzed about doing this.

[update May 22: Here is the html for a game that takes place in and around downtown Ottawa Ontario. Download it somewhere handy, then open the Twine 2 editor. Open the game file in the editor via the Import button and you’ll see how I built it, organized the triggers and so on. Of course, it totally spoils any surprise or emergent experience once you can see all the working parts so if you’re in Ottawa, play it here on your device first before examining the plumbing!]

Putting Pompeii on Your Coffee Table

(cross-posted from my course blog, #hist5702x digital/public history. If you’re interested in public history and augmented reality, check out my students’ posts!)

Creating three dimensional models from photographs has its ups and downs. But what if we could do it from video? I decided to find out.

First, I found this tourist’s film of a house at Pompeii (house of the tragic poet, he says):

I saved a copy of the film locally; there are a variety of ways of doing this and two seconds with google will show you how. I then watched it carefully, and took note of a sequence of clearly lit pans at various points, marking down when they started and stopped, in seconds.


Then, I searched for a way to extract still images from that clip. This blog post describes a command-line option using VLC (option 3). I went with that, which created around 600 images. I then batch converted them from png to jpg (Google around again; the solution I found from download.com was filled with extraneous crapware that cost me 30 minutes to delete).

I then selected around 40 images that seemed to cover things well. It would’ve been better if the cameraman had moved around rather than panned, as that would’ve provided better viewpoints (I’ll search for a better video clip). These I stitched together using 123D Catch. I have the Python Photogrammetry Toolbox on my other computer, so I’ll try doing it again on that machine; 123D Catch is all well and good but it is quite black-box; with PPT I can perhaps achieve better results.

The resulting model from 123D Catch shows the inside of the atrium far better than I expected (and again, a better starting film would probably give better results). I exported the .obj, .mtl, and jpg textures for the resulting model to my computer, then uploaded them to augmentedev.com.

The result? A pompeian house, on my desktop!

The Atrium of the House of the Tragic Poet, Pompeii-on-the-Rideau

Now imagine *all* of the video that exists out there of Pompeii. It should be possible to create a 3d model of nearly the whole city (or at least, the parts they let tourists into), harvesting videos from youtube. One could then 3d print the city, export to AR, or import into a game engine….

As far as the #hist5702x project is concerned, we could do this in the workspace they’ve set up for us in the warehouse building, or at the airport, or from historical footage from inside a plane, or….

Historical Friction

edit June 6 – following on from collaboration with Stu Eve, we’ve got a version of this at http://graeworks.net/historicalfriction/

I want to develop an app that makes it difficult to move through the historically ‘thick’ places – think Zombies, Run!, but with a lot of noise when you are in a place that is historically dense with information. I want to ‘visualize’ history, but not bother with the usual ‘augmented reality’ malarky where we hold up a screen in front of our face. I want to hear the thickness, the discords, of history. I want to be arrested by the noise, to stop still in my tracks, to be forced to take my headphones off and really pay attention to my surroundings.

So here’s how that might work.

1. Find wikipedia articles about the place where you’re at. Happily, inkdroid.org has some code that does that, called ‘Ici’. Here’s the output from that for my office (on the Carleton campus):

http://inkdroid.org/ici/#lat=45.382&lon=-75.6984
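(As an aside: if Ici ever disappears, this kind of lookup is simple enough to do yourself. Here’s a rough sketch against Wikipedia’s GeoSearch API – my own stand-in, not Ici’s actual code, and not necessarily how Ici works under the hood:)

// list the titles of wikipedia articles near a pair of coordinates,
// using the MediaWiki GeoSearch API
function nearbyArticles(lat, lon, radiusMetres) {
    var url = "https://en.wikipedia.org/w/api.php?action=query&list=geosearch" +
              "&gscoord=" + lat + "%7C" + lon +
              "&gsradius=" + radiusMetres +
              "&gslimit=20&format=json&origin=*";
    return fetch(url)
        .then(function (response) { return response.json(); })
        .then(function (data) {
            return data.query.geosearch.map(function (page) { return page.title; });
        });
}

// e.g., for my office: nearbyArticles(45.382, -75.6984, 1000).then(console.log);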

2. I copied that page (not the full wikipedia articles, just the opening bits displayed by Ici) and converted these snippets into numbers: A=1, B=2, and so on (there’s a little sketch of this below). This site will do that:

http://rumkin.com/tools/cipher/numbers.php

3. Replace dashes with commas. Convert those numbers into music. Musical Algorithms is your friend for that. I used the default settings, though I sped it up to 220 beats per minute. Listen for yourself here. There are a lot of wikipedia articles about the places around here; presumably, if I did this on, say, my home village, the resulting music would be much less complex – sparser, quieter, slower. So if we increased the granularity, you’d start to get an acoustic soundscape of quiet/loud, pleasant/harsh sounds as you moved through space – a cost surface, a slope. Would it push you from the noisy areas to the quiet? Would you discover places you hadn’t known about? Would the quiet places begin to fill up as people discovered them?
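Steps 2 and 3 boil down to a trivial bit of text processing; here’s a quick sketch of the letters-to-numbers part (mine, just to show the idea – it produces the comma-separated string Musical Algorithms expects, dropping anything that isn’t a letter):

// turn a text snippet into "3,1,18,..." with A=1 ... Z=26
function textToNumbers(text) {
    return text
        .toUpperCase()
        .split("")
        .filter(function (ch) { return ch >= "A" && ch <= "Z"; })
        .map(function (ch) { return ch.charCodeAt(0) - 64; })
        .join(",");
}

// e.g. textToNumbers("Carleton") -> "3,1,18,12,5,20,15,14"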

Right now, each wikipedia article is played in succession. What I really need to do is feed the entirety of each article through the musical algorithm, and play them all at once. And I need a way to do all this automatically, and feed it to my smartphone. Maybe by building upon this tutorial from MIT’s App Inventor. Perhaps there’s someone out there who’d enjoy the challenge?

I mooted all this at the NCPH THATCamp last week, which prompted a great discussion about haptics and other ways of engaging the senses for communicating public history. I hope to play at this over the summer, but it’s looking to be a very long summer of writing new courses, applying for tenure, y’know, stuff like that.

Edit April 26th – Stuart and I have been playing around with this idea this morning, and have been making some headway per his idea in the comments. Here’s a quick screengrab of it in action: http://www.screencast.com/t/DyN91yZ0

p3d.in for hosting your 3d scans

I’m playing with p3d.in to host some three dimensional models I’ve been making with 123D Catch. These are models that I have been using in conjunction with Junaio to create augmented reality pop-up books (and other things; more on that anon). Putting these 3d objects onto a webpage (or heaven forbid, a pdf) has been strangely complicated and time-consuming, so p3d.in serves a very useful purpose!

Below are two models that I made using 123D catch. The first is the end of a log recovered from anaerobic conditions at the bottom of the Ottawa River (which is very, very deep in places). The Ottawa was used as a conduit for floating timber from its enormous watershed to markets in the US and the UK for nearly two hundred years. Millions of logs floated down annually…. so there’s a lot of money sitting down there. A local company, Log’s End, has been recovering these old growth logs and turning them into high-end wide plank flooring. They can’t use the ends of the logs as they are usually quite damaged, so my father picked some up and gave them to me, knowing my interest in all things stamped. This one carries an S within a V, which dates it to the time and timber limits of J.R. Booth I believe.

logend-edit2 (Click to view in 3D)

And here we have one of the models that my students made last year from the Mesoamerican materials conserved at the Canadian Museum of Civilization (soon to be repurposed as the Canadian Museum of History; what will happen to these awkward materials that no longer fit the new mandate?)

mesoamerican (Click to view in 3D)

PS
Incidentally, I’ve now embedded these in a Neatline exhibition I am building:

3d manipulable objects in time and space

123D Catch iPhone app

I’ve just been playing with the 123D Catch iPhone app. Aside from some annoying login business, it’s actually quite a nice little app for creating 3d volumetric models. I have a little toy car in my office. I opened the app, took 15 pictures of the car sitting on my desk, and sent it off for processing. The resulting model is viewable here. Not too bad for 5 minutes of futzing about with the iPhone.

Since I’m interested in 3d models as fodder for augmented reality, this is a great workflow. No fooling around trying to reduce model & mesh size to fit into the AR pipeline. By making the model with the iPhone’s camera in the first place, the resulting model files are sufficiently small (more or less; more complicated objects will probably need a bit of massaging) to swap into my Junaio xml and pump out to my channel with little difficulty.

 

How to make an augmented reality pop-up book

We made an augmented reality pop-up book in my first year seminar last spring. Perhaps you’d like to make your own?

1. Go to Junaio and register as a developer.

2. Get some server space that meets Junaio’s requirements.

3. On the right hand side of the screen, when you’re logged into Junaio, you can find out what your API key is by clicking on ‘show my api’. You’ll need this info.

4. Create a channel (which you do once you’re logged in over at Junaio; click on ‘new channel’ on the right hand side of the screen). You will need to fill in some information. For the purposes of creating a pop-up book, you select a ‘GLUE channel’. The ‘callback’ is the URL of the folder on your server where you’re going to put all of your assets. Make this ‘YOURDOMAIN/html/index.php’. Don’t worry that the folder ‘html’ or the file ‘index.php’ doesn’t exist yet. You’ll create those in step 6.

Now the fun begins. I’m going to assume for the moment that you’ve got some 3d models available that you’ll be using to augment the printed page. These need to be in .obj format, and they need to be smaller than 750 kb. I’ve used 123D Catch to make my models, and then Meshlab to reduce the size of the models (Quadric Edge Collapse Decimation is the filter I use for that). Make sure you keep a copy of the original texture somewhere so that it doesn’t get reduced when you reduce polygons.

5. Create your tracking images. You use Junaio’s tool to do this. At the bottom of that page, where it says ‘how many patterns do you want to generate?’, select however many images you’re going to augment. PNGs should work; make sure that they are around 100 kb or smaller. If your images don’t load into the tool – if it seems to get stuck in a neverending loading cycle – your images may be too large or in the wrong format. Once this process is done, you’ll download a file called ‘tracking.xml_enc’. Keep this.

6. Now, at this point, things are a bit different than they were in May, as Junaio changed their API somewhat. The old code still works, and that’s what I’m working with here. Here’s the original tutorial from Junaio. Download the ‘Getting Started’ php package. Unzip it, and put its contents into your server space.

7. Navigate to config/config.php, open it, and put your Junaio developer key in there.

8. I created a folder called ‘resources’; this is where you’re going to put your assets. Put tracking.xml_enc in there.

9. Navigate to src/search.php. Open this. This is the file that does all the magic, that ties your tracking images to your resources. Here’s mine, for our book. Note how there’s a mixture of movies and models in there:

<?php /**/ ?><?php

/**
* @copyright  Copyright 2010 metaio GmbH. All rights reserved.
* @link       http://www.metaio.com
* @author     Frank Angermann
**/

require_once '../library/poibuilder.class.php';

/**
* When the channel is being viewed, a poi request will be sent
* $_GET['l']… (optional) Position of the user when requesting poi search information
* $_GET['o']… (optional) Orientation of the user when requesting poi search information
* $_GET['p']… (optional) perimeter of the data requested in meters.
* $_GET['uid']… Unique user identifier
* $_GET['m']… (optional) limit of to be returned values
* $_GET['page']… page number of result. e.g. m = 10: page 1: 1-10; page 2: 11-20, etc.
**/

//use the poiBuilder class
$jPoiBuilder = new JunaioBuilder();

//create the xml start
$jPoiBuilder->start("http://YOURDOMAIN/resources/tracking.xml_enc");

//book cover - tracking image 1
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/movie-reel.mp4", //texture
    95, //scale
    1, //cosID
    "Universal Newspaper Newsreel November 6, 1933, uploaded to youtube by publicdomain101", //description
    "", //thumbnail
    "movie1", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 2, pg 9, xxi-a:55
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/edited-museum-1.mp4", //texture
    90, //scale
    2, //cosID
    "Faces of Mexico – Museo Nacional de Antropologia", //description
    "", //thumbnail
    "movie2", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

$cust = new Customization();
$cust->setName("Website");
$cust->setNodeID("click");
$cust->setType("url");
$cust->setValue("http://www.youtube.com/watch?v=Dfc257xI0eA");

$poi->addCustomization($cust);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 3, pg 11, xxi-a:347 bighead - 3d model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Effigy", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/id3-big-head.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/big-head-statue_tex_0.jpg", //resource (texture)
    5, //scale
    3, //cos ID -> which reference the POI is assigned to
    "XXI-A:51", //description
    "", //thumbnail
    "Zapotec Effigy", //id
    "0,3.14,1.57" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 4, pg 13, xxi-a:51 from shaft tomb - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Shaft Tomb Figurine", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/id4-shaft-grave.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/april25-statue.jpg", //resource (texture)
    5, //scale
    4, //cos ID -> which reference the POI is assigned to
    "XXI-A:51", //description
    "", //thumbnail
    "Shaft Tomb Figurine", //id
    "0,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 5, pg 15, xxi-a:28
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane4_3.md2_enc", //model
    "http://YOURDOMAIN/resources/pg15-movie.mp4", //texture
    90, //scale
    5, //cosID
    "Showing the finished model in Meshlab", //description
    "", //thumbnail
    "movie3", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 6, pg 17, xxi-a:139 man with club - movie
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane4_3.md2_enc", //model
    "http://YOURDOMAIN/resources/archaeologicalsites-1.mp4", //texture
    90, //scale
    6, //cosID
    "", //description
    "", //thumbnail
    "movie4", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 7, pg 19, xxi-a:27 fat dog - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Fat Dog", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/id5-dog-model.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/april25-dog_tex_0-small.png", //resource (texture)
    5, //scale
    7, //cos ID -> which reference the POI is assigned to
    "XXI-A:27, Created using 123D Catch", //description
    "", //thumbnail
    "Fat Dog", //id
    "0,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 8, pg 21, xxi-a:373 ring of people - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Ring of People", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/ring.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/ring-2_tex_0.jpg", //resource (texture)
    5, //scale
    8, //cos ID -> which reference the POI is assigned to
    "XXI-A:29, Old woman seated with head on knee. Created using 123D Catch", //description
    "", //thumbnail
    "Ring of People", //id
    "1.57,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 9, pg 23, xxi-a:29 old woman with head on knee - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Old Woman", //name
    "0,0,0", //translation
    "http://YOURDOMAIN/resources/statue2.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/Statue_try_1_tex_0.png", //resource (texture)
    5, //scale
    9, //cos ID -> which reference the POI is assigned to
    "XXI-A:29, Old woman seated with head on knee. Created using 123D Catch", //description
    "", //thumbnail
    "Old Woman", //id
    "0,3.14,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//tracking image 10, pg 29, Anything but textbook - movie
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture", //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/carleton-promo.mp4", //texture
    100, //scale
    10, //cosID
    "Carleton University – Anything but textbook!", //description
    "", //thumbnail
    "movie5", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

$cust = new Customization();
$cust->setName("Website");
$cust->setNodeID("click");
$cust->setType("url");
$cust->setValue("http://carleton.ca");

$poi->addCustomization($cust);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//end of tracking images
$jPoiBuilder->end();

exit;

And that does it, basically. Each element, each augment, is built and then delivered at ‘//deliver the POI’. After ‘createBasicGluePOI’ come the parameters. You provide the direct URL to your ‘mainresource’ when it’s a 3d model; the next line is the direct URL to the texture. You can make the augment bigger or smaller by adjusting ‘scale’; the cos ID is next. Make sure these correspond with the order of the images you uploaded when you created the tracking file; otherwise you can get the wrong model or movie playing in the wrong spot. The ‘description’ is what will appear on the smartphone if somebody touches the screen at this point. ‘Orientation’ is a bugger to sort out, as it is given in radians rather than degrees. I believe that 0,0,0 would put your model flat against your tracking image, but I could be wrong.
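If it helps, the conversion from degrees is just degrees × π/180. A throwaway JavaScript snippet (handy in a browser console) reproduces the orientation values used in the listing above:

// degrees-to-radians, for working out the orientation triples in search.php
function toRadians(degrees) {
    return degrees * Math.PI / 180;
}

toRadians(90);   // 1.5707... – the "1.57" used above
toRadians(180);  // 3.1415... – the "3.14" used above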

(If you’ll notice, some of the POIs are movies. To display these, you map a ‘movie plane’ over your tracking image, and then play the movie on top of this. Use Junaio’s movie plane – the URL is http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc – and put that in the line after ‘translation’. The next line will be the direct URL to your movie. Movies need to be packaged a bit for iPhone/Android delivery; I used Handbrake to do this, with its presets. Load movie, export for iPhone, and voilà.)

Regarding packaging your models – you have to zip together the texture, the .obj file, and the .mtl file that Meshlab creates. An MTL file looks like this inside:

#
# Wavefront material file
# Converted by Meshlab Group
#

newmtl material_0
Ka 0.200000 0.200000 0.200000
Kd 1.000000 1.000000 1.000000
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000
map_Kd big-head-statue_tex_0.jpg

newmtl material_1
Ka 0.200000 0.200000 0.200000
Kd 0.501961 0.501961 0.501961
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000

Make sure the texture file named in this file (here, ‘big-head-statue_tex_0.jpg’, on the map_Kd line) is the same as the one in the zip file, and the same as the one called in search.php. I confess a bit of ignorance here: I found that I also had to have the texture file in the resources folder, unzipped. That unzipped copy is the one search.php points to; but if you don’t also have it in the zip file, you get a 3d object without texture. I do not know why this is.

10. Go back to ‘my channels’ on Junaio. Click ‘validate’ beside your channel. This will tell you if everything is ok. Note – sometimes things come back as an error when they aren’t really a problem. The only way to know the difference is to click on ‘get QR code’ and then go to step 11:

11. With your smartphone, having already downloaded the Junaio app, click ‘scan’ and scan the QR code for your channel. Your content – if all is well – will load. Aim your phone at one of your tracking images, and your augmentation should appear. Voilà!

So that should do it. Good luck, and enjoy. Keep in mind that with Junaio’s new api, a lot of this has been streamlined. I’ll get around to learning that, soon.

Mesoamerica in Gatineau: Augmented Reality Museum Catalogue Pop-Up Book

Would you like to take a look at the term project of my first year seminar course in digital antiquity at Carleton University? Now’s your chance!

Last winter, Terence Clark and Matt Betts, curators at the Museum of Civilization in Gatineau Quebec, saw on this blog that we were experimenting with 123D Catch (then called ‘Photofly’) to make volumetric models of objects from digital photographs. Terence and Matt were also experimenting with the same software. They invited us to the museum to select objects from the collection. The students were enchanted with materials from Mesoamerica, and our term project was born: what if we used augmented reality to create a pop-up museum catalogue? The students researched the artefacts, designed and produced a catalogue, photographed the artefacts, used 123D Catch to turn them into 3d models, Meshlab to clean the models up, and Junaio to do the augmentation. (I helped a bit on the augmentation. But now that I know, roughly, what I’m doing, I think I can teach the next round of students how to do this step for themselves, too.) The hardest part was reducing the models to less than 750kb (per the Junaio specs) while retaining something of their visual complexity.

The results were stunning. We owe an enormous debt of gratitude to Drs. Clark and Betts, and the Museum of Civilization for this opportunity. Also, the folks at Junaio were always very quick to respond to cries for help, and we thank them for their patience!

Below, you’ll find the QR code to scan with Junaio, to load the augmentations into your phone. Then, scan the images to reveal the augmentation (you can just point your phone at the screen). Try to focus on a single image at a time.

Also, you may download the pdf of the book, and try it out. (Warning: large download).

Artefact images taken by Jenna & Tessa; courtesy of the Canadian Museum of Civilization

3d Models & Augmented Reality

A longer post will follow with details, but I’m so pleased with the results that I’m putting some stuff up right now. In my first year seminar class on digital antiquity, which has just ended, we’ve been experimenting with 123D Catch to make models of materials conserved at the Canadian Museum of Civilization (thanks Terry & Matt!). Our end-of-term project was to take these models and think through ways of using them to open up the hidden museum to a wider public. We wondered if we could get these models onto people’s smartphones, as a kind of augmented reality (we settled on Junaio).

The students researched the artefacts, wrote up a booklet, and had it printed. They made the models – taking the photos, cleaning things up in Meshlab, making videos, and all the other sundry tasks necessary to the project. We ran out of time, though, with regard to the augmented reality part. By the end of term, we only had one model that was viewable on a smartphone. Today I added the rest of the materials to our ‘channel’ on Junaio, and tested it on the booklet.

It was magic. I was so excited, I ran around campus, trying to find people who I could show it to, who would appreciate it (nothing kills a buzz like showing off work to people who don’t really appreciate it, yet smile politely as you trail off…)

More about our workflow and the tacit knowledge necessary to make this all work will follow. In the image below, a model sits on the booklet on my desk. Handsome devil, eh?