Heritage Jam 2015

…mmmm… heritage jam ….

So it's Heritage Jam time again. This year's theme is 'museums & collections'. I was just going to repurpose my 'diary in the attic' but I turned that into a tutorial for #msudai, and, well, with one thing and another, I'm not sure I'll actually get anything made. That's not to say I don't have an idea about what I'd do in its stead.

I’ve got lots of ideas.

I am, in slow moments, idly trying to make this one happen, but I’m having trouble on one step (#5, as it happens). If anybody wants to take this idea and run with it, throw me a footnote and off you go.

1. Select an object from a museum collection; ideally something roughly four cubic inches in size.
2. 3d model it, grab the .stl file, and
3. print it out.
4. Scan the print with Vuforia's object scanner to turn it into a 3d trackable.
5. Augment it with a video player panel and/or an audio voice bubble that pulls the video or audio from an external URL, a public place online.
6. Release the APK for the AR viewer.
7. Release the .stl file for printing, and the audio and video files for remixing.
8. Set things up so that the most recent remixes are automatically pulled into the viewer (so, you'd remix but _save with the same file name_ and overwrite the most recent).

In my head, I imagine people competing with each other to have the object ‘speak’ or tell its story (or someone else’s story), and each time you’d train the viewer on it (so there’d be multiple speaking copies of the thing in the world), there would be different content. With version control enabled for where I’d store it (step 6), you’d have a history of speaking, though each ‘hearing’/’viewing’ would be unique. Kind of Tales of Thingsish, but with an element of the mob. Choose the right object to begin with, and you’d have something rather different.

The diary in the attic

Update March 9, 2017: I found the damned thing again. Zip file here with the pages & the Android apk.

Update Feb 2, 2017: In a fit of madness, I 'tidied' up the Dropbox folder where these materials were hiding. And dumb-ass that I am, I can't find them again. So, in the meantime, you can grab the apk for the stereoscopic version here, and the diary pages here (although those might be too low-rez for this to work properly). An object lesson: never tidy anything.

 


Shawn dusted off the old diary. ‘Smells of mould’, he thought, as he flipped through the pages.

Hmmph. Somebody was pretty careless with their coffee.

I think it’s coffee.

Hmm.

Doesn’t smell like coffee. 

What the hell…. damn, this isn’t coffee.

Shawn cast about him, looking for the android digital spectralscope he kept handy for such occasions. Getting out his phone, he loaded the spectralscope up and, taking a safe position two or three feet away, gazed through it at the pages of the diary.

My god… it’s full of….

————————————–

The thing about hand-held AR is that you have to account for *why*. Why this device? Why are you looking through it at a page, or a billboard, or a magazine, or what-have-you? It's not at all natural. The various Cardboard-like viewers out there are a step up, in that they free the hands (and, with the see-through camera, feel more Geordi La Forge). In the passage above, I'm trying to make that hand-held AR experience feel more obvious, part of a story. That is, of course you reach for the spectralscope – the diary is clearly eldritch, something not right, and you need the device that helps you see beyond the confines of this world.

Without the story, it's just gee-whiz, look at what I can do. It's somehow not authentic. That's one of the reasons, I think, that various museum apps employing AR tricks haven't really taken off. The corollary of this (and I'm just thinking out loud here) is that AR can't be divorced from the tools and techniques of game-based storytelling (narrative/ludology, whatever).

In the experience I put together above, I was trying out a couple of things. One – the framing with a story fragment, so that the story that emerges from the experience for you (gestures off to the left) is different from the story that emerges from the experience for you (gestures off to the right). (More on this here.) I was also thinking about the kinds of things that could be augmented. I wondered if I could use a page of handwritten text. If I could, maybe a more self-consciously 'scholarly' use of AR could annotate the passages. Turns out, a page of text does not make a good tracking image. So I used a macro that comes prepackaged with Gimp to add a random waterstain/coffee stain to the image. The stained diary pages actually made the best tracking images I've ever generated! So maybe an AR-annotated diary page could have such things discreetly in the margins (but that takes us full circle to QR codes).
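If you wanted to experiment with that trick outside of Gimp, here is a minimal sketch in Python with Pillow that splashes a few random, semi-transparent brown blotches onto a page image. The filenames and numbers are placeholders, and the blotches are far cruder than Gimp's coffee-stain script, but the principle is the same: add high-contrast, irregular features that the tracker can latch onto:

# A rough stand-in for Gimp's coffee-stain trick: splatter a page image
# with random semi-transparent blotches to give a tracker more features to grip.
# Filenames are placeholders; tune the numbers to taste.
import random
from PIL import Image, ImageDraw, ImageFilter

def stain(in_path="diary-page.jpg", out_path="diary-page-stained.jpg", blotches=5):
    page = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", page.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    w, h = page.size
    for _ in range(blotches):
        # random centre, size, and a coffee-ish colour at partial opacity
        cx, cy = random.randint(0, w), random.randint(0, h)
        rx, ry = random.randint(w // 12, w // 5), random.randint(h // 12, h // 5)
        colour = (random.randint(90, 140), random.randint(60, 90), 20, random.randint(60, 120))
        draw.ellipse([cx - rx, cy - ry, cx + rx, cy + ry], fill=colour)
    # blur the overlay so the edges look soaked-in rather than stuck-on
    overlay = overlay.filter(ImageFilter.GaussianBlur(8))
    Image.alpha_composite(page, overlay).convert("RGB").save(out_path)

if __name__ == "__main__":
    stain()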

[Some time later, some further reflections:] One of the things I tell my history students who are interested in video games is that the mechanic of the game should be illustrative of the kind of historical truth they are trying to tell. William Uricchio pointed out in 2005 that game mechanics map well onto various historiographies. What kind of truth, then, does an augmented reality application tell? In the very specific case of what I've been doing here, augmenting an actual diary (a trip up the Nile, from New York, starting in 1874), I'm put in mind of the diaries of William Lyon Mackenzie King, who was Prime Minister of Canada during the Second World War. King was a spiritualist, very much into seances and communing with the dead (his mom, mostly). I can imagine augmenting his 'professional life' (meeting minutes, journals, newspaper accounts) with his diaries, such that his private life swirls and swoops through the public persona, much like the ghosts and spirits that he and his friends invoked on a regular basis. King was also something of a landscape architect; the grounds of his private retreat in the Gatineau Hills (now a national historic site) are adorned with architectural follies (see this photo set) culled from gothic buildings torn down in the city of Ottawa. Mackenzie King might well be a subject whose personal history is very well suited indeed to exploration via augmented reality.

After all, the man lived an augmented reality daily.

Update July 30: Here's a Google Cardboard-ready version of the Spectralscope. Vuforia updated their SDK today to include stereoscopy (as well as a way to move from AR to VR and back), so I was playing with it.

~oOo~

Anyway. I should acknowledge my sources for the various sounds and 3d models.

– Egyptian Shabti, by Micropasts, from the Petrie Museum https://sketchfab.com/models/a09f9352c5ce44be8983524ff81e38b3

– Red Granite Sarcophagus (Giza, 5th dynasty), by the British Museum, https://sketchfab.com/models/117315772799431fa52e599630ec2a35

– 2 crouched burial inserted in bronze age pit, by d.powlesand https://sketchfab.com/models/fadf7a7392d94e41a3d1b85c160b4803

– ambient desert sounds by Joelakjgp http://desert.ambient-mixer.com/desert

– Akeley’s wax cylinder recordings https://ia801408.us.archive.org/16/items/Akeleys_Wax_Cylinder_Recording/

– Edison’s talking doll https://archive.org/details/EdisonsTalkingDollOf1890

– Man from the South, by Rube Bloom and his Bayou Boys https://archive.org/details/ManFromTheSouth

In the cardboard version, there are a few more things:

– Granite head of Amenemhat III, British Museum, https://sketchfab.com/models/64d0b7662b59417986e9d693624de97a

– Mystic Chanting 4 by Mariann Gagnon, http://soundbible.com/1716-Mystic-Chanting-4.html

 

Importing GIS data into Unity

Per Stu’s workflow, I wanted to load a DEM of the local area into Unity.

1. Stu’s workflow: http://www.dead-mens-eyes.org/embodied-gis-howto-part-1-loading-archaeological-landscapes-into-unity3d-via-blender/

I obtained a DEM from the university library. QGIS would not open the damned thing; folks on Twitter suggested that the header file might be corrupt.

However, I was able to view the DEM using MicroDEM. I exported a grayscale geotiff from MicroDEM. The next step is to import into Unity. Stu’s workflow is pretty complicated, but in the comment thread, he notes this:

2. Alastair’s workflow: https://alastaira.wordpress.com/2013/11/12/importing-dem-terrain-heightmaps-for-unity-using-gdal/

Alrighty then, gdal. I’d already installed gdal when I updated QGIS. But, I couldn’t seem to get it to work from the command line. Hmmm. Turns out, you’ve got to put things into the path (lord how I loathe environment variables, paths, etc.)

export PATH=/Library/Frameworks/GDAL.framework/Programs:$PATH

Now I can use the command line. Hooray! However, his suggested command for converting from geotiff to the raw heightmap expected by Unity:

gdal_translate -ot UInt16 -scale -of ENVI -outsize 1025 1025 srtm_36_02_warped_cropped.tif heightmap.raw

(using my own file names, of course) kept giving me 'too many options' errors. I examined the help for gdal_translate, and by rearranging the sequence of flags in my command to match how they're listed in the help,

gdal_translate -ot UInt16 -of ENVI -outsize 1025 1025 -scale localdem.tif heightmap.raw

the issue went away. Poof!
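(Incidentally, if the command line keeps fighting you, the same conversion can be scripted through GDAL's Python bindings. This is just a sketch, assuming the osgeo package is importable and using my own file names; scaleParams=[[]] is intended as the equivalent of the bare -scale flag, though that detail can vary between GDAL versions.)

# The same conversion as the gdal_translate call above, via the Python bindings.
# Assumes the osgeo/gdal package is installed; filenames are placeholders.
from osgeo import gdal

gdal.UseExceptions()

gdal.Translate(
    "heightmap.raw",
    "localdem.tif",
    format="ENVI",               # raw output plus an .hdr sidecar
    outputType=gdal.GDT_UInt16,  # 16-bit heightmap
    width=1025,
    height=1025,
    scaleParams=[[]],            # meant to stand in for a bare -scale
)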

Into Unity3d I went, creating a new project, with a new terrain object, importing the raw heightmap. Nothing. Nada. Rien.

Knowing that with computers, sometimes when you just keep doing things over and over expecting a different result, you actually do get a different result, I tried again and eventually got a terrain into the scene.

From above, it looked something like my original DEM, though flipped a bit. I'm not too sure how to tie scripts into things, so we'll let that pass for now. But as I examined the object closely, there were all sorts of jitters and jags and … well, it only looks like terrain from directly above.

A bit more googling, and I found this video:

which seems to imply that interleaving in the RAW format might be to blame (I dunno). Anyway, I don't have Photoshop or anything handy on this machine for dealing with raster images. I might just go back to QGIS with the geotiff I made with MicroDEM.

(I went to install Gimp, saw that you could do it with Macports, and I’ve been waiting for the better part of an hour. I should not have done that, I suppose).

Anyway, the reason for all this – I'm trying to replicate Stu's embodied GIS concept. The first part of that is to get the target landscape into the Unity engine. Then it gets pushed through Vuforia… (I think. I need to go back and read his book, if I could remember who I let borrow it).

Update June 9 – Success!

  1. I opened the grayscale image exported from MicroDem in Gimp.
  2. I resized the image to a power of 2 plus 1 (*shrug* everything indicates this is what you do with Unity); in this case, 1025.
  3. Saved as RAW.
  4. Changed the file extension from .data to .raw.
  5. Created a new 3d terrain object in Unity.
  6. Imported my .raw image.
  7. On the import dialogue, changed it to 8-bit image rather than 16-bit.
  8. Changed the width, height, x and z to all be 1025. Changed the y to be 75 (the terrain in the original DEM runs from roughly 60 m above sea level to a high point of 135 m; with the default of 600 I was getting monstrous mountains).
  9. This post provided the solution: http://newton64.github.io/blog/2013-07-24-gimp-unity-terrain.html (A scripted alternative to steps 1 to 4 is sketched just below.)
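For what it's worth, steps 1 to 4 can also be scripted. Here's a minimal Python sketch (Pillow plus NumPy, with my filenames as placeholders) that takes the grayscale export from MicroDEM, resizes it to 1025 x 1025, and writes the kind of headerless 8-bit raw file the Unity terrain importer expects; if you go 16-bit instead, mind the byte-order setting in the import dialogue:

# Rough scripted equivalent of the Gimp steps above:
# grayscale image in, 1025 x 1025 raw heightmap out. Filenames are placeholders.
import numpy as np
from PIL import Image

SIZE = 1025  # Unity wants a power of two plus one

def image_to_raw(in_path="localdem_grayscale.tif", out_path="heightmap.raw"):
    img = Image.open(in_path).convert("L")   # 8-bit grayscale
    img = img.resize((SIZE, SIZE))           # square, 2^n + 1 on a side
    heights = np.asarray(img, dtype=np.uint8)
    heights.tofile(out_path)                 # raw bytes, no header
    # In Unity: create a terrain, Import Raw, depth 8 bit, width/height 1025.

if __name__ == "__main__":
    image_to_raw()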

I still need to rotate things, add water, add controls, etc. But now I could add my 3d models of the cemetery (which is on the banks of this river), perhaps also Oculus support etc. Talking with Stu a bit more, I see that his embodied GIS is still a bit beyond what I can do (lots of custom scripting), but imagine publishing an excavation this way, especially if ‘Excavation is destruction digitization‘…

Success! That’s the mighty Rideau you’re looking at there.

Problems with low-friction AR

(Image: Camille Rose, Flickr)

Ok. I’ve had a bit of feedback from folks. The issues seem to be:

  • audio doesn’t always load
  • locations don’t always trigger

Those are two big issues. I’m not entirely sure what to do about them. I just spun up a story that takes place around the quad here; I took the Ottawa Anomaly code and plugged in different coordinates. When I playtested from my computer, audio loaded up, which was good. But when I went downstairs and outside, to where I knew the first trigger to be: no audio. The ‘play audio’ function reported, ‘what audio?’ so I know the <> macro in the initialization passage didn’t load up.

I went to a second location; the geotrigger didn’t trigger. It kept reloading the previous geotrigger. Except – if I reloaded the entire story, then the new trigger’d trig. So am I dealing with a caching issue? Do I need to clear the latitude/longitude variables when the player moves on?

You can download the html and import it into the Twine2 interface. Answers on a postcard…

[Update, a few hours later] So I went back downstairs, and outside, and lo! everything loaded up as desired. Must be a loading issue. In which case, I should shorten the clips, and also have a 'turn off' whenever the user rescans for geotriggers – I was getting overlapping sounds.

 

On haunts & low-friction AR – thinking out loud

The frightening news is that we are living in a story. The reassuring part is that it’s a story we’re writing ourselves. Alas, though, most of us don’t even know it – or are afraid to accept it. Never before did we have so much access to the tools of storytelling – yet few of us are willing to participate in their creation.

– Douglas Rushkoff, 'Renaissance Now! The Gamers' Perspective' in Handbook of Computer Game Studies, MIT Press, 2005: 415.

Haunts is about the secret stories of spaces.

Haunts is about locative trauma.

Haunts is about the production of what Foucault calls “heterotopias”—a single real place in which incompatible counter-sites are layered upon or juxtaposed against one another.

The general idea behind Haunts is this: students work in teams, visiting various public places and tagging them with fragments of either a real life-inspired or fictional trauma story. Each team will work from an overarching traumatic narrative that they’ve created, but because the place-based tips are limited to text-message-sized bits, the story will emerge only in glimpses and traces, across a series of spaces.

– Mark Sample, “Haunts: Place, Play, and Trauma” Sample Reality http://www.samplereality.com/2010/06/01/haunts-place-play-and-trauma/

It’s been a while since I’ve delved into the literature surrounding locative place-based games. I’ve been doing so as I try to get my head in gear for this summer’s Digital Archaeology Institute where I’ll be teaching augmented reality for archaeology.

Archaeology and archaeological practice are so damned broad though; in order to do justice to the time spent, I feel like I have to cover lots of different possibilities for how AR could be used in archaeological practice, from several different perspectives. I know that I do want to spend a lot of time looking at AR from a game/playful perspective though.  A lot of what I do is a kind of digital bricolage, as I use whatever I have to hand to do whatever it is I do. I make no pretense that what I’m doing/using is the best method for x, only that it is a method, and one that works for me. So for augmented reality in archaeology, I’m thinking that what I need to teach are ways to get the maximum amount of storytelling/reality making into the greatest number of hands. (Which makes me think of this tweet from Colleen Morgan this am:

…but I digress.)

So much about what we find in archaeology is about trauma. Houses burn down: archaeology is created. Things are deliberately buried: archaeology is created. Materials are broken: archaeology is created.

Sample’s Haunts then provides a potential framework for doing archaeological AR. He goes on to write:

The narrative and geographic path of a single team’s story should alone be engaging enough to follow, but even more promising is a kind of cross-pollination between haunts, in which each team builds upon one or two shared narrative events, exquisite corpse style. Imagine the same traumatic kernel, being told again and again, from different points of views. Different narrative and geographic points of views. Eventually these multiple paths could be aggregated onto a master narrative—or more likely, a master database—so that Haunts could be seen (if not experienced) in its totality.

It was more of a proof of concept than anything else, but my 'low-friction AR' piece 'The Ottawa Anomaly' tries not so much to tell a story as to provide echoes of events in key areas around Ottawa's downtown, such that each player's experience of the story would be different – the sequence of geotriggers encountered would colour each subsequent trigger's emotional content. If you hear the gunshot first, and then the crying, that implies a different story than if you heard them the other way around. The opening tries to frame a storyworld where it makes sense to hear these echoes of the past in the present, so that the technological mediation of the smartphone fits the world. It also tries to make the player stop and look at the world around them with new eyes (something 'Historical Friction' tries to do as well).

I once set a treasure hunt around campus for my first year students. One group, however, interpreted a clue as referring to a particular statue in downtown Ottawa; they returned to campus much later and told me a stunning tale of illuminati and the secret history of Official Ottawa that they had crafted to make sense of the clues. Same clues, different geographical setting (by mistake) = oddly compelling story. What I'm getting at: my audio fragments could evoke very different experiences, depending not just on their order of encounter but also on the background of the person listening. I suggested in a tweet that

creating another level of storytelling on top of my own.

I imagine my low-friction AR as a way of layering multiple stories within the same geographic frame, and 'rechoes' or 'fieldnotes' as ways of cross-connecting different stories. I once toyed with the idea of printing out QR codes such that they could be pasted overtop of 'official Ottawa' for similar purposes…

Low Friction Augmented Reality

But my arms get tired.

Maybe you’ve thought, ‘Augmented reality – meh’. I’ve thought that too. Peeping through my tablet or phone’s screen at a 3d model displayed on top of the viewfinder… it can be neat, but as Stu wrote years ago,

[with regard to ‘Streetmuseum’, a lauded AR app overlaying historic London on modern London] …it is really the equivalent of using your GPS to query a database and get back a picture of where you are. Or indeed going to the local postcard kiosk buying an old paper postcard of, say, St. Paul’s Cathedral and then holding it up as you walk around the cathedral grounds.

I’ve said before that, as historians and archaeologists, we’re maybe missing a trick by messing around with visual augmented reality. The past is aural. (If you want an example of how affecting an aural experience can be, try Blindside).

Maybe you’ve seen ‘Ghosts in the Garden‘. This is a good model. But what if you’re just one person at your organization? It’s hard to put together a website, let alone voice actors, custom cases and devices, and so on. I’ve been experimenting these last few days with trying to use the Twine interactive fiction platform as a low-friction AR environment. Normally, one uses Twine to create choose-your-own-adventure texts. A chunk of text, a few choices, those choices lead to new texts… and so on. Twine uses an editor that is rather like having little index cards that you move around, automatically creating new cards as you create new choices. When you’re finished, Twine exports everything you’ve done into a single html file that can live online somewhere.

That doesn't even begin to touch the clever things that folks can do with Twine. Twine is indeed quite capable. For one thing, as we'll see below, it's possible to arrange things so that passages of text are triggered not by clicking, but by your position in geographical space.

You can augment reality with Twine. You don’t need to buy the fancy software package, or the monthly SDK license. You can do it yourself, and keep control over your materials, working with this fantastic open-source platform.

When the idea occurred to me, I had no idea how to make it happen. I posed the question on the Twine forums, and several folks chimed in with suggestions about how to make this work. I now have a platform for delivering an augmented reality experience. When you pass through an area where I’ve put a geotrigger, right now, it plays various audio files (I’m going for a horror-schlock vibe. Lots of backwards talking. Very Twin Peaks). What I have in mind is that you would have to listen carefully to figure out where other geotriggers might be (or it could be straight-up tour-guide type audio or video). I’ve also played with embedding 3d models (both with and without Oculus Rift enabled), another approach which is also full of potential – perhaps the player/reader has to carefully examine the annotations on the 3d model to figure out what happens next.

Getting it to work on my device was a bit awkward, as I had to turn on geolocation for apps, for Google, for everything that wanted it (I’ve since turned geolocation off again).

If you’re on Carleton’s campus, you can play the proof-of-concept now: http://philome.la/electricarchaeo/test-of-geolocation-triggers/play  But if you’re not on Carleton’s campus, well, that’s not all that useful.

To get this working for you, you need to start a new project in Twine 2. Under story format (click the up arrow beside your story title, bottom left of the editor), make sure you've selected SugarCube (this is important; the different formats have different abilities, and we're using a lot of JavaScript here). Then, in the same place, find 'edit story javascript', because you need to add a whole bunch of JavaScript:


(function () {
    if ("geolocation" in navigator && typeof navigator.geolocation.getCurrentPosition === "function") {
        // setup the success and error callbacks as well as the options object
        var positionSuccess = function (position) {
                // you could simply assign the `coords` object to `$Location`,
                // however, this assigns only the latitude and longitude since
                // that seems to have been what you were attempting to do before
                state.active.variables["Location"] = {
                    latitude  : position.coords.latitude,
                    longitude : position.coords.longitude
                };
                // access would be like: $Location.latitude and $Location.longitude
            },
            positionError = function (error) {
                /* currently a no-op; code that handles errors */
            },
            positionOptions = {
                timeout: 31000,
                enableHighAccuracy: true,
                maximumAge : 120000 // (in ms) cached results may not be older than 2 minutes
                // this can probably be tweaked upwards a bit
            };

        // since the API is asynchronous, we give `$Location` an initial value, so
        // trying to access it immediately causes no issues if the first callback
        // takes a while
        state.active.variables["Location"] = { latitude : 0, longitude : 0 };

        // make an initial call for a position while the system is still starting
        // up, so we can get real data ASAP (probably not strictly necessary as the
        // first call via the `predisplay` task [below] should happen soon enough)
        navigator.geolocation.getCurrentPosition(
            positionSuccess,
            positionError,
            positionOptions
        );

        // register a `predisplay` task which attempts to update the `$Location`
        // variable whenever passage navigation occurs
        predisplay["geoGetCurrentPosition"] = function () {
            navigator.geolocation.getCurrentPosition(
                positionSuccess,
                positionError,
                positionOptions
            );
        };
    } else {
        /* currently a no-op; code that handles a missing/disabled geolocation API */
    }
}());

(function () {
    window.approxEqual = function (a, b, allowedDiff) { // allowedDiff must always be > 0
        if (a === b) { // handles various "exact" edge cases
            return true;
        }
        allowedDiff = allowedDiff || 0.0005;
        return Math.abs(a - b) < allowedDiff;
    };
}());

The first function enables your Twine story to get geocoordinates. The second function enables us to put a buffer around the points of interest. Then, in our story, you have to call that code and compare the result against your points of interest so that Twine knows which passage to display. So in a new passage – call it 'Search for Geotriggers' – you have this:

<<if approxEqual($Location.latitude, $Torontolat) and approxEqual($Location.longitude, $Torontolong)>>
<<display "Downtown Toronto">>
<<else>>
<<display "I don't know anything about where you are">>
<</if>>

So that bit above says: if the location is more or less equal to the point of interest defined by $Torontolat, $Torontolong, then display the passage called "Downtown Toronto". If you're not within the buffer around the Toronto point, display the passage called "I don't know anything about where you are". (The default allowedDiff of 0.0005 degrees works out to a buffer of roughly 55 metres of latitude, a bit less for longitude at Canadian latitudes, so adjust it to make your trigger zones bigger or smaller.)

Back at the beginning of your story, you have an initialization passage (where your story starts) and you set some of those variables:

<<set $Torontolat = 43.653226>>
<<set $Torontolong = -79.3831843>>

[[Search for Geotriggers]]

And that’s the basics of building a DIY augmented reality. Augmented? Sure it’s augmented. You’re bringing digital ephemera into play (and I use the word play deliberately) in the real world. Whether you build a story around that, or go for more of the tour guide approach, or devise fiendish puzzles, is up to you.

I’m grateful to ‘Greyelf’ and ‘TheMadExile’ for their help and guidance as I futzed about doing this.

[update May 22: Here is the html for a game that takes place in and around downtown Ottawa Ontario. Download it somewhere handy, then open the Twine 2 editor. Open the game file in the editor via the Import button and you’ll see how I built it, organized the triggers and so on. Of course, it totally spoils any surprise or emergent experience once you can see all the working parts so if you’re in Ottawa, play it here on your device first before examining the plumbing!]

Putting Pompeii on Your Coffee Table

(cross-posted from my course blog, #hist5702x digital/public history. If you’re interested in public history and augmented reality, check out my students’ posts!)

Creating three dimensional models from photographs has its ups and downs. But what if we could do it from video? I decided to find out.

First, I found this tourist’s film of a house at Pompeii (house of the tragic poet, he says):

I saved a copy of the film locally; there are a variety of ways of doing this and two seconds with google will show you how. I then watched it carefully, and took note of a sequence of clearly lit pans at various points, marking down when they started and stopped, in seconds.


Then, I searched for a way to extract still images from that clip. This blog post describes a command-line option using VLC, option 3. I went with that, which created around 600 images. I then batch converted them from png to jpg (Google around again; the solution I found on download.com was filled with extraneous crapware that cost me 30 minutes to delete).
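(If you'd rather skip VLC and the format conversion, here's a small Python sketch using OpenCV that grabs a frame every half-second from the pan sequences you noted and writes them straight out as jpgs. The filename and the start/stop times are placeholders for your own notes.)

# Grab stills from the clearly lit pans of a downloaded clip.
# The times (in seconds) and the filename are placeholders.
import cv2

PANS = [(12, 25), (40, 55), (80, 95)]  # (start, stop) of each well-lit pan
STEP = 0.5                             # seconds between grabbed frames

def extract_frames(video_path="pompeii-clip.mp4", out_prefix="frame"):
    cap = cv2.VideoCapture(video_path)
    count = 0
    for start, stop in PANS:
        t = float(start)
        while t <= stop:
            cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000)  # seek to time t
            ok, frame = cap.read()
            if ok:
                cv2.imwrite(f"{out_prefix}-{count:04d}.jpg", frame)
                count += 1
            t += STEP
    cap.release()
    print(f"wrote {count} jpgs")

if __name__ == "__main__":
    extract_frames()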

I then selected around 40 images that seemed to cover things well. It would’ve been better if the cameraman had moved around rather than panned, as that would’ve provided better viewpoints (I’ll search for a better video clip). These I stitched together using 123D Catch. I have the Python Photogrammetry Toolbox on my other computer, so I’ll try doing it again on that machine; 123D Catch is all well and good but it is quite black-box; with PPT I can perhaps achieve better results.

The resulting model from 123D Catch shows the inside of the atrium far better than I expected (and again, a better starting film would probably give better results). I exported the .obj, .mtl, and the jpg textures for the resulting model, to my computer, which I then uploaded to augmentedev.com.

The result? A pompeian house, on my desktop!

The Atrium of the House of the Tragic Poet, Pompeii-on-the-Rideau

Now imagine *all* of the video that exists out there of Pompeii. It should be possible to create a 3d model of nearly the whole city (or at least, the parts they let tourists into), harvesting videos from youtube. One could then 3d print the city, export to AR, or import into a game engine….

As far as the #hist5702x project is concerned, we could do this in the workspace they’ve set up for us in the warehouse building, or at the airport, or from historical footage from inside a plane, or….

Historical Friction

Edit June 6 – following on from collaboration with Stu Eve, we've got a version of this at http://graeworks.net/historicalfriction/

I want to develop an app that makes it difficult to move through the historically ‘thick’ places – think Zombie Run, but with a lot of noise when you are in a place that is historically dense with information. I want to ‘visualize’ history, but not bother with the usual ‘augmented reality’ malarky where we hold up a screen in front of our face. I want to hear the thickness, the discords, of history. I want to be arrested by the noise, and to stop still in my tracks, be forced to take my headphones off, and to really pay attention to my surroundings.

So here’s how that might work.

1. Find wikipedia articles about the place where you’re at. Happily, inkdroid.org has some code that does that, called ‘Ici’. Here’s the output from that for my office (on the Carleton campus):

http://inkdroid.org/ici/#lat=45.382&lon=-75.6984

2. I copied that page (so not the full wikipedia articles, just the opening bits displayed by Ici). Convert these wikipedia snippets into numbers. Let A=1, B=2, and so on. This site will do that:

http://rumkin.com/tools/cipher/numbers.php

3. Replace dashes with commas. Convert those numbers into music. Musical Algorithms is your friend for that. I used the default settings, though I sped it up to 220 beats per minute. Listen for yourself here. There are a lot of wikipedia articles about the places around here; presumably if I did this on, say, my home village, the resulting music would be much less complex: sparse, quiet, slow. So if we increased the granularity, you'd start to get an acoustic soundscape of quiet/loud, pleasant/harsh sounds as you moved through space – a cost surface, a slope. Would it push you from the noisy areas to the quiet? Would you discover places you hadn't known about? Would the quiet places begin to fill up as people discovered them?

Right now, each wikipedia article is played in succession. What I really need to do is feed the entirety of each article through the musical algorithm, and play them all at once. And I need a way to do all this automatically, and feed it to my smartphone. Maybe by building upon this tutorial from MIT’s App Inventor. Perhaps there’s someone out there who’d enjoy the challenge?
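As a starting point for automating steps 1 and 2, here's a small Python sketch that asks Wikipedia's geosearch API for articles near a given point and converts their opening extracts into the A=1, B=2 number stream. The coordinates are my office; the radius, limits, and everything else are assumptions to tweak, and the output would still need to be fed to the music step by hand:

# Automate steps 1 and 2: nearby Wikipedia snippets in, a stream of numbers out.
# Coordinates, radius, and limits are placeholders to adjust.
import requests

API = "https://en.wikipedia.org/w/api.php"

def nearby_extracts(lat=45.382, lon=-75.6984, radius_m=1000, limit=10):
    """Return the opening extracts of Wikipedia articles near lat/lon."""
    params = {
        "action": "query",
        "format": "json",
        "generator": "geosearch",
        "ggscoord": f"{lat}|{lon}",
        "ggsradius": radius_m,
        "ggslimit": limit,
        "prop": "extracts",
        "exintro": 1,
        "explaintext": 1,
        "exlimit": "max",
    }
    data = requests.get(API, params=params).json()
    pages = data.get("query", {}).get("pages", {})
    return [p.get("extract", "") for p in pages.values()]

def text_to_numbers(text):
    """A=1, B=2, ... Z=26; anything that isn't a plain letter is dropped."""
    return [ord(c) - 96 for c in text.lower() if c.isascii() and c.isalpha()]

if __name__ == "__main__":
    for extract in nearby_extracts():
        numbers = text_to_numbers(extract)
        print(",".join(str(n) for n in numbers[:40]), "...")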

I mooted all this at the NCPH THATCamp last week, which prompted a great discussion about haptics and other ways of engaging the senses for communicating public history. I hope to play at this over the summer, but it's looking to be a very long summer of writing new courses, applying for tenure, y'know, stuff like that.

Edit April 26th – Stuart and I have been playing around with this idea this morning, and have been making some headway per his idea in the comments. Here’s a quick screengrab of it in action: http://www.screencast.com/t/DyN91yZ0

p3d.in for hosting your 3d scans

I'm playing with p3d.in to host some three dimensional models I've been making with 123D Catch. These are models that I have been using in conjunction with Junaio to create augmented reality pop-up books (and other things; more on that anon). Putting these 3d objects onto a webpage (or, heaven forbid, a pdf) has been surprisingly complicated and time-consuming. P3d.in, then, serves a very useful purpose!

Below are two models that I made using 123D catch. The first is the end of a log recovered from anaerobic conditions at the bottom of the Ottawa River (which is very, very deep in places). The Ottawa was used as a conduit for floating timber from its enormous watershed to markets in the US and the UK for nearly two hundred years. Millions of logs floated down annually…. so there’s a lot of money sitting down there. A local company, Log’s End, has been recovering these old growth logs and turning them into high-end wide plank flooring. They can’t use the ends of the logs as they are usually quite damaged, so my father picked some up and gave them to me, knowing my interest in all things stamped. This one carries an S within a V, which dates it to the time and timber limits of J.R. Booth I believe.

logend-edit2 (Click to view in 3D)

And here we have one of the models that my students made last year from the Mesoamerican materials conserved at the Canadian Museum of Civilization (soon to be repurposed as the Canadian Museum of History; what will happen to these awkward materials that no longer fit the new mandate?)

mesoamerican (Click to view in 3D)

PS
Incidentally, I’ve now embedded these in a Neatline exhibition I am building:

3d manipulable objects in time and space

123D Catch iPhone app

I've just been playing with the 123D Catch iPhone app. Aside from some annoying login business, it's actually quite a nice little app for creating 3d volumetric models. I have a little toy car in my office. I opened the app, took 15 pictures of the car sitting on my desk, and sent it off for processing. The resulting model is viewable here. Not too bad for 5 minutes of futzing about with the iPhone.

Since I'm interested in 3d models as fodder for augmented reality, this is a great workflow. No fooling around trying to reduce model & mesh size to fit into the AR pipeline. By making the model with the iPhone's camera in the first place, the resulting model files are small enough (more or less; more complicated objects will probably need a bit of massaging) to swap into my Junaio XML and pump out to my channel with little difficulty.

 

How to make an augmented reality pop-up book

We made an augmented reality pop-up book in my first year seminar last spring. Perhaps you’d like to make your own?

1. Go to Junaio and register as a developer.

2. Get some server space that meets Junaio’s requirements.

3. On the right hand side of the screen, when you’re logged into Junaio, you can find out what your API key is by clicking on ‘show my api’. You’ll need this info.

4. Create a channel (which you do once you're logged in over at Junaio; click on 'new channel' on the right hand side of the screen). You will need to fill in some information. For the purposes of creating a pop-up book, you select a 'GLUE channel'. The 'callback' is the URL of the folder on your server where you're going to put all of your assets. Make this 'YOURDOMAIN/html/index.php'. Don't worry that the folder 'html' or the file 'index.php' doesn't exist yet. You'll create those in step 6.

Now the fun begins. I'm going to assume for the moment that you've got some 3d models available that you'll be using to augment the printed page. These need to be in .obj format, and they need to be smaller than 750 kb. I've used 123D Catch to make my models, and then Meshlab to reduce the size of the models (Quadric Edge Collapse Decimation is the filter I use for that). Make sure you keep a copy of the original texture somewhere so that it doesn't get reduced when you reduce polygons.

5. Create your tracking images. You use Junaio's tool to do this. At the bottom of that page, where it says 'how many patterns do you want to generate?', select however many images you're going to augment. PNGs should work; make sure that they are around 100 kb or smaller. If your images don't load into the tool – if it seems to get stuck in a neverending loading cycle – your images may be too large or in the wrong format. Once this process is done, you'll download a file called 'tracking.xml_enc'. Keep this.

6. Now, at this point, things are a bit different than they were in May, as Junaio changed their API somewhat. The old code still works, and that’s what I’m working with here. Here’s the original tutorial from Junaio. Download the ‘Getting Started’ php package. Unzip it, and put its contents into your server space.

7. Navigate to config/config.php, open that, and put your Junaio developer key in there.

8. I created a folder called 'resources'; this is where you're going to put your assets. Put the tracking.xml_enc file in there.

9. Navigate to src/search.php. Open this. This is the file that does all the magic, that ties your tracking images to your resources. Here’s mine, for our book. Note how there’s a mixture of movies and models in there:

<?php

/**
* @copyright  Copyright 2010 metaio GmbH. All rights reserved.
* @link       http://www.metaio.com
* @author     Frank Angermann
**/

require_once '../library/poibuilder.class.php';

/**
* When the channel is being viewed, a poi request will be sent
* $_GET['l']...(optional) Position of the user when requesting poi search information
* $_GET['o']...(optional) Orientation of the user when requesting poi search information
* $_GET['p']...(optional) perimeter of the data requested in meters.
* $_GET['uid']... Unique user identifier
* $_GET['m']... (optional) limit of to be returned values
* $_GET['page']...page number of result. e.g. m = 10: page 1: 1-10; page 2: 11-20, e.g.
**/

//use the poiBuilder class — this might not be right, for
$jPoiBuilder = new JunaioBuilder();

//create the xml start
$jPoiBuilder->start("http://YOURDOMAIN/resources/tracking.xml_enc");

//bookcover-trackingimage1
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture",    //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/movie-reel.mp4", //texture
    95, //scale
    1, //cosID
    "Universal Newspaper Newsreel November 6, 1933, uploaded to youtube by publicdomain101", //description
    "", //thumbnail
    "movie1", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage2 pg 9 xxi-a:55
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture",    //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/edited-museum-1.mp4", //texture
    90, //scale
    2, //cosID
    "Faces of Mexico – Museo Nacional de Antropologia", //description
    "", //thumbnail
    "movie2", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

$cust = new Customization();
$cust->setName("Website");
$cust->setNodeID("click");
$cust->setType("url");
$cust->setValue("http://www.youtube.com/watch?v=Dfc257xI0eA");

$poi->addCustomization($cust);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage3 pg 11 xxi-a:347 bighead - 3d model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Effigy", //name
    "0,0,0",  //translation
    "http://YOURDOMAIN/resources/id3-big-head.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/big-head-statue_tex_0.jpg", //resource (texture)
    5, //scale
    3, //cos ID -> which reference the POI is assigned to
    "XXI-A:51", //description
    "", //thumbnail
    "Zapotec Effigy", //id
    "0,3.14,1.57" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage4 pg13 xxi-a:51 from shaft tomb, model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Shaft Tomb Figurine", //name
    "0,0,0",  //translation
    "http://YOURDOMAIN/resources/id4-shaft-grave.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/april25-statue.jpg", //resource (texture)
    5, //scale
    4, //cos ID -> which reference the POI is assigned to
    "XXI-A:51", //description
    "", //thumbnail
    "Shaft Tomb Figurine", //id
    "0,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage5 pg15 xxi-a:28
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture",    //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane4_3.md2_enc", //model
    "http://YOURDOMAIN/resources/pg15-movie.mp4", //texture
    90, //scale
    5, //cosID
    "Showing the finished model in Meshlab", //description
    "", //thumbnail
    "movie3", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage6 pg17 xxi-a:139 man with club movie
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture",    //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane4_3.md2_enc", //model
    "http://YOURDOMAIN/resources/archaeologicalsites-1.mp4", //texture
    90, //scale
    6, //cosID
    "", //description
    "", //thumbnail
    "movie4", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage7 pg19 xxi-a:27 fat dog model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Fat Dog", //name
    "0,0,0",  //translation
    "http://YOURDOMAIN/resources/id5-dog-model.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/april25-dog_tex_0-small.png", //resource (texture)
    5, //scale
    7, //cos ID -> which reference the POI is assigned to
    "XXI-A:27, Created using 123D Catch", //description
    "", //thumbnail
    "Fat Dog", //id
    "0,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage8 pg21 xxi-a:373 ring of people - model
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Ring of People", //name
    "0,0,0",  //translation
    "http://YOURDOMAIN/resources/ring.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/ring-2_tex_0.jpg", //resource (texture)
    5, //scale
    8, //cos ID -> which reference the POI is assigned to
    "XXI-A:29, Old woman seated with head on knee. Created using 123D Catch", //description
    "", //thumbnail
    "Ring of People", //id
    "1.57,0,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage9 pg 23 xxi-a:29 old woman with head on knee
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Old Woman", //name
    "0,0,0",  //translation
    "http://YOURDOMAIN/resources/statue2.zip", //mainresource (model)
    "http://YOURDOMAIN/resources/Statue_try_1_tex_0.png", //resource (texture)
    5, //scale
    9, //cos ID -> which reference the POI is assigned to
    "XXI-A:29, Old woman seated with head on knee. Created using 123D Catch", //description
    "", //thumbnail
    "Old Woman", //id
    "0,3.14,3.14" //orientation
);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

//trackingimage10 pg29 Anything but textbook movie
$poi = new SinglePOI();
$poi = $jPoiBuilder->createBasicGluePOI(
    "Movie Texture",    //name
    "0,0,0", //position
    "http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc", //model
    "http://YOURDOMAIN/resources/carleton-promo.mp4", //texture
    100, //scale
    10, //cosID
    "Carleton University – Anything but textbook!", //description
    "", //thumbnail
    "movie5", //id
    "1.57,1.57,3.14", //orientation
    array(), //animation specification
    "click"
);

$cust = new Customization();
$cust->setName("Website");
$cust->setNodeID("click");
$cust->setType("url");
$cust->setValue("http://carleton.ca");

$poi->addCustomization($cust);

//deliver the POI
$jPoiBuilder->outputPOI($poi);

///end of tracking images
$jPoiBuilder->end();

exit;

And that does it, basically. Each element, each augment, is delivered after '//deliver the POI'. After 'createBasicGluePOI' come the parameters. You provide the direct URL to your 'mainresource' when it's a 3d model, and in the next line, the direct URL to the texture. You can make the model bigger or smaller by adjusting 'scale'; the cos ID comes next. Make sure these correspond with the images you uploaded when you created the tracking file; otherwise you can get the wrong model or movie playing in the wrong spot. The 'description' is what will appear on the smartphone if somebody touches the screen at that point. 'Orientation' is a bugger to sort out, as it is expressed in radians rather than degrees (multiply degrees by π/180, so a quarter turn is 1.57 and a half turn is 3.14). I believe that 0,0,0 would put your model flat against your tracking image, but I could be wrong.

(If you'll notice, some of the POIs are movies. To display these, you map a 'movie plane' over your tracking image, and then play the movie on top of this. Use Junaio's movie plane – the URL is http://dev.junaio.com/publisherDownload/tutorial/movieplane3_2.md2_enc – and put that in the line after 'translation'. The next line will be the direct URL to your movie. These need to be packaged a bit for iPhone/Android delivery. I used Handbrake to do this, with its presets. Load movie, export for iPhone, and voila.)

Regarding packaging your models: you have to zip together the texture, the .obj file, and the .mtl file that Meshlab creates. An MTL file looks like this inside:

#
# Wavefront material file
# Converted by Meshlab Group
#

newmtl material_0
Ka 0.200000 0.200000 0.200000
Kd 1.000000 1.000000 1.000000
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000
map_Kd big-head-statue_tex_0.jpg

newmtl material_1
Ka 0.200000 0.200000 0.200000
Kd 0.501961 0.501961 0.501961
Ks 1.000000 1.000000 1.000000
Tr 1.000000
illum 2
Ns 0.000000

Make sure the texture file named in this file (here, 'big-head-statue_tex_0.jpg') is the same as the one in the zip file, and the same as the one called in the search.php. I confess a bit of ignorance here: I found that I also had to have the texture file in the main resources folder, unzipped. This is the one the search.php points to; but if you don't also have it in the zipped file, you get a 3d object without texture. I do not know why this is.
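One way to catch that kind of mismatch before uploading is to script the packaging. Here's a rough Python sketch (filenames are placeholders for your own model) that reads the .mtl for its map_Kd texture references, checks those textures actually exist beside it, and zips the .obj, .mtl, and textures together:

# Bundle an obj + mtl + texture for the AR channel, and sanity-check that the
# texture named in the .mtl is actually present. Filenames are placeholders.
import os
import zipfile

def package_model(obj_path="id3-big-head.obj",
                  mtl_path="id3-big-head.mtl",
                  zip_path="id3-big-head.zip"):
    # find every texture the .mtl refers to
    textures = []
    with open(mtl_path) as mtl:
        for line in mtl:
            if line.strip().lower().startswith("map_kd"):
                textures.append(line.split(None, 1)[1].strip())

    folder = os.path.dirname(os.path.abspath(mtl_path))
    for tex in textures:
        if not os.path.exists(os.path.join(folder, tex)):
            raise FileNotFoundError(f"{tex} is named in the .mtl but not found")

    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as z:
        z.write(obj_path, os.path.basename(obj_path))
        z.write(mtl_path, os.path.basename(mtl_path))
        for tex in textures:
            z.write(os.path.join(folder, tex), tex)
    print(f"wrote {zip_path} with {len(textures)} texture(s)")

if __name__ == "__main__":
    package_model()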

10. Go back to ‘my channels’ on Junaio. Click ‘validate’ beside your channel. This will tell you if everything is ok. Note – sometimes things come back as an error when they aren’t really a problem. The only way to know the difference is to click on ‘get QR code’ and then go to step 11:

11. With your smartphone, having already downloaded the Junaio app, click 'scan' and scan the QR code for your channel. Your content – if all is well – will load. Aim your phone at one of your tracking images, and your augmentation should appear. Voila!

So that should do it. Good luck, and enjoy. Keep in mind that with Junaio's new API, a lot of this has been streamlined. I'll get around to learning that, soon.

Mesoamerica in Gatineau: Augmented Reality Museum Catalogue Pop-Up Book

Would you like to take a look at the term project of my first year seminar course in digital antiquity at Carleton University? Now’s your chance!

Last winter, Terence Clark and Matt Betts, curators at the Museum of Civilization in Gatineau, Quebec, saw on this blog that we were experimenting with 123D Catch (then called 'Photofly') to make volumetric models of objects from digital photographs. Terence and Matt were also experimenting with the same software. They invited us to the museum to select objects from the collection. The students were enchanted with materials from Mesoamerica, and our term project was born: what if we used augmented reality to create a pop-up museum catalogue? The students researched the artefacts, designed and produced a catalogue, photographed artefacts, used 123D Catch to turn them into 3d models, Meshlab to clean the models up, and Junaio to do the augmentation. (I helped a bit on the augmentation. But now that I know, roughly, what I'm doing, I think I can teach the next round of students how to do this step for themselves, too.) The hardest part was reducing the models to less than 750 kb (per the Junaio specs) while retaining something of their visual complexity.

The results were stunning. We owe an enormous debt of gratitude to Drs. Clark and Betts, and the Museum of Civilization for this opportunity. Also, the folks at Junaio were always very quick to respond to cries for help, and we thank them for their patience!

Below, you’ll find the QR code to scan with Junaio, to load the augmentations into your phone. Then, scan the images to reveal the augmentation (you can just point your phone at the screen). Try to focus on a single image at a time.

Also, you may download the pdf of the book, and try it out. (Warning: large download).

Artefact images taken by Jenna & Tessa; courtesy of the Canadian Museum of Civilization