working with vuforia unity plugin for augmented reality

Notes to self:

- working on a Mac.
- install the Vuforia plugin for Unity
- followed this: http://developer.vuforia.com/library/articles/Solution/Compiling-a-Simple-Unity-Project
- this post is handy too: http://www.marcofolio.net/other/introduction_into_augmented_reality_with_vuforia.html
- however: you have to register a key for the app that you build https://developer.vuforia.com/targetmanager/licenseManager/licenseListing
- you also have to create an image tracking database (under the 'develop' page of the Vuforia website). You upload your images, and it creates a database xml in return. You download it, and while Unity is open, double click on the download – it adds itself automatically.
- you have to have the iOS or Android SDK installed on your machine. I'm working with Android. I had to find the location of the SDK; I needed to 'unhide' the 'Library' folder in the Finder in order to do so (Unity will ask you for the location of the SDK when it builds your app. If you don't unhide it, you can't find or select it).

- some screenshots from within Unity, for reference:

[Screenshot: Screen Shot 2015-05-30 at 4.18.11 PM]

delete the default camera. add an ARCamera from 'prefabs' in the project. add ImageTargets from 'prefabs'. assets go underneath an ImageTarget.

[Screenshot: Screen Shot 2015-05-30 at 4.18.21 PM]

prefabs are where the magic lies. the 3dmodels folder was one I added via the Finder, and then I dropped my obj and texture pngs in there. Unity updated automatically, also creating the materials folder.

[Screenshot: Screen Shot 2015-05-30 at 4.18.48 PM]

you add the app license key that you made with the online license manager here. If you have more than one tracking image, *I think* you put that info here in 'max simultaneous tracked images'. If you've got more than one tracked object at a time, update accordingly.

[Screenshot: Screen Shot 2015-05-30 at 4.19.12 PM]

for your image target, you select 'data set' to grab images from the database you created, and then the image target itself. You also have to set 'width'; I do not understand, yet, what units these are in.

 

Problems with low-friction AR

[Image: Flickr, Camille Rose]

Ok. I’ve had a bit of feedback from folks. The issues seem to be:

  • audio doesn’t always load
  • locations don’t always trigger

Those are two big issues. I’m not entirely sure what to do about them. I just spun up a story that takes place around the quad here; I took the Ottawa Anomaly code and plugged in different coordinates. When I playtested from my computer, audio loaded up, which was good. But when I went downstairs and outside, to where I knew the first trigger to be: no audio. The ‘play audio’ function reported, ‘what audio?’ so I know the <> macro in the initialization passage didn’t load up.

I went to a second location; the geotrigger didn't trigger. It kept reloading the previous geotrigger. Except – if I reloaded the entire story, then the new trigger would fire. So am I dealing with a caching issue? Do I need to clear the latitude/longitude variables when the player moves on?

You can download the html and import it into the Twine 2 interface. Answers on a postcard…

[update a few hours later] So I went back downstairs, and outside, and lo! everything loaded up as desired. Must be a loading issue. In which case, I should shorten the clips, and also have a 'turn off' whenever the user rescans for geotriggers – I was getting overlapping sounds.
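One way to do that 'turn off' – a minimal sketch, assuming the sounds are handled with SugarCube's <<cacheaudio>> / <<audio>> macros, and with 'gunshot' and 'crying' as purely illustrative track ids – would be to stop everything at the top of the 'Search for Geotriggers' passage, before the next geotrigger passage is displayed:

<<audio "gunshot" stop>>
<<audio "crying" stop>>

That way each rescan starts from silence instead of stacking clips on top of one another.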

 

On haunts & low-friction AR – thinking out loud

The frightening news is that we are living in a story. The reassuring part is that it’s a story we’re writing ourselves. Alas, though, most of us don’t even know it – or are afraid to accept it. Never before did we have so much access to the tools of storytelling – yet few of us are willing to participate in their creation.

– Douglas Rushkoff, 'Renaissance Now! The Gamers' Perspective' in Handbook of Computer Game Studies, MIT Press, 2005: 415.

Haunts is about the secret stories of spaces.

Haunts is about locative trauma.

Haunts is about the production of what Foucault calls “heterotopias”—a single real place in which incompatible counter-sites are layered upon or juxtaposed against one another.

The general idea behind Haunts is this: students work in teams, visiting various public places and tagging them with fragments of either a real life-inspired or fictional trauma story. Each team will work from an overarching traumatic narrative that they’ve created, but because the place-based tips are limited to text-message-sized bits, the story will emerge only in glimpses and traces, across a series of spaces.

– Mark Sample, “Haunts: Place, Play, and Trauma” Sample Reality http://www.samplereality.com/2010/06/01/haunts-place-play-and-trauma/

It’s been a while since I’ve delved into the literature surrounding locative place-based games. I’ve been doing so as I try to get my head in gear for this summer’s Digital Archaeology Institute where I’ll be teaching augmented reality for archaeology.

Archaeology and archaeological practice are so damned broad though; in order to do justice to the time spent, I feel like I have to cover lots of different possibilities for how AR could be used in archaeological practice, from several different perspectives. I know that I do want to spend a lot of time looking at AR from a game/playful perspective though.  A lot of what I do is a kind of digital bricolage, as I use whatever I have to hand to do whatever it is I do. I make no pretense that what I’m doing/using is the best method for x, only that it is a method, and one that works for me. So for augmented reality in archaeology, I’m thinking that what I need to teach are ways to get the maximum amount of storytelling/reality making into the greatest number of hands. (Which makes me think of this tweet from Colleen Morgan this am:

…but I digress.)

So much about what we find in archaeology is about trauma. Houses burn down: archaeology is created. Things are deliberately buried: archaeology is created. Materials are broken: archaeology is created.

Sample’s Haunts then provides a potential framework for doing archaeological AR. He goes on to write:

The narrative and geographic path of a single team’s story should alone be engaging enough to follow, but even more promising is a kind of cross-pollination between haunts, in which each team builds upon one or two shared narrative events, exquisite corpse style. Imagine the same traumatic kernel, being told again and again, from different points of views. Different narrative and geographic points of views. Eventually these multiple paths could be aggregated onto a master narrative—or more likely, a master database—so that Haunts could be seen (if not experienced) in its totality.

It was more of a proof of concept than anything else, but my 'low-friction AR' piece 'The Ottawa Anomaly' tries not so much to tell a story as to provide echoes of events in key areas around Ottawa's downtown, such that each player's experience of the story would be different – the sequence of geotriggers encountered would colour each subsequent trigger's emotional content. If you hear the gunshot first, and then the crying, that implies a different story than if you hear them the other way around. The opening tries to frame a storyworld where it makes sense to hear these echoes of the past in the present, so that the technological mediation of the smartphone fits the world. It also tries to make the player stop and look at the world around them with new eyes (something 'Historical Friction' tries to do as well).

I once set a treasure hunt around campus for my first-year students. One group, however, interpreted a clue as referring to a particular statue in downtown Ottawa; they returned to campus much later and told me a stunning tale of illuminati and the secret history of Official Ottawa that they had crafted to make sense of the clues. Same clues, different geographical setting (by mistake) = oddly compelling story. What I'm getting at: my audio fragments could evoke very different experiences, not just through their order of encounter but also given the background of the person listening. I suggested in a tweet that

creating another level of storytelling on top of my own.

I imagine my low-friction AR as a way of layering multiple stories within the same geographic frame, with 'rechoes' or 'fieldnotes' as ways of cross-connecting different stories. I once toyed with the idea of printing out QR codes such that they could be pasted overtop of 'official Ottawa' for similar purposes…

Low Friction Augmented Reality

But my arms get tired.

Maybe you’ve thought, ‘Augmented reality – meh’. I’ve thought that too. Peeping through my tablet or phone’s screen at a 3d model displayed on top of the viewfinder… it can be neat, but as Stu wrote years ago,

[with regard to ‘Streetmuseum’, a lauded AR app overlaying historic London on modern London] …it is really the equivalent of using your GPS to query a database and get back a picture of where you are. Or indeed going to the local postcard kiosk buying an old paper postcard of, say, St. Paul’s Cathedral and then holding it up as you walk around the cathedral grounds.

I’ve said before that, as historians and archaeologists, we’re maybe missing a trick by messing around with visual augmented reality. The past is aural. (If you want an example of how affecting an aural experience can be, try Blindside).

Maybe you’ve seen ‘Ghosts in the Garden‘. This is a good model. But what if you’re just one person at your organization? It’s hard to put together a website, let alone voice actors, custom cases and devices, and so on. I’ve been experimenting these last few days with trying to use the Twine interactive fiction platform as a low-friction AR environment. Normally, one uses Twine to create choose-your-own-adventure texts. A chunk of text, a few choices, those choices lead to new texts… and so on. Twine uses an editor that is rather like having little index cards that you move around, automatically creating new cards as you create new choices. When you’re finished, Twine exports everything you’ve done into a single html file that can live online somewhere.

That doesn't even begin to touch the clever things that folks can do with Twine; the platform runs deeper than it looks. For one thing, as we'll see below, it's possible to arrange things so that passages of text are triggered not by clicking, but by your position in geographical space.

You can augment reality with Twine. You don’t need to buy the fancy software package, or the monthly SDK license. You can do it yourself, and keep control over your materials, working with this fantastic open-source platform.

When the idea occurred to me, I had no idea how to make it happen. I posed the question on the Twine forums, and several folks chimed in with suggestions about how to make this work. I now have a platform for delivering an augmented reality experience. When you pass through an area where I’ve put a geotrigger, right now, it plays various audio files (I’m going for a horror-schlock vibe. Lots of backwards talking. Very Twin Peaks). What I have in mind is that you would have to listen carefully to figure out where other geotriggers might be (or it could be straight-up tour-guide type audio or video). I’ve also played with embedding 3d models (both with and without Oculus Rift enabled), another approach which is also full of potential – perhaps the player/reader has to carefully examine the annotations on the 3d model to figure out what happens next.

Getting it to work on my device was a bit awkward, as I had to turn on geolocation for apps, for Google, for everything that wanted it (I’ve since turned geolocation off again).

If you’re on Carleton’s campus, you can play the proof-of-concept now: http://philome.la/electricarchaeo/test-of-geolocation-triggers/play  But if you’re not on Carleton’s campus, well, that’s not all that useful.

To get this working for you, you need to start a new project in Twine 2. Under story format (click the up arrow beside your story title, bottom left of the editor), make sure you've selected SugarCube (this is important; the different formats have different abilities, and we're using a lot of javascript here). Then, in the same place, find 'edit story javascript', because you need to add a whole bunch of javascript:


(function () {
    if ("geolocation" in navigator && typeof navigator.geolocation.getCurrentPosition === "function") {
        // setup the success and error callbacks as well as the options object
        var positionSuccess = function (position) {
            // you could simply assign the `coords` object to `$Location`,
            // however, this assigns only the latitude and longitude since
            // that seems to have been what you were attempting to do before
            state.active.variables["Location"] = {
                latitude : position.coords.latitude,
                longitude : position.coords.longitude
            };
            // access would be like: $Location.latitude and $Location.longitude
        },
        positionError = function (error) {
            /* currently a no-op; code that handles errors */
        },
        positionOptions = {
            timeout: 31000,
            enableHighAccuracy: true,
            maximumAge : 120000 // (in ms) cached results may not be older than 2 minutes
                                // this can probably be tweaked upwards a bit
        };

        // since the API is asynchronous, we give `$Location` an initial value, so
        // trying to access it immediately causes no issues if the first callback
        // takes a while
        state.active.variables["Location"] = { latitude : 0, longitude : 0 };

        // make an initial call for a position while the system is still starting
        // up, so we can get real data ASAP (probably not strictly necessary as the
        // first call via the `predisplay` task [below] should happen soon enough)
        navigator.geolocation.getCurrentPosition(
            positionSuccess,
            positionError,
            positionOptions
        );

        // register a `predisplay` task which attempts to update the `$Location`
        // variable whenever passage navigation occurs
        predisplay["geoGetCurrentPosition"] = function () {
            navigator.geolocation.getCurrentPosition(
                positionSuccess,
                positionError,
                positionOptions
            );
        };
    } else {
        /* currently a no-op; code that handles a missing/disabled geolocation API */
    }
}());

(function () {
    window.approxEqual = function (a, b, allowedDiff) { // allowedDiff must always be > 0
        if (a === b) { // handles various "exact" edge cases
            return true;
        }
        allowedDiff = allowedDiff || 0.0005;
        return Math.abs(a - b) < allowedDiff;
    };
}());

The first function enables your Twine story to get geocoordinates. The second function lets us put a buffer around the points of interest. Then, in your story, you have to call that code and compare the result against your points of interest so that Twine knows which passage to display. So in a new passage – call it 'Search for Geotriggers' – you have this:

<<if approxEqual($Location.latitude, $Torontolat) and approxEqual($Location.longitude, $Torontolong)>>
<<display "Downtown Toronto">>
<<else>>
<<display "I don't know anything about where you are">>
<</if>>

So that bit above says: if the location is more or less equal to the point of interest defined by $Torontolat, $Torontolong, then display the passage called "Downtown Toronto". If you're not within the buffer around the Toronto point, display the passage called "I don't know anything about where you are".

Back at the beginning of your story, you have an initialization passage (where your story starts) and you set some of those variables:

<<set $Torontolat = 43.653226>>
<<set $Torontolong = -79.3831843>>

[[Search for Geotriggers]]
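Adding more points of interest just repeats the pattern: set another pair of coordinates here in the initialization passage, and add an <<elseif>> clause to the 'Search for Geotriggers' passage. A minimal sketch – the $quadLat / $quadLong variables, their values, and the 'The Quad' passage are all made up for illustration:

<<set $quadLat = 45.3832>>
<<set $quadLong = -75.6982>>

and, in 'Search for Geotriggers':

<<if approxEqual($Location.latitude, $Torontolat) and approxEqual($Location.longitude, $Torontolong)>>
<<display "Downtown Toronto">>
<<elseif approxEqual($Location.latitude, $quadLat) and approxEqual($Location.longitude, $quadLong)>>
<<display "The Quad">>
<<else>>
<<display "I don't know anything about where you are">>
<</if>>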

And that’s the basics of building a DIY augmented reality. Augmented? Sure it’s augmented. You’re bringing digital ephemera into play (and I use the word play deliberately) in the real world. Whether you build a story around that, or go for more of the tour guide approach, or devise fiendish puzzles, is up to you.

I’m grateful to ‘Greyelf’ and ‘TheMadExile’ for their help and guidance as I futzed about doing this.

[update May 22: Here is the html for a game that takes place in and around downtown Ottawa Ontario. Download it somewhere handy, then open the Twine 2 editor. Open the game file in the editor via the Import button and you’ll see how I built it, organized the triggers and so on. Of course, it totally spoils any surprise or emergent experience once you can see all the working parts so if you’re in Ottawa, play it here on your device first before examining the plumbing!]

archaeogaming unconference – logistics

Madness

The #archaeogaming unconference will take place here: https://unhangout.media.mit.edu/event/archaeogaming at 11 am, EST, June 1st; y’all are welcome to throw together other spaces (hangouts, skype, collaborative docs, etherpads, what have you) to extend or push the idea further. Ideas that have come in so far can be found/voted on here: http://www.allourideas.org/archaeogaming/.

In terms of how the day will unravel (unroll? play out?), I'm imagining say 3 sessions with 3 breakout rooms, at 45 minutes each, with 10 minutes between for refreshment. Unlike in-person unconferences, I think trying to agree on a schedule on the morning might be too difficult, so I'd take the top-voted topics and slot them into a google spreadsheet schedule template, say next Monday – and then people can leave comments on the desired layout. I'd leave that open for the week, then adjust/publish the schedule that weekend, according to what seems like the majority will.

Then, morning of, I’ll remind/repost the URL to the unhangout, and we’d be off to the races. The unhangout can be broadcast via Youtube too (though I’m not entirely sure how that happens or what the channel will be – guess I should go and see which of my many accounts is plumbed into what service).

Sound good?

Update May 25th: proposed schedule may be commented on here.

Fumbling towards Virtuality

With apologies to Sarah.

So the Oculus Rift arrived some time ago. What with conferences and illness, I didn't really get to play with it until today. I followed all directions, and eventually got the damned thing wired to my 5-year-old Windows 7 machine.

I know, I know.

I have dual monitors. Dual VGA monitors. My box does not have VGA ports. Or rather, it does, but they don't hook to anything (curse you, pimply-faced Best Buy salesman). So, five years ago, I had to hunt high and low for DVI and DisplayPort adaptors. At the time, DVI and DP monitors were more than I had coin for. I tell you this to explain part of this morning's shenanigans; hooking the Rift up to the HDMI port upset the delicate balance that keeps my monitors working (seriously – there's a wire loose somewhere, which happened after I had to replace the power supply).

I know, I know.

Anyway, once everything was hooked up, the cool blue light of the Rift’s eyepieces beckoned me to put the thing on. Did I mention I have astigmatism in both eyes?

I know, I know.

Behold – my desktop upside down, and my two monitors no longer in position left and right. They inverted. So all alerts, buttons, windows etc were in the Rift view. I should mention that when I went to download the SDK & the runtime, my antivirus freaked right out about trojans (thank you, 360 total security). False alarm. But the auto-quarantine thing had the effect of buggering up the download, so I had to figure out what was going on there before I could get it all downloaded. Anyway, after much futzing, I got the desktop to display correctly in the Rift, even though it would no longer mirror to my monitors. ‘I can work with this’, I thought.

I know, I know.

When I tried the demo, I started getting all sorts of error messages. After more futzing and googling, I arrived at that point that all of us eventually get to:

…and I reinstalled the bloody SDK, and the runtime. And lo! the demo ran, appearing on my screen. The headtracker appeared to work as well, for on the screen, as I moved the Rift around, the ceiling of the Tuscan villa would appear, then the walls, then the floor… except not within the bloody Rift itself. No, the Rift was showing an orange 'trouble' light.

And then the viewer crashed, and the graphics all buggered up, and… and… I blame my graphics card & its software (whether rightly or wrongly, something’s gonna take the blame). It’s an AMD Radeon HD5570, but yeah, something’s up. And I’ve lost the better part of this morning futzing with this.

Things are getting dire.

Why I’m doing this: I want to do something like what these folks are doing, immersive network viz & navigation.

Anyway, it’s probably time to replace my box and when I do, surely most (all?) of my issues will automagically disappear.

(Well, this issue here is probably the culprit and I need to run in extended desktop mode, but still).

Update: I switched it to extended desktop mode; nothing. Back to normal mode. Then hot damn, the thing works! So I am now fully oculus rift’d.

Let’s do something cool.

an #archaeogaming unconference

Madness.

Update:

Mark June 1st on your calendars folks! https://unhangout.media.mit.edu/event/archaeogaming

This is probably madness, but what the hell. Given the interest this past week in the intersection(s) of archaeology and gaming that seemed to be happening across various blogs & across the twittersphere, it occurred to me that this was a really good opportunity for me to learn how to throw a virtual unconference. (Wasn't that everyone's first thought?)

So, in order to get a sense of what people might be interested in talking about, I cooked up an 'all our ideas' voting page which can be found here. It presents you with pairs of ideas, and you simply click on the idea you like better in any given pair. Don't like the ideas at all? You can add your own, no registration required.

Now, to host the unconference, I'm thinking the MIT 'unhangout' is the way to do it. I've never used it, but I like the look of it, and I think it'll be useful for my teaching next year, so again, a good opportunity. Anyway, it allows for breakout rooms via some clever coding on top of the regular google hangout. The video explains more. https://player.vimeo.com/video/90475288

I'll leave the 'all our ideas' page running for a few more days. When I've settled on a day & time (probably this month, likely a monday or tuesday) I'll update this post. All welcome.

Grabbing data from Open Context

This morning, on Twitter, there was a conversation about site diaries and the possibilities of topic modeling for extracting insight from them. Open Context has 2618 diaries – here’s one of them. Eric, who runs Open Context, has an excellent API for all that kind of data. Append .json on the end of a file name, and *poof*, lots of data. Here’s the json version of that same diary.  So, I wanted all of those diaries – this URL (click & then note where the .json lives; delete the .json to see the regular html) has ’em all.

I copied and pasted that list of urls into a .txt file, and fed it to wget

wget -i urlstograb.txt -O output.txt

and now my computer is merrily pinging Eric’s, putting all of the info into a single txt file. And sometimes crashing it, too.

(Sorry Eric).
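A politer variant would throttle the requests – wget's --wait and --random-wait flags space them out, e.g.:

wget -i urlstograb.txt -O output.txt --wait=2 --random-wait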

When it's done, I'll rename it .json and then use rio to get it into usable form for R. The data has geographic coordinates too, so with much futzing I expect I could *probably* represent topics over space (maybe by exporting to Gephi & using its geolayout).
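If that single concatenated file proves awkward (it's really thousands of JSON documents glued together rather than one), an alternative sketch – assuming the same urlstograb.txt list, and using jsonlite rather than rio – is to read each URL straight into R:

library(jsonlite)

urls <- readLines("urlstograb.txt")   # one Open Context .json URL per line
# fromJSON() will fetch a URL directly; pause between requests to be kind to the server
diaries <- lapply(urls, function (u) { Sys.sleep(1); fromJSON(u) })
length(diaries)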

Futz: that’s the operative word, here.

a quick note on visualizing topic models as self organizing map

I wanted to visualize topic models as a self-organizing map. This code snippet was helpful. (Here’s its blog post).

In my standard topic modeling script in R, I added this:

library("kohonen")
head(doc.topics)   # doc.topics: the document-topic matrix from the topic model
doc.topics.sc <- scale(doc.topics)   # centre & scale each topic column
set.seed(80)   # so the SOM training is reproducible
doc.topics.som <- som(doc.topics.sc, grid = somgrid(20, 16, "hexagonal"))
plot(doc.topics.som, main = "Self Organizing Map of Topics in Documents")

which gives something like this:

[Screenshot: Screen Shot 2015-05-05 at 3.02.53 PM]

Things to be desired: I don't know which circle represents which document. Each pie slice represents a topic; if you have more than around 10 topics, you get a line graph in the circle instead of a pie slice. I was colouring in areas by main pie-slice colour in Inkscape, but then the whole thing crashed on me. Still, a move in the right direction for getting a sense of the landscape of your entire corpus. What I'm eventually hoping for is to end up with something like this (from this page):
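As an aside, on the 'which circle is which document' problem: the kohonen som object records the winning node for each row of the input in unit.classif, so a rough lookup table is possible. A minimal sketch, assuming doc.topics carries the document names as row names:

# which SOM node 'won' each document (requires rownames on doc.topics)
doc.nodes <- data.frame(doc  = rownames(doc.topics),
                        node = doc.topics.som$unit.classif)
head(doc.nodes)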

update

I found this: https://github.com/geoss/som_visualization_r which seems to work. In my topic model script, I need to save the doc.topics output as RData:

save(doc.topics, file = "doctopics.RData")

and then the following:

library(kohonen)
library(fields)         # image.plot() and designer.colors(), used for the legend below
library(RColorBrewer)   # brewer.pal()

##Code for Plots
source("somComponentPlanePlottingFunction.R")
### source("Map_COUNTY_BMU.R") <- not necessary for SG
source("plotUMatrix.R")


#Load Data
## data is from a topic model of student writing in Eric's class
load("doctopics.RData")

#Build SOM
aGrid <- somgrid(xdim = 20, ydim = 16, topo="hexagonal")

##NEXT LINE IS SLOW!!!
##Rlen is arbitrarily low
aSom <- som(data=as.matrix(scale(doc.topics)), grid=aGrid, rlen=1, alpha=c(0.05, 0.01), keep.data=FALSE)

##VISUALIZE RESULTS
##COMPONENT PLANES
dev.off()
par(mar = rep(1, 4))
cplanelay <- layout(matrix(1:8, nrow=4))
vars <- colnames(aSom$data)
for(p in vars) {
  plotCplane(som_obj=aSom, variable=p, legend=FALSE, type="Quantile")
}
plot(0, 0, type = "n", axes = FALSE, xlim=c(0, 1), 
     ylim=c(0, 1), xlab="", ylab= "")
par(mar = c(0, 0, 0, 6))
image.plot(legend.only=TRUE, col=rev(designer.colors(n=10, col=brewer.pal(9, "Spectral"))), zlim=c(-1.5,1.5))
##END PLOT

##PLOT U-MATRIX
dev.off()
plotUmat(aSom)

plot(aSom)

…does the trick. Notice 'doc.topics' makes another appearance there – I've got the topic model loaded into memory. Also, in 'aGrid', xdim times ydim can't exceed the number of observations (documents) you've got: fewer cells than observations is no problem, but ask for more than you've got and you'll get error messages. So, here's what I ended up with:

[Screenshot: Screen Shot 2015-05-05 at 4.50.29 PM]

By the way, the helper functions have to be in your working directory for 'source' to find them. Now I just need to figure out how to put labels on each hexagonal bin.
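One possible route to those labels – a sketch only, assuming the kohonen plot methods place units at the coordinates stored in aSom$grid$pts (the somgrid object keeps them there) – is to overlay bin numbers with text():

plot(aSom, type = "codes")
# number each hexagonal bin at its grid coordinates
text(aSom$grid$pts[, 1], aSom$grid$pts[, 2],
     labels = seq_len(nrow(aSom$grid$pts)), cex = 0.5)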

Calling for #archaeogames – some thoughts on potential processes

Some months ago, I was talking with a colleague about the changing landscape of academic publishing. I was encouraging her to try some of these various open access and/or post-publication peer review and/or open peer review experiments that I’ve published in. Like any true believer, I was a bit annoying.

A lot annoying.

To which she sensibly responded: “But you were hired here to do that sort of thing. I was not. My goal right now is to secure tenure. I can’t have a bunch of ‘failed’ experiments or things that are too out-there on my cv when I go up.”

I was taken aback, but upon reflection, I realized she was entirely right. It's one thing to be hired officially as 'the digital humanities' guy. I was expected from the get-go – it was in the original job description – to be different, to do these odd things. Now, when I went through the tenure process, I still had to tell a good story about what I was doing and why it mattered and why it merited serious consideration. But still, I was in a position that my colleague is not. As I reflect on this, I realize that another obligation of this freedom is that it is not enough for me to try things out with my own research.

My own research itself should be about making it possible for others to do this as well.

That is, in the same way I teach digital methods to my second-year undergrads as just ho-hum, these are just normal things that we do, I need to put whatever credibility I have on the line so that others can try things out too. As I think of this #archaeogaming thing that I suggested in my previous post, and I consider the excellent advice that Jack & Kristen gave, along with Andrew's thoughts and Tara's careful responses, I see that a 'call for games' raises a number of deep issues. What credibility I have can be usefully expended trying to address these issues, to normalize games as a serious venue for doing scholarship. Consider this post and this 'call for games' business as an effort to spend my academic credit towards opening up a new front for writing/making/crafting/communicating scholarship.

Right now, before going any further, you should read the links in that paragraph above to the original post, then Andrew, Tara, Jack & Kristen's responses. Ok, now that we're all caught up – in no particular order, and not necessarily responding to any particular point raised, here are some thoughts occasioned by this conversation:

1. archaeologists are not game designers. Game designers are not archaeologists. Agreed.  This is not a problem, when we remember that ‘a video game’ does not need to mean a triple-A title, filled with whiz bang graphics etc. I’m thinking of games here in the way that Anna Anthropy discusses in ‘Rise of the Videogame Zinesters‘. I’m talking punk archaeology. I’m talking a kind of public archaeology, zine-like remixing.

2. any game that gets created has to use the affordances of the medium, the platform, as an integral part of the argument being made. No archaeological window-dressing. I used a zork-like interface once to decentre the top-down view of the world we are used to from Google Earth, to get my students to 'think like a Roman' – an argument about how Romans themselves saw and navigated space. (Post mortem). It doesn't have to be 'fun'. It doesn't have to be complete. It does have to make an argument.

3. radical transparency. If issuing a 'call for #archaeogames' is to be meaningful, then every step in the process has to be clear. We don't all have to agree, but when disagreements emerge, we have to arrive at a resolution.

4. a collected ‘volume’ (for lack of a better word) of #archaeogames has to teach the ‘reader’ how to interact with it. For better or worse, I think this means that there has to be a ‘paradata’ document. Last year’s HeritageJam introduced this concept to me. I really rather like the concept. Why I say ‘for worse’ above – for the reader, a written document is a life-ring, something to cling to, that absolves the ‘reader/player’ from critically engaging with the game. It’s text – phew, I can read text!  A paradata document can be a playable thing too though.

5. thinking of paradata makes me think of the ‘feelies‘ that accompanied the first wave of computer gaming (here’s the Grail Diary, by the way). Maybe a call for archaeogames should explicitly call for feelies that can be printed, bound, pdf’d, whatever so that it is impossible to rely on the text alone to understand the argument. Bind the material with the digital.

6. which reminds me of ARGs, but we’ll leave that to one side for now (though check out this).

7. an archaeogame does not necessarily have to be a video game. Board games, card games, school-yard games, 'barely games', playsets… we're materialists, are we not?

And finally, spend some time looking at Amanda Visconti’s digital dissertation, and contrast that with the draft AHA guidelines for evaluating digital scholarship. The latter is very much concerned with making digital scholarship feel ‘ok’ to existing modes of scholarship (and that’s important); the former gives us a model for thinking through what the actual look of a ‘collected volume’ of #archaeogames might …look… like. I especially appreciate her approach to LOCKSS (lots of copies keeps stuff safe), with web archival recordings, submission of materials to the Internet Archive Wayback machine, zips of her github repo (itself something one could also fork – copy – as well), XML for all wordpress posts.

So these thoughts are banging around in my head. I started sketching on paper:

Let’s pick that apart, because my hands are shaky, the tablet is heavy and clunky, and the picture frankly is abysmal.

Theme – a call for archaeogames should have some sort of thematic focus. I’m a Romanist (was a Romanist?). Let’s set the theme as ‘Roman Urban Spaces’. Broad to allow many voices; narrow enough for some sort of thematic unity, some sort of understanding, to emerge from our digital scholarship. The call for games would want games that explicitly use the affordances of whatever platform the creator chooses to make their argument.

Process – what was neat about the Writing History in the Digital Age project, and the Web Writing project was the way every step in the process was extraordinarily clear. What would the process look like here? Drawing on Andrew & Tara’s suggestions, I’m thinking that since we’re both a) teaching/encouraging people to write scholarship via games and b) teaching/encouraging people to read/play scholarship written in games, we have to keep things fairly simple. So –

–  a sign up form, with a one-paragraph ‘here’s my idea, roughly’

– a website with a tumblr-like page for each project, where authors would document their process; in something like this, the process itself is an extremely important scholarly output. Reader/players would be encouraged to comment here. Gitbooks.io might be a good spot, as text and code can be integrated, multiple authorship is no problem. I’m sure there are many options here.

– each author to maintain a github repo with their code (which would also mean we might have to teach people how to use github), linked from their project page.

– three months to build the game (whatever form, genre, etc that it may take).

– a due date for the paradata (guidelines to be provided) at roughly the same time, remembering that paradata could be text or itself gameful

– an open review period after that due date, where reader/players comment on a holistic-view of the entire project written by the editors.

– a subsequent round of polishing for those authors’ work deemed to move on to the next phase, the decision being based on the impact of the argument, sophistication (not necessarily technical) of the piece, the engagement with the reader/players… obviously, something to flesh out a *lot* more.

Outcome – what’s the (ahem) endgame for all this? Maybe –

– final publication as a website along the Visconti model, with the playable games made available, and with all code lodged in an open repository. We’d have to find one of these, though there are more coming onstream every day.

– perhaps approach Internet Archaeology. We have a dataverse repo here at Carleton that could work too.

Clear statements on intellectual property absolutely would need to be developed at the outset. I see no reason why the IP shouldn’t remain with the author/creators.

… ?