Zettelkasten in Sublime (a note on Dan Sheffler’s script)

I’ve rapidly become a huge fan of Dan Sheffler’s workflow. One thing that I’m really looking forward to trying out is his ‘Zettelkasten‘ (also here), a kind of flat, wiki-like approach to note-taking. I have always struggled with effective note-taking, but combining his markdown export from pdfs (via Skim) with the zettelkasten (which I could then push to github for open-notebooking purposes, or feed to Jekyll, or mine as text, etc.) has me (geek that I am) rather excited about the prospects.

Anyway, I’ve just gotten everything working. Here’s how:

1. Install Sublime Text 3.
2. Download the MyWiki zip from Dan’s repo on Github.
3. Install Package Control from packagecontrol.io
4. Open Sublime Text 3.
5. Under Preferences, go to ‘Browse Packages’. This opens the package location in your Finder.
6. Meanwhile, unzip the MyWiki plugin. Copy the MyWiki folder to the package location.

I had trouble getting Sublime 3 to recognize the ‘keymap’, that is, the file telling Sublime what keys fire up what commands in the MyWiki.py file. I renamed it to ‘Default (OSX).sublime-keymap’ which should’ve solved the problem, but didn’t.

7. So instead, I went to Preferences – Key Bindings – User, and copied the text of Dan’s keymap file into this default one (see the sketch below for what those entries look like).
8. In the file ‘MyWiki.sublime-settings’ I changed the wiki_directory like so:

"wiki_directory": "/Users/shawngraham/Documents/notes-experiment-sublime/",
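
For reference, both the keymap and the settings file are plain JSON. A user keybinding entry just maps a key combination to a command defined in MyWiki.py – the combination and command name below are placeholders of mine, not necessarily what Dan’s file actually uses:

[
    { "keys": ["super+shift+o"], "command": "my_wiki_placeholder_command" }
]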

I saved everything, restarted sublime text, and voila!

[screenshot]

Hard to see it, but I’ve just typed [[ in a markdown document, and an autocomplete with all of the file names (that is, notecards) appears so that I can ensure consistency across my files or create a new note.

Exporting your PDF Annotations from Skim

I’ve got to write this down now, while I still remember what I did. Dan Sheffler has a great blog, with lots of really neat & useful stuff. One of the things he has is a script for exporting your notes and annotations on pdfs as nicely formatted markdown. (All OS X, I’m afraid, Windows folks.)

First, two things we need:

  • Skim: http://skim-app.sourceforge.net/
  • BibDesk: http://bibdesk.sourceforge.net/

…and a pdf of some academic article.

Download and install BibDesk. Open BibDesk. Drag & drop your pdf onto the BibDesk window and add the relevant metadata. Crucial: the ‘Cite key’ field is like a shortcode for referencing all of this. [screenshot] In the screenshot above, I’ve got Alice Watterson’s recent piece from Open Archaeology. My cite key is her name & the year.

Now, in Skim, open that pdf up and start making notes all over it. Save.

The first thing we’re going to do is use Sheffler’s script for making custom URLs that will link our notes to the relevant page of the pdf; these are URLs that we can then use in our markdown documents. His instructions are at http://www.dansheffler.com/blog/2014-07-02-custom-skim-urls/. To follow them, find the AppleScript editor on your machine and paste his code in. Save it as an application. Then find that application (he called his ‘Skimmer’) on your machine, right-click (or whatever you do to bring up the contextual menu), and inspect the package contents. You then open the info.plist file in a text editor and swap the default for what Sheffler instructs – see my screenshot: [screenshot]

Run the Skimmer application. If all goes well, nothing much should appear to happen.

Ok, so, let’s test this out. I went to dillinger.io and made the following markdown link:

[@Watterson2015 [page 120](sk://Watterson2015#120)]

and then exported it as html. I opened the html in my browser, and hey presto! When I clicked on the link, it opened the pdf in Skim at my note!
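
For reference, the info.plist edit boils down to registering the sk:// URL scheme with the Skimmer app. I didn’t copy my exact edit down, but the block Sheffler has you add is roughly this shape (these are the standard CFBundleURLTypes keys; see his post for the canonical version):

<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLName</key>
        <string>Skim URL</string>
        <key>CFBundleURLSchemes</key>
        <array>
            <string>sk</string>
        </array>
    </dict>
</array>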

So that’s part 1 achieved. Now onwards to part 2.

Next, we want to export our Skim annotations as nicely formatted markdown (you could then use them in a blog post, or on github, or wherever): http://www.dansheffler.com/blog/2014-07-07-exporting-skim-notes/

Again, open up your AppleScript editor and paste in Sheffler’s code. This code requires that you have those custom URLs set up, BibDesk installed, and something called pdftk (I checked it out; it seemed to be a Windows program, so I ignored it. The script ran well anyway, so *shrug*) [1]. Finally, the end of the script opens up Marked for viewing and editing the md. I don’t have that, so I simply changed that final line from Marked to Atom (which I do have) – and that works.

If you right-click in your Finder, select ‘go to folder’ and type in Library, you can find the ‘Application Support’ folder. I made a new subfolder, ‘Skim/scripts’, and saved the script in there as skim-export. I fired Skim back up, opened Watterson 2015, selected ‘skim-export’, and behold: [screenshot] a nicely formatted markdown document with my annotations – and, crucially, the links back to the original pages.

I can now save this markdown into a github repo, or any number of markdown-flavoured wiki-type things that run locally on this machine, or use pandoc to convert it to something else, or… Markdown is *wonderful*. These scripts are too! Thank you Dan.

Next thing I want to try: http://www.dansheffler.com/blog/2015-05-11-my-zettelkasten-in-sublime/

[1] Dan responds at http://dansheffler.com/blog/2015-07-01-electric-archaeology/ to note that the tool in question is here: https://www.pdflabs.com/tools/pdftk-server/ and runs on the command line.

historical maps into Unity3d

This should work.

Say there’s a historical map that you want to digitize.  It may or may not have contour lines on it, but there is some indication of the topography (hatching or shading or what not). Say you wanted to digitize it such that a person could explore its conception of geography from a first person perspective.

Here’s a workflow for making that happen.

Some time ago, the folks at the NYPL put together a tutorial explaining how to turn such a map into a Minecraft world. So let’s do the first part of their tutorial. In essence, we take the georectified map (which you could georectify using something like the Harvard Map Warper), load it into QGIS, add elevation points, generate a surface from those elevations, turn it into grayscale, export that image, convert it to raw format, and import it into Unity3d.

Easy peasy.

For the first part, we follow the NYPL:

Requirements

QGIS 2.2.0 ( http://qgis.org )

  • Activate Contour plugin
  • Activate GRASS plugin if not already activated

A map image to work from

  • We used a geo-rectified TIFF exported from this map but any high rez scan of a map with elevation data and features will suffice.

Process:

Layer > Add Raster Layer > [select rectified tiff]

  • Repeat for each tiff to be analyzed

Layer > New > New Shapefile Layer

  • Type: Point
  • New Attribute: add ‘elevation’ type whole number
  • remove id

Contour (plugin)

  • Vector Layer: choose points layer just created
  • Data field: elevation
  • Number: at least 20 (maybe.. number of distinct elevations + 2)
  • Layer name: default is fine

Export and import contours as vector layer:

  • right click save (e.g. port-washington-contours.shp)
  • May report error like “Only 19 of 20 features written.” Doesn’t seem to matter much

Layer > Add Vector Layer > [add .shp layer just exported]

Edit Current Grass Region (to reduce rendering time)

  • clip to minimal lat longs

Open Grass Tools

  • Modules List: Select “v.in.ogr.qgis”
  • Select recently added contours layer
  • Run, View output, and close

Open Grass Tools

  • Modules List: Select “v.to.rast.attr”
  • Name of input vector map: (layer just generated)
  • Attribute field: elevation
  • Run, View output, and close

Open Grass Tools

  • Modules List: Select “r.surf.contour”
  • Name of existing raster map containing colors: (layer just generated)
  • Run (will take a while), View output, and close

Hide points and contours (and anything else above the b/w elevation image), then Project > Save as Image

You may want to create a cropped version of the result to remove un-analyzed/messy edges

As I noted a while ago, there are some “hidden, tacit bits [concerning] installing the Contour plugin, and working with GRASS tools (especially the bit about ‘editing the current grass region’, which always is fiddly, I find).”  Unhelpfully, I didn’t write down what these were.

Anyway, now that you have a grayscale image, open it in Gimp (or Photoshop; if you do have Photoshop, go watch this video and you’re done).

For those of us without Photoshop, this next bit comes from the addendum to a previous post of mine:

    1. Open the grayscale image in Gimp.
    2. Resize the image to a power of 2 + 1 (*shrug* everything indicates this is what you do with Unity); in this case I chose 1025.
    3. Save as file type RAW. IMPORTANT: in the dialogue that opens, set ‘RGB save type’ to ‘planar’.
    4. Change the file extension from .data to .raw in the Mac Finder or Windows Explorer.
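
If you’d rather script that conversion than click through Gimp, ImageMagick can do the same job in one line – an untested sketch on my part, assuming your grayscale export is called map-grayscale.png (the Gimp route above is what I actually did):

# force exactly 1025x1025, a single 8-bit grayscale plane, raw output
convert map-grayscale.png -resize '1025x1025!' -colorspace Gray -depth 8 gray:heightmap.raw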

Now you can import this historical elevation map into Unity. In Unity, add a terrain to the project (GameObject -> 3D Object -> Terrain). In the Inspector window there’s a cogwheel icon. Click this; it opens the terrain settings. One of the options will be ‘Import Raw’. Click this.

Select your .raw grayscale image.

  1. On the import dialogue, change it to 8-bit image rather than 16-bit.
  2. Change the width, height, x and z to all be 1025. Change the y to 75 (yours will be different; look at the range between the highest and lowest points in your original map, and input that). For reference, please also see this post, which saved me: http://newton64.github.io/blog/2013-07-24-gimp-unity-terrain.html

Ta da – a white glacial landscape with your elevation data.

[screenshot]

Now the fun stuff can happen. But before someone can ‘walk’ around your landscape, you have to add controls to your project. So, in Unity3d, go to:

Assets – Import package – characters.

Once that’s all done, you’ll drag-and-drop a ‘FPSController’ into your project. You’ll find it as below:

[screenshot]

Click and grab that blue box and move it up into your project (just drop it in the main window). Make sure that the control is above (and also, not intersecting any part of) your landscape, or when you go to play, you’ll either be stuck or indeed falling to the centre of the earth. We don’t want that. Also, delete the ‘camera’ from the hierarchy; the fpscontroller has its own camera. My interface looks like this:

[screenshot]

You do the grass and trees etc. from the terrain inspector, as in the window there on the right. I’ll play some more with that aspect and report back soonish. Notice the column drum in the right foreground, and the tombstone in the back? Those were made with 3d photogrammetry; both are hosted on Sketchfab, as it happens. Anyway, in Meshlab I converted them from .obj to .dae, after having reduced the polygons with quadric edge collapse decimation to make them a bit simpler. You can add such models to your landscape by dropping the folder into the ‘Assets’ folder of your Unity project (via the Mac Finder or Windows Explorer). Then, as you did with the FPSController block, you drag them into your scene and reposition them as you want.

Here’s my version, pushed to WebGL.

Enjoy!

(by the way, it occurs to me that you could use that workflow to visualize damned near anything that can be mapped, not just geography. Convert the output of a topic model into a grayscale elevation map; take a network and add elevation points to match betweenness metrics…)

importing GIS data into unity

Per Stu’s workflow, I wanted to load a DEM of the local area into Unity.

1. Stu’s workflow: http://www.dead-mens-eyes.org/embodied-gis-howto-part-1-loading-archaeological-landscapes-into-unity3d-via-blender/

I obtained a DEM from the university library. QGIS would not open the damned thing; folks on Twitter suggested that the header file might be corrupt.

However, I was able to view the DEM using MicroDEM. I exported a grayscale geotiff from MicroDEM. The next step is to import into Unity. Stu’s workflow is pretty complicated, but in the comment thread, he notes this:

2. Alastair’s workflow: https://alastaira.wordpress.com/2013/11/12/importing-dem-terrain-heightmaps-for-unity-using-gdal/

Alrighty then, gdal. I’d already installed gdal when I updated QGIS. But, I couldn’t seem to get it to work from the command line. Hmmm. Turns out, you’ve got to put things into the path (lord how I loathe environment variables, paths, etc.)

export PATH=/Library/Frameworks/GDAL.framework/Programs:$PATH
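
That only lasts for the current terminal session; to make it permanent, you can append the same line to your shell profile – something like this, assuming the default bash shell on the Mac:

echo 'export PATH=/Library/Frameworks/GDAL.framework/Programs:$PATH' >> ~/.bash_profile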

Now I can use the command line. Hooray! However, his suggested command for converting from a geotiff to the raw heightmap expected by Unity:

gdal_translate -ot UInt16 -scale -of ENVI -outsize 1025 1025 srtm_36_02_warped_cropped.tif heightmap.raw

(using my own file names, of course) kept giving me ‘too many options’ errors. I examined the help file on that gdal_translate, and by rearranging the sequence of flags in my command to how they’re listed in the help,

gdal_translate -ot UInt16 -of ENVI -outsize 1025 1025 -scale localdem.tif heightmap.raw

the issue went away. Poof!
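
To sanity-check the output before hauling it into Unity, gdalinfo will report its dimensions and data type (the ENVI driver writes a heightmap.hdr alongside the .raw, so it can read the file back):

gdalinfo heightmap.raw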

Into Unity3d I went, creating a new project, with a new terrain object, importing the raw heightmap. Nothing. Nada. Rien.

Knowing that with computers, sometimes when you just keep doing things over and over expecting a different result you actually get a different result, I tried again:

[screenshot]

So, from above, something like my original DEM, though flipped a bit. I’m not too sure how to tie scripts into things, so we’ll let that pass for now. But as I examined the object closely, there were all sorts of jitters and jags and… well, it only looks like terrain from directly above.

A bit more googling, and I found this video:

which seems to imply that interleaving in the raw format might be to blame (? I dunno). Anyway, I don’t have Photoshop or anything handy on this machine for dealing with raster images. I might just go back to QGIS with the geotiff I made with MicroDEM.

(I went to install Gimp, saw that you could do it with MacPorts, and I’ve been waiting for the better part of an hour. I should not have done that, I suppose.)

Anyway, the reason for all this: I’m trying to replicate Stu’s embodied GIS concept. The first part of that is to get the target landscape into the Unity engine. Then it gets pushed through Vuforia… (I think. Need to go back and read his book, if I could remember who I let borrow it).

update june 9 – Success!

  1. I opened the grayscale image exported from MicroDem in Gimp.
  2. I resized the image to a power of 2 + 1 (*shrug* everything indicates this is what you do with Unity); in this case, 1025.
  3. Saved as RAW.
  4. Changed the file extension from .data to .raw.
  5. Created a new 3d terrain object in Unity.
  6. Imported my .raw image.
  7. On the import dialogue, changed it to 8-bit image rather than 16-bit.
  8. Changed the width, height, x and z to all be 1025. Changed the y to be 75 (the lowest point in the original area is somewhere around 60 m above sea level, the highest 135 m above sea level; otherwise, I was getting these monstrous mountains when going with the default ‘600’).
  9. This post provided the solution: http://newton64.github.io/blog/2013-07-24-gimp-unity-terrain.html

I still need to rotate things, add water, add controls, etc. But now I could add my 3d models of the cemetery (which is on the banks of this river), perhaps also Oculus support etc. Talking with Stu a bit more, I see that his embodied GIS is still a bit beyond what I can do (lots of custom scripting), but imagine publishing an excavation this way, especially if ‘Excavation is destruction digitization‘…

[screenshot]

Success! That’s the mighty Rideau you’re looking at there.

Field Notes from a Virtual Unconference

[photo: archaeogaming mission control]

The view from my computer getting #archaeogaming up and running. Hi Angus!

 

I’ll write something with a bit more substance and reflection in due course, but we’ve just ended the first ever archaeogaming virtual unconference.

Total bill:

  • catering: $0
  • space rental: $0
  • airfare: $0
  • hotel: $0
  • registration fee: $0.

Time spent:

  • me, in planning things out, setting up tech, writing about it: approximately 4-6 hours

Platforms:

Things that worked really well:

  • Having a schedule built beforehand, to give people not only structure but a sense of what the day is about
  • Great facilitators in Andrew and Tara to keep conversations flowing, to close or open sessions on the fly as needed

Things that could’ve been better:

  • Timing of the schedule. I closed one session out rather early
  • Time slots don’t have to be all the same length; early in the day might’ve been better to have shorter sessions, etc. Something to think about.

Things that were *really* awesome:

  • The nearly 30* people who participated at various times during the day! I can’t thank you all enough for coming out and contributing, however you did that.

Thank you!

 

*about 25 at our busiest. I’m an optimistic rounder. Most folks were grad or undergrad students I think!

 

working with vuforia unity plugin for augmented reality

Notes to self:

– working on Mac.
– install the Vuforia plugin for Unity.
– followed this: http://developer.vuforia.com/library/articles/Solution/Compiling-a-Simple-Unity-Project
– this post is handy too: http://www.marcofolio.net/other/introduction_into_augmented_reality_with_vuforia.html
– however: you have to register a key for the app that you build: https://developer.vuforia.com/targetmanager/licenseManager/licenseListing
– you also have to create an image tracking database (under the ‘Develop’ page of the Vuforia website). You upload your images, and it creates a database xml in return. You download it, and while Unity is open, double-click on the download – it adds itself automatically.
– you have to have the iOS or Android SDK installed on your machine. I’m working with Android. I had to find the location of the SDK; I needed to ‘unhide’ the ‘Library’ folder in the Finder in order to do so (Unity will ask you for the location of the SDK when it builds your app; if you don’t unhide it, you can’t find or select it).

– some screenshots from within Unity, for reference:

[screenshot]

Delete the default camera. Add an ARCamera from ‘Prefabs’ in the project. Add ImageTargets from ‘Prefabs’. Assets go underneath an ImageTarget.

[screenshot]

Prefabs are where the magic lies. The 3dmodels folder was one I added via the Finder, and then I dropped my obj and texture pngs in there. Unity updated automatically, also creating the Materials folder.

[screenshot]

You add the app license key that you made with the online licence manager here. If you have more than one tracking image, *I think* you put that info here in ‘max simultaneous tracked images’; if you’ve got more than one tracked object at a time, update accordingly.

[screenshot]

For your image target, you select ‘Data Set’ to grab images from the database you created, and then the image target. You also have to set ‘Width’; I do not yet understand what units these are in.

 

Problems with low-friction AR

[image: Flickr, Camille Rose]

Ok. I’ve had a bit of feedback from folks. The issues seem to be:

  • audio doesn’t always load
  • locations don’t always trigger

Those are two big issues. I’m not entirely sure what to do about them. I just spun up a story that takes place around the quad here; I took the Ottawa Anomaly code and plugged in different coordinates. When I playtested from my computer, audio loaded up, which was good. But when I went downstairs and outside, to where I knew the first trigger to be: no audio. The ‘play audio’ function reported, ‘what audio?’ so I know the <> macro in the initialization passage didn’t load up.

I went to a second location; the geotrigger didn’t trigger. It kept reloading the previous geotrigger. Except – if I reloaded the entire story, then the new trigger would fire. So am I dealing with a caching issue? Do I need to clear the latitude/longitude variables when the player moves on?

You can download the html and import it into the Twine2 interface. Answers on a postcard…

[update a few hours later] So I went back downstairs, and outside, and lo! Everything loaded up as desired. Must be a loading issue. In which case, I should shorten the audio clips, and also have a ‘turn off’ whenever the user rescans for geotriggers – I was getting overlapping sounds.
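
A minimal sketch of what that ‘turn off’ might look like, assuming the clips were cached with SugarCube’s audio macros under a placeholder track name like ‘echo1’ (more on the Twine/SugarCube setup in the ‘Low Friction Augmented Reality’ post below) – it would sit at the top of the passage that rescans for geotriggers:

<<audio "echo1" stop>>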

 

On haunts & low-friction AR – thinking out loud

The frightening news is that we are living in a story. The reassuring part is that it’s a story we’re writing ourselves. Alas, though, most of us don’t even know it – or are afraid to accept it. Never before did we have so much access to the tools of storytelling – yet few of us are willing to participate in their creation.

– Douglas Rushkoff, ‘Renaissance Now! The Gamers’ Perspective’ in Handbook of Computer Game Studies, MIT Press, 2005: 415.

Haunts is about the secret stories of spaces.

Haunts is about locative trauma.

Haunts is about the production of what Foucault calls “heterotopias”—a single real place in which incompatible counter-sites are layered upon or juxtaposed against one another.

The general idea behind Haunts is this: students work in teams, visiting various public places and tagging them with fragments of either a real life-inspired or fictional trauma story. Each team will work from an overarching traumatic narrative that they’ve created, but because the place-based tips are limited to text-message-sized bits, the story will emerge only in glimpses and traces, across a series of spaces.

– Mark Sample, “Haunts: Place, Play, and Trauma” Sample Reality http://www.samplereality.com/2010/06/01/haunts-place-play-and-trauma/

It’s been a while since I’ve delved into the literature surrounding locative place-based games. I’ve been doing so as I try to get my head in gear for this summer’s Digital Archaeology Institute where I’ll be teaching augmented reality for archaeology.

Archaeology and archaeological practice are so damned broad though; in order to do justice to the time spent, I feel like I have to cover lots of different possibilities for how AR could be used in archaeological practice, from several different perspectives. I know that I do want to spend a lot of time looking at AR from a game/playful perspective though.  A lot of what I do is a kind of digital bricolage, as I use whatever I have to hand to do whatever it is I do. I make no pretense that what I’m doing/using is the best method for x, only that it is a method, and one that works for me. So for augmented reality in archaeology, I’m thinking that what I need to teach are ways to get the maximum amount of storytelling/reality making into the greatest number of hands. (Which makes me think of this tweet from Colleen Morgan this am:

…but I digress.)

So much about what we find in archaeology is about trauma. Houses burn down: archaeology is created. Things are deliberately buried: archaeology is created. Materials are broken: archaeology is created.

Sample’s Haunts then provides a potential framework for doing archaeological AR. He goes on to write:

The narrative and geographic path of a single team’s story should alone be engaging enough to follow, but even more promising is a kind of cross-pollination between haunts, in which each team builds upon one or two shared narrative events, exquisite corpse style. Imagine the same traumatic kernel, being told again and again, from different points of views. Different narrative and geographic points of views. Eventually these multiple paths could be aggregated onto a master narrative—or more likely, a master database—so that Haunts could be seen (if not experienced) in its totality.

It was more of a proof of concept than anything else, but my ‘low-friction AR‘ piece ‘The Ottawa Anomaly‘ tries not so much to tell a story as to provide echoes of events in key areas around Ottawa’s downtown, such that each player’s experience of the story would be different – the sequence of geotriggers encountered would colour each subsequent trigger’s emotional content. If you hear the gunshot first, and then the crying, that implies a different story than if you heard them the other way around. The opening tries to frame a storyworld where it makes sense to hear these echoes of the past in the present, so that the technological mediation of the smartphone fits the world. It also tries to make the player stop and look at the world around them with new eyes (something ‘Historical Friction‘ tries to do as well).

I once set a treasure hunt around campus for my first-year students. One group, however, interpreted a clue as referring to a particular statue in downtown Ottawa; they returned to campus much later and told me a stunning tale of illuminati and the secret history of Official Ottawa that they had crafted to make sense of the clues. Same clues, different geographical setting (by mistake) = oddly compelling story. What I’m getting at: my audio fragments could evoke very different experiences, not just in their order of encounter but also given the background of the person listening. I suggested in a tweet that

creating another level of storytelling on top of my own.

I imagine my low-friction AR as a way to layer multiple stories within the same geographic frame, and ‘rechoes’ or ‘fieldnotes’ as ways of cross-connecting different stories. I once toyed with the idea of printing out QR codes such that they could be pasted overtop of ‘official Ottawa‘ for similar purposes…

Low Friction Augmented Reality

But my arms get tired.

Maybe you’ve thought, ‘Augmented reality – meh’. I’ve thought that too. Peeping through my tablet or phone’s screen at a 3d model displayed on top of the viewfinder… it can be neat, but as Stu wrote years ago,

[with regard to ‘Streetmuseum’, a lauded AR app overlaying historic London on modern London] …it is really the equivalent of using your GPS to query a database and get back a picture of where you are. Or indeed going to the local postcard kiosk buying an old paper postcard of, say, St. Paul’s Cathedral and then holding it up as you walk around the cathedral grounds.

I’ve said before that, as historians and archaeologists, we’re maybe missing a trick by messing around with visual augmented reality. The past is aural. (If you want an example of how affecting an aural experience can be, try Blindside).

Maybe you’ve seen ‘Ghosts in the Garden‘. This is a good model. But what if you’re just one person at your organization? It’s hard to put together a website, let alone voice actors, custom cases and devices, and so on. I’ve been experimenting these last few days with trying to use the Twine interactive fiction platform as a low-friction AR environment. Normally, one uses Twine to create choose-your-own-adventure texts. A chunk of text, a few choices, those choices lead to new texts… and so on. Twine uses an editor that is rather like having little index cards that you move around, automatically creating new cards as you create new choices. When you’re finished, Twine exports everything you’ve done into a single html file that can live online somewhere.

That doesn’t even begin to touch the clever things that folks can do with Twine. Twine is indeed quite complex. For one thing, as we’ll see below, it’s possible to arrange things so that passages of text are triggered not by clicking, but by your position in geographical space.

You can augment reality with Twine. You don’t need to buy the fancy software package, or the monthly SDK license. You can do it yourself, and keep control over your materials, working with this fantastic open-source platform.

When the idea occurred to me, I had no idea how to make it happen. I posed the question on the Twine forums, and several folks chimed in with suggestions about how to make this work. I now have a platform for delivering an augmented reality experience. When you pass through an area where I’ve put a geotrigger, right now, it plays various audio files (I’m going for a horror-schlock vibe. Lots of backwards talking. Very Twin Peaks). What I have in mind is that you would have to listen carefully to figure out where other geotriggers might be (or it could be straight-up tour-guide type audio or video). I’ve also played with embedding 3d models (both with and without Oculus Rift enabled), another approach which is also full of potential – perhaps the player/reader has to carefully examine the annotations on the 3d model to figure out what happens next.

Getting it to work on my device was a bit awkward, as I had to turn on geolocation for apps, for Google, for everything that wanted it (I’ve since turned geolocation off again).

If you’re on Carleton’s campus, you can play the proof-of-concept now: http://philome.la/electricarchaeo/test-of-geolocation-triggers/play  But if you’re not on Carleton’s campus, well, that’s not all that useful.

To get this working for you, you need to start a new project in Twine 2. Under story format (click the up arrow beside your story title, bottom left of the editor), make sure you’ve selected SugarCube (this is important; the different formats have different abilities, and we’re using a lot of JavaScript here). Then, in the same place, find ‘Edit Story JavaScript’, because you need to add a whole bunch of JavaScript:


(function () {
    if ("geolocation" in navigator && typeof navigator.geolocation.getCurrentPosition === "function") {
        // setup the success and error callbacks as well as the options object
        var positionSuccess = function (position) {
                // you could simply assign the `coords` object to `$Location`,
                // however, this assigns only the latitude and longitude since
                // that seems to have been what you were attempting to do before
                state.active.variables["Location"] = {
                    latitude  : position.coords.latitude,
                    longitude : position.coords.longitude
                };
                // access would be like: $Location.latitude and $Location.longitude
            },
            positionError = function (error) {
                /* currently a no-op; code that handles errors */
            },
            positionOptions = {
                timeout: 31000,
                enableHighAccuracy: true,
                maximumAge : 120000 // (in ms) cached results may not be older than 2 minutes
                // this can probably be tweaked upwards a bit
            };

        // since the API is asynchronous, we give `$Location` an initial value, so
        // trying to access it immediately causes no issues if the first callback
        // takes a while
        state.active.variables["Location"] = { latitude : 0, longitude : 0 };

        // make an initial call for a position while the system is still starting
        // up, so we can get real data ASAP (probably not strictly necessary as the
        // first call via the `predisplay` task [below] should happen soon enough)
        navigator.geolocation.getCurrentPosition(
            positionSuccess,
            positionError,
            positionOptions
        );

        // register a `predisplay` task which attempts to update the `$Location`
        // variable whenever passage navigation occurs
        predisplay["geoGetCurrentPosition"] = function () {
            navigator.geolocation.getCurrentPosition(
                positionSuccess,
                positionError,
                positionOptions
            );
        };
    } else {
        /* currently a no-op; code that handles a missing/disabled geolocation API */
    }
}());

(function () {
    window.approxEqual = function (a, b, allowedDiff) { // allowedDiff must always be > 0
        if (a === b) { // handles various "exact" edge cases
            return true;
        }
        allowedDiff = allowedDiff || 0.0005;
        return Math.abs(a - b) < allowedDiff;
    };
}());

The first function enables your Twine story to get geocoordinates. The second function lets us put a buffer around the points of interest. Then, in our story, we have to call that code and compare the result against our points of interest so that Twine knows which passage to display. So, in a new passage – call it ‘Search for Geotriggers’ – you have this:

<<if approxEqual($Location.latitude, $Torontolat) and approxEqual($Location.longitude, $Torontolong)>>
<<display "Downtown Toronto">>
<<else>>
<<display "I don't know anything about where you are">>
<</if>>

So that bit above says: if the location is more or less equal to the point of interest defined by $Torontolat and $Torontolong, then display the passage called “Downtown Toronto”. If you’re not within the buffer around the Toronto point, display the passage called “I don’t know anything about where you are”.
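
Additional points of interest just chain on with <<elseif>> clauses before the <<else>> – a sketch, with placeholder variable and passage names of my own:

<<elseif approxEqual($Location.latitude, $Quadlat) and approxEqual($Location.longitude, $Quadlong)>>
<<display "The Quad">>

(with $Quadlat and $Quadlong set in the initialization passage, like the Toronto ones below).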

Back at the beginning of your story, you have an initialization passage (where your story starts) and you set some of those variables:

<<set $Torontolat = 43.653226>>
<<set $Torontolong = -79.3831843>>

[[Search for Geotriggers]]
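
If you want a triggered passage to actually play something, SugarCube’s audio macros are one route – a minimal sketch, assuming your copy of SugarCube includes the audio subsystem; the track name and file path here are placeholders, not from my game. Cache the clip once in the initialization passage:

<<cacheaudio "echo1" "audio/echo1.mp3">>

and then, in the passage a geotrigger displays:

<<audio "echo1" play>>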

And that’s the basics of building a DIY augmented reality. Augmented? Sure it’s augmented. You’re bringing digital ephemera into play (and I use the word play deliberately) in the real world. Whether you build a story around that, or go for more of the tour guide approach, or devise fiendish puzzles, is up to you.

I’m grateful to ‘Greyelf’ and ‘TheMadExile’ for their help and guidance as I futzed about doing this.

[update May 22: Here is the html for a game that takes place in and around downtown Ottawa Ontario. Download it somewhere handy, then open the Twine 2 editor. Open the game file in the editor via the Import button and you’ll see how I built it, organized the triggers and so on. Of course, it totally spoils any surprise or emergent experience once you can see all the working parts so if you’re in Ottawa, play it here on your device first before examining the plumbing!]

archaeogaming unconference – logistics


The #archaeogaming unconference will take place here: https://unhangout.media.mit.edu/event/archaeogaming at 11 am, EST, June 1st; y’all are welcome to throw together other spaces (hangouts, skype, collaborative docs, etherpads, what have you) to extend or push the idea further. Ideas that have come in so far can be found/voted on here: http://www.allourideas.org/archaeogaming/.

In terms of how the day will unravel (unroll? play out?), I’m imagining, say, 3 sessions with 3 breakout rooms, at 45 minutes each, with 10 minutes between for refreshment. Unlike in-person unconferences, I think trying to agree on a schedule on the morning of might be too difficult, so I’d take the top-voted topics and slot them into a Google spreadsheet schedule template, say next Monday – and then people can leave comments on the desired layout. I’d leave that open for the week, then adjust/publish the schedule that weekend, according to what seems like the majority will.

Then, on the morning of, I’ll remind/repost the URL to the unhangout, and we’ll be off to the races. The unhangout can be broadcast via YouTube too (though I’m not entirely sure how that happens or what the channel will be – guess I should go and see which of my many accounts is plumbed into what service).

Sound good?

Update May 25th: proposed schedule may be commented on here.