Prompted by Lee, I’m collating here materials that I’ve put out there regarding my teaching/thinking related to video games & history and archaeology. The list below is in no recognizable bibliographic style (mostly because I’m tapping this out and can’t be bothered this AM).
The folks at the New York Public Library have a workflow and python script for translating historical maps into Minecraft. It’s a three-step process (with quite big steps). First, they generate a DEM (digital elevation model) from the historical map, using QGIS. This is saved as ‘elevation.tiff’. Then, using Inkscape, they trace over the features from the historical map that they want to translate into Minecraft. Different colours equal different kinds of blocks. This is saved as ‘features.tiff’. Then, using a custom python script, the two layers are combined to create a Minecraft map, which can be in either ‘creative’ or ‘survival’ mode.
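To give a sense of what that third step is doing under the hood, here is a minimal sketch of the logic – my own illustration, not the NYPL script itself (the real script uses pymclevel to write actual Minecraft chunks, and the colour table below is invented):

from PIL import Image

elevation = Image.open('elevation.tiff').convert('L')    # greyscale: pixel value = height
features = Image.open('features.tiff').convert('RGB')    # colour: pixel value = block type

block_for_red = {0: 'water', 120: 'grass', 240: 'sand'}  # invented values, for illustration

width, depth = elevation.size
for x in range(width):
    for z in range(depth):
        y = elevation.getpixel((x, z))        # column height from the DEM
        r, g, b = features.getpixel((x, z))   # feature colour at the same spot
        block = block_for_red.get(r, 'dirt')  # the script keys on the R channel
        # ...the real script then uses pymclevel to stack blocks up to height y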
There are a number of unspoken steps in that workflow, including a number of dependencies for the python script that have to be installed first. Similarly, QGIS and its plugins have a steep (sometimes hidden) learning curve. As does Inkscape. And ImageMagick. This isn’t a criticism; it’s just the way this kind of thing works. The problem, from my perspective, is that if I want to use this in the classroom, I have to guide 40 students with widely varying degrees of digital fluency.* I’ve found in the past that many of my students “didn’t study history to have to work with computers” and that the payoff sometimes (to them) doesn’t seem to have (immediate) value. The pros and cons of that kind of work will be a post for another day.
Right now, my immediate problem is, how can I smooth the gradient of the learning curve? I will do this by providing 3 separate paths for creating the digital elevation model.
Path 1, for when real world geography is not the most important aspect.
It may be that the shape of the world described by the historical map is what is of interest, rather than the current topography of the world. For example, I could imagine a student wanting to explore the historical geography of the Chats Falls before they were flooded by the building of a hydro dam. Current topographic maps and DEMs are not useful. For this path, the student will need to use the process described by the NYPL folks:
Edit Current Grass Region (to reduce rendering time)
clip to minimal lat longs
Open Grass Tools
Modules List: Select “v.in.ogr.qgis”
Select recently added contours layer
Run, View output, and close
Open Grass Tools
Modules List: Select “v.to.rast.attr”
Name of input vector map: (layer just generated)
Attribute field: elevation
Run, View output, and close
Open Grass Tools
Modules List: Select “r.surf.contour”
Name of existing raster map containing colors: (layer just generated)
Run (will take a while), View output, and close
Hide points and contours (and anything else above the b/w elevation image), then Project > Save as Image
You may want to create a cropped version of the result to remove un-analyzed/messy edges
The hidden, tacit bits here involve installing the Contour plugin, and working with GRASS tools (especially the bit about ‘editing the current grass region’, which I always find fiddly). Students pursuing this path will need a lot of one-on-one.
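For the command-line inclined, those three modules can also be run straight from the GRASS console, which at least makes the sequence explicit. A sketch only – parameter names shift between GRASS versions, so check the manual for yours:

v.in.ogr dsn=contours.shp output=contours
v.to.rast input=contours output=contours_rast use=attr column=elevation
r.surf.contour input=contours_rast output=dem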
Path 2, for when you already have a shapefile from a GIS:
This was cooked up for me by Joel Rivard, one of our GIS & Map specialists in the Library. He writes,
1) In the menu, go to Layer > Add Vector Layer. Find the point shapefile that has the elevation information.
Ensure that you select point in the file type.
2) In the menu, go to Raster > Interpolation. Select “Field 3” (this corresponds to the z or elevation field) for Interpolation attribute and click on “Add”.
Feel free to keep the rest as default and save the output file as an image (.asc, .bmp, .jpg, or any other raster format – probably best to use .asc, since that’s what MicroDEM likes).
We’ll talk about MicroDEM in a moment. I haven’t tested this path yet, myself. But it should work.
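As an aside, the same interpolation can be done without the GUI at all, using GDAL (which ships with QGIS). I haven’t tested this either; ‘Field 3’ below stands in for whatever your elevation column is actually called, and the output size is picked arbitrarily:

gdal_grid -zfield "Field 3" -a invdist -of AAIGrid -outsize 512 512 points.shp elevation.asc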
Path 3, for when modern topography is fine for your purposes:
In this situation, modern topography is just what you need.
1. Obtain a modern DEM of your area of interest (as a .tiff, or as an .asc via Path 2).
2. Install MicroDEM and all of its bits and pieces (the installer wants a whole bunch of other supporting bits; just say yes). MicroDEM is PC software, but I’ve run it on a Mac within WineBottler.
3. This video tutorial covers working with MicroDEM and WorldPainter:
But here are some screenshots – basically, you open up your .tiff or your .asc image file within MicroDEM, crop to the area you are interested in, and then convert the image to grayscale:
MicroDEM: open image, crop image. Convert to grayscale. Remove legends, marginalia.
Save your grayscale image as a .tiff.
Regardless of the path you took (and think about the historical implications of those paths), you now have a grayscale DEM image that you can use to generate your Minecraft world.
Converting your grayscale DEM to a Minecraft World
At this point, the easiest thing to do is to use WorldPainter. It’s free, but you can donate to its developers to help them maintain and update it. Now, the video shown above shows how to load your DEM image into WorldPainter. It parses the black-to-white pixel values and turns them into elevations. You have the option of setting where ‘sea level’ is on your map (so elevations below that point are covered with water). There are many, many options here; play with it! Adam Clarke, who made the video, suggests scaling up your image to 900%, but I’ve found that that makes absolutely monstrous worlds. You’ll have to play around to see what makes most sense for you, but with real-world data of any area larger than a few kilometres on a side, I think 100 to 200% is fine.
Now, the crucial bit for us: you can import an image into WorldPainter to use as an overlay to guide the placement of blocks, terrain, buildings, whatever. So, rather than me simply regurgitating what Adam narrates, go watch the video. Save as a .world file for editing; export to Minecraft when you’re ready (be warned: big maps can take *a very long time* to render. That’s another reason why I don’t scale up the way Adam suggests).
* another problem I’ve encountered is that my feature colours don’t map onto the index values for blocks in the script. I’ve tried modifying the script to allow for a bit of fuzziness (a kind of ‘if the pixel value is between x and y, treat as z’). I end up with worlds filled with water. If I run the script on the Fort Washington maps provided by NYPL, it works perfectly. The script is supposed to be looking only at the R of the RGB values when it assigns blocks, but I wonder if there isn’t something else going on. I had it work once, correctly, for me – but I used MS Paint to recolour my image with the exact colours from the Fort Washington map. Tried it again, exact same workflow on a different map, nada. Nyet. Zip. Zilch. Just a whole lot of tears and heartache.
Unfortunately, for me, it’s not working. I document here what I’ve been doing and ideally someone far more clever than me will figure out what needs to happen…
The first parts of the tutorial – working with QGIS & Inkscape – go very well (although there might be a problem with colours, but more on that anon). Let’s look at the python script for combining the elevation map (generated from QGIS) with the blocks map (generated from Inkscape). Oh, you also need to install ImageMagick, which you then run from the command line to convert the SVG to TIF.
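For the record, that ImageMagick step is a one-liner at the command prompt (my filenames; and a Windows gotcha: ‘convert’ can collide with the system’s own convert.exe, so run it from the ImageMagick folder if it seems to do nothing):

convert hogs-features.svg hogs-features.tif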
“The script for generating the worlds uses PIL to load the TIFF bitmaps into memory, and pymclevel to generate Minecraft worlds, one block at a time. It’s run successfully on both Mac OS X and Linux.”
After digitizing, the features map looks like this:
I’ve tried both Mac and Linux, with python installed, and PIL, and pymclevel. No joy (for the same reasons as for Windows, detailed below). Like most things computational, there are dependencies that we only uncover quite by accident…
Anyway, when you’ve got python installed on Windows, you can just type the python file name at the command prompt and you’re off. So I download pymclevel, unzip it, open a command prompt in that folder (shift + right click, ‘open command prompt here’), and type ‘setup.py’. Error message. Turns out, I need setuptools. Which I obtain from:
I download, unzip, go to that folder, setup.py. Problem. Some file called ‘vcvarsall.bat’ is needed. Solution? Turns out I need to download Microsoft Visual Studio 10. Then, I needed to create an environment variable called ‘vs90comntools’, which I did by typing this at the command prompt:
set VS90COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\Tools\
Wunderbar. I go back to the pymclevel folder, I run setup.py again, and hooray! It installs. I had PIL installed from a previous foray into things pythonesque, so at least I didn’t have to fight with that again.
I copy the generate_map.py script into notepad++, change the file names within it (so that it finds my own elevation.tif and features.tif files, which are called hogs-elevation.tif and hogs-features.tif; the area I’m looking at is the Hogsback Falls section of the Rideau. In the script, just change ‘fort-washington’ to ‘hogs’ or whatever your files are called). In my folder, at the command prompt, I type generate_map.py and get a whole bunch of error messages: various ‘yaml’ files can’t be found.
Did I mention PyYAML has to be installed? Fortunately, it has a Windows installer. Oh, and by the way – pywin32 is also needed; I got that error message at one point (something obscure about win32api), and downloading/installing from here solved it: http://sourceforge.net/projects/pywin32/files/pywin32/
Ok, so where were we? Right, missing yaml files, like ‘minecraft.yaml’ and ‘classic.yaml’, and ‘indev.yaml’ and ‘pocket.yaml’. These files were there in the original repository, but for whatever reason, they didn’t install into the pymclevel that now lives in the Python directory. So I went to the pymclevel repo on github, copied-and-pasted the code into new documents in notepad++, and saved them thus:
Phew. Back to where I was working on my maps, and I have my generate_map.py, which I duly enter and…. error: it can’t find ‘tree’ (the line ‘from tree import Tree, treeObjs’). Googling around to solve this is a fool’s errand: ‘tree’ is such a common word and concept in programming that I just can’t figure out what’s going on here. So I turned that line off with a # in the code. Run it again…. and it seems to work (but is this the key glitch that kills all that follows?).
(update: as Jonathan Goodwin points out, ‘tree.py’ is there, in the NYPL repo
…so I uncommented the line in generate_map.py, saved tree.py in the same directory, and ran the script again. Everything that follows still happens. So perhaps there’s something screwed-up with my map itself.)
The script tells me I need to tell it whether I’m creating a creative mode map or a survival mode map:
so for creative mode: c:>generate_map.py map
for survival: c:>generate_map.py game
And it chugs along. All is good with the world. Then: error message. KeyError: 255 in line 241, block_id, block_data, depth = block_id_lookup[block_id]. This is the bit of code that tells the script how to map Minecraft blocks to the colour scheme I used in Inkscape to paint the information from the map into my features.tif. Thing is, I never used an RGB R value of 255. Where’s it getting this from? I go back over my drawing, inspecting each element, trying to figure it out. All seems good with the drawing. So I just add this line to the table in the code:
block_id_lookup = {
    # ...existing entries...
    255 : (m.Water.ID, 0, 1),
}
And run it again. Now it’s 254. And then 253. Then 249. 246. 244. 241. Now 238.
….when I first saw the tutorial from the NYPL, I figured, hey! I could use this with my students! I think not, at least, not yet.
(update 2: have downloaded the original map tifs that the NYPL folks used, and am running the script on them. So far, so good: which shows that, once all this stuff is installed, it’s my maps that are the problem. This is good to know!)
Part Two:
(updated about 30 minutes after initial post) So after some to-and-fro on Twitter, we’ve got the tree.py problem sorted out. Thinking that it’s the maps where the problem lies, I’ve opened the original Fort Washington features.tif in MS Paint (which is really an underappreciated piece of software). I’ve zoomed in on some of the features, and compared the edges with my own map (similarly opened and zoomed upon). In my map, there are extremely faint colour differentiations/gradations where blocks of colour meet. This, I think, is what has gone wrong. So, back to Inkscape I go…
Update the Third: looks like I made (another) silly error – a big strip of white on the left-hand side of my features.tif. So I’ve stripped that out. But I can’t seem to suss out the pixel antialiasing issue. Grrrrr! Am now adding all of the pixels into the dictionary, thus:
…there’s probably a far more elegant way of dealing with this. Rounding? Range lookup? I’m not v. python-able…
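For what it’s worth, something like the following might be that more elegant way – a sketch of my own, not NYPL code, snapping each stray pixel value to the nearest key already in the table instead of demanding an exact match:

def nearest_block(pixel_value, block_id_lookup):
    # snap a stray value (254, 253, 249...) to the closest key in the table
    nearest_key = min(block_id_lookup, key=lambda k: abs(k - pixel_value))
    return block_id_lookup[nearest_key]

# then, in place of block_id_lookup[block_id]:
# block_id, block_data, depth = nearest_block(block_id, block_id_lookup)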
Update, 2.20pm: Ok. I can run the script on the Fort Washington maps and end up with a playable map (yay!). But my own maps continue to contain pixels of colours the script doesn’t want to play with. I suppose I could just add 255 lines’ worth, as above, but that seems silly. The ImageMagick command, I’m told, works fine on a Mac, but doesn’t seem to achieve anything on my PC. So something to look into (and perhaps try this http://www.graphicsmagick.org/ instead). In the meantime, I’ve opened the Fort Washington map in good ol’ Paint, grabbing snippets of the colours to paste into my own map (also open in Paint). Then, I use Paint’s tools to clean up the colour gradients at the edges on my map. In essence, I trace the outlines.
Then I run the script, drop the resulting world folder into C:\Users\[me]\AppData\Roaming\.minecraft\saves, and fire up the game:
Rideau River in Minecraft!
Does it actually look like the Hogs’ Back to Dow’s Lake section of the Rideau Canal and the Rideau River? Well, not quite. Some issues with my basic elevation points. But – BUT! – the workflow works! So now to find some better maps and to start again…
I was interviewed by Ben Meredith on procedurally generated game worlds and their affinities with archaeology, for Kill Screen Magazine. The piece was published this morning. It’s a good read, and an interesting take on one of the more interesting recent developments in gaming. I asked Ben if I could post the unedited communication we had, which he drew on for his article. He said ‘yes!’, so here it is.
Hi Ben,
It seems to me that archaeology and video games share a number of affinities, not least because they are both procedurally generated. There is a method for field archaeology; follow the method, and you will have correctly excavated the site/surveyed the landscape/recorded the standing remains/etc. These procedures contain within them various ways of looking at the world, and emphasize certain kinds of values over others, which is why it is possible to have a marxist archaeology, or a gendered archaeology, and so on. Thus, it also seems obvious to me that you can have an archaeology within video games (not to be confused with media archaeology, or an archaeology of video games). A great example of this kind of work is Andrew Reinhard’s exploration of the beta of Elder Scrolls Online – you should touch base with him, too. http://archaeogaming.wordpress.com/2014/01/22/beta-testing-archaeology-in-elder-scrolls-online-taken-down/
On to your questions!
What motivated you to become an archaeologist?
Romance, mystery, allure, the ‘other’, the desire to travel… my initial impetus for getting into archaeology comes from the fact that I’m ‘from the bush’ in rural Canada and as a teenager I wanted so much more from the world. I now recognize that there’s some amazing archaeology in my own backyard (as it were) but I was too young and immature to recognize it then. The Greek Bronze Age, the Mycenaean heroes, the Minoans, Thera… all these captured my imagination. And there was no snow!
Personally, what single facet of archaeology captures the spirit of the field most effectively?
Which game do you think, so far, best achieves this?
A hard question to answer. But I think I’d go with Minecraft, for its community and especially its ability to be adopted in educational circles, for the way it requires the player to build and engage with the environments created. The world is what you make it, in Minecraft. So too in archaeology.
If a game attempted to procedurally generate ancient civilizations, what do you think would be the three most important elements that had to be generated?
I’ve done a lot of agent-based simulation. http://www.graeworks.net/category/simulations/ . Such a game would have to be built on an agent-based framework, for the NPCs. Each NPC would have to be unique. Those rules of behaviours that describe how the NPCs interact with each other, the environment, and the player would have to accurately capture the target ancient civilization. You can’t just have an ‘ancient civilization’; you’ll have to consider one very particular culture in one very particular time and place. That’s what a procedural rhetoric is all about: an argument in code about how this aspect of the world worked/is/existed.
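To make ‘agent-based framework’ concrete, here’s a toy sketch – nothing from any real game, just the bare shape of the idea: unique agents, each acting on local rules of behaviour at every tick of the world:

import random

class NPC:
    # each agent is unique: its own name, occupation, and store of knowledge
    def __init__(self, name, occupation):
        self.name = name
        self.occupation = occupation
        self.knowledge = set()

    def step(self, neighbours):
        # one 'rule of behaviour': gossip with a random neighbour,
        # so information moves through the simulated society
        other = random.choice(neighbours)
        other.knowledge |= self.knowledge

# one tick of the world: every agent acts on its local rules
npcs = [NPC('npc-%s' % i, random.choice(['farmer', 'potter', 'scribe']))
        for i in range(100)]
for npc in npcs:
    npc.step([n for n in npcs if n is not npc])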
Would investigation play an integral part in a video game interpretation?
I’m not sure I follow. Procedural generation on its own still is meaningless; it would have to be interpreted. The act of playing the game (and see the work of Roger Travis on http://playthepast.org on practicomimetics) sings it into existence.
Conversely, for you would stumbling blindly upon a ruin diminish the effect?
If the world is procedurally generated, then there would be clues in the landscape that would attune the attentive player to the presence of the past in that location. If there is no rhyme or reason – we stumble blindly – then the procedures do not describe an ancient (or any) civilization.
Do you think an archaeology simulator would be best implemented in first person (e.g. Minecraft) or third person (e.g. Terraria)? Would it be more important to convey an intimate atmosphere or impressive scale?
I like first person, but on a screen, first person can just induce nausea in the player. Maybe with an Oculus Rift that’s not a concern, in which case I’d say go first person! On a screen, I think third is better. Why not go AR and put your procedurally generated civilization into the local landscape?
One of the things I want my students to engage with in my ‘cities and countryside in antiquity’ class is the idea that in antiquity, one navigates space not with a two-dimensional top-down mental map, but rather as a series of what-comes-next. That navigating required socializing, asking directions, paying attention to landmarks. I’m in part inspired by R. Ling’s 1990 article, Stranger in Town, and in part by Elijah Meeks’ and Walter Scheidel’s ORBIS project. Elijah and I have in fact been talking about marrying a text-based interface to Orbis for this very reason.
But I’m also interested in gaming, simulation, and storytelling for their own merits, so I’m trying my hand at an interactive fiction written using Inform 7 along the same lines. Instead of interfacing directly with the model represented in Orbis, I’ve queried Orbis for travel data, and have begun to write a bit of a narrative around it. (One could’ve composed this in Latin, in which case you’d get not just the spatial ideas but the language learning too!)
Anyway, I present to you version 0.1, a beta (perhaps ‘alpha’ is more appropriate) for ‘Stranger in These Parts’, by Shawn Graham. I’m using Playfic to host it. I’d be happy to hear your thoughts. (And a hint to get going: check to see what you’ve got on you, and ‘ask Eros’ about things…)
Obviously, some things are lacking at the moment. I’ll want the player to be able to select different modes of transport sometimes (and thus to skip settings). There’s a point system, but it’s meant more to signal to the students that there is more to find. Depending on which NPCs a student talks with, different kinds of routes should become available. Time passes within the IF, and so night time matters – no travel then. As far as I know, there’s no such thing as multi-player IF or head-to-head IF, but that’d be fun if it were possible: can you get to Pompeii before your classmates?
In terms of the learning exercise, the students will play through this, and then explore the same territory in Orbis. In the light of their readings and experiences, I’ll be asking them to reflect on the Roman experience of space. Once we’ve done that, now being suitably disabused of 21st century views of how to navigate space, we’ll start looking at the landscape archaeology of other ancient cultures.
We won tickets to see the Ottawa – Tampa Bay game on Saturday night. 100 level. Row B. This is a big deal for a hockey fan, since those are the kind of tickets that are normally not within your average budget. More to the point of this post, it put us right down at ice level, against the glass.
Against the glass!!!
Normally we watch a hockey game on TV, or from up in the nose-bleeds. From way up there, you can see the play develop, the guy out in the open (“pass! pass! pleeeease pass the puck!” we all shout, from our aerie abode), same way as you see it on the tv.
But down at the glass…. ah. It’s a different scene entirely. There is a tangle of legs, bodies, sticks. It is hectic, confusing. It’s fast! From above, everything unfolds slowly… but at the ice surface you really begin to appreciate how fast these guys move. Two men, skating as fast as they can, each one weighing around 200 pounds, slamming into the boards in the race to get the puck. For the entire first period, I’d duck every time they came close. I’d jump in my chair, sympathetic magic at work as I willed the hit, made the pass, launched the puck.
For three wonderful periods, I was on the ice. I was in the game. I was there.
So…. what does this have to do with Play the Past? It has to do with immersion, and the various kinds that may exist or that games might permit. Like sitting at the glass at the hockey game, an immersive world (whether Azeroth or somewhere else) doesn’t have to put me in the game itself; it’s enough to put me in close proximity, and let that sympathetic magic take over. Cloud my senses; remove the omniscient point of view, and let me feel it viscerally. Make me care, and I’ll be quite happy that I don’t actually have my skates on.
‘Good enough virtuality’ is what Ed Castronova called it a few years back, when Second Life was at the top of its hype cycle. But we never even began to approach what that might mean. I think perhaps it is time to revisit those worlds, as the ‘productivity plateau’ may be in sight.
In an earlier post, Ethan asked, where are the serious games in archaeology? My response is, ‘working on it, boss’. A few years ago, I was very much enamored of the possibilities that Second Life (and other similar worlds/platforms) could offer for public archaeology. I began working on a virtual excavation, where the metaphors of archaeology could be made real, where the participant could remove contexts, measure features, record the data for him or herself (I drew data in from Open Context; I was using Nabonidus for an in-world recording system). But I switched institutions, the plug was pulled, and it all vanished into the aether (digital curation of digital artefacts is a very real and pressing concern, though not as discussed as it ought to be). I’m now working on reviving those experiments and implementing them in the Web.Alive environment. It’s part of our Virtual Carleton campus, a platform for distance education and other training situations.
My ur-dig for the digital doppelganger comes from a field experience program at a local high school that I helped direct. I’m taking the context sheets, the plans, the photographs, and working on the problems of digital representation in the 3d environment. We’ve created contexts and layers that can be removed, measured, and planned. Ideally, we hope to learn from this experience the ways in which we can make immersion work. Can we re-excavate? Can we represent how archaeological knowledge is created? What will participants take away from the experience? If all those questions are answered positively, then what kinds of standards would we need to develop, if we turned this into a platform where we could take *any* excavation and procedurally represent it? I’m releasing students into it towards the start of next month. We’ve only got a prototype up at the moment, so things are still quite rough.
The other part of immersion that sometimes gets forgotten is the part about, what do people do when they’re there? That’s the sympathetic magic, and maybe it’s the missing ingredient from the earlier hype about Second Life. There was nothing to do. In a world where ‘anything is possible’, you need rules, boundaries, purpose. We sometimes call it gamification, meaningfication, crowdscaffolding, and roleplaying. Mix it all together, and I don’t think there’s any reason for a virtual world to not be as exciting, as meaningful, as being there with your nose at the glass when Spezza scores.
Or when you uncover something wonderful in the digital dirt. But that’s a post for the future, when my students return from their virtual field season.
The topic of virtual worlds for archaeology and history seems to have hit a bit of a lull in recent months; on the other hand, that could simply be because I haven’t been looking. This morning, in preparing for my talk to the WAGenesis developer community, I came across Blue Mars, an online virtual world whose tools would appear to be more useful than those in Second Life, in that you can import your meshes, grids, etc from common 3d modeling programs. For archaeologists with a lot of 3d CAD reconstructions, this could be quite a boon. You can even import topographic maps into Blue Mars, and so recreate not just the buildings, but the landscapes. Virtual Landscape Archaeology, anyone?
In the film below, the builder has imported the topographic map of Mars…
“It’s the part of Mars that has the four large volcanoes,” he tells me, “roughly what’s in this photo.” Daniel imported NASA satellite imagery of the Tharsis Montes area of Mars where Olympus Mons resides, which is about 2400 x 2400 kilometers, and scaled that down to fit into the Blue Mars terrain map, which is 8 x 8 km. As he explains, “You can import terrain maps (both height and color texture) into a Blue Mars city. They need to be greyscale and color bitmap (.bmp) files respectively, and the correct size… But you can import any real world map as a base.”
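The mechanics of his conversion are simple enough to sketch with PIL; the 1024-pixel size below is my guess, since I don’t know what dimensions a given Blue Mars city actually expects, and the filenames are invented:

from PIL import Image

dem = Image.open('tharsis-dem.tif')           # hypothetical filename
height_map = dem.convert('L')                 # Blue Mars wants greyscale for height
height_map = height_map.resize((1024, 1024))  # whatever size the city requires
height_map.save('tharsis-height.bmp')         # saved as .bmp, per the quote above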
Meanwhile, at Ball State, they’re recreating parts of the Panama 1915 World’s Fair in Blue Mars, for explicitly historical immersive education. Stay tuned!
Time was, if you wanted some augmented reality, you had to upload your own points of interest into something like Wikitude or Layar. However, in its quest for world domination, Google seems to be working on something that will render those services moot: Google Goggles (silly name, profound implications).
As Leonard Low says on the MLearning Blog:
The official Google site for the project (which is still in development) provides a number of ways Goggles can be used to accomplish a “visual search”, including landmarks, books, contact information, artwork, places, logos, and even wine labels (which I anticipate could go much further, to cover product packaging more broadly).
So why is this a significant development for m-learning? Because this innovation will enable learners to “explore” the physical world without assuming any prior knowledge. If you know absolutely nothing about an object, Goggles will provide you with a start. Here’s an example: you’re studying industrial design, and you happen to spot a rather nicely-designed chair. However, there’s no information on the chair about who designed it. How do you find out some information about the chair, which you’d like to note as an influence in your own designs? A textual search is useless, but a visual search would allow you to take a photo of the chair and let Google’s servers offer some suggestions about who might have manufactured, designed, or sold it. Ditto unusual insects, species of tree, graphic designs, sculptures, or whatever you might happen to be interested in learning.
Just watch this space. I think Google Goggles is going to rock m-learning…
Now imagine this in action with an archaeological site, and Google connects you with something less than what we as archaeological professionals would like to see. Say it was some sort of aboriginal site with profound cultural significance – but the site it connects with argues the opposite. Another argument for archaeologists and historians to ‘create signal’ and to tell Google what’s important.
From Kevin Kee’s team at Brock U, an excellent augmented reality application for history:
Take a trip into the past with Niagara 1812. Using your iPhone, visit places and people from the War of 1812 and beyond. Choose Roam Mode, walk around one of the historic towns of Niagara, Canada, and discover the stories that lie behind the bricks and mortar. Or choose Quest Mode, and solve a centuries-old mystery in an immersive adventure.
With Niagara 1812, you carry history with you, in the palm of your hand.
I saw a prototype of this game earlier in the year, and with the website up and running, I guess it’s been launched! Having just finally purchased an iPhone, I can’t wait to give this a try. I like that it comes in two flavors – roam mode, and quest mode. Not everyone is up for playing AR games, so the choice is a nice usability feature. To get a sense of what the quest mode is about, go to the website and play the prologue…
Over on Kickstarter, I’ve come across the ‘Civil War Augmented Reality Project’. I can imagine many ways of incorporating a bit of AR/VR on an historic site, and I think what these folks are proposing is eminently doable. It’s easy to get caught up in the tech side of such projects, so their focus on the end user is laudable. From their project page:
The Civil War Augmented Reality Project was conceived by several public educators with technology experience and a desire to offer more interactivity to students and the general public visiting historic sites. The objective of the project is to develop and implement augmented reality services related to the American Civil War in Pennsylvania, and to modify soon to be released tablet personal computers to allow the general public a chance to experience the applications. The project’s inception is planned to give ample development time in the run up to the Sesquicentennial of the Civil War, beginning in 2011. It is hoped that early support could generate interest in Maryland and Virginia.
We also propose to construct stationary devices patterned after the “pay binoculars” often found at scenic overlooks. These devices will offer a virtual geographic view from a few hundred yards above the user. Physically swiveling the viewer left and right changes the direction of the view in real time, just as swiveling up and down changes the view. The intuitive nature of the device is intended to invite “non-tech oriented” persons to try the experience, and learn more about AR and the Civil War. We propose that these binoculars be set up at locations across the region touched by fighting in the war. In order to give the user a sense of the historical connections between each location, a nearby screen will project realtime webcam images of people using the devices at other locations.
Roger Travis is doing amazing things in his classics classes. He creates immersive learning experiences, and often his tools are low-tech, or old-tech, like interactive fiction (which I think doesn’t get enough respect in terms of digital learning!)
In Operation KTHMA, the course on Herodotus and Thucydides, my students stood trial for breaking and entering the home of Pericles’ rival Thucydides son of Melesias. In FABULA AMORIS ROMANI, my students had to sing for Augustus, first emperor of Rome. In these moments, fun is being had—I have video of some of these moments, and there are actual smiles on my students’ faces!—but fun isn’t the thing that matters most. What matters is engagement in the material, and, if they’re to be believed in their comments on the course at the end of the semester, my students were engaged. In (Gaming) Homer, my students are caught up in an ARG where they must become homeric bards by observing and playing The Lord of the Rings Online in relation to the Iliad and the Odyssey.
His latest looks fantastic:
The Demiurge recruits the students as operatives in Project ΑΡΧΑΙΑ in the usual way (cryptic e-mails on the course’s web-site saying that their services have been commandeered to save Western Civilization yada yada yada). In order to reach the mission objectives of knowledge and skill necessary to brief the world about Greek civilization (including sub-objectives of reading ancient Greek), the Demiurge has coded the following practomimetic simulation into the TSTT:
It’s the lead-up to the trial of Socrates, and operatives are inserted into Athenians who could be called on to be jurors. In order to make the best possible decision about his guilt and his penalty, they must learn everything they can about how Socrates ended up on trial (which is, when told correctly, a story that goes back to the Bronze Age), and what the consequences of the trial have been for Western Civilization.
Megan Smith, an artist with an interest in the intersection of physical and virtual places, continues to do interesting things:
Pst! is the surreptitious beckoning of attention and the acronym for Physical Space Tweets. It is a small storyteller installed in public space giving an audience a glimpse into a geo-tagged community’s topic feed. For the Leeds Pavilion at Mediamatic’s Amsterdam Biennale 2009, Pst! chronicled life in Leeds through its Twitter feed.
The piece locates a public social narrative by pulling an information feed from Twitter User profiles geographically aligned to Leeds with Twitter’s geocode API and then prints this information onto a mini LCD screen. By removing the peripheral of the computer a Pst! device can be placed in a non-space providing a window directly into a geo-located public space.
I could imagine installing one of these at say a heritage site, pulling all tweets that mention the site onto the display – or perhaps pulling the latest research on the site, to the site, for public consumption…. hmmm!
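The software half of that imagined installation is not much code. A sketch only, against Twitter’s search API as it existed at the time (long since changed), with an invented heritage-site query and location:

import requests

# geocode = 'lat,lon,radius' -- coordinates invented for illustration
params = {'q': 'stonehenge', 'geocode': '51.1789,-1.8262,5km'}
resp = requests.get('http://search.twitter.com/search.json', params=params)

# print each geo-located mention of the site, ready to push to a little LCD
for tweet in resp.json()['results']:
    print(tweet['from_user'], ':', tweet['text'])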