Call for Collaborators: The Open Digital Archaeology Textbook Environment (ODATE)

The Open Digital Archaeology Textbook Environment is a collaborative writing project led by me, Neha Gupta, Michael Carter, and Beth Compton (see earlier posts on this project here). We recognize that this is a pretty big topic to tackle. We would like to invite friends and allies to become co-authors with us. Contact us by Jan 31st; see below.

Here is the current live draft of the textbook. It is, like all live-written openly accessible texts, a thing in the process of becoming, replete with warts, errors, clunky phrasing, and odd memos-to-self. I’m always quietly terrified to share work in progress, but I firmly believe in both the pedagogical and collegial value of such endeavours. While our progress has been a bit slower than one might’ve liked, here is where we currently stand:

  1. We’ve got the framework set up to allow open review and collaboration via the Hypothes.is web annotation framework and the use of GitHub and gh-pages to serve up the book.
  2. The book is written in the bookdown framework with R Markdown and so can have actionable code within it, should the need arise (see the sketch after this list).
  3. This also has the happy effect of making collaboration open and transparent (although not necessarily easy).
  4. The DHBox computational environment has been set up and is running on Carleton’s servers. It’s currently behind a firewall, but that’ll be changing at some point during this term (you can road-test things on DHBox).
  5. We are customizing it to add QGIS and VSFM and some other bits and bobs that’d be useful for archaeologists. Suggestions welcome.
  6. We ran a test of the DHBox this past summer with 60 students. My gut feeling is that not only did this make teaching easier and keep all the students on the same page, but the students also came away with a better ability to roll with whatever their own computers threw at them.
  7. Of six projected chapters, chapter one is in pretty good – though rough – shape.
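
To give a sense of what that actionable code looks like: a chunk along the lines of the one below (the data file and variable names are purely illustrative) is executed when bookdown builds the book, so readers see both the code and its output on the page.

```{r sherd-counts}
# toy example: count sherds per excavation context in a hypothetical dataset
sherds <- read.csv("data/sherd_counts.csv")
table(sherds$context)
```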

So, while the majority of this book is being written by Graham, Gupta, Carter and Compton, we know that we are leaving a great deal of material undiscussed. We would be delighted to consider additions to ODATE, if you have particular expertise that you would like to share. As you can see, many sections in this work have yet to be written, and so we would be happy to consider contributions aimed there as well. Keep in mind that we are writing for an introductory audience (who may or may not have foundational digital literacy skills) and that we are writing for a Linux-based environment. Whether you are an academic, a professional archaeologist, a graduate student, or a friend of archaeology more generally, we’d be delighted to hear from you.

Please write to Shawn at shawn dot graham at carleton dot ca by January 31st 2018 to discuss your idea and how it might fit into the overall arc of ODATE. The primary authors will discuss whether or not to invite a full draft. A full draft will need to be submitted by March 15th 2018. We will then offer feedback. The piece will go up on this draft site by the end of that month, whereupon it will enjoy the same open review as the other parts. Accepted contributors will be listed as full authors, e.g. ‘Graham, Gupta, Carter, Compton, YOUR NAME. 2018. The Open Digital Archaeology Textbook Environment, eCampusOntario…’

For help on how to fork, edit, make pull requests and so on, please see this repo.

 

Featured Image: “My Life Through a Lens”, bamagal, Unsplash

R is for Archaeology: A report on the 2017 Society for American Archaeology meeting, by B Marwick

This guest post is by Ben Marwick of the University of Washington in Seattle. He reports on the R workshop and related activities at the recent SAA meeting in Vancouver.

The Society for American Archaeology (SAA) is one of the largest professional organisations for archaeologists in the world, and it just concluded its annual meeting in Vancouver, BC at the end of March. The R language has been a part of this meeting for more than a decade, with occasional citations of R Core in the posters and, more recently, the distinctive ggplot2 graphics appearing infrequently on posters and slides. However, among the few archaeologists who have heard of R, it has a reputation for being difficult to learn and use, idiosyncratic, and only suitable for highly specialized analyses. Generally, archaeology students are raised on Excel and SPSS. This year, a few of us thought it was time to administer some first aid to R’s reputation among archaeologists and generally broaden awareness of this wonderful tool. We developed a plan for this year’s SAA meeting to show our colleagues that R is not too hard to learn, that it is useful for almost anything that involves numbers, and that it has lots of fun and cool people who use it to get their research done quicker and easier.

Our plan had three main elements. The first element was the debut of two new SAA Interest Groups. The Open Science Interest Group (OSIG) was directly inspired by Andrew MacDonald’s work founding the ESA Open Science section, with the OSIG being approved by the SAA Board this year. It aims to promote the use of preprints (e.g. SocArXiv), open data (e.g. tDAR, Open Context), and open methods (e.g. R and GitHub). The OSIG recently released a manifesto describing these aims in more detail. At this SAA meeting we also saw the first appearance of the Quantitative Archaeology Interest Group, which has a strong focus on supporting the use of R for archaeological research. The appearance of these two groups shows the rest of the archaeological community that there is now a substantial group of R users among academic and professional archaeologists, and that they are keen to get organised so they can more effectively help others who are learning R. Some of us in these interest groups were also participants in fora and discussants in sessions throughout the conference, and so had opportunities to tell our colleagues, for example, that it would be ideal if R scripts were available for certain interesting new analytical methods, or that R code should be submitted when manuscripts are submitted for publication.

The second element of our plan was a normal conference session titled ‘Archaeological Science Using R’. This was a two-hour session of nine presentations by academic and professional archaeologists; the presentations were live-code demonstrations of innovative uses of R to solve archaeological research problems. We collected R markdown files and data files from the presenters before the conference, and tested them extensively to ensure they’d work perfectly during the presentations. We also made a few editorial changes to speed things up a bit, for example using readr::read_csv instead of read.csv. We were told in advance by the conference organisers that we couldn’t count on good internet access, so we also had to ensure that the code demos worked offline. On the day, the live-coding presentations went very well, with no-one crashing and burning, and some presenters even doing some off-script code improvisation to answer questions from the audience. At the start of the session we announced the release of our online book containing the full text of all contributions, including code, data and narrative text, which is online at https://benmarwick.github.io/How-To-Do-Archaeological-Science-Using-R/. We could only do this thanks to the bookdown package, which allowed us to quickly combine the R markdown files into a single, easily readable website. I think this might be a new record for the time from an SAA conference session to a public release of an edited volume. The online book also uses Matthew Salganik’s Open Review Toolkit to collect feedback while we’re preparing this for publication as an edited volume by Springer (go ahead and leave us some feedback!). There was a lot of enthusiastic chatter later in the conference about a weird new kind of session where people were demoing R code instead of showing slides. We took this as an indicator of success, and received several requests for it to be a recurring event in future meetings.
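
For the curious, the mechanics are simple enough to sketch in a few lines. The file names below are invented for illustration: a _bookdown.yml lists the contributed chapter files, a single render call builds the whole site, and the read_csv swap mentioned above is a one-line change in each demo.

# _bookdown.yml (hypothetical chapter list):
# rmd_files: ["index.Rmd", "01-lithics.Rmd", "02-ceramics.Rmd"]

# build all the R markdown files into a single, easily readable website
bookdown::render_book("index.Rmd", output_format = "bookdown::gitbook")

# the sort of small editorial change we made for speed:
# dates <- read.csv("radiocarbon_dates.csv")       # base R version
dates <- readr::read_csv("radiocarbon_dates.csv")  # faster, returns a tibble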

The third element of our plan was a three-hour training workshop during the conference to introduce archaeologists to R for data analysis and visualization. Using pedagogical techniques from Software Carpentry (i.e. sticky notes, live coding and lots of exercises), Matt Harris and I got people using RStudio (and discovering the miracle of tab-complete) and modern R packages such as readxl, dplyr, tidyr, and ggplot2. At the end of three hours we found that our room wasn’t booked for anything, so the students requested a further hour of Q&A, which led to demonstrations of knitr, plotly, mapview, sf, some more advanced ggplot2, and a little git. Despite being located in the Vancouver Hilton, this was another low-bandwidth situation (which we were warned about in advance), so we loaded all the packages onto the students’ computers from USB sticks. In this case we downloaded package binaries for both Windows and OSX, put them on the USB sticks before the workshop, and had the students run a little bit of R code that used install.packages() to install the binaries to the .libPaths() location (for Windows) or untar’d the binaries to that location (for OSX). That worked perfectly, and seemed to be a very quick and lightweight method to get packages and their dependencies to all our students without using the internet. Getting the students started by running this bit of code was also a nice way to orient them to the RStudio layout, since they were seeing it for the first time.
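
For anyone who wants to replicate the USB-stick trick, here is a rough sketch of the sort of snippet we had the students run. The paths and package list are hypothetical and the file names are simplified (real binaries carry version numbers); it assumes the binaries have already been copied onto the stick.

# install packages from binaries on the USB stick, no internet required
pkgs <- c("readxl", "dplyr", "tidyr", "ggplot2")

if (.Platform$OS.type == "windows") {
  # Windows: point install.packages() at the local .zip binaries
  zips <- file.path("E:/binaries/windows", paste0(pkgs, ".zip"))
  install.packages(zips, repos = NULL, type = "win.binary")
} else {
  # macOS: untar the .tgz binaries straight into the library location
  tgzs <- file.path("/Volumes/USB/binaries/mac", paste0(pkgs, ".tgz"))
  for (f in tgzs) utils::untar(f, exdir = .libPaths()[1])
}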

This workshop was a first for the SAA, and was a huge success. Much of this is due to our sponsors, who helped us pay for the venue hire (which was surprisingly expensive!). We got some major support from the Microsoft Data Science User Group (which we learned about from a post by Joseph Rickert) and Open Context, as well as cool stickers and swag for the students from RStudio, rOpenSci, and the Centre for Open Science. We used the stickers like tiny certificates of accomplishment; for example, when our students produced their first plot, we handed out the ggplot2 stickers as a little reward.

Given the positive reception of our workshop, forum and interest groups, our feeling is that archaeologists are generally receptive to new tools for working with data, perhaps more so now than in the past (i.e. pre-tidyverse). Younger researchers seem especially motivated to learn R because they may have heard of it, but not had a chance to learn it because their degree program doesn’t offer it. If you are a researcher in a field where R (or any programming language) is only rarely used by your colleagues, now might be a good time to organise a rehabilitation of R’s reputation in your field. Our strategy of interest groups, code demos in a conference session, and a short training workshop during the meeting is one that we would recommend, and we imagine will transfer easily to many other disciplines. We’re happy to share more details with anyone who wants to try!

Notes on running the DH-USB

Our digital archaeology textbook will be intertwined with an instance of the DHBox. One of the participants in that project is Jonathan Reeve, who has been building a version that runs off a bootable USB. So naturally, I had to give it a spin. I ran out, got a new usb stick and….

…had to figure out BitTorrent. Every time I went to install the client, every browser I had on every machine kept blocking it as malicious. Normally I can work around this sort of thing, but it was really pernicious. Turned out, my stable of computers were all quite happy with uTorrent instead. With that installed, I grabbed the torrent files from the DH-USB repository and let them do their magic. It took 3 hours to get the full .img file.

…had to figure out how to put that .img onto a usb stick such that it would be bootable. Unetbootin should’ve worked, but didn’t. In the end, I had to do it from the command line, per the ‘alternative instructions’:

MacOS: Identify the label of your USB drive with the command

$ diskutil list

Then unmount the disk, replacing diskX with your drive name:

$ diskutil unmountDisk /dev/diskX

Finally, write the image to the stick, again replacing /path/to/dh-usb.img with the path to the .img file, and rdiskX with the name of your disk:

$ sudo dd if=/path/to/dh-usb.img of=/dev/rdiskX bs=1m

Then I had to figure out how to get the damned machines to boot from the stick rather than their own hard drive. On the Mac, this was easy – just hold the alt key down while the machine powers up, and you can then select the usb stick. NB: you can also, it seems, select whatever wifi network happens to be in the air at this stage, but if you do this (I did) everything will go sproing shortly thereafter and the stick won’t boot. So don’t do this. On the Windows 10 machine I had access to, booting up from a disk or stick is no longer the straightforward ‘hold down f11’ or whatever anymore. No, you have to search for the ‘advanced startup’ options, and then find the boot-from-disk option, where you specify the usb stick. THEN the machine powers down and up again… and will tell you that the security settings won’t let you proceed any further. Apparently, there’s a setting somewhere in the BIOS that you have to switch, but as it wasn’t my machine and I’d had enough, I abandoned it. Windows folks, godspeed. Incidentally, for various reasons, computers much older than about five years are out of luck, as some key pieces of ur-code have changed in recent years:

[you need] a modern system that supports UEFI. Legacy BIOS boot may be possible, but it hasn’t been extensively tested

I had some other issues subsequently as I tried to install R and RStudio, but I’ve sorted those out with Jonathan and by the time you read this, they probably won’t be issues any more (but you can click on the ‘closed issues’ on the repo to see what my issues were). One thing that drove me nuts was trying to persuade Arch Linux to find the damned wifi.

I eventually stumbled across this re Ubuntu: https://help.ubuntu.com/community/WifiDocs/Driver/bcm43xx

so tried this:

$ lspci -vvnn | grep -A 9 Network

and saw that I had the kernel modules brcmfmac and wl, but none in use. So I tried this:

$ sudo modprobe brcmfmac

and ran the first command again; the kernel module was now in use! Then:

$ sudo wifi-menu

…and connected. I kept getting connection errors, though; going to settings > network and connecting through there did the trick, ta da!

~o0o~

There you have it. A portable DH computer on a stick, ready to go. For use in classes, it’s easy enough to imagine just buying a bunch of usb sticks and filling them up with not only the computing parts but also the data sets, supporting documentation, articles, etc., and distributing them in class; for my online class this summer maybe the installation-onto-the-stick steps can be made more streamlined… of course, that’s what DH-Box prime is for, so I’ve asked the kind folks over in the school of computer science if they wouldn’t mind installing it on their open stack. We shall see.

The OpenContext & Carleton Prize for Archaeological Data Visualization

We are pleased to announce that the 1st OpenContext & Carleton University Data Visualization Prize is awarded to the ‘Poggio Civitate VR Data Viewer’, created by the team led by Russell Alleen-Willems.

The team hacked this data viewer together over a weekend as a proof-of-concept. In the typical spirit of the digital humanities and digital archaeology, they developed a playful approach to exploring the materials, using the HTC Vive SDK to ingest Open Context data as JSON and then place it in a relative 3d space. We particularly appreciated their candour and self-assessment of what worked, and what didn’t, about their project, and their plans for the future. We look forward to seeing their work progress, and hope that this prize will help them move forward. Please explore their project at https://vrcheology.github.io/ .

Congratulations to the team, and thank you to all who participated. Please keep your eyes peeled for next year’s edition of the prize!

The team members are:

  • Russell Alleen-Willems (Archaeology domain knowledge, Unity/C# Scripting)
  • Mader Bradley (JSON Data Parsing/Unity Scripting)
  • Jeanpierre Chery (UX Design, Unity/C# Scripting)
  • Blair Lyons (Unity/C# Scripting)
  • Aileen McGraw (Instructional Design and Program Storytelling)
  • Tania Pavlisak (3D modeler)
  • Jami Schwarzwalder (Git Management, Team Organization, and Social Media)
  • Paul Schwarzwalder (Unity/C# Scripting)
  • Stephen Silver (Background Music)

Regarding Slack

Zach Whalen is team teaching a course using Slack at the moment. He writes up his initial observations on using it here. It’s a face-to-face course with Slack serving as the catalyst bringing all of the different sections together. I quizzed him and Lee Skallerup Bessette this morning on Twitter, to see how their experience has differed from my own.

That’s given me much to think about. Once the dust cleared and my student numbers stabilized, about half the course has engaged (or will engage) with each other on Slack. Some of that 50% are power users whom I see a lot of; some are once-or-twice’ers. Then there are my ghosts. Lord knows what they’re up to. But Slack is certainly not what they’ve been trained to expect in terms of online courses round here. I suspect I might be the only one using Slack in the context of a fully online course at this university. When round two of this course runs next year, some things to modify:

  1. More energy into explaining how Slack works, my expectations, and some guided exercises to get students into the habit of using Slack. I need to wean ’em off discussion boards, multiple choice, short answers, and the rest of the Moodle bag of tricks. This is a really big issue that needs serious consideration and unpacking. I expect there’ll be some more posts on this eventually.
  2. Total rookie error: I assumed that what I was seeing on Slack was what my students were seeing.
  3. I only discovered the /feed command today as a result of reading Zach’s piece. I would’ve used that right from the start to grab everyone’s feed from the domains (we’re set up with Reclaim Hosting) into an appropriate channel, to make that hum that signals a vibrant space.
  4. My students most emphatically do not want video chat. The /appear and /hangouts integrations are never used. I promised folks here I’d use Big Blue Button as well; those sessions have been …poorly… attended. So you’d think they’d be all over the text-based interactions of Slack…

Anyway. Things are improving. Next time round will be even better.


Historian’s Macroscope – how we’re organizing things

‘One of the sideshows was wrestling’ from National Library of Scotland on Flickr Commons; found by running this post through http://serendipomatic.org

How do you coordinate something as massive as a book project, between three authors across two countries?

Writing is a bit like sausage making. I write this, thinking of Otto von Bismarck, but Wikipedia tells me:

  • Laws, like sausages, cease to inspire respect in proportion as we know how they are made.
    • As quoted in University Chronicle. University of Michigan (27 March 1869) books.google.de, Daily Cleveland Herald (29 March 1869), McKean Miner (22 April 1869), and “Quote… Misquote” by Fred R. Shapiro in The New York Times (21 July 2008); similar remarks have long been attributed to Otto von Bismarck, but this is the earliest known quote regarding laws and sausages, and according to Shapiro’s research, such remarks only began to be attributed to Bismarck in the 1930s.

I was thinking just about the messiness rather than inspiring respect; but we think there is a lot to gain when we reveal the messiness of writing. Nevertheless, there are some messy first-first-first drafts that really ought not to see the light of day. We want to do a bit of writing ‘behind the curtain’, before we make the bits and pieces visible on our Commentpress site, themacroscope.org. We are all fans of Scrivener, too, for the way it allows the bits and pieces to be moved around, annotated, rejected, resurrected and so on. Two of us are Windows folks, the other is on a Mac. We initially tried using Scrivener and GitHub, as a way of managing version control over time and providing access to the latest version simultaneously. This worked fine, for about three days, until I detached the HEAD.

Who knew that decapitation was possible? Then we started getting weird line breaks and dropped index cards. So we changed tack and moved our project into a shared Dropbox folder. We know that with Dropbox we absolutely can’t have more than one of us in the project at the same time. We started emailing each other to say, ‘hey, I’m in the project….now. It’s 2.05 pm’, but that got very messy. We installed yshout and set it up to log our chats. Now, we can just check to see who’s in, and leave quick memos about what we were up to.

Once we’ve got a bit of the mess cleaned up, we’ll push bits and pieces to our Commentpress site for comments. Then, we’ll incorporate that feedback back in our Scrivener, and perhaps re-push it out for further thoughts.

One promising avenue that we are not going down, at least for now, is to use Draft. Draft has many attractive features, such as multiple authors, side-by-side comparisons, and automatic pushing to places such as WordPress. It even does footnotes! I’m cooking up an assignment for one of my classes that will require students to collaboratively write something, using Draft. More on that some other day.

Fantastic PhotoFly: 3d Scanning for the Rest of Us

I’ve been amazed for some time by what can be achieved with LIDAR, a game engine, and a bit of processing power. A few years ago, Digital Urban posted a series of tutorials for getting architectural models from Sketchup or 3d Max into the Oblivion game engine, as a way for exploring built space. I was always blown away by that. The issue I had was in getting the 3d model created in the first place.

Enter PhotoFly, from Autodesk. PhotoFly is currently free, and it works magic. It transforms your computer and digital camera into a 3d scanner. You take a series of overlapping photographs, upload them to the program, and the program sends them to Autodesk for processing. The system works out three-dimensional points from the photographs and uses these to stitch your images into a wire-mesh model (with your photos as the overlay). The results are impressive – once you figure out the trick of taking sufficiently redundant photographs to provide the necessary information.

I used a Kodak EasyShare Z612 camera, and tried four different scenes before I finally started to get the hang of it. My first was a ceramic coffee mug with a glossy white finish. The shininess of the mug confused the processor. I then tried a Campbell’s Soup can (my nod to Mr. Warhol). I put it on a lazy susan, set my camera up, and rotated the lazy susan through 5-degree increments, thinking I would get good coverage. This did not work (which I would’ve known had I watched the tutorial videos, but really, who has time for that?). The reason it did not work is that the algorithms that stitch everything together count on differences in perspective, focal depth, and so on to work out the relative placement of the camera for each shot, and hence the distance between the focus point and overlapping points that can be identified in the shots. At least, I believe that’s the reason. I tried again, moving around the soup can, but this time I didn’t get enough overlap.

My next attempt was to do my office (imagine, scanning an interior space!). I had more success this time, but again my overlap and the sheer clutter in here defeated me. Finally, I put a small toy car (about 5 inches long by 2 inches wide) on a chair, and proceeded to take about 20 photographs of it from every angle, varying the depth and distance. By this point, I was starting to get the hang of it, and it uploaded quite well. The ‘draft’ model came back more or less complete, and I saved it and sent it to youtube:

In the ‘draft’ mode, you can select triangles to clip, provide real-world coordinates and measurements (useful for creating scans of buildings and interior spaces especially) and basically do enough pre-processing that the model looks just about complete. One then changes to either ‘mobile’, ‘standard’, or ‘high’ quality and the model is sent back to Autodesk for more processing. At this point, the model can be exported in a variety of formats, especially CAD formats like DWG. This is where I get very excited. Sketchup Pro for instance can import DWG. And Sketchup can be used to create AR. I could imagine scanning an interior space, sending the resultant model to Sketchup and then into AR, tying the model to a QR code. Since my model can be sized 1:1, it should be possible for instance to scan say a cave-shelter, and then step into it somewhere else (the middle of a playing field, for example). I’m very excited about the possibilities; more on these as I explore.

My final model of the toy car is below. All in all, it took approximately 30 minutes to get to this stage. Obviously, my model has some flaws in it, but for 30 minutes work… not bad.

Problems with the Software

From time to time, the software just mysteriously died during the upload process. Save early, and save often. Also, exporting to youtube was often fraught. One enters the mysterious world of picking the right quality and codec to make it work. Once I selected ‘mobile’ for everything, things seemed to work better.

9/10 stars from the electric archaeologist. This is the most exciting piece of software I’ve played with in ages.

HeritageCrowd.org: crowdsourcing cultural heritage

I have a small summer project running, using the Ushahidi and Omeka platforms for crowdsourcing local history, called HeritageCrowd. I have two Carleton University undergraduate students, Guy Massie and Nadine Feuerherm, helping me with this; we’re blogging the experience here. Please check us out; comments & critiques (and submissions, of course!) are most welcome.

Guy writes,

This project, headed by Professor Shawn Graham and students Nadine Feuerherm and Guy Massie at Carleton University, rethinks the way that people share and interact with local history and heritage. Through the use of a number of technologies such as text messaging, voice mail, and the internet, we will test the possibility for creating a database of local history knowledge by asking for contributions from the community. This type of approach is known as “crowdsourcing,” and while it has been used to gather information about ongoing events such as the violence following the 2008 elections in Kenya, it has yet to be used in this way in the area of heritage and local history. The contributions made to the project will be stored and displayed on a website for the public. On our ‘stories‘ page, students and researchers can do further research on the items contributed by the public, creating exhibitions and other digital stories.

In the same way that the Memory Project is working to record the stories of veterans from the Second World War, we believe that there is an untapped resource in the historical knowledge of members from the community. The different ways of contributing to this project mean that anyone with telephone or internet access can share what they know about a place, event, building, or other topic related to local history. Our goal is to create an automated method of storing and digitally curating local heritage and history. In this way, our research hopes to benefit rural areas, and other regions of the world, that may otherwise face obstacles in attracting interest and attention about local history from the larger public.

This project is funded by a Junior Research Fellowship, and will make extensive use of the Omeka and Ushahidi web-based platforms. It will use the Upper Ottawa Valley as a “testing ground” for the project, and in particular the Pontiac MRC region in Western Quebec.

 

Of Hockey, Sympathetic Magic, and Digital Dirt

We won tickets to see the Ottawa – Tampa Bay game on Saturday night. 100 level. Row B. This is a big deal for a hockey fan, since those are the kind of tickets that are normally not within your average budget. More to the point of this post, it put us right down at ice level, against the glass.

Against the glass!!!

Normally we watch a hockey game on TV, or from up in the nose-bleeds. From way up there, you can see the play develop, the guy out in the open (“pass! pass! pleeeease pass the puck!” we all shout, from our aerie abode), same way as you see it on the tv.

But down at the glass…. ah. It’s a different scene entirely. There is a tangle of legs, bodies, sticks. It is hectic, confusing. It’s fast! From above, everything unfolds slowly… but at the ice surface you really begin to appreciate how fast these guys move. Two men, skating as fast as they can, each one weighing around 200 pounds, slamming into the boards in the race to get the puck. For the entire first period, I’d duck every time they came close. I’d jump in my chair, sympathetic magic at work as I willed the hit, made the pass, launched the puck.

For three wonderful periods, I was on the ice. I was in the game. I was there.

So…. what does this have to do with Play the Past? It has to do with immersion, and the various kinds that may exist or that games might permit. Like sitting at the glass at the hockey game, an immersive world (whether Azeroth or somewhere else) doesn’t have to put me in the game itself; it’s enough to put me in close proximity, and let that sympathetic magic take over. Cloud my senses; remove the omniscient point of view, and let me feel it viscerally. Make me care, and I’ll be quite happy that I don’t actually have my skates on.

‘Good enough virtuality’ is what Ed Castronova called it a few years back, when Second Life was at the top of its hype cycle. But we never even began to approach what that might mean. I think perhaps it is time to revisit those worlds, as the ‘productivity plateau’ may be in sight.

In an earlier post, Ethan asked, where are the serious games in archaeology? My response is, ‘working on it, boss’.  A few years ago, I was very much enamored of the possibilities that Second Life (and other similar worlds/platforms) could offer for public archaeology. I began working on a virtual excavation, where the metaphors of archaeology could be made real, where the participant could remove contexts, measure features, record the data for him or herself (I drew data in from Open Context; I was using Nabonidus for an in-world recording system).  But I switched institutions, the plug was pulled, and it all vanished into the aether (digital curation of digital artefacts is a very real and pressing concern, though not as discussed as it ought to be). I’m now working on reviving those experiments and implementing them in the Web.Alive environment. It’s part of our Virtual Carleton campus, a platform for distance education and other training situations.

My ur-dig for the digital doppelganger comes from a field experience program at a local high school that I helped direct. I’m taking the context sheets, the plans, the photographs, and working on the problems of digital representation in the 3d environment. We’ve created contexts and layers that can be removed, measured, and planned. Ideally, we hope to learn from this experience the ways in which we can make immersion work. Can we re-excavate? Can we represent how archaeological knowledge is created? What will participants take away from the experience? If all those questions are answered positively, then what kinds of standards would we need to develop, if we turned this into a platform where we could take *any* excavation and procedurally represent it? I’m releasing students into it towards the start of next month. We’ve only got a prototype up at the moment, so things are still quite rough.

The other part of immersion that sometimes gets forgotten is this: what do people do when they’re there? That’s the sympathetic magic, and maybe it’s the missing ingredient from the earlier hype about Second Life. There was nothing to do. In a world where ‘anything is possible’, you need rules, boundaries, purpose. We sometimes call it gamification, meaningfication, crowdscaffolding, and roleplaying. Mix it all together, and I don’t think there’s any reason for a virtual world not to be as exciting, as meaningful, as being there with your nose at the glass when Spezza scores.

Or when you uncover something wonderful in the digital dirt. But that’s a post for the future, when my students return from their virtual field season.

(cross-posted at Play the Past)

Google Goggles: Augmented Reality

(Image: Google Goggles translating on the fly)

Time was, if you wanted some augmented reality, you had to upload your own points of interest into something like Wikitude or Layar. However, in its quest for world domination, Google seems to be working on something that will render those services moot: Google Goggles (silly name, profound implications).

As Leonard Low says on the MLearning Blog:

The official Google site for the project (which is still in development) provides a number of ways Goggles can be used to accomplish a “visual search”, including landmarks, books, contact information, artwork, places, logos, and even wine labels (which I anticipate could go much further, to cover product packaging more broadly).

So why is this a significant development for m-learning? Because this innovation will enable learners to “explore” the physical world without assuming any prior knowledge. If you know absolutely nothing about an object, Goggles will provide you with a start. Here’s an example: you’re studying industrial design, and you happen to spot a rather nicely-designed chair. However, there’s no information on the chair about who designed it. How do you find out some information about the chair, which you’d like to note as an influence in your own designs? A textual search is useless, but a visual search would allow you to take a photo of the chair and let Google’s servers offer some suggestions about who might have manufactured, designed, or sold it. Ditto unusual insects, species of tree, graphic designs, sculptures, or whatever you might happen to be interested in learning.

Just watch this space. I think Google Goggles is going to rock m-learning…

Now imagine this in action with an archaeological site, and Google connects you with something less than what we as archaeological professionals would like to see. Say it was some sort of aboriginal site with profound cultural significance – but the site it connects with argues for the opposite. Another argument for archaeologists and historians to ‘create signal’ and to tell Google what’s important.

See the video:

Civil War Augmented Reality Project

Over on Kickstarter, I’ve come across the ‘Civil War Augmented Reality Project‘.  I can imagine many ways of incorporating a bit of AR/VR on an historic site, and I think what these folks are proposing is eminently doable. It’s easy to get caught up in the tech side of such projects, so their focus on the end user is laudable. From their project page:

The Civil War Augmented Reality Project was conceived by several public educators with technology experience and a desire to offer more interactivity to students and the general public visiting historic sites. The objective of the project is to develop and implement augmented reality services related to the American Civil War in Pennsylvania, and to modify soon to be released tablet personal computers to allow the general public a chance to experience the applications. The project’s inception is planned to give ample development time in the run up to the Sesquicentennial of the Civil War, beginning in 2011. It is hoped that early support could generate interest in Maryland and Virginia.
We also propose to construct stationary devices patterned after the “pay binoculars” often found at scenic overlooks. These devices will offer a virtual geographic view from a few hundred yards above the user. Physically swiveling the viewer left and right changes the direction of the view in real time, just as swiveling up and down changes the view. The intuitive nature of the device is intended to invite “non-tech oriented” persons to try the experience, and learn more about AR and the Civil War. We propose that these binoculars be set up at locations across the region touched by fighting in the war. In order to give the user a sense of the historical connections between each location, a nearby screen will project realtime webcam images of people using the devices at other locations.

Second Site: Keith Challis’ work on archaeological visualization

I learned this morning of Keith Challis’ blog, ‘Second Site‘. Keith is a researcher with Birmingham University’s ‘Visual and Spatial Technology Center’.

Keith is exploring ways of using game engines to render & explore archaeological landscapes (a great use of LIDAR if ever I saw one). In a recent post, ‘Ideas of Landscape’, he writes:

One of the key ideas behind using computer games to visualise archaeological landscapes is that they take us away from the god-like view from above that typical computer-based visualisation provides.  In Ideas of Landscape, Matthew Johnson reflects on the dichotomy between the romantic, Wordsworthian view of landscape, rooted amongst other things in the view from above, and Hoskin’s assertion that “the real work [in the study of landscape] is accomplished by the men and women with muddy boots…”
Computer visualisation, particularly of remotely collected landscape data (for example the airborne lidar used here) has almost inevitably forced us to explore only one path; landscapes become data objects, interpreted as a whole and understood as abstract entities, devoid of sense and experience.

The first person view of game-based visualisation places us back in the realm of “muddy boots” landscape is explored and experienced, like Hoskins we “explore England on foot”.  Does that improve our understanding of landscape?  At one level probably not, arguably morphology of landscape is best appreciated from above, but landscape is more than form and function, and the relationship between elements of landscape is better appreciated from the ground.

This connected (in my head, at least) with some ideas I’ve long held about the way landscape-as-social-network can give us something of that ‘muddy boots’ experience, in terms of landscape as culture. That at least is the premise of one paper of mine, ‘The Space Between’ (full text).

The key thing to remember, I suppose, is that the cartographic understanding of landscape is a fairly recent innovation, and that we miss important aspects of human interaction with the land if the map is our only tool.