Bless your little cotton socks: reflecting on Carleton’s Data Day

I went to Carleton’s ‘Big Data Day’ yesterday. My student, Hollis, had entered a poster in the poster competition, detailing his approach to data mining the audio recordings of THATCamp Accessibility. Looks like he and I were the only two from the humanities end of the spectrum at the event. The day opened with a panel discussion, then a series of presentations from faculty involved in Carleton’s new MA in Data Science (which it appears has support from IBM; there’s a webpage at Carleton but it has a warning on it to all and sundry not to share the link. So go to carleton.ca and do this search). Carleton has aspirations in this regard. The program, and the event, were a collaboration among Computer Science, Biology, Business, Systems and Computer Engineering, Economics, and Geography and Environmental Studies.

(And of course, not to be confused with our MA in Digital Humanities, which does not have support from a major tech firm).

Hollis explains his research at Carleton’s Data Day

Anyway, some observations on what this particular approach to big data seems to be about:

1. ‘Big data’ is not defined in this forum.

2. Data are an unalloyed good.

3. Data have agency.

4. “You can have privacy, or you can have cool online stuff. Which do you want?” – this from the Shopify data analyst (biology grad, self-described hacker).

I had a question for the panel. I asked about algorithmic prisons: the uses to which data are put. I pointed out that the data, by themselves, mean nothing; what matters is interpretation, power, and control, and the creation of algorithms that embody a view of the world which may not itself be warranted. Wasn’t there a role for the humanities in all of this?

Well bless my little cotton socks. It is rare that I am on the receiving end of techno-condescension. I hope to god I’ve never done this to someone else – made assumptions about their level of engagement or knowledge of the field this way – but it was apparent that I lost the panel the moment I said, ‘hi, I’m from the history department’. The Shopify guy proceeded to lecture me on disruption, and that even the book was a disruption; but we got over that. Unintended side-effects? Sure, there are always those, but don’t worry.

Big data will save us.

Shopify guy did have one interesting point to make: it’s not ‘big data’, but ‘big interest in data’. He then framed the 20th century as the century of deterministic solutions, whereas the 21st century will be the one of statistical solutions.

Oh good.

On the privacy issue, one comment from the floor in response to the ‘you can have privacy or you can have cool online tools’ line was, ‘you’re just not trying hard enough’. The day progressed.

There were good points throughout the day; one of the data librarians from Carleton talked about research data management plans.

That’s why you need humanities.

As I was sitting in the room, I was also following the tweets from #caa2014 and #saa2014. At those two conferences, archaeologists are wrestling with big data issues themselves. Hell, archaeology could be the original big data (if measured in sheer metric tonnage… but also in more conventional measurements of data). But what was striking was the contrast between the two disciplinary perspectives. The archaeologists were a lot more humble in the face of dealing with big data describing the human past. There were tweets about uncertainty (about how to model it; about how to think through its implications). There were tweets about power, control, ownership, engagement, community. Tweets about liberation, tweets about being chained. (I’m freewheeling right now, from memory, rather than linking through).

Humility was lacking at Data Day. More data, better data, and our algorithms are an unalloyed good.

I’m not against making money. But… but… but… not like this.

So. The takeaway from this day? The big-data, analytics-for-business crowd needs the humanities. There’s space there already in their discussions, though they don’t seem to recognize it. Or at least not yesterday.

Yes.

 

Hollis Peirce, George Garth Graham Research Fellow

Hollis Peirce on Twitter: https://twitter.com/HollPeirce

I am pleased to announce that the first George Garth Graham Undergraduate Digital History Research Fellow will be Mr. Hollis Peirce.

Hollis is a remarkable fellow. He attended the Digital Humanities Summer Institute at the University of Victoria in the summer of 2012. At DHSI he successfully completed a course called “Digitization Fundamentals and Their Application”. In the fall semester of 2012 he was the impetus behind, and helped to organize,  THATCamp Accessibility on the subject of the impact of digital history on accessibility in every sense of the word.

Hollis writes,

Life for me has been riddled with challenges.  The majority of them coming on account of the fact that I, Hollis Peirce, am living life as a disabled individual with Congenital Muscular Dystrophy as many things are not accessible to me.  However, I have never let this fact hold me back from accomplishing my goals.  Because of this, when I first started studying history I knew I was not choosing an easy subject for a disabled individual such as myself.  All those old, heavy, books on high library shelves that history is known for made it one of the most inaccessible subjects possible to study.  All that changed however, when I discovered digital history.

It was thanks to a new mandatory class for history majors at Carleton University called The Historian’s Craft taught by a professor named Dr Shawn Graham.  This course was aimed at teaching students all about how to become a historian, and how a historian is evolving through technology.  At that moment the idea for ‘Accessibility & Digital History’ came to mind.  From that point on many steps have been taken to advance my studies in this field, which has led to being selected as the first George Garth Graham Undergraduate Digital History Research Fellow.

Hollis and I have had our first meeting, about what his project might entail. When I initially cooked this idea up, I thought it would allow students the opportunity to work on my projects, or those of my colleagues around the university. As we chatted about Hollis’ ideas (and I batted around some of my own), I realized that I had the directionality of this relationship completely backwards.

It’s not that Hollis gets to work on my projects. It’s that I get to work on his.

Here’s what we came up with.

At THATCamp Accessibility, we recorded every session. We bounced around the idea of transcribing those sessions, but realized that that was not really feasible for us. We started talking about zeroing in on certain segments, to tell a history of the future of an accessible digital humanities… and ideas started to fizz. I showed Hollis some of Jentery Sayers’ work, especially his work with Scalar.

Jentery writes,

the platform particularly facilitates work with visual materials and dynamic media (such as video and audio)….it enables writers to assemble content from multiple sources and juxtapose them with their own compositions.

Can we use Scalar to tell the story of THATCamp Accessibility in a way that captures the spontaneity, creativity, and excitement of that day and highlights the issues of accessibility that Hollis wants to explore? And if we can, how can we make it accessible for others (screen readers, text-to-speech, etc.)? And if we focus on telling history with an eye to accessibility (oh, how our metaphors privilege certain senses, ways of knowing!) maybe there will be lessons for telling history, full stop?

Stay tuned! Hollis is setting up his blog this week, but he’ll be posting over at http://hollispeirce.grahamresearchfellow.org/

Some Assembly Required: teaching through/with/about/by/because of, the Digital Humanities (slides & notes)

I’m giving a keynote address to the Canadian Network for Innovation in Education conference, at Carleton on Thursday (10.30, River Building). I’ve never done a keynote before, so I’ll confess to being a bit nervous. ‘Provoke!’ I’ve been told. ‘Inspire! Challenge!’ Well, here goes….

These are the slides and the more-or-less complete speaker’s notes. I often write things out, and then completely ad-lib on the day, but this is more or less the flavour I’m going for.

http://www.slideshare.net/DoctorG/some-assembly-required-teaching-throughwithaboutbybecause-of-the-digital-humanities

[Title]

I never appreciated how scary those three words were until I had kids. ‘Some assembly required’. That first Christmas was all, slide Tab A into Slot B. Where’s the 5/8ths gripley? Is that an Allen key? Why are there so many screws left over? The toys, with time, get broken, get fixed, get recombined with different play sets, are the main characters and the exotic locales for epic stories. I get a lot of mileage out of the stories my kids tell and act out with these toys.

My job is the DH guy in the history department. DH, as I see it, is a bit like the way my kids play with the imperfectly built things – it’s about making things, about breaking things, about being playful with those things. This talk is about what that kind of perspective might imply for our teaching and learning.

[2]

I don’t know what persuaded my parents that it’d be a good idea to spend $300 in 1983 dollars on a Vic-20, but I’m glad they did. You turn on your iPad, it all just happens magically, whoosh! In those days, if you had a computer, you had to figure out how to make it do stuff, the hard way. A bit disappointing, that first ‘Ready’ prompt. Ready to do what? My brothers and I wanted to play games. So, we sat down to learn how to program them. If you had a Vic-20, do you remember how exciting it was when that ball first bounced off the corners of your screen? A bit like the apes in the opening scene of ‘2001’. At least, in our house.

[3]

‘WarGames’, the film with Matthew Broderick. This scared me; but I loved the idea of being able to reach out to someone else, someone far from where I lived in Western Quebec. So we settled for occasional trips to the Commodore store in Ottawa, bootleg copies of Compute! Magazine, and my most treasured book, a ‘how to make adventure games’ manual for kids, that my Aunt purchased for me at the Ontario Science Centre.

[4]

Do you remember old-school text adventures? They’re games! They promote reading! Literacy! They are a Good Thing. Let’s play a bit of this game, ‘Action Castle’, to remind us how they worked.

To play an interactive fiction is to foreground how the rules work; it’s easy to see, with IF. But that same interrogation needs to happen whenever we encounter digital media.

[5]

Games like BioShock – a criticism of Randian philosophy. Here, the interplay between the rules and the illusion of agency is critical to making the argument work.

When you play any kind of game, or interact with any kind of medium, you generally achieve success once you begin to think like the machine. What do games teach us? How to play the game: how to think like a computer. This is a ‘cyborg’ consciousness. The ‘cyb’ in ‘Cyborg’ comes from the greek for ‘governor’ or ‘ship’s captain’. Who is doing the governing? The code. This is why humanities NEEDS to consider the digital. It’s too important to leave to the folks who are already good at thinking like machines. This is the first strand of what ‘digital humanities’ might mean.

[6]

A second strand comes from that same impulse that my brothers and I had – let’s make something! Trying to make something on the computer inevitably leads to deformation. This deformation can be on purpose, like an artist; or it can be accidental, a result of either the user’s skill or the way that the underlying code imagines the world to work.

 [7]

‘Historical Friction’ is my attempt to realize a day-dream: what if the history of a place was thick enough to impede movement through it? I knew a) that I could find enough information about virtually anywhere on Wikipedia; b) that I could access this through mobile computing; and c) that something that often stops me in my tracks is not primarily visual but rather auditory. But I don’t have the coding chops to build something like that from scratch.

What I can do, though, is mash things together, sometimes. But when I do that, I’m beholden to design choices others have made. ‘Historical Friction’ is my first stab at this, welding someone else’s Wikipedia tool to someone else’s voice synthesizer. Let’s take a listen.
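(For the curious, the welding amounts to something like the sketch below. This is a minimal illustration in Python, assuming the MediaWiki geosearch endpoint and the pyttsx3 speech library as stand-ins; these are not the actual tools I glued together, and the coordinates are placeholders.)

```python
# A rough sketch of the Historical Friction idea: find Wikipedia pages near
# a location and read their titles aloud, so that the density of history
# literally slows you down. Assumes the `requests` and `pyttsx3` libraries.
import requests
import pyttsx3

def nearby_articles(lat, lon, radius_m=500, limit=20):
    """Query the MediaWiki geosearch API for pages near a point."""
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,
        "gslimit": limit,
        "format": "json",
    }
    r = requests.get("https://en.wikipedia.org/w/api.php", params=params)
    r.raise_for_status()
    return [page["title"] for page in r.json()["query"]["geosearch"]]

def speak_the_place(lat, lon):
    titles = nearby_articles(lat, lon)
    engine = pyttsx3.init()
    # The more articles there are nearby, the longer the recitation -- the 'friction'.
    engine.say(f"There are {len(titles)} pieces of history within a few hundred metres.")
    for title in titles:
        engine.say(title)
    engine.runAndWait()

if __name__ == "__main__":
    speak_the_place(45.4215, -75.6972)  # downtown Ottawa, as an example
```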

…So this second strand of DH is to deform (with its connotations of a kind of performance) different ways of knowing.

[8]

A third strand of DH comes from the reflexive use of technology. My training is in archaeology. As an archaeologist, I became Eastern Canada’s only expert in Roman Brick Stamps. Not a lot of call for that.

But I recognized that I could use this material to extract fossilized social networks, that the information in the stamps was all about connections. Once I had this social network, I began to wonder how I could reanimate it, and so I turned to simulation modeling. After much exploration, I’ve realized that what I resurrect on these social networks is NOT the past, but rather the story I am telling about the past. I simulate historiography. I create a population of zombie Romans (individual computing objects) and I give them rules of behavior that describe some phenomenon in the past that I am interested in. These rules are formulated at the level of the individual. I let the zombies go, and watch how they interact. In this way, I develop a way to interrogate the unintended or emergent consequences of the story I tell about the past: a kind of probabilistic historiography.

So DH allows me to deform my own understandings of the world; it allows me to put the stories I tell to the test.

[9]recap

There’s an awful lot of work that goes under the rubric of ‘digital humanities’. But these three strands are I think the critical ones for understanding what university teaching informed by DH might look like.

[10]

Did I mention my background was in archaeology? There’s a lot that goes under the rubric of ‘experimental’ archaeology that ties in to or is congruent with the digital humanities as well. Fundamentally, you might file it under the caption of ‘making as a way of knowing’.

[11]

Experimental archaeology has been around for decades. So too has DH (and its earlier incarnation as ‘humanities computing’), which goes back to at least the 1940s and Father Busa, who famously persuaded IBM to give him a research lab and computer scientists to help him create his concordance of the word ‘praesens’ in the writings of Thomas Aquinas.

So despite the current buzz, DH is not just a fad, but rather has (comparatively) deep antecedents. The ‘Humanities’ as an organizing concept in universities has scarcely been around for much longer.

[12]

So let’s consider then what DH implies for university teaching.

[13]salt

But I feel I should warn you. My abilities to forecast the future are entirely suspect. As an undergrad, in 1994, I was asked to go on the ‘world wide web’, this new thing, and create an annotated bibliography concerning as many websites as I could that dealt with the Etruscans. The first site I found (before the days of content filters) was headlined, ‘the Sex Communist Manifesto’. Unimpressed, I wrote a screed that began, “The so-called ‘world wide web’ will never be useful for academics.”

Please do take everything I say then with a grain or two of salt.

[14]

Let me tell you about some of the things I have tried, built on these ideas of recognizing our increasingly cyborg consciousness, deformation of our materials, and of our perspectives. I’m pretty much a one-man band, so I’ve not done a lot with a lot of bells and whistles, but I have tried to foster a kind of playfulness, whether that’s role-playing, game playing, or just screwing around.

[15]epic fails

Some of this has failed horribly; and partly the failure emerged because I didn’t understand that, just like digital media, our institutions have rule sets that students are aware of; sometimes, our ‘best’ students are ‘best’ not because they have a deep understanding of the materials but rather because they have learned to play the games that our rules have created. In the game of being a student, the rules are well understood – especially in history (which is where I currently have my departmental home). Write an essay; follow certain rhetorical devices; write a midterm; write a final. Rinse. Repeat. Woe betide the prof who messes with that formula!

I once taught in a distance ed program, teaching an introduction to Roman culture class. The materials were already developed; I was little more than a glorified scantron machine. I was getting essay after essay that contained clangers along the lines of, ‘Vespasian won the civil war of AD 69, because Vespasian was later the Emperor.’ I played a lot of Civilization IV at the time, so I thought, I bet if I could get students to play out the scenario of AD 69, students would understand a lot more of the contingency of the period, that Vespasian’s win was not foreordained. I crafted the scenario, built an alternative essay around it (‘play the scenario, contrast the game’s history with ‘real’ history’), found students who had the game. Though many played it, they all opted to just write the original essay prompt. My failure was two-fold. One, ‘playing a game for credit’ did not mesh with ‘the game of being a student’; there was no space there. Two, I created a ‘creepy treehouse’, a transgression into the student’s world where I did not belong. Profs do not play games. It’d be like inviting all my students to friend me on Facebook. It was creepy.

I tried again, in a history course last winter. The first assessment exercise – an icebreaker, really – was to play an interactive fiction that recreated some of the social aspects of moving through Roman space. The player had to find her way from Beneventum to Pompeii, without recourse to maps. What panic! What chaos! I lost a third of the class that week. Again, the concern was, ‘how does playing a game fit into the game of being a student’. Learning from the previous fiasco, I thought I’d laid a better foundation this time. Nope. The thing I neglected: there is safety in the herd. No one was willing to play as an individual and submit an individual response – ‘who wants to be a guinea pig?’ might have been the name of THIS game, as far as the students were concerned. I changed course, and we played it as a group, in class. Suddenly, it was safe.

[16]epic wins

But from failure, we learn, and we sometimes have epic wins (failures almost always are more interesting than wins). Imagine if we had a system that short-circuited the game of being a student, to allow students the freedom to fail, to try things out, and to grow! One of the major fails of my Year of the Four Emperors experiment was that it was I who did all the building. It should’ve been the students. When I built my scenario, I was doing it in public on one of the game’s community forums. I’ve since started crafting courses (or at least, trying to) where the students are continually building upwards from zero, where they do it in public, and where all of their writing and crafting is done in the open, in the context of a special group. This changes the game considerably.

[17]

To many of you, this is no doubt a coals-to-newcastle, preaching-to-the-choir kind of moment.

[18]

And again, I hear you say, what would an entire university look like, if all this was our foundation? Well, it’s starting to look a little better than it did when we first asked the question…

 [19]dh will save us

…but DH has been pushed an awful lot lately. DH will save us! It’ll make the humanities ‘relevant’: to funding bodies, to government, to parents! Just sprinkle DH fairy dust, and all will be safe, right?

[19]memes & dark side

You’ve probably heard that. It’s happened enough that there’s even memes about it.

Yep. No doubt – a lot of folks are sick of hearing about ‘the digital humanities’. At the most recent MLA, there was a good deal of pushback, including a session called ‘the dark side of DH’. Wendy Chun wrote,

For today, I want to propose that the dark side of the digital humanities is its bright side, its alleged promise: its alleged promise to save the humanities by making them and their graduates relevant, by giving their graduates technical skills that will allow them to thrive in a difficult and precarious job market. Speaking partly as a former engineer, this promise strikes me as bull: knowing GIS or basic statistics or basic scripting (or even server side scripting) is not going to make English majors competitive with engineers or CS geeks trained here or increasingly abroad […] It allows us to believe that the problem facing our students and our profession is a lack of technical savvy rather than an economic system that undermines the future of our students.”

  (That’s not a DH that I recognize, by the way, as I hope you’ll have noticed given my three strands).

Now, I wasn’t at that meeting, but I saw a lot of chatter flutter by that day, as in that same session MOOCs were conflated with the digital humanities; that somehow the embrace of DH enables the proliferation of MOOCs. As Amanda French, who has coordinated an extraordinary number of digital humanities ‘THATCamp’ conferences, has said, ‘I don’t know a single digital humanist who likes MOOCs.’

We’ve heard a lot about MOOCs today, and I’m certainly in no position to critique them as I’ve never offered nor successfully finished one. But as I’ve identified the strands of DH today, there *is* an affinity with the so-called ‘cMOOC’.

[21]Know Your MOOCs

Before there was Coursera, Udacity, and glorified talking heads over the internet, there was the cMOOC. The Canadian MOOC. The personal learning environment. Isn’t it interesting that Pearson, a textbook publisher, is a heavy investor in the MOOC scene? Frankly, as xMOOCs are currently designed, they seem to me to be a challenge to publishers of textbooks rather than to teaching. We can do better, and I think DH ties well with the idea of personal learning environments. ‘Massive’ is not, in and of itself, a virtue, and we’d do well to remember that.

[22]Rainbow Castle

So, following my three strands, we’d:

 [23]

-identify the ways our institutions and our uses of technology force particular ways of thinking

-we’d deform the content we teach

-we’d set up our institutions and our uses of technology to deform the way our students think – including rethinking the ways those institutions themselves are set up.

[24]

So let’s turn the university inside out. It’s been about silos for so long (also known as ivory towers). I grew up on a farm: do you know what gets put into a silo, what comes out? It’s silage, chopped up, often a bit fermented, cattle food: pre-processed cud. Let’s not do that anymore.

[25]Walled Gardens, online dating

For all their massiveness, MOOCs and Universities are still walled gardens. And what’s the unit of connection? It’s the course. It’s the container. I used to work with a guy who often said, ‘once we get the contract, we’ll just get monkeys to do the work’. That guy is no longer in business. I used to work for a for-profit university in the States that had a similar approach to hiring online faculty.

MOOCs are not disruptive in that sense. Want to be really disruptive? Let’s turn to a model that massively connects people together who have a shared interest. I hereby banish the use of any metaphor that frames the relationship at a university in terms of clients, or customers. Instead, what if the metaphor used was more in line with a dating service?

In online dating, the site brings together two kinds of people, both looking for the same thing. Typically, the men pay a fee to be on the site; women are wooed to the site by all sorts of free promos etc.  No point having a dating site that does not have any available ‘others’ on it. In which case, the university could be in the business of bringing together students [the ‘men’] with faculty [the ‘women’]. If a university had that metaphor in its mind, it would be thinking, ‘what can we do to make our site – the university – an attractive place for faculty to be?’ Imagine that!

Students would not be signing up for classes, but rather, to follow and learn from particular profs. Typically on something like eBay or a dating site, there are reputation systems embedded in the site. You do not buy from the person with the bad rep in eBay; you do not contact the person whose profile has gotten many negative reviews. Since the university knows the grades of the students and has teaching evaluations and other indicators of faculty interests and reputations, it has the ability to put together faculty and students in a dynamic way. “Others who have enjoyed learning about Roman civilization with Dr. Graham have loved learning about Bronze Age Greece with…”.  Wouldn’t it be something to allow students to select their areas of interest knowing the reputation of the profs who work in a particular area; and for profs to select their students based on their demonstrated interests and aptitudes? Let faculty and students have ‘tokens’ – this is my first choice, this is my second choice, this is my third choice prof/student to work with for the session. Facilitate the matching of students and faculty. Let the student craft their way through university following individuals, and crafting a ‘masterpiece’ for their final demonstration of making as a way of knowing, for their BA? Hmmm. Kinda sounds like a return to the Guild, as it were.

You might not like that, which is fine; there are probably better ideas out there. We’ve got all this damned information around! Maybe there are earlier models that could work better with our new technologies, maybe there are new models for our new techs. But surely we can do better than merely replicate processes that were designed for the late 19th and early 20th century? Whatever metaphor we use to frame what the university does, it goes a long way to framing the ways learning can happen. That’s what DH and its exploration of a cyborg consciousness should make us at least explore.

[26]domain of one’s own

And once we’ve done that, let’s have some real openness. Let the world see that faculty-student, and student-student, relationship develop. Invite the rest of the world in. Folks like Ethan Watrall at MSU already do that for their on-campus courses putting all course materials and assessment activities on open websites, inviting the wider world to participate and to interact with the students.

Give every student, at the time of registration, a domain of their own, like Mary Washington is starting to do. Pay for it, help the student maintain it, for their time at university. At graduation, the student could archive it, or take over its maintenance. Let the learning community continue after formal assessment ends. The robots that construct our knowledge from the world wide web – Google and the content aggregators – depend on strong signals, on a creative class. If each and every student at your institution (and your alumni!) is using a domain of their own as a repository for their own IP, a personal learning environment, a node in a frequently re-configuring network of learners, your university would generate real gravity on the web, become the well out of which the wider world draws its knowledge. Use the structure and logic of the web to embed the learning life of the university so deeply into the wider world that it cannot be extricated!

[27]

Because right now, that’s not happening. If you study the structure of the web for different kinds of academic knowledge (here, Roman archaeology), there’s a huge disconnect between where the people are, and where the academics are. If we allow that to continue, it becomes increasingly easy for outsiders to frame ‘academic’ knowledge as a synonym for ‘pointless’. With the embedded university, the university inside out, there are no outsiders. If we embed our teaching through the personal learning environments of our students, our research production will become similarly embedded.

[28]

If the university is inside out, and not in splendid isolation, then it is embedded.

Forget massively ‘open’.

Think massively embedded.

Think massively accessible.

(Not the best image I could find, but hey! that boulder, part of a structure, is embedded in a massively accessible landscape.)

[29]Check mark list

So what’s tuition for, then? Well, it’s an opportunity to have my one-on-one undivided attention; it’s ice time, an opportunity to skate. But we need to have more opportunities for sideways access to that attention too, for people who have benefited from participating in our openness, our embeddedness, to demonstrate what they’ve learned. There’s much to recommend in Western Governors University’s approach to the evaluation of non-traditional learners.

[30]

The digital humanities, as a perspective, has changed the way I’ve come to teach. I didn’t set out to be a digital humanist; I wanted to be an archaeologist. But the multiple ways in which archaeological knowledge is constructed, its pan-disciplinary need to draw from different wells, pushed me into DH. There are many different strands to DH work; I’ve identified here what I think are three major ones that could become the framework, the warp and the weft, for something truly disruptive.

Practical Necromancy talk @Scholarslab – part I

Below is a draft of the first part of my talk for Scholarslab this week, at the University of Virginia. It needs to be whittled down, but I thought that those of you who can’t drop by on Thursday might enjoy this sneak peek.

Thursday, March 21 at 2:00pm
in Scholars’ Lab, 4th floor Alderman Library.

When I go to parties, people will ask me, ‘what do you do?’. I’ll say, I’m in the history department at Carleton. If they don’t walk away, sometimes they’ll follow that up with, ‘I love history! I always wanted to be an archaeologist!’, to which I’ll say, ‘So did I!’

My background is in Roman archaeology. Somewhere along the line, I became a ‘digital humanist’, so I am honoured to be here to speak with you today, here at the epicentre, where the digital humanities movement all began.

If the digital humanities were a zombie flick, somewhere in this room would be patient zero.

Somewhere along the line, I became interested in the fossilized traces of social networks that I could find in the archaeology. I became deeply interested – I’m still interested – in exploring those networks with social network analysis. But I became disenchanted with the whole affair, because all I could develop were static snapshots of the networks at different times. I couldn’t fill in the gaps. Worse, I couldn’t really explore what flowed over those networks, or how those networks intersected with broader social & physical environments.

It was this problem that got me interested in agent based modeling. At the time, I had just won a postdoc in Roman Archaeology at the University of Manitoba with Lea Stirling. When pressed about what I was actually doing, I would glibly respond, ‘Oh, just a bit of practical necromancy, raising the dead, you know how it is’. Lea would just laugh, and once said to me, ‘I have no idea what it is you’re doing, but it seems cool, so let’s see what happens next!’

How amazing to meet someone with the confidence to dance out on a limb like that!

But there was truth in that glib response. It really is a form of practical necromancy, and the connections with actual necromancy and technologies of death are a bit more profound than I first considered.

So today, let me take you through a bit of the deep history of divination, necromancy, and talking with the dead; then we’ll consider modern simulation technologies as a form of divination in the same mold; and then I’ll discuss how we can use this power for good instead of evil, of how it fits into the oft-quoted digital humanities ethos of ‘hacking as a way of knowing’ (which is rather like experimental archaeology, when you think about it), and how I’m able to generate a probabilistic historiography through this technique.

And like all good necromancers, it’s important to test things out on unwilling victims, so I would also like to thank the students of HIST3812 who’ve had all of the ideas road-tested on them earlier this term.

Zombies clearly fill a niche in modern western culture. The president of the University of Toronto recently spoke about ‘zombie ideas’ that despite our best efforts, persist, infect administrators, politicians, and students alike, trying to eat the brains of university education.

Zombies emerge in popular culture in times of angst, fear, and uncertainty. If Hollywood has taught us anything, it’s that zombies are bad news. Sometimes the zombies are formerly dead humans; sometimes they are humans who have been transformed. Sometimes we deliberately create a zombie. The zombie can be controlled, and made to do useful work; zombie as a kind of slavery. More often, the zombies break loose, or are the result of meddling with things humanity was not meant to meddle with; apocalypse beckons. But sometimes, like ‘Fido’, a zombie can be useful, can be harnessed, and somehow, be more human than the humans. [Fido]

If you’d like to raise the dead yourself, the answer is always just a click away [ehow].

There are other uses for the restless dead. Before our current fixation with apocalypse, the restless dead could be useful for keeping the world from ending.

In video games, we call this ‘the problem space’ – what is it that a particular simulation or interaction is trying to achieve? For humanity, at a cosmological level, the response to that problem is through necromancy and divination.

I’m generalizing horribly, of course, and the anthropologists in the audience are probably gritting their teeth. Nevertheless, when we look at the deep history and archaeology of many peoples, a lot can be tied to this problem of keeping the world from ending. A solution to the problem was to converse with those who had gone before, those who were currently inhabiting another realm. Shamanism was one such response. The agony of shamanism ties well into subsequent elaborations such as the ball games of mesoamerica, or other ‘game’ like experiences. The ritualized agony of the athlete was one portal into recreating the cosmogonies and cosmologies of a people, thus keeping the world going.

The bull-leaping game at Knossos is perhaps one example of this, according to some commentators. Some have seen in the plan of the Middle Minoan phase of this palace (towards the end of the 2nd millennium BC) a replication in architecture of a broader cosmology, that its very layout reflects the way the Minoans saw the world (partly because this plan seems to be replicated in other Minoan centres around the Aegean). Jeffrey Soles, pointing to the architectural play of light and shadow throughout the various levels of Knossos, argues that this maze-like structure was all part of the ecstatic journey, and ties shamanism directly to the agonies of sport & game in this location. We don’t have the Minoans’ own stories, of course, but we do have these frescoes of bull-leaping, and other paraphernalia which tie in nicely with the later dark-age myths of Greece.

So I’m making a connection here between the way a people see the world working, and their games & rituals. I’m arguing that the deep history of games  is a simulation of how the world works.

This carries through to more recent periods as well. Herodotus wrote about the coming of the Etruscans to Italy: “In the reign of Atys son of Manes there was a great scarcity of food in all Lydia. For a while the Lydians bore this with patience; but soon, when the famine continued, they looked for remedies, and various plans were suggested. It was then that they invented the games of dice, knucklebones, and ball, and all the other games of pastime, except for checkers, which the Lydians do not claim to have invented. Then, using their discovery to forget all about the famine, they would play every other day, all day, so that they would not have to eat… This was their way of life for eighteen years. Since the famine still did not end, however, but grew worse, the king at last divided the people into two groups and made them draw lots, so that one should stay and the other leave the country.”

Here I think Herodotus misses the import of the games: not as a pastime, but as a way of trying to control, predict, solve, or otherwise intercede with the divine, to resolve the famine. In later Etruscan and Roman society, gladiatorial games for instance were not about entertainment but rather about cleansing society of disruptive elements, about bringing everything into balance again, hence the elaborate theatre of death that developed.

The specialist never disappears though, the one who has that special connection with the other side and intercedes for broader society as it navigates that original problem space. These were the magicians and priests. But there is an important distinction here. The priest is passive in reading signs, portents, and omens. Religion is revealed, at its proper time and place, through proper observation of the rituals. The magician is active – he (and she) compels the numinous to reveal itself, the spirits are dragged into this realm; it is the magician’s skill and knowledge which causes the future to unfurl before her eye.

The priest was holy, the magician was unholy.

Straddling this divide is the Oracle. The oracle has both elements of revelation and compulsion. Any oracle worth its salt would not give a straight-up answer, either, but rather required layers of revelation and interpretation. At Delphi, the God spoke to the Pythia, the priestess, who sat on the stool over the crack in the earth. When the god spoke, the fumes from below would overcome her, causing her to babble and writhe uncontrollably. Priests would then ‘interpret’ the prophecy, in the form of a riddle.

Why riddles? Riddles are ancient. They appear on cuneiform texts. Even Gollum knew what a true riddle should look like – a kind of lyric poem asking a question that guards the right answer in hints and wordplay.

‘I tremble at each breath of air/ And yet can heaviest burdens bear.’ [the implicit question being asked is ‘who am I?’ – water]

Bilbo cheated.

We could not get away from a discussion of riddles in the digital humanities without of course mentioning the I Ching. It’s a collection of texts that, depending on dice throws, get combined and read in particular ways. Because this is essentially a number of yes-or-no answers, the book can be easily coded onto a computer or represented mechanically. In which case, it’s not really a ‘book’ at all, but a machine for producing riddles.

Ruth Wehlau writes, “Riddlers, like poets, imitate God by creating their own cosmos; they recreate through words, making familiar objects into something completely new, rearranging the parts of pieces of things to produce creatures with strange combinations of arms, legs, eyes and mouths. In this transformed world, a distorted mirror of the real world, the riddler is in control, but the reader has the ability to break the code and solve the mystery” (Wehlau 1997).

Riddles & divination are related, and are dangerous. But they also create a simulation, of how the world can come to be, of how it can be controlled.

One can almost see the impetus for necromancy, when living in a world described by riddles. Saul visits the Witch of Endor; Odysseus goes straight to the source.

…and Professor Hix prefers the term ‘post mortem communications’. However you spin it, though, the element of compulsion, of speaking with the dead, marks it out as a transgression; necromancers and those who seek their aid never end well.

It remains true today that those who practice simulation are similarly held in dubious regard. If that were not the case, tongue-in-cheek article titles such as this would not be necessary.

I am making the argument that modern computational simulation, especially in the humanities, is more akin to necromancy than it is to divination, for all of these reasons.

But it’s also the fact that we do our simulation through computation itself that marks this out as a kind of necromancy.

The history of the modern digital computer is tied up with the need to accurately simulate the yields of atomic bombs,  of blast zones, and potential fallout, of death and war. Modern technoculture has its roots in the need to accurately model the outcome of nuclear war, an inversion of the age old problem space, ‘how can we keep the world from ending’ through the doctrines of mutually assured destruction.

The playfulness of those scientists, and the acceleration of hardware technology, led to video games, but that’s a talk for another day (and indeed, has been recently well treated by Rob MacDougall of Western University).

‘But wait! Are you implying that you can simulate humans just as you could individual bits of uranium and atoms, and so on, like the nuclear physicists?’ No, I’m not saying that, but it’s not for nothing that Isaac Asimov gave the world Hari Seldon & the idea of ‘psychohistory’ in the 1950s. As Wikipedia so ably puts it, “Psychohistory is a fictional science in Isaac Asimov’s Foundation universe which combines history, sociology, etc., and mathematical statistics to make general predictions about the future behavior of very large groups of people, such as the Galactic Empire.”

Even if you could do Seldon’s psychohistorical approach, it’s predicated on a population of an entire galaxy. One planetful, or one empire-full, or one region-full, of people just isn’t enough. Remember, this is a talk on ‘practical’ necromancy, not science-fiction.

Well, what about so-called ‘cliodynamics’? Cliodynamics looks for recurring patterns in aggregate statistics of human culture. It may well find such patterns, but it doesn’t really have anything to say about ‘why’ such patterns might emerge. Both psychohistory and cliodynamics are concerned with large aggregates of people. As an archaeologist, all I ever find are the traces of individuals, of individual decisions in the past. It always requires some sort of leap to jump from these individual traces to something larger like ‘the group’ or ‘the state’. A Roman aqueduct is, at base, still the result of many individual actions.

A practical necromancy therefore is a simulation of the individual.

There are many objections to simulation of human beings, rather than things like atoms, nuclear bombs, or the weather. Our simulations can only do what we program them to do. So they are only simulations of how we believe the world works (ah! Cosmology!). In some cases, like weather, our beliefs and reality match quite well, at least for a few days, and we know much about how the variables intersect. But, as complexity theory tells us, starting conditions strongly affect how things transpire. Therefore we forecast from multiple runs with slightly different starting conditions. That’s what a 10% chance of rain really means: We ran the simulation 100 times, and in 10 of them, rain emerged.

And humans are a whole lot more complex than the water cycle. In the case of humans, we don’t know all the variables; we don’t know how free will works; we don’t know how a given individual will react; we don’t understand how individuals and society influence each other. We do have theories though.

This isn’t a bug, it’s a feature. The direction of simulation is misplaced. We cannot really simulate the future, except in extremely circumscribed situations, such as pedestrian flow. So let us not simulate the future, as humanists. Let us create some zombies, and see how they interact. Let our zombies represent individuals in the past. Give these zombies rules for interacting that represent our best beliefs, our best stories, of how some aspect of the past worked. Let them interact. The resulting range of possible outcomes becomes a kind of probabilistic historiography. We end up with not just a story about the past, but also about other possible pasts that could have happened if our initial story we are telling about how individuals in the past acted is true, for a given value of true.

 We create simulacra, zombies, empty husks representing past actors. We give them rules to be interpreted given local conditions. We set them in motion from various starting positions. We watch what emerges, and thus can sweep the entire behavior space, the entire realm of possible outcomes given this understanding. We map what did occur (as best as we understand it) against the predictions of the model. For the archaeologist, for the historian, the strength of agent based modeling is that it allows us to explore the unintended consequences inherent in the stories we tell about the past. This isn’t easy. But it can be done. And compared to actually raising the dead, it is indeed practical.
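For the curious, here is a toy sketch of what such a model can look like in code. It is not any of my published models – just an illustration in Python with invented rules and numbers – but it shows the shape of the approach: simple agents, a behavioural rule, and many runs whose spread of outcomes is the behaviour space.

```python
# A toy agent-based model in the spirit described above: simple agents
# ('zombie Romans') pass something -- a rumour, a good, an idea -- to their
# neighbours according to a rule we believe describes the past; we then run
# the world many times and look at the spread of outcomes.
import random

def run_once(n_agents=100, n_steps=50, p_share=0.3, seed=None):
    rng = random.Random(seed)
    # each agent knows two randomly chosen 'neighbours' -- a crude social network
    neighbours = {i: rng.sample(range(n_agents), 2) for i in range(n_agents)}
    informed = {0}  # one agent starts off holding the idea
    for _ in range(n_steps):
        newly_informed = set()
        for agent in informed:
            for other in neighbours[agent]:
                if rng.random() < p_share:  # the behavioural rule
                    newly_informed.add(other)
        informed |= newly_informed
    return len(informed)

def sweep(runs=100):
    """Run the same story many times: a probabilistic historiography."""
    outcomes = [run_once(seed=i) for i in range(runs)]
    return min(outcomes), sum(outcomes) / len(outcomes), max(outcomes)

if __name__ == "__main__":
    lo, mean, hi = sweep()
    print(f"informed agents after 50 steps: min {lo}, mean {mean:.1f}, max {hi}")
```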

[and here begins part II, which runs through some of my published ABMS, what they do, why they do it. All of this has to fit within an hour, so I need to do some trimming.]

[Postscriptum, March 23: the image of the book of random digits came from Mark Sample’s ‘An Account of Randomness in Literary Computing’, and was meant to remind me to talk about some of the things Mark brought up. As it happens, I didn’t do that when I presented the other day, but you really should go read his post.]

Living the Life Electric

I’m addressing the Underhill Graduate Students’ Colloquium tomorrow, here in the history department at Carleton U. Below are my slides for ‘Living the Life Electric: On Becoming a Digital Humanist’

update March 7: here are my speaking notes. These give a rough sense of what I intend to talk about at various points. Bolded titles are the titles of slides. Not every slide is listed, as some speak more or less for themselves.

I wanted to be an archaeologist - I graduated in 2002.

‘Digital Humanities’ wasn’t coined until 2004.

It emerges from ‘humanities computing’, which has been around since the 1940s.

In fact, computing wouldn’t be the way it is today without the Humanities, and the Jesuit, Father Busa.

Eastern Canada’s Only Stamped Brick Specialist -Roman archaeology

Stamped brick

Eastern Canada’s only Stamped Brick Specialist, probably

….things were pretty lean in 2003…

Life from a suitcase

Comin’ Home Again

Youth development grant to study cultural heritage of my home township

Also a small teaching excavation based in Shawville

Which led to a teaching gig at the local high school.

A Year of Living Secondarily

What was it about my academic work that I really enjoyed?

Networks

Possibilities of Simulation

Random Chances and the virtues of ‘What the Hell’

Coronation Hall

Meanwhile, I enter business – 3 different startups, one of which has survived (so far!)

Heritage focus

Heritage education – learned how to install my own software, LMS

Trying to monetize the information I uncovered in my cultural heritage study

Coronation Hall Cider Mills

(Shameless Plug).

What are the digital humanities  – think about it: modern computers were developed in order to allow us to map, forecast, the consequences of massive annihilation and death. Simulation is rooted in the desire to predict future death counts. My interest emerged from trying to simulate my own understandings of the past, to understand the unintended consequences of my understandings, to put some sort of order on the necessarily incomplete materials I was looking at. I call it ‘practical necromancy’

Do your work in public – the blog was originally intended to chronicle my work on simulation, but it has become very much the driver of my online identity, the calling card that others see when they intersect my work – and because it’s been up for so long, with a sustained focus, it creates a very strong signal which our algorithms, Google, pick up. This is how academics can push the public discourse: interfere with the world’s extended mind, their entangled consciousness of cyberspace & meatspace.

Allows you to develop your ideas

Forces you to write in small chunks

Exposes your work to potential audiences

My blog posts have been cited in others’ academic monographs

Has improved the readership of my published work

A quarter million page reads over the last six years.

My book: maybe 40 copies, if I’m lucky.

Basic Word Counts

Top words:

digital (1082), research (650), university (577), experience (499), library (393), humanities (386)

History: 177 times

Broadly, not useful or surprising. But consider the structure of word use…

Group 1: gives you a sense of technical skills, but for the most part not the kinds of analyses that one would use that for. That’s an important distinction. The analysis should drive the skill set, not the other way around (to a man with a hammer, everything looks like a nail).

Group 2: European centres!

Group 3: Canada!

Job adverts – to – topics. Six broad groups based on how the adverts share particular discourses. Gives a sense of where academic departments think this field is going. If I’d done this according to individual researcher’s blogs, or the ‘about’ pages for different centres, you’d get a very different picture – game studies, for instance.

 

Important point: I wanted to show you how you can begin to approach large masses of material, and extract insights, sussing out underlying structures of ideas. This is going to be big in the future, as more and more data about our every waking moment gets recorded. Google Glass? It’s not about the user: it’s about everything the user sees, which’ll get recorded in the googleplex. Governments. Marketers. University Administrations. Learn to extract signals from this noise, and you’ll never go hungry again.

Keep in mind that in 1994 I wrote that the internet would never be useful for academics. My ability to predict the future is thus suspect.

So how to join this brave new world? Twitter, etc.

 

 

 

Text Analysis of 2012 Digital Humanities Job Adverts part 2

If we look at simple word frequencies in the 2012 job advertisement documents for Digital Humanities, we find these top words and raw frequency counts:

research    650
university    577
experience    499
library            393
work            334
information    303
position    299
project            269
applications    257

(I’ve deleted ‘digital’ and ‘humanities’ from this list).
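(For the curious: nothing fancier than tokenising and tallying is going on behind counts like these. A minimal Python sketch follows, with a placeholder folder name and stoplist; Voyant does all of this for you anyway.)

```python
# Rough word-frequency counting of the sort reported above. The corpus path
# and the stoplist are placeholders, not my actual workflow.
import re
from collections import Counter
from pathlib import Path

STOP = {"the", "and", "of", "to", "in", "a", "for", "with", "or", "digital", "humanities"}

def word_counts(folder):
    counts = Counter()
    for f in Path(folder).glob("*.txt"):
        words = re.findall(r"[a-z]+", f.read_text(encoding="utf-8").lower())
        counts.update(w for w in words if w not in STOP)
    return counts

if __name__ == "__main__":
    for word, n in word_counts("dh-job-ads-2012").most_common(10):
        print(word, n)
```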

If job advertisements are a way of signalling what an institution hopes the future will hold, one gets the sense that the focus of digital humanities work will be on projects, on research, in conjunction with libraries. But we can extract more nuance, using network analysis. You can feed the texts into Voyant’s ‘RezoViz’ tool, which extracts paired nouns in each document.

This can be output as a .net file, and then imported into Gephi. The resulting graph has 1461 nodes, and 20649 edges. Of course, there are some duplicates (like ‘US’ and ‘United States’), but this is only meant to be rough and ready, ‘generative’, as it were (and note also that a network visualization is not necessary for the analysis. So no spaghetti balls. What’s important are the metrics). What I’d like to find out is: what concepts are doing the heavy lifting in these job advertisements? What is the hidden structure of the future of digital humanities, as evidenced by job advertisements in the English-speaking world?

My suspicion is that ‘modularity’ aka ‘community detection’, and ‘betweenness centrality’, are going to be the key metrics for figuring this out. Modularity groups nodes on the basis of shared similar local patternings of ties (or, to put it another way, it decomposes the global network into maximal subnetworks). Seth Long recently did some network analysis on the Unabomber’s manifesto, and lucidly explains why betweenness centrality is a useful metric for understanding semantic meaning: “A word with high betweenness centrality is a word through which many meanings in a text circulate.” In other words, the heavy lifters.
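(For those who would rather script it than click through Gephi, the same two measures can be computed from the .net export with networkx. A rough sketch, using a greedy modularity routine as a stand-in for Gephi’s algorithm and a placeholder file name:)

```python
# A sketch of the Gephi analysis in networkx: read the Pajek (.net) export,
# find communities (a stand-in for Gephi's modularity routine), and rank the
# 'heavy lifters' in each community by betweenness centrality.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph(nx.read_pajek("dh-jobs.net"))  # collapse the multigraph

groups = greedy_modularity_communities(G)   # list of node sets, largest first
print(f"{len(groups)} groups; largest has {len(groups[0])} nodes")

bc = nx.betweenness_centrality(G)
for group in groups[:3]:
    top = sorted(group, key=bc.get, reverse=True)[:10]
    print(top)  # the words through which many meanings circulate
```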

So let’s peer into the future.

I ended up with about 15 groups. The first three groups by modularity account for 75% of the nodes, and 80% of the ties. These are the groups where the action lies. So let’s look at words with the highest betweenness centrality scores for those first three groups.

The first group

University
CSS
PHP
Digital
Ruby
METS (Metadata Encoding and Transmission Standard)
United States
Python
MLS
New York

‘University’ is not surprising, and not useful. So let us discard it and bring in the next highest word:

MySQL

This one group by modularity also has all of the highest betweenness centrality scores – and it reads like a laundry list of the skills a budding DH practitioner must hold. The US, and New York would seem to be the centre of the world, too.

If we take the next ten words, we get:

MODS (Metadata Object Description Schema)
XHTML
University Libraries
CLIR (Council on Library and Information Resources)
University of Alberta
North America
Drupal
XML
MARC
Duke University

Again, skills and places figure – in Canada, U of A appears. So far, the impression is that DH is all about text, markup, and metadata. Our favorite programming languages are Python and Ruby. We use PHP, XHTML, XML, and Drupal (plain-jane vanilla HTML eventually turns up in the list, but it’s buried very, very deep).

So that’s an impression of the first group. (Remembering that groups are defined by patterns of similarity in their linkages).

The Second Group

The next group looks like this:
Digital Humanities
London
UK
CV
Dublin
Europe
Ireland
ICT
Department of Digital Humanities
Department of History

“digital humanities” is probably not helpful, so let’s eliminate that and go one more down: “US”. Indeed, let’s take a look at the next ten, too:

Human Resources
Department
Computer Science
BCE
Head of School
Faculty of Humanities
European
University of Amsterdam
MA
Italy

Here, we’re dealing very much with a UK, Ireland, and European focus. The ‘BCE’ is telling, for it suggests an archaeological focus in there, somewhere (unless this is some new DH acronym of which I’m not aware; I’m assuming ‘before the common era’).

The Third Group

In the final group we’ll consider here, we find a strong Canadian focus:

CRC (Canada Research Chair)
Canada
Waterloo
TEI (Text Encoding Initiative)
SSHRC
Victoria
Canada Research Chair
Skype
Digital Humanities Summer Institute
University of Victoria

Since we’ve got some duplication in here, let’s look at the next ten:

Canadian
Quebec
ETCL (Electronic Textual Cultures Laboratory, U Victoria)
Montreal
Concordia
University of Waterloo
DHSI (Digital Humanities Summer Institute)
Stratford
Faculty of Arts
Stratford Campus

‘Canada Research Chairs’ are well-funded government appointments, and so give an indication of where the state would like to see some research. Victoria continually punches above its weight, with look-ins from Waterloo and Concordia.

So what have we learned? Well, despite the efforts of the digital history community, ‘digital humanities’ is still largely a literary endeavor – although it’s quite possible that a lot of the marking up that these job advertisements might envision could be of historical documents. Invest in some python skills (see Programming Historian). My friends in government tell me that if you can data mine, you’ll be set for life, as the government is looking for those skills. (Alright, that didn’t come out in this analysis at all, but he’s looking over my shoulder right now).

Finally – London, Dublin, New York, Edmonton, Victoria, Waterloo, Montreal – these seem to be the geographic hotspots. Speaking of temperature, Victoria has the nicest weather. Go there, young student!

Or come to Carleton and study with me.  We’ve got tunnels.

update March 4th: jobs-topics-dh as a network graph. In the analysis above, I’ve generated a network using Voyant’s RezoViz tool. Today, I topic modelled all of the texts looking for 10 topics. So a slightly different approach. I turned the resulting document composition (i.e. doc 1 is 44% topic 1, 22% topic 4, 10% topic 3, etc.) into a two-mode graph, job advert to top two constituent topics. I then turned this into a one-mode graph where job adverts are tied to other job adverts based on topic composition. Then I ran modularity, and found 3 groups by modularity; edges are percent composition by topics discerned through topic modeling. Nodes are sized by ‘betweenness centrality’. Most between? George Mason University. I’m not sure what ‘betweenness centrality’ means though in this context, yet.
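(For anyone who wants to replicate the two-mode-to-one-mode step, here is a rough sketch with networkx. The document-topic proportions below are invented for the example; in practice they come from the topic model’s document composition output.)

```python
# Sketch of the two-mode (job advert -> topic) graph and its one-mode
# projection onto adverts, as described above. The composition numbers here
# are made up; in practice they come from the topic model's doc-topics output.
import networkx as nx
from networkx.algorithms import bipartite

doc_topics = {
    "advert_01": {"topic_1": 0.44, "topic_4": 0.22},
    "advert_02": {"topic_1": 0.38, "topic_3": 0.30},
    "advert_03": {"topic_4": 0.51, "topic_3": 0.19},
}

B = nx.Graph()
for doc, topics in doc_topics.items():
    for topic, weight in topics.items():  # top two topics per advert
        B.add_edge(doc, topic, weight=weight)

# project onto the adverts: two adverts are tied if they share a topic
adverts = [n for n in B if n.startswith("advert")]
G = bipartite.weighted_projected_graph(B, adverts)
print(nx.betweenness_centrality(G))
```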

Makes for interesting clusters of job adverts. Topic model results to be discussed tomorrow.

Text analysis of 2012 Digital Humanities Job Adverts

Word cloud of the 2012 DH job adverts

2012 was a good year for hirings in the digital humanities. See for yourself at this archive of DH jobs: http://jobs.lofhm.org/ Now: what do these job adverts tell us, if you’re a graduate student trying to find your way?

Next week, I’m speaking to the Underhill Graduate Students’ Colloquium at Carleton University on ‘Living the life electric: becoming a digital humanist’. It’s broadly autobiographical in that I’ll talk about my own idiosyncratic path into this field.

That's rather the point: there's no firm/accepted/typical/you-ought-to-do-X recipe for becoming a digital humanist. You have to find your own way, though the growing body of courses, books, journals, blog-o-sphere and twitterverse certainly makes a huge difference.

But in the interests of providing perhaps a more satisfying answer, I’ll try my hand at data mining those job posts (some 150 of them) using Voyant and MALLET to see what augurs for the future of the field.
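(For the curious: the MALLET run itself happens at the command line, but a rough stand-in for that topic-modelling step can be sketched in Python with gensim. The folder layout below, one advert per plain-text file, and the parameters are assumptions for illustration, not my actual workflow.)

```python
# A rough stand-in for the MALLET step, using gensim.
import glob
from gensim import corpora, models

texts = []
for path in glob.glob("dh-job-adverts/*.txt"):   # hypothetical folder of adverts
    with open(path, encoding="utf-8") as f:
        texts.append([w for w in f.read().lower().split() if w.isalpha()])

dictionary = corpora.Dictionary(texts)                 # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in texts]    # bag-of-words per advert
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=10, passes=10)

# print the ten topics and their top words
for topic_id, words in lda.print_topics(num_topics=10, num_words=8):
    print(topic_id, words)
```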

Feel free to explore the corpus uploaded into Voyant. In any graphs you produce, January is on the left, December is on the right. If you spot anything interesting/curious, let me know.

And, because word counts are amazing:

Word Count
digital 1082
research 650
university 577
experience 499
library 393
humanities 386
work 334
information 303
position 299
project 269
applications 257
new 223
faculty 222
development 216
collections 210
department 207
management 206
projects 195
knowledge 192
data 187
including 185
ability 182
services 180
teaching 180
history 177
libraries 176
skills 176
qualifications 172
technology 169
required 166
media 163
jobs 151
application 149
original 146
program 145
link 143
web 143
working 142
loading 140
related 140
staff 138
academic 137
communication 133
job 132
college 130
degree 127
professor 126
education 125
students 125
studies 123
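(A count like the one above takes only a few lines of Python. Here's a minimal sketch, assuming the adverts sit as plain-text files in a local folder; the folder name and the stopword list are illustrative, not the actual setup.)

```python
# Count words across a folder of plain-text job adverts.
import glob
import re
from collections import Counter

STOPWORDS = {"the", "and", "of", "to", "in", "a", "for", "with", "or", "on", "is"}

counts = Counter()
for path in glob.glob("dh-job-adverts/*.txt"):
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z]+", f.read().lower())
    counts.update(w for w in words if w not in STOPWORDS)

# top fifty words, one per line
for word, n in counts.most_common(50):
    print(word, n)
```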

Why I Play Games

(originally posted at #HIST3812, my course blog for this term’s History3812: Gaming and Simulations for Historians, at Carleton University).

I play because I enjoy video games, obviously, but I also get something else out of it. Games are a ‘lively art’; they are an expressive art, and the artistry lies in encoding rules (descriptions) about how the world works at some micro-level, and then in watching how that artistry is further expressed in the unintended consequences of those rules, their intersections, their cancellations, causing new phenomena to emerge.

This strikes me as the most profound use of humanities computation out there. Physicists tell us that the world is made of itty bitty things that interact in particular ways. In which case, everything else is emergent: including history. I’m not saying that there are ‘laws’ of human action; but we do live in this universe. So, if I can understand some small part of the way life was lived in the past, I can model that understanding, and explore the unintended outcomes of that understanding… and go back to the beginning and model those.

I grew up with the video game industry. Adventure? I played that. We had a VIC-20. If you wanted to play a game, you had to type it in yourself. There used to be a magazine (Compute!) that would have all of the code printed within, along with screenshots. Snake, Tank Wars – yep. My older brother would type, and I would read the individual letters (and spaces, and characters) out. After about a week, we'd have a game.

And there would be bugs. O lord, there were bugs.

When we could afford games, we'd buy text adventures from Infocom. In high school, my older brother programmed a quiz game as his history project for the year. Gosh, we were cool. But it was cool! Here we were, making the machine do things.

As the years went on, I stopped programming my own games. Graphics & technology had moved too fast. In college, we used to play Doom (in a darkened room, with the computer wired to the stereo. Beer often figured). We played SimCity. We played the original Civilization.

These are the games that framed my interactions with computers. Then, after I finished my PhD, I returned to programming when I realized that I could use the incredible artificial intelligences, the simulation engines, of modern games, to do research. To enhance my teaching.

I got into agent-based modeling, using the NetLogo platform. This turned my career around: I ceased to be a run-of-the-mill materials specialist (Roman archaeology), and became this new thing, a ‘digital humanist’. Turns out, I'm now an expert on simulation and history.

Cool, eh?

And it’s all down to the fact that I’m a crappy player of games. I get more out of opening the hood, looking at how the thing works. Civilization IV and V are incredible simulation engines. So: what kinds of history are appropriate to simulate? What kinds of questions can we ask? That’s what I’m looking forward to exploring with you (and of course, seeing what you come up with in your final projects).

But maybe a more fruitful question to start with, in the context of the final project of this course, is, ‘what is the strangest game you’ve ever played?’

What made it strange? Was it the content, the mechanics, the interface?

I played one once where you had to draw the platform with crayons, and then the physics engine would take over. The point was to try to get a ball to roll up to a star. Draw a teeter-totter under the star, and perhaps the ball would fall on it, shooting the star up to fall down on the ball, for instance. A neat way of interacting with the underlying physics of game engines.

I’d encourage everyone to think differently about what the games might be. For instance, I could imagine a game that shows real-time documents (grabbed from a database), and you have to dive into it, following the connected discourses (procedurally generated using topic models and network graphing software to find these – and if this makes no sense to you, take a quick peek at the Programming Historian) within it to free the voices trapped within…

This is why I play. Because it makes me think differently about the materials I encounter.

Evaluating Digital Work in the Humanities

http://projects.chass.utoronto.ca/amphoras/ills/sym99-f2.gif

Leave it to an archaeologist, but when I heard the CFP from Digital Humanities Now on ‘evaluating’ digital work, I immediately started thinking about typologies, about categorizing. If it is desirable to have criteria for evaluating DH work, then we should know roughly the different kinds of DH work, right? The criteria for determining ‘good’ or ‘relevant’, or other indications of value, will probably be different for different kinds of work.

In which case, I think there are at least two dimensions, though likely more, for creating typologies of DH work. The first – let's call it the Owens dimension, in honour of Trevor's post on the matter – extends along a continuum we could call ‘purpose’, from ‘discovery’ through to ‘justification’. In that vein, I was mulling over the different kinds of digital archaeological work a few days ago. I decided that the closer to ‘discovery’ the work was, the more it fell within the worldview of the digital humanities.

The other dimension concerns computing skill/knowledge, and its explication. There are lots of levels of skill in the digital humanities. Me, I can barely work Git or other command-line interventions, though I'm fairly useful at agent simulation in NetLogo. It's not the kinds of skills here that I'm thinking about, but rather how well we fill in the blanks for others. There is so much tacit knowledge in the digital world. Read any tutorial, and there's always some little bit that the author has left out because, well, isn't that obvious? Do I really need to tell you that? I'm afraid the answer is yes. "Good" work on this dimension is work that provides an abundance of detail about how the work was done, so that a complete neophyte can replicate it. This doesn't mean that it has to be right there in the main body of the work – it could be in a detailed FAQ, a blog post, a stand-alone site, a post at Digital Humanities Q&A, whatever.

For instance, I've recently decided to start a project that uses Neatline. Having put together a couple of Omeka sites before, and having played around with adding plugins, I found that (for me) the documentation supporting Neatline is quite robust. Nevertheless, I became (and am still) stumped on the problem of a geoserver to serve up my georectified historical maps. Over the course of a few days, I discovered that since GeoServer is Java-based, most website hosting companies charge a premium or a monthly fee to host it. Not only that, it needs Apache Tomcat installed on the server first, to act as a ‘container’. I eventually found a site – OpenShift – that would host all of this for free (cost always being an issue for the one-man-band digital humanist!), but this required me to install Ruby and Git on my machine, then to clone the repository to my own computer, then to drop a WAR file (as nasty as it sounds) into the webapps folder (but which one? There are two separate webapps folders!), then to ‘commit, push’ everything back to OpenShift. Then I found some tutorials that were explicitly about putting GeoServer on OpenShift, so I followed them to the letter… turns out they're out of date, and a lot can change online quite quickly.

If you saw any of my tweets on Friday, you'll appreciate how much time all of this took… and at the end of the day, still nothing to show for it (though I did manage to delete the default html). Incidentally, Steve from OpenShift saw my tweets and is coaching me through things, but still…

So: an important axis for evaluating work in the digital humanities is explication. Since so much of what we do consists of linking together lots of disparate parts, we need to spell out how all the different bits fit together and what the neophyte needs to do to replicate what we've just done. (Incidentally, I'm not slagging the Neatline or Omeka folks; Wayne Graham and James Smithies have been brilliant in helping me out – thank you, gentlemen!). The Programming Historian has an interesting workflow in this regard. The piece that Scott, Ian, and I put together on topic modelling was reviewed by folks who were definitely in the digital humanities world, but not necessarily well-versed in the skills that topic modeling requires. Their reviews, going over our step-by-step instructions, pointed out the many, many places where we were blind to our assumptions about the target audience. If that tutorial has been useful to anyone, it's entirely thanks to the reviewers, John Fink, Alan MacEachern, and Adam Crymble.

So, it’s late. But measure digital humanities work along these two axes, and I think you’ll have useful clustering in order to further ‘evaluate’ the work.

Deformative Digital Archaeology

An archaeological visualization.

Is digital archaeology part of the digital humanities?

This isn't to get into another who's-in/who's-out conversation. Rather, I was thinking about the ways archaeologists use computing in archaeology, and to what ends. The Computer Applications in Archaeology Conference has been publishing proceedings since 1973 – longer than I've been on this earth. Archaeologists have been running simulations, doing spatial analysis, clustering, imaging, geophysicing, 3D modeling, neutron activation analyzing, x-tent modeling, etc., for what seems like ages.

Surely, then, digital archaeologists are digital humanists too? Trevor Owens has a recent post that sheds useful light on the matter. Trevor draws attention to the purpose behind one's use of computational power – generative discovery versus justification of an hypothesis. For Trevor, if we are using computational power to deform our texts, we are trying to see things in a new light, in new juxtapositions, to spark new insight. Ramsay talks about this too in Reading Machines (2011: 33), discussing the work of Jerome McGann and Lisa Samuels: "Reading a poem backward is like viewing the face of a watch sideways – a way of unleashing the potentialities that altered perspectives may reveal". This kind of reading of data (especially, but not necessarily, through digital manipulation) does not happen very much at all in archaeology. If ‘deformance’ is a key sign of the digital humanities, then digital archaeologists are not digital humanists. Trevor's point isn't to signal who's in or who's out, but rather to draw attention to the fact that:

When we separate out the context of discovery and exploration from the context of justification we end up clarifying the terms of our conversation. There is a huge difference between "here is an interesting way of thinking about this" and "This evidence supports this claim."

This, I think, is important in the wider conversation concerning how we evaluate digital scholarship. We’ve used computers in archaeology for decades to try to justify or otherwise connect our leaps of logic and faith, spanning the gap between our data and the stories we’d like to tell. A digital archaeology that sat within the digital humanities would worry less about that, and concentrate more on discovery and generation, of ‘interesting way[s] of thinking about this’.

In a paper on Roman social networks and the hinterland of the city of Rome, I once argued (long before I'd ever heard the term digital humanities) that we should stop displaying North at the top of the map in our GIS, that this was hindering our ability to see patterns in our data. I turned the map sideways – and it sent a murmur through the conference room as east-west patterns, previously not apparent, became evident. This, I suppose, is an example of deformation. Hey! I'm a digital humanist! But other digital work that I've been doing does not fall under this rubric of ‘deformation’.

My Travellersim simulation, for instance, uses agent-based modeling to generate territories, and to predict likely interaction spheres, from distributions of survey data. In essence, I'm not exploring but trying to argue that the model accounts for patterns in the data. This is more in line with what digital archaeology often does.

Archaeological Glitch Art, Bill Caraher

Bill Caraher, I suspect, has been reading many of the same things I have been lately, and has been thinking along similar lines. In a post on archaeological glitch art, Bill has been changing file extensions to fiddle about in the insides of images of archaeological maps, then looking at them again as images:

“The idea of these last three images is to combine computer code and human codes to transform our computer mediate image of archaeological reality in unpredictable ways. The process is remarkably similar to analyzing the site via the GIS where we take the “natural” landscape and transform it into a series of symbols, lines, and text. By manipulating the code that produces these images in both random and patterned ways, we manipulate the meaning of the image and the way in which these images communicate information to the viewer. We problematize the process and manifestation of mediating between the experienced landscape and its representation as archaeological data.”
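Bill's byte-fiddling is a species of ‘databending’, and it's easy to try at home. A minimal sketch in Python (the file names are placeholders; leaving the first kilobyte or so untouched usually keeps the header intact, so the file still opens as an image):

```python
# Databending sketch: flip a handful of random bytes in a JPEG of a map,
# leaving the header alone so the file still (usually) opens.
import random

with open("survey-map.jpg", "rb") as f:
    data = bytearray(f.read())

header = 1024                      # leave the first kilobyte untouched
for _ in range(25):                # 25 random corruptions
    i = random.randrange(header, len(data))
    data[i] = random.randrange(256)

with open("survey-map-glitched.jpg", "wb") as f:
    f.write(data)
```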

In the same way, Trevor uses augmented reality smartphone translation apps set to translate Spanish text into English, but pointed at non-Spanish texts. It's a bit like Mark Sample's Hacking the Accident, where he uses an automatic dictionary substitution scheme (N+7, a favorite of the Oulipo group) to throw up interesting juxtapositions. A deformative digital archaeology could follow these examples. Accordingly, here's my latest experiment along these lines.

Screenshot from the deformed NetLogo ‘Mimicry’ model

Let's say we're interested in the evolution of amphorae types in the Greco-Roman world. Let's go to the NetLogo models library, and instead of building the ‘perfect’ archaeological model, let's select one of their evolutionary models – Wilensky's ‘Mimicry‘ model, which is about the evolution of monarch butterflies and viceroy moths – and swap in ‘amphora’ for ‘moth’ everywhere in the code and supporting documentation, and ‘Greeks’ for ‘birds’.
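The swap itself is easy to automate. A sketch, assuming you've saved a local copy of the model as Mimicry.nlogo; the exact substitution list (plural and ‘butterfly’ forms included) is my own guess at what needs changing, and the output file name is arbitrary:

```python
# Deform the Mimicry model by swapping its vocabulary.
# Naive string replacement: good enough for a deformation experiment.
swaps = {
    "butterflies": "amphorae",
    "butterfly": "amphora",
    "moths": "amphorae",
    "moth": "amphora",
    "birds": "Greeks",
    "bird": "Greek",
}

with open("Mimicry.nlogo", encoding="utf-8") as f:
    text = f.read()

for old, new in swaps.items():
    text = text.replace(old, new)

with open("Mimicry-amphorae.nlogo", "w", encoding="utf-8") as f:
    f.write(text)
```

If the renaming is consistent, the model loads and runs exactly as before; only the story we tell about it changes.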

In the original model code, we are told:

“Batesian mimicry is an evolutionary relationship in which a harmless species (the mimic) has evolved so that it looks very similar to a completely different species that isn’t harmless (the model). A classic example of Batesian mimicry is the similar appearance of monarch butterfly and viceroy moths. Monarchs and viceroys are unrelated species that are both colored similarly — bright orange with black patterns. Their colorations are so similar, in fact, that the two species are virtually indistinguishable from one another.

The classic explanation for this phenomenon is that monarchs taste desirable. Because monarchs eat milkweed, a plant full of toxins, they become essentially inedible to butterflies. Researchers have documented butterflies vomiting within minutes of eating monarch butterflies. The birds then remember the experience and avoid brightly colored orange butterfly/moth species. Viceroys, although perfectly edible, avoid predation if they are colored bright orange because birds can't tell the difference."

This is what you get:

We have two types of amphorae here, which we are calling the ‘monarch’ type (type 1) and the ‘viceroy’ type (type 2).

This model simulates the evolution of monarchs and viceroys from distinguishable, differently colored types to indistinguishable mimics and models. At the simulation’s beginning there are 450 type 1s and type 2s distributed randomly across the world. The type 1s are all colored red, while the type 2s are all colored blue. They are also distinguishable (to the human observer only) by their shape: the letter “x” represents type 1s while the letter “o” represents type 2s. Seventy-five Greeks are also randomly distributed across the world.

When the model runs, the Greeks and amphorae move randomly across the world. When a Greek encounters an amphora it rejects the amphora, unless it has a memory that the amphora's color is "desirable." If a Greek consumes a monarch, it acquires a memory of the amphora's color as desirable.

As amphorae are consumed, they are regenerated. Each turn, every amphora must pass two "tests" in order to reproduce. The first test is based on how many amphorae of that species already exist in the world. The carrying capacity of the world for each species is 225. The chances of regenerating are smaller the closer to 225 each population gets. The second test is simply a random test to keep regeneration in check (set to a 4% chance in this model). When an amphora does regenerate it either creates an offspring identical to itself or it creates a mutant. Mutant offspring are the same species but have a random color between blue and red, but ending in five (e.g. color equals 15, 25, 35, 45, 55, 65, 75, 85, 95, 105). Both monarchs and viceroys have equal opportunities to regenerate mutants.

Greeks can remember up to MEMORY-SIZE desirable colors at a time. The default value is three. If a Greek has memories of three desirable colors and it encounters a monarch with a new desirable color, the Greek "forgets" its oldest memory and replaces it with the new one. Greeks also forget desirable colors after a certain amount of time.

And when we run the simulation? Well, we've decided that one kind of amphora is desirable, another kind is undesirable. The undesirable ones respond to (human) consumer pressure and change their color; over time they evolve to the same color. Obviously, we're talking as if the amphorae themselves have agency. But why not? (And see Gosden, ‘What do objects want?’) That's one interesting side effect of this deformation.

As I haven't changed the code so much as the labels, the original creator's conclusions still seem apt:

Initially, the Greeks don't have any memory, so both type 1 and type 2 are consumed equally. However, soon the Greeks "learn" that red is a desirable color and this protects most of the type 1s. As a result, the type 1 population makes a comeback toward carrying capacity while the type 2 population continues to decline. Notice also that as reproduction begins to replace consumed amphorae, some of the replacements are mutants and therefore randomly colored.

As the simulation progresses, Greeks continue to consume mostly amphorae that aren't red. Occasionally, of course, a Greek "forgets" that red is desirable, but a forgetful Greek is immediately reminded when it consumes another red type 1. For the unlucky type 1 that did the reminding, being red was no advantage, but every other red amphora is safe from that Greek for a while longer. Type 1 (non-red) mutants are therefore apt to be consumed. Notice that throughout the simulation the average color of type 1 continues to be very close to its original value of 15. A few mutant type 1s are always being born with random colors, but they never become dominant, as they and their offspring have a slim chance for survival.

Meanwhile, as the simulation continues, type 2s continue to be consumed, but as enough time passes, the chances are good that some type 2s will give birth to red mutants. These amphorae and their offspring are likely to survive longer because they resemble the red type 1s. With a mutation rate of 5%, it is likely that their offspring will be red too. Soon most of the type 2 population is red. With its protected coloration, the type 2 population will return to carrying capacity.

The swapping of words makes for some interesting juxtapositions. ‘Protects’, from ‘consumption’? This kind of playful swapping is where the true potential of agent-based modeling might lie, in its deformative capacity to make us look at our materials differently. Trying to simulate the past through ever more complicated models is a fool's errand. A digital archaeology that sat in the digital humanities would use our computational power to force us to look at the materials differently, to think about them playfully, and to explore what these sometimes jarring deformations could mean.

—–

Gosden, Chris. 2005. ‘What do objects want?’ Journal of Archaeological Method and Theory 12.3. DOI: 10.1007/s10816-005-6928-x

Ramsay, Stephen. 2011. Reading Machines: Toward an Algorithmic Criticism. University of Illinois Press.

Wilensky, U. (1997). NetLogo Mimicry model. http://ccl.northwestern.edu/netlogo/models/Mimicry. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.