Digital Humanities is Archaeology

Caution: pot stirring ahead

I’m coming up on my first sabbatical. It’s been six years since I first came to Carleton – terrified – to interview for a position in the history department, in this thing, ‘digital humanities’. The previous eight years had been hard: hustling for contracts, short-term jobs, precarious jobs, jobs that seemed a thousand miles away from what I had become expert in. I had had precisely one other academic interview prior to Carleton (six years earlier; I’d given up on academia by then). Those eight years taught me much (why I decided to give it one more kick at the can will be a post for another day).

The point of this morning’s reflection is to think about what I was doing back then that seemed appropriate enough to spin my job application around. At that time, it was agent-based modeling.

During those previous eight years, I had had one academic year as a postdoc at the University of Manitoba. The story of how I got that position is one for another day, but essentially, it mirrors the xkcd cartoon Scott linked to the other day. I said ‘fuck it’. And I wrote an application that said, in essence, I’ve got all these networks; I want to reanimate them; agent modeling might be the ticket. (If you’ve ever spent any time in the world of stamped brick studies, this is NOT what we do…). So I did. And that’s what I had in hand when I applied to Carleton.

‘Agent modeling is digital humanities’, I said. Given that nobody else had much idea what DH was/is/could be, it worked. I then spent the next six years learning how to be a digital humanist. Learning all about what the literary historians are doing, learning about corpus linguistics, learning about natural language processing, learning learning learning. I could program in NetLogo; I learned a bit of Python. I learned a bit of R. It seemed, for a long time, that my initial pitch to the department was wrong, though. DH didn’t do the agent modeling schtick. Or at least, nobody I saw who called themselves a ‘digital humanist’ did. Maybe some digital archaeologists did (and are they DH? and how does DA differ from the use of computation in archaeology?)

But. I think there’s a change in the air.

I think, maybe, the digital humanities are starting to come around to what I’ve been arguing, for over a decade, in my lonely little corners of the academy. Here’s some stuff I wrote in 2009, based on work I did in 2006, which was founded on archaeological work I did in 2001:

In any given social situation there are a number of behavioural options an individual may choose. The one chosen becomes “history,” the others become “counter-factual history.” As archaeologists, we find the traces of these individual decisions. In literature we read of Cicero’s decision to help his friend with a gift of money. What is the importance of the decision that Cicero did not make – the decision not to help his friend? How can we bridge the gap between the archaeological traces of an individual’s decision and the option he or she chose not to pursue, in order to understand the society that emerged from countless instances of individual decision-making? Compounding the problem is that the society that emerged influenced individual decision-making in a recursive, iterative fashion. The problem, simply stated, is one of facing up to complexity. A major tool for this problem is the agent-based simulation.

[…]

[A]gent-based modeling […] requires modellers to make explicit their assumptions about how the world operates (Epstein). This is the same argument made by Bogost for the video game: it is an argument in code, a rhetoric for a particular view of the world. As historians, we make our own models every day when we conceive how a particular event occurred. The key difference is that the assumptions underlying our descriptions are often implicit.

The rules that we used to encode the model are behaviours derived from archaeology, from the discovered traces of individual interactions and the historical literature. Once the rules for agents in this model and others are encoded, the modeller initiates the simulation and lets the agents interact over and over again. As they interact, larger-scale behaviours – an artificial society – begin to emerge. In using an ABM, our central purpose is to generate the macro by describing the micro.

[…] It is worth repeating that agent-based modelling forces us to formalise our thoughts about the phenomenon under consideration. There is no room for fuzzy thinking. We make the argument in code. Doing so allows us to experiment with past and present human agents in ways that could never be done in the real world. Some ABMs, for example, infect agents with a “disease” to determine how fast it spreads. An ABM allows us to connect individual interactions with globally emergent behaviours. It allows us to create data for statistical study that would be impossible to obtain from real-world phenomena.
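An aside, since ‘we make the argument in code’ invites an example: here’s a toy version of that disease-spreading ABM, in Python rather than NetLogo. Everything in it – the grid size, the infection chance, the random-walk rule – is my invention for illustration, a sketch of the micro-to-macro move rather than a real model.

```python
# A toy agent-based 'disease' model; every parameter here is invented.
# Micro rules: each agent random-walks a wrap-around grid, and infection
# passes between agents sharing a cell. The macro pattern (the epidemic
# curve printed each step) is emergent: nothing in the code describes it.
import random

GRID, AGENTS, STEPS, CHANCE = 20, 100, 50, 0.5

agents = [{'x': random.randrange(GRID),
           'y': random.randrange(GRID),
           'infected': i == 0}            # agent 0 is patient zero
          for i in range(AGENTS)]

for step in range(STEPS):
    for a in agents:                      # rule 1: wander one cell
        a['x'] = (a['x'] + random.choice([-1, 0, 1])) % GRID
        a['y'] = (a['y'] + random.choice([-1, 0, 1])) % GRID
    cells = {}                            # group agents by location
    for a in agents:
        cells.setdefault((a['x'], a['y']), []).append(a)
    for group in cells.values():          # rule 2: contagion on contact
        if any(a['infected'] for a in group):
            for a in group:
                if not a['infected'] and random.random() < CHANCE:
                    a['infected'] = True
    print(step, sum(a['infected'] for a in agents))
```

Run it and the printed counts trace out an epidemic curve nobody programmed directly – the macro generated by describing the micro.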

That’s a long quote; sorry. But.

Compare with what Sinclair & Rockwell write in their new book, Hermeneutica, pp. 41-42:

…we can say that computers force us to formalize what we know about texts and what we want to know. We have to formally represent a text – something which may seem easy, but which raises questions… Computing also forces us to write programs that formalize forms of analysis and ways of asking questions of a text. Finally, computing forces us to formalize how we want answers to our questions displayed for further reading and exploration. Formalization, not quantification, is the foundation of computer-assisted interpretation.

[…] In text analysis you make models, manipulate them, break them, and then talk about them. Counting things can be part of modeling, but is not an essential part of text analysis. Modeling is also part of the hermeneutical circle; there are formal models in the loop. […] thinking through modeling and formalization is itself a useful discipline that pushes you to understand your evidence differently, in greater depth, while challenging assumptions. We might learn the most when the computer model fails to answer our questions.

The act of modeling becomes a path disciplined by formalization, which frustrates notions of textual knowledge. When you fail at formalizing a claim, or when your model fails to answer questions, you learn something about what is demonstrably and quantifiably there. Formalizing enables interrogation. Others can engage with and interrogate your insights. Much humanities prose supports claims with quotations, providing an argument by association or with general statements about what is in the text – vagaries that cannot be tested by others except with more assertions and quotations. Formalization and modeling, by contrast, can be exposed openly in ways that provide new affordances for interaction between interpretations.

That’s a long quote; sorry. But.

Compare with what Piper writes in the inaugural issue of Cultural Analytics:

One of the key concepts operative in computational research that has so far been missing from traditional studies of culture is that of modeling. A model is a metonymical tool – a miniature that represents a larger whole. But it is also recursive in that it can be modified in relationship to its “fit,” how well it represents this whole. There is a great deal of literature on the role of modeling in knowledge creation and this should become core reading for anyone undertaking cultural analytics. The more we think about our methods as models the further we will move from the confident claims of empiricism to the contingent ones of representation. Under certain conditions, it is true that (i.e. replicable and stable)…

That’s not as long a quote. I’m getting better. But.

Compare with Underwood’s abstract for (and watch the video of) his talk on ‘Predicting the Past’:

We’re certainly comfortable searching and browsing [libraries], and we’re beginning to get used to the idea of mining patterns: we can visualise maps and networks and trends. On the other hand, interpreting the patterns we’ve discovered often remains a challenge. To address that problem, a number of literary scholars have begun to borrow methods of predictive modelling from social science. Instead of tracing a trend and then speculating about what it means, these scholars start with a specific question they want to understand — for instance, how firm is the boundary between fiction and biography? Or, how are men and women described differently in novels? The categories involved don’t have to be stable or binary. As long as you have sources of testimony that allow you to group texts, you can model the boundaries between the groups. Then you can test your models of the past by asking them to make blind predictions about unlabelled examples. Since the past already happened, the point of predicting it is not really to be right. Instead we trace the transformation of cultural categories by observing how our models work, and where they go wrong.
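One more aside, to make that move concrete. Below is my own toy illustration, not Underwood’s code; the snippets, labels, and parameters are all invented. The shape of it is the thing: group texts by some source of testimony, model the boundary, ask the model for blind predictions on held-out examples, then study the errors.

```python
# Toy 'predictive modelling' sketch (my invention, not Underwood's code;
# the snippets and labels below are made up). Train a classifier on texts
# that have been grouped by testimony, predict the held-out texts blind,
# and look at where the model errs: that is where the boundary between
# the groups gets interesting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["she dreamed of dragons beyond the western sea",
         "the invented kingdom fell in a single night",
         "his shadow spoke to him in riddles",
         "the ship sailed off the edge of the map",
         "he was born in 1812 in a small village",
         "she studied law and entered parliament in 1921",
         "after the war he returned to teach at the college",
         "her letters from 1887 describe the journey to Rome"]
labels = ["fiction"] * 4 + ["biography"] * 4

train_x, test_x, train_y, test_y = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=42)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_x, train_y)

for text, truth, guess in zip(test_x, test_y, model.predict(test_x)):
    marker = 'ok ' if truth == guess else 'ERR'   # the errors are the point
    print(marker, truth, '->', guess, ':', text)
```

The accuracy is the least interesting output; as Underwood says, the point of predicting the past is watching where the model’s learned boundary and our labels disagree.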

It feels like something is going on. It feels like there’s been a bit of a sea-change in what DH sees as its relationship to the wider world. I feel like there is an arc to my story now that makes sense: where this field is going fits squarely with where I myself have come from. What is ‘digital humanities’?

It might be that DH is really a branch of archaeology.

Postscriptum

Here’s a thought:

If DH is archaeology in its use of modeling as a core method, and given that modeling inherently builds its theoretical perspectives into its core operations, then the only appropriate way of writing DH must be in simulation. Games. Playful interactions.

Discuss.


BTW: There’s a rich literature in archaeology on modeling, on moving from the incomplete evidence to the rich stories we want to tell. All archaeological data is necessarily incomplete; it’s the foundational problem of archaeology. DH folks might want to give that literature a read. Recently, Ted Underwood posted on ‘the real problem with distant reading’ and the objections folk raise concerning the complexity of human life when considered computationally. Ted comes around to essentially a ‘screw that’ position, and writes,

It’s okay to simplify the world in order to investigate a specific question. That’s what smart qualitative scholars do themselves, when they’re not busy giving impractical advice to their quantitative friends. Max Weber and Hannah Arendt didn’t make an impact on their respective fields — or on public life — by adding the maximum amount of nuance to everything, so their models could represent every aspect of reality at once, and also function as self-operating napkins.

The problems that literary scholars are finding in presenting their models and approaches to their (non-computational) peers have their parallels in archaeological debates from the 70s onwards; I think they might find useful material in those debates. Again: DH is archaeology.