edit June 6 – following on from collaboration with Stu Eve, we’ve got a version of this at
I want to develop an app that makes it difficult to move through the historically ‘thick’ places – think Zombie Run, but with a lot of noise when you are in a place that is historically dense with information. I want to ‘visualize’ history, but not bother with the usual ‘augmented reality’ malarkey where we hold up a screen in front of our face. I want to hear the thickness, the discords, of history. I want to be arrested by the noise, and to stop still in my tracks, be forced to take my headphones off, and to really pay attention to my surroundings.
So here’s how that might work.
1. Find wikipedia articles about the place where you’re at. Happily, inkdroid.org has some code that does that, called ‘Ici’. Here’s the output from that for my office (on the Carleton campus):
2. I copied that page (so not the full wikipedia articles, just the opening bits displayed by Ici). Convert these wikipedia snippets into numbers. Let A=1, B=2, and so on. This site will do that:
3. Replace dashes with commas. Convert those numbers into music. Musical Algorithms is your friend for that. I used the default settings, though I sped it up to 220 beats per minute. Listen for yourself here. There are a lot of wikipedia articles about the places around here; presumably if I did this on, say, my home village, the resulting music would be much less complex, sparse, quiet, slow. So if we increased the granularity, you’d start to get an acoustic soundscape of quiet/loud, pleasant/harsh sounds as you moved through space – a cost surface, a slope. Would it push you from the noisy areas to the quiet? Would you discover places you hadn’t known about? Would the quiet places begin to fill up as people discovered them?
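The letter-to-number step (2 and 3 above) is trivial to script. A minimal sketch in Python, producing the comma-separated numbers in the form Musical Algorithms expects:

```python
# Steps 2 and 3: turn a text snippet into comma-separated numbers.
# A=1, B=2, ... Z=26; anything that isn't a letter is simply dropped.

def text_to_numbers(text):
    """Convert letters to their alphabet positions, ignoring other characters."""
    return [ord(c) - ord('a') + 1 for c in text.lower() if c.isalpha()]

snippet = "Carleton University"  # stand-in for a wikipedia snippet from Ici
numbers = text_to_numbers(snippet)
print(",".join(str(n) for n in numbers))
```

The same function could be run over each snippet Ici returns, giving one number sequence per article.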
Right now, each wikipedia article is played in succession. What I really need to do is feed the entirety of each article through the musical algorithm, and play them all at once. And I need a way to do all this automatically, and feed it to my smartphone. Maybe by building upon this tutorial from MIT’s App Inventor. Perhaps there’s someone out there who’d enjoy the challenge?
I mooted all this at the NCPH THATCamp last week – which prompted a great discussion about haptics, other ways of engaging the senses, for communicating public history. I hope to play at this over the summer, but it’s looking to be a very long summer of writing new courses, applying for tenure, y’know, stuff like that.
Edit April 26th – Stuart and I have been playing around with this idea this morning, and have been making some headway per his idea in the comments. Here’s a quick screengrab of it in action:
In July, I’m presenting work related to data mining an archaeological database, in this case, the Portable Antiquities Scheme.
I wondered, if I treated each district in the UK as a ‘document’, and the items recovered in its territory as the words, would I see any interesting or useful patterns if I ran some topic models?
To give you a sense of the scale of this data, there are over 160 000 individual records in the material I obtained from PAS. An individual record might include a ‘hoard’, so there are *well* over 160 000 individual objects. When you sort this material into broad chronological periods, you find:
Paleolithic: 305 records
Bronze Age: 2620
Early Medieval: 8421
Post Medieval: 27879
Blank cells: 1278
Quite a lot of material. So, after massaging the data, cleaning things up, I began to work with a very small subset of materials – records tagged ‘bronze age’ from 14 districts (104 records). This was merely an exploration, to see if there’s any meat to my intuitive belief that there should be some sort of latent structure. The 14 districts I selected (the first 14 when I sorted ‘Bronze Age’) are:
I put every record from Wokingham District into a single txt file, then every one from Winchester, and so on until I was done (and I really need to automate that). Then I fed the text files through MALLET, using the Java GUI for this initial exploration, with its default settings. (In a more robust exploration, I would go direct from the command line, tweaking until I found the best number of topics, etc.)
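That ‘automate that’ step could be a short script. A sketch, assuming the PAS export is a CSV with `district` and `description` columns (both names are my guesses, not the real field names in the download):

```python
import csv
from collections import defaultdict
from pathlib import Path

# Group PAS records into one text file per district, ready for MALLET,
# which can then ingest the whole directory with --import-dir.

def records_to_district_files(csv_path, out_dir):
    texts = defaultdict(list)
    with open(csv_path, newline='', encoding='utf-8') as f:
        for row in csv.DictReader(f):
            texts[row['district']].append(row['description'])
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for district, descriptions in texts.items():
        # one 'document' per district: all its record descriptions
        (out / f"{district}.txt").write_text("\n".join(descriptions), encoding='utf-8')
```
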
So here’s what I found.
List of Topics
1. alloy palstave mm copper green surface slight cast dark penannular
2. mouth sides loop dims looped corners armorican axeheads core cast
3. blade axehead prominent casting iron hoard intact uneven single narrow
4. age fragment late surfaces alan spear body faces head flanged
5. age socket collar sectioned alloy slightly ridge seams front square
6. record flint grey scraper antiquities dorsal tool angle black visible
7. bronze patina end stop made remains flat decoration found corroded
8. database central rectify working recording standards usual fall aware began
9. bronze copper flashes part side edge large ridges shallow top
10. socketed straight axe rounded complete horizontal moulded rectangular expanding upper
What do those topics mean? To a human, they are all variations on the description of the artefacts. Given that multiple humans described these artefacts in the first place, perhaps (and it depends too on the kind of guidance and rigour that the PAS uses in its data entry) these topics gather some of the blurriness of categorization, a way of bypassing the clumpers and the splitters amongst us. Obviously, some more thought about what these may mean is necessary. But onwards!
I brought the resultant ‘documents: topics, % contribution’ list into Gephi for some visualization. Since it was a small dataset, I did no pruning. Topic 4 does the most lifting in this network. In its ‘module’, you find topics 9, 10, 3, 5 (coloured purple) and the districts of Gravesham, Bromley, Dover, Canterbury, Test Valley, and New Forest. But how much weight does this visualization carry? Since it’s two-mode, and these metrics are really only appropriate for a one-mode graph, probably not much. So I collapsed this graph into a one-mode graph of district to district, with ties weighted by shared topics.
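The two-mode to one-mode collapse can be sketched with networkx’s bipartite projection; the edge list below is illustrative, not the real MALLET output:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Build a toy two-mode (district–topic) graph, then project it onto the
# districts: two districts are tied if they share a topic, with the tie
# weighted by the number of topics they share.
B = nx.Graph()
districts = ['Wokingham', 'Winchester', 'Dover']
topics = ['topic4', 'topic9']
B.add_nodes_from(districts, bipartite=0)
B.add_nodes_from(topics, bipartite=1)
B.add_edges_from([('Wokingham', 'topic4'), ('Winchester', 'topic4'),
                  ('Dover', 'topic4'), ('Dover', 'topic9')])

G = bipartite.weighted_projected_graph(B, districts)
print(G.edges(data=True))
```

The resulting one-mode graph can be exported and the modularity and centrality statistics run in Gephi as usual.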
The resultant graph is probably more useful for archaeology, for it ties areas together based on all of the material culture recorded in the database. At the recent SAA in Honolulu, in the Connected Past session, folks were constructing networks from artefacts using Brainerd Robinson coefficients. The methodology I’m trying ought to be compared with those studies (see for instance Barbara Mills et al.’s recent article). I then ran modularity and betweenness statistics again. Why betweenness? If the ‘topics’ that emerge in this database reflect something within the underlying material culture, then interconnections between sites constructed from topics show some kind of flow (of ideas? culture? economics?), thus ‘between’ sites straddle the most important of those flows – in which case the most ‘between’ districts might be rather more important.
Remarkably (and this could be an artefact of the method, rather than the underlying data), I get next to no variation in betweenness – every district except for East Hampshire, Ashford, and New Forest has the same score (and these three all have the same score too). Modularity finds two groups. Perhaps it’s an east/west dichotomy? I laid the network out with the nodes at their geographic locations (typically, the district council office). No east-west dichotomy. (Incidentally, you can now export to Google Earth, overlaying your network against pretty satellite pictures).
So… there seems to be something to it. The thing to do now is to do every record, every district, and every period, mapping out changes over time. In the interests of being able to assess this, though, I should perhaps stick to my knitting and just do the Roman period.
I’m presenting next week at the Society for American Archaeology Annual Meeting. I’m giving two papers. One argues for parsimonious models when we do agent based modeling. The other reverses the flow of archaeological network analysis and instead of finding nets in the archaeology, I use agent based models to generate networks that help me understand the archaeology. (The session is ‘Connected Past’.) Here is the draft of my talk, with all the usual caveats that that entails. Parts of it have been drawn from an unpublished piece that discusses this methodology and the results in much greater detail. It will appear…. eventually.
Scott Weingart has been an enormous help in all of this. You should follow his work.
My interests lie in the social networks surrounding primary resource extraction in the Roman world. The Roman epigraphy of stamped brick easily lends itself to network analysis. One can string together, like pearls, individual landowners, estate names, individual brick makers, signa, brick fabrics, and locations. This leads to very complicated, multi-dimensional networks.
When I first started working with this material, I reduced this complexity by looking only at the humans, whom I tied together based on appearing in the same stamp type together. I called these ‘producer’ networks. I then looked at the ties implied by the shared use of fabrics, or the co-location of brick stamp types at various findspots, and called these ‘manufacturing’ networks.
I then sliced these networks up by reigning dynasty, and developed a story to account for their changing shapes over time.
This was in the late 1990s, and in terms of network theorists I had largely only Granovetter, Hanneman & Riddle, and Strogatz & Watts to go on. The story I told was little more than a just-so story, like how the Camel got its Hump.
I had the shape, I had points where I could hang the story, but I couldn’t account for how I got from the shape of the network in the Julio-Claudian period, to that of the Flavian, to that of the Antonines. I’ve done a lot of work on networks since then; now I want to know what generates these networks that we see archaeologically, in the first place.
In this talk today, I want to reverse the direction of my inquiry. We are all agreed that we can find networks in our archaeological materials. The problem, I think, for us, is to explain the network processes that produce these patterns, and then to use our understanding of those processes to narrow down the possible entangled human & thing interactions that could give rise to these possible processes.
We need to be able to understand the possible behaviour-spaces that could produce the networks we see, to tease out the inevitable from the contingent. We need to be able to rigorously explore the emergent or unintended consequences of the stories we tell. The only way I know how to do that systematically, is to encode those stories as computer code, to turn them from normal, archaeological storytelling rhetoric, to computational procedural rhetoric.
So this is what we did.
One story we tell about the Roman world, that might be useful for understanding things like the exploitation of land for building materials, is that its social economy functioned like a ‘bazaar’.
According to Peter Bang, the Roman economic system is best understood as a complex, agrarian tributary empire, of a kind similar to the Ottoman or Mughal (Bang 2006; 2008). Bang (2006: 72-9) draws attention to the concept of the bazaar. The bazaar was a complete social system that incorporated the small peddler with larger merchants, long distance trade, with a smearing of categories of role and scale. The bazaar emerged from the interplay of instability and fragmentation. The mechanisms developed to cope with these reproduced that same instability and fragmentation. Bang identifies four key mechanisms that did this: small parcels of capital (to combat risk, cf Skydsgaard 1976); little homogenization of products (agricultural output and quality varied year by year, and region by region as Pliny discusses in Naturalis Historia 12 and 18); opportunism; and social networks (80-4). As Bang demonstrates, these characteristics correspond well with the archaeology of the Roman economy and the picture we know from legal and other texts.
Bang’s model of the bazaar (2008; 2006), and the role of social networks within that model, can be simulated computationally. What follows is a speculative attempt to do so, and should be couched in all appropriate caveats and warnings. The model simulates the extraction of various natural resources, where social connections may emerge between individuals as a consequence of the interplay of the environment, transaction costs, and the agent’s knowledge of the world. If the networks generated from the computational simulation of our models for the ancient economy correspond to those we see in the ancient evidence, we have a powerful tool for exploring antiquity, for playing with different ideas about how the ancient world worked (cf. Dibble 2006). Computation might be able to bridge our models and our evidence. In particular, I mean, ‘agent based modeling’.
Agent based modelling is an approach to simulation that focuses on the individual. In an agent based model, the agents or individuals are autonomous computing objects. They are their own programmes. They are allowed to interact within an environment (which frequently represents some real-world physical environment). Every agent has the same suite of variables but each agent’s individual combination of variables is unique (if it was a simulation of an ice-hockey game, every agent would have a ‘speed’ variable, and an ‘ability’ variable, and so the nature of every game would be unique). Agents can be aware of each other and the state of the world (or their location within it), depending on the needs of the simulation. It is a tool to simulate how we believe a particular phenomenon worked in the past. When we simulate, we are interrogating our own understandings and beliefs.
The model imagines a ‘world’ (‘gameboard’ would not be an inappropriate term) in which help is necessary to find and consume resources. The agents do not know when or where resources will appear or become exhausted. By accumulating resources, and ‘investing’ in improvements to make extraction easier, agents can accrue prestige. When agents get into ‘trouble’ (they run out of resources) they can examine their local area and become a ‘client’ of someone with more prestige than themselves. It is an exceedingly simple simulation, and a necessary simplification of Bang’s ‘Bazaar’ model, but one that captures the essence and exhibits subtle complexity in its results. The resulting networks can be imported into social network analysis software like Gephi.
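A deliberately tiny sketch of the model’s core loop, in Python rather than NetLogo: agents consume resources, accrue prestige, and when they run short they attach themselves as clients to the most prestigious agent they can ‘see’. All thresholds and probabilities here are placeholders, not the settings of the actual model:

```python
import random

class Agent:
    def __init__(self, ident):
        self.ident = ident
        self.resources = random.randint(5, 15)
        self.prestige = 0
        self.patron = None

def step(agents, vision=5):
    """One tick of the world; returns patron-client ties formed this tick."""
    ties = []
    for a in agents:
        a.resources -= 1                      # cost of living
        if random.random() < 0.3:             # found a new patch to exploit
            a.resources += 3
            a.prestige += 1
        if a.resources <= 0:                  # in trouble: look for a patron
            neighbours = random.sample(agents, min(vision, len(agents)))
            candidates = [n for n in neighbours if n.prestige > a.prestige]
            if candidates:
                a.patron = max(candidates, key=lambda n: n.prestige)
                a.resources += 2              # the patron's help
                ties.append((a.patron.ident, a.ident))
    return ties

agents = [Agent(i) for i in range(20)]
network = []
for _ in range(50):
    network.extend(step(agents))
print(len(network), "patron-client ties formed")
```

The accumulated edge list is exactly what gets exported and read into Gephi.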
It is always better to start with a simple simulation, even at the expense of fidelity to the phenomenon under consideration, on the grounds that it is easier to understand and interpret outputs. A simple model can always be made more complex when we understand what it is doing and why; a complex model is rather the inverse, its outcomes difficult to isolate and understand.
A criticism of computational simulation is that one only gets out of it what one puts in; that its results are tautological. This is to misunderstand what an agent based simulation does. In the model developed here, I put no information into the model about the ‘real world’, the archaeological information against which I measure the results. The model is meant to simulate my understanding of key elements of Bang’s formulation of the ‘Imperial Bazaar’. We measure whether or not this formulation is useful by matching its results against archaeological information which was never incorporated into the agents’ rules, procedures, or starting points. I never pre-specify the shape of the social networks that the agents will employ; rather, I allow them to generate their own social networks which I then measure against those known from archaeology. In this way, I start with the dynamic to produce static snapshots.
We sweep the ‘parameter space’ to understand how the simulation behaves; ie, the simulation is set to run multiple times with different variable settings. In this case, there are only two agent variables that we are interested in (having already pre-set the environment to reflect different kinds of resources), ‘transaction costs’ and ‘knowledge of the world’. Because we are ultimately interested in comparing the social networks produced by the model against a known network, the number of agents is set at 235, a number that reflects the networks known from archaeometric and epigraphic analysis of the South Etruria Collection of stamped Roman bricks (Graham 2006a).
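In sketch form, a parameter sweep is just nested iteration over the variable settings, with several replicate runs per combination; `run_model` here is a stand-in for the actual simulation run, not a real function:

```python
from itertools import product

def run_model(transaction_cost, vision, n_agents=235):
    """Placeholder for one simulation run; would return summary
    metrics such as (clustering coefficient, average path length)."""
    ...

# Illustrative settings only; the real sweep uses the model's own ranges.
transaction_costs = [0.1, 0.5, 1.0, 2.0]
visions = [1, 5, 10, 25]
replicates = 10

results = []
for cost, vision in product(transaction_costs, visions):
    for run in range(replicates):
        results.append((cost, vision, run, run_model(cost, vision)))
```

Behaviour that persists across replicates at a given setting is signal; behaviour that appears in only one run is noise.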
What is particularly exciting about this kind of approach, to my mind, is that if you disagree with it, with my assumptions, with my encoded representation of how we as archaeologists believed the ancient world to have worked, you can simply download the code, make your own changes, and see for yourself. If you are presented with the results of a simulation whose hood you cannot open, whose inner workings you cannot examine for yourself, you have no reason to believe those findings. Thus agent based modeling plays into open access issues as well.
So let us consider then some of the results of this model, this computational petri dish for generating social networks. For my archaeological networks, I looked at clustering coefficient and average path length as indicator metrics (key elements of Watts’ small world formulation). We can tentatively identify a small-world then as one with a short average path length and a strong clustering coefficient, compared to a randomly connected network with the same number of actors and connections. Watts suggests that a small-world exists when the path lengths are similar but the clustering coefficient is an order of magnitude greater than in the equivalent random network (Watts 1999: 114).
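Watts’ test can be sketched with networkx: compare a network’s clustering coefficient and average path length against a random graph with the same number of nodes and edges. The Watts–Strogatz graph at the end is just a stand-in for a network read in from the simulation output:

```python
import networkx as nx

def largest_component(G):
    # guard against disconnected graphs, where path length is undefined
    return G.subgraph(max(nx.connected_components(G), key=len))

def small_world_signature(G, seed=42):
    """Clustering and path length for G vs a size-matched random graph."""
    R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=seed)
    return {
        'clustering': nx.average_clustering(G),
        'clustering_random': nx.average_clustering(R),
        'path_length': nx.average_shortest_path_length(largest_component(G)),
        'path_length_random': nx.average_shortest_path_length(largest_component(R)),
    }

# a known small-world, for illustration
sig = small_world_signature(nx.watts_strogatz_graph(100, 6, 0.1, seed=1))
print(sig)
```

A small-world candidate shows similar path lengths but clustering roughly an order of magnitude above the random baseline.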
In Roman economic history, discussions of the degree of market integration within and across the regions of the Empire could usefully be recast as a discussion of small-worlds. If small-worlds could be identified in the archaeology (or emerge as a consequence of a simulation of the economy), then we would have a powerful tool for exploring flows of power, information, and materials. Perhaps Rome’s structural growth – or lack thereof – could be understood in terms of the degree to which the imperial economy resembles a small-world (cf the papers in Manning and Morris 2005)?
The networks generated from the study of brick stamps are of course a proxy indicator at best. Not everyone (presumably) who made brick stamped it. That said, there are some combinations of settings that produce results broadly similar to those observed in stamp networks, in terms of their internal structure and the average path length between any two agents.
One such mimics a world where transaction costs are significant (but not prohibitive), and knowledge of the world is limited. The clustering coefficient and average path length observed for stamped bricks during the second century fall within the range of results for multiple runs with these settings. In the simulation, the rate at which individuals linked together into a network suggests that there was a constant demand for help and support. The world described by the model doesn’t sound quite like the world of the second century, the height of Rome’s power, that we think we know, suggesting something isn’t quite right, in either the model or our understandings. But how much of the world did brickmakers actually know, remembering that ‘knowledge of the world’ in the model is here limited to the location of new resources to exploit?
Agent based modeling also allows us to explore the consequences of things that didn’t happen. There were a number of simulated worlds that did not produce any clustering at all (and very little social network growth). Most of those runs occurred when the resource being simulated was coppiced woodland. This would suggest that the nature of the resource is such that social networks do not need to emerge to any great degree (for the most part, they are all dyadic pairs, as small groups of agents exploit the same patch of land over and over again). The implication is that some kinds of resources do not need to be tied into social networks to any great degree in order for them to be exploited successfully (these were also some of the longest model runs, another indicator of stability).
What are some of the implications of computationally searching for the networks characteristic of the Roman economy-as-bazaar? If, despite its flaws, this model correctly encapsulates something of the way the Roman economy worked, we have an idea of, and the ability to explore, some of the circumstances that promoted economic stability. It depends on the nature of the resource and the interplay with the degree of transaction costs and the agents’ knowledge of the world. In some situations, ‘patronage’ (as instantiated in the model) serves as a system for enabling continual extraction; in other situations, patronage does not seem to be a factor.
However, with that said, none of the model runs produced networks that had the classical signals of a small-world. This is rather interesting. If we have correctly modeled the way patronage works in the Roman world, and patronage is the key to understanding Rome (cf Verboven 2002), we should have expected that small-worlds would naturally emerge. This suggests that something is missing from the model – or our thinking about patronage is incorrect. We can begin to explore the conundrum by examining the argument made in the code of the simulation, especially in the way agents search for patrons. In the model, it is a local search. There is no way of creating those occasionally long-distance ties. We had initially imagined that the differences in the individual agents’ ‘vision’ would allow some agents to have a greater ability to know more about the world and thus choose from a wider range. In practice, those with greater ‘vision’ were able to find the best patches of resources, indeed, the variability in the distribution of resources allowed these individuals to squat on what was locally best. My ‘competition’ and prestige mechanisms seem to have promoted a kind of path dependence. Perhaps we should have instead included something like a ‘salutatio’, a way for the agents to assess patrons’ fitness or change patrons (cf Graham 2009; Garnsey and Woolf 1989: 154; Drummond 1989: 101; Wallace-Hadrill 1989b: 72-3). Even when models fail, their failures still throw useful light. This failure of my model suggests that we should focus on markets and fairs as not just economic mechanisms, but as social mechanisms that allow individuals to make the long distance links. A subsequent iteration of the model will include just this.
This model will come into its own once there is more and better network data drawn from archaeological, epigraphic, historical sources. This will allow the refining of both the set-up of the model and comparanda for the results. The model presented here is a very simple model, with obvious faults and limitations. Nevertheless, it does have the virtue of forcing us to think about how patronage, resource extraction, and social networks intersected in the Roman economy. It produces output that can be directly measured against archaeological data, unlike most models of the Roman economy. When one finds fault with the model (since every model is a simplification), and with the assumptions coded therein, he or she is invited to download the model and to modify it to better reflect his or her understandings. In this way, we develop a laboratory, a petri-dish, to test our beliefs about the Roman economy. We offer this model in that spirit.
[edited April 4th to make it less clumsy, and to fit in the 15 minute time frame]
I am reading Ian Hodder’s book, ‘Entangled: An Archaeology of the Relationships between Humans and Things’. Hodder writes that the tanglegram cannot be represented as a network, since a network doesn’t consider the nature of the relationships or nodes. This is not in fact the case. Representing these complex relationships as a network is quite possible, and allows the ‘tanglegram’ to become an object to query in its own right, rather than a suggestive illustration. I’ve uploaded the network data to Figshare:
I used NodeXL to enter the data. If there was a bidirectional tie, I made two entries: A -> B, B -> A. If it was only one way, I entered it with the directionality of the original tanglegram. I saved it as a .net file, opened it in gephi, and ran gephi’s statistics.
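That data-entry convention translates directly into a directed graph. A sketch with networkx, using a few example nodes from the tanglegram rather than the full dataset:

```python
import networkx as nx

# A bidirectional tie becomes two directed edges (A -> B and B -> A);
# a one-way tie keeps the directionality of the original tanglegram.
G = nx.DiGraph()
bidirectional = [('house', 'plaster'), ('food', 'hearth')]
one_way = [('clay', 'oven'), ('sheep', 'dung')]

for a, b in bidirectional:
    G.add_edge(a, b)
    G.add_edge(b, a)
G.add_edges_from(one_way)

# the same statistic run in Gephi below
print(nx.betweenness_centrality(G))
```
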
This was all rather rough and ready; because I was working from a blown-up photocopy of the original figure, and I’m trying to get ready for a trip, there could be errors. One would need Hodder’s original data to do this properly, but I offer it up here to show that it’s possible, and indeed worthwhile: why else would you bother drawing a tanglegram, if not to use it to help your analysis?
In the image below, I resize the nodes to represent betweenness centrality (which elements of the tanglegram are doing the heavy lifting?) and recolour it according to modularity. Modularity finds five groups (nodes listed in descending order of betweenness centrality):
Group 0: house, groundstone, burial, plaster, figurines, pigment, skins, painting, personal artefacts, animal heads, food storage, human heads, special food, human body parts, burials, storage rooms, bins
Group 1: hoard, chipped stone, sheep, mats, dung, wild animals, fields, bone, cereals, wooden object, weeds.
Group 2: food, hearth, fuel, ash, clay balls, oven, traps, wood
Group 3: clay, baskets, extraction pits, wetland, reeds, birds, dryland, marl, ditches, fish, clean water, landscape, field, eggs
Group 4: midden, dogs, colluvium, mortar, pen, mudbrick
Seems quite suggestive! To get the files for yourself, please see:
Below is a draft of the first part of my talk for Scholarslab this week, at the University of Virginia. It needs to be whittled down, but I thought that those of you who can’t drop by on Thursday might enjoy this sneak peek.
Thursday, March 21 at 2:00pm
in Scholars’ Lab, 4th floor Alderman Library.
When I go to parties, people will ask me, ‘what do you do?’. I’ll say, I’m in the history department at Carleton. If they don’t walk away, sometimes they’ll follow that up with, ‘I love history! I always wanted to be an archaeologist!’, to which I’ll say, ‘So did I!’
My background is in Roman archaeology. Somewhere along the line, I became a ‘digital humanist’, so I am honoured to be here to speak with you today, here at the epicentre, where the digital humanities movement all began.
If the digital humanities were a zombie flick, somewhere in this room would be patient zero.
Somewhere along the line, I became interested in the fossilized traces of social networks that I could find in the archaeology. I became deeply interested – I’m still interested – in exploring those networks with social network analysis. But I became disenchanted with the whole affair, because all I could develop were static snapshots of the networks at different times. I couldn’t fill in the gaps. Worse, I couldn’t really explore what flowed over those networks, or how those networks intersected with broader social & physical environments.
It was this problem that got me interested in agent based modeling. At the time, I had just won a postdoc in Roman Archaeology at the University of Manitoba with Lea Stirling. When pressed about what I was actually doing, I would glibly respond, ‘Oh, just a bit of practical necromancy, raising the dead, you know how it is’. Lea would just laugh, and once said to me, ‘I have no idea what it is you’re doing, but it seems cool, so let’s see what happens next!’
How amazing to meet someone with the confidence to dance out on a limb like that!
But there was truth in that glib response. It really is a form of practical necromancy, and the connections with actual necromancy and technologies of death is a bit more profound than I first considered.
So today, let me take you through a bit of the deep history of divination, necromancy, and talking with the dead; then we’ll consider modern simulation technologies as a form of divination in the same mold; and then I’ll discuss how we can use this power for good instead of evil, of how it fits into the oft-quoted digital humanities ethos of ‘hacking as a way of knowing’ (which is rather like experimental archaeology, when you think about it), and how I’m able to generate a probabilistic historiography through this technique.
And like all good necromancers, it’s important to test things out on unwilling victims, so I would also like to thank the students of HIST3812 who’ve had all of the ideas road-tested on them earlier this term.
Zombies clearly fill a niche in modern western culture. The president of the University of Toronto recently spoke about ‘zombie ideas’ that despite our best efforts, persist, infect administrators, politicians, and students alike, trying to eat the brains of university education.
Zombies emerge in popular culture in times of angst, fear, and uncertainty. If Hollywood has taught us anything, it’s that zombies are bad news. Sometimes the zombies are formerly dead humans; sometimes they are humans who have been transformed. Sometimes we deliberately create a zombie. The zombie can be controlled, and made to do useful work; zombie as a kind of slavery. More often, the zombies break loose, or are the result of interfering with things humanity ought not to; apocalypse beckons. But sometimes, like ‘Fido’, a zombie can be useful, can be harnessed, and somehow, be more human than the humans. [Fido]
If you’d like to raise the dead yourself, the answer is always just a click away [ehow].
There are other uses for the restless dead. Before our current fixation with apocalypse, the restless dead could be useful for keeping the world from ending.
In video games, we call this ‘the problem space’ – what is it that a particular simulation or interaction is trying to achieve? For humanity, at a cosmological level, the response to that problem is through necromancy and divination.
I’m generalizing horribly, of course, and the anthropologists in the audience are probably gritting their teeth. Nevertheless, when we look at the deep history and archaeology of many peoples, a lot can be tied to this problem of keeping the world from ending. A solution to the problem was to converse with those who had gone before, those who were currently inhabiting another realm. Shamanism was one such response. The agony of shamanism ties well into subsequent elaborations such as the ball games of mesoamerica, or other ‘game’ like experiences. The ritualized agony of the athlete was one portal into recreating the cosmogonies and cosmologies of a people, thus keeping the world going.
The bull-leaping game at Knossos is perhaps one example of this, according to some commentators. Some have seen in the plan of the middle minoan phase of this palace (towards the end of the 2nd millennium BC) a replication in architecture of a broader cosmology, that its very layout reflects the way the Minoans saw the world (this is partly also because this plan seems to be replicated in other Minoan centres around the Aegean). Jeffrey Soles, pointing to the architectural play of light and shadow throughout the various levels of Knossos, argues that this maze-like structure was all part of the ecstatic journey, and ties shamanism directly to the agonies of sport & game in this location. We don’t have the Minoans’ own stories, of course, but we do have these frescoes of bull-leaping, and other paraphernalia which tie in nicely with the later dark-age myths of Greece.
So I’m making a connection here between the way a people see the world working, and their games & rituals. I’m arguing that the deep history of games is a simulation of how the world works.
This carries through to more recent periods as well. Herodotus wrote about the coming of the Etruscans to Italy: “In the reign of Atys son of Menes there was a great scarcity of food in all Lydia. For a while the Lydians bore this with patience; but soon, when the famine continued, they looked for remedies, and various plans were suggested. It was then that they invented the games of dice, knucklebones, and ball, and all the other games of pastime, except for checkers, which the Lydians do not claim to have invented. Then, using their discovery to forget all about the famine, they would play every other day, all day, so that they would not have to eat… This was their way of life for eighteen years. Since the famine still did not end, however, but grew worse, the king at last divided the people into two groups and made them draw lots, so that one should stay and the other leave the country.”
Here I think Herodotus misses the import of the games: not as a pastime, but as a way of trying to control, predict, solve, or otherwise intercede with the divine, to resolve the famine. In later Etruscan and Roman society, gladiatorial games, for instance, were not about entertainment but rather about cleansing society of disruptive elements, about bringing everything into balance again – hence the elaborate theatre of death that developed.
The specialist never disappears though, the one who has that special connection with the other side and intercedes for broader society as it navigates that original problem space. These were the magicians and priests. But there is an important distinction here. The priest is passive in reading signs, portents, and omens. Religion is revealed, at its proper time and place, through proper observation of the rituals. The magician is active – he (and she) compels the numinous to reveal itself, the spirits are dragged into this realm; it is the magician’s skill and knowledge which causes the future to unfurl before her eye.
The priest was holy, the magician was unholy.
Straddling this divide is the Oracle. The oracle has both elements of revelation and compulsion. Any decent oracle worth its salt would not give a straight-up answer, either, but rather required layers of revelation and interpretation. At Delphi, the God spoke to the Pythia, the priestess, who sat on the stool over the crack in the earth. When the god spoke, the fumes from below would overcome her, causing her to babble and writhe uncontrollably. Priests would then ‘interpret’ the prophecy, in the form of a riddle.
Why riddles? Riddles are ancient. They appear on cuneiform texts. Even Gollum knew what a true riddle should look like – a kind of lyric poem asking a question that guards the right answer in hints and wordplay.
‘I tremble at each breath of air / And yet can heaviest burdens bear.’ [the implicit question being asked is ‘who am I?’ – water]
We could not get away from a discussion of riddles in the digital humanities without of course mentioning the I-ching. It’s a collection of texts that, depending on dice throws, get combined and read in particular ways. Because this is essentially a number of yes-or-no answers, the book can be easily coded onto a computer or represented mechanically. In which case, it’s not really a ‘book’ at all, but a machine for producing riddles.
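The mechanics really are that simple to mechanize. Here is a minimal sketch of the traditional three-coin casting method (a toy illustration of the point, not a reconstruction of any particular historical procedure):

```python
import random

def cast_line():
    """Three-coin toss: heads counts 3, tails counts 2.
    The sum (6 through 9) gives one line of the hexagram."""
    return sum(random.choice((2, 3)) for _ in range(3))

def cast_hexagram():
    """Six casts, traditionally built from the bottom line up.
    6 and 8 are broken (yin) lines; 7 and 9 are solid (yang) lines."""
    return [cast_line() for _ in range(6)]

hexagram = cast_hexagram()
print(hexagram)  # e.g. [7, 8, 6, 7, 9, 8]
```

Sixty-four possible hexagrams, each pointing at a passage of text: a machine for producing riddles indeed.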
Ruth Wehlau writes, “Riddlers, like poets, imitate God by creating their own cosmos; they recreate through words, making familiar objects into something completely new, rearranging the parts of pieces of things to produce creatures with strange combinations of arms, legs, eyes and mouths. In this transformed world, a distorted mirror of the real world, the riddler is in control, but the reader has the ability to break the code and solve the mystery” (Wehlau 1997).
Riddles & divination are related, and are dangerous. But they also create a simulation, of how the world can come to be, of how it can be controlled.
One can almost see the impetus for necromancy, when living in a world described by riddles. Saul visits the Witch of Endor; Odysseus goes straight to the source.
…and Professor Hix prefers the term ‘post mortem communications’. However you spin it, though, the element of compulsion, of speaking with the dead, marks it out as a transgression; necromancers and those who seek their aid never end well.
It remains true today that those who practice simulation are similarly held in dubious regard. If that were not the case, tongue-in-cheek article titles such as this would not be necessary.
I am making the argument that modern computational simulation, especially in the humanities, is more akin to necromancy than it is to divination, for all of these reasons.
But it’s also the fact that we do our simulation through computation itself that marks this out as a kind of necromancy.
The history of the modern digital computer is tied up with the need to accurately simulate the yields of atomic bombs, of blast zones and potential fallout, of death and war. Modern technoculture has its roots in the need to accurately model the outcome of nuclear war – an inversion of the age-old problem space, ‘how can we keep the world from ending’, through the doctrines of mutually assured destruction.
The playfulness of those scientists, and the acceleration of hardware technology, led to video games, but that’s a talk for another day (and indeed, has been recently well treated by Rob MacDougall of Western University).
‘But wait! Are you implying that you can simulate humans just as you could individual bits of uranium and atoms, and so on, like the nuclear physicists?’ No, I’m not saying that, but it’s not for nothing that Isaac Asimov gave the world Hari Seldon & the idea of ‘psychohistory’ in the 1950s. As Wikipedia so ably puts it, “Psychohistory is a fictional science in Isaac Asimov’s Foundation universe which combines history, sociology, etc., and mathematical statistics to make general predictions about the future behavior of very large groups of people, such as the Galactic Empire.”
Even if you could do Seldon’s psychohistorical approach, it’s predicated on a population of an entire galaxy. One planetfull, or one empire-full, or one region-full, of people just isn’t enough. Remember, this is a talk on ‘practical’ necromancy, not science-fiction.
Well, what about so-called ‘cliodynamics’? Cliodynamics looks for recurring patterns in aggregate statistics of human culture. It may well find such patterns, but it doesn’t really have anything to say about ‘why’ such patterns might emerge. Both psychohistory and cliodynamics are concerned with large aggregates of people. As an archaeologist, all I ever find are the traces of individuals, of individual decisions in the past. It always requires some sort of leap to jump from these individual traces to something larger like ‘the group’ or ‘the state’. A Roman aqueduct is, at base, still the result of many individual actions.
A practical necromancy therefore is a simulation of the individual.
There are many objections to simulation of human beings, rather than things like atoms, nuclear bombs, or the weather. Our simulations can only do what we program them to do. So they are only simulations of how we believe the world works (ah! Cosmology!). In some cases, like weather, our beliefs and reality match quite well, at least for a few days, and we know much about how the variables intersect. But, as complexity theory tells us, starting conditions strongly affect how things transpire. Therefore we forecast from multiple runs with slightly different starting conditions. That’s what a 10% chance of rain really means: We ran the simulation 100 times, and in 10 of them, rain emerged.
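That kind of forecast is easy to sketch in code. Here is a toy version (the ‘weather model’ below is entirely invented for illustration; the point is the hundred runs from slightly perturbed starting conditions, not the meteorology):

```python
import random

def toy_weather(humidity):
    """A toy model: evolve 'humidity' over 24 hours with unmodelled noise.
    It 'rains' if humidity crosses an arbitrary threshold."""
    for _ in range(24):
        humidity += random.uniform(-0.05, 0.05)
    return humidity > 0.8

# Forecast by ensemble: 100 runs, each starting from a slightly
# different initial condition, because we never know it exactly.
runs = 100
rainy = sum(toy_weather(random.uniform(0.70, 0.72)) for _ in range(runs))
print(f"{rainy}% chance of rain")
```

The single number reported to the public hides an entire distribution of simulated outcomes.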
And humans are a whole lot more complex than the water cycle. In the case of humans, we don’t know all the variables; we don’t know how free will works; we don’t know how a given individual will react; we don’t understand how individuals and society influence each other. We do have theories though.
This isn’t a bug, it’s a feature. The direction of simulation is misplaced. We cannot really simulate the future, except in extremely circumscribed situations, such as pedestrian flow. So let us not simulate the future, as humanists. Let us create some zombies, and see how they interact. Let our zombies represent individuals in the past. Give these zombies rules for interacting that represent our best beliefs, our best stories, of how some aspect of the past worked. Let them interact. The resulting range of possible outcomes becomes a kind of probabilistic historiography. We end up with not just a story about the past, but also about other possible pasts that could have happened – if the story we are telling about how individuals in the past acted is true, for a given value of true.
We create simulacra, zombies, empty husks representing past actors. We give them rules to be interpreted given local conditions. We set them in motion from various starting positions. We watch what emerges, and thus can sweep the entire behavior space, the entire realm of possible outcomes given this understanding. We map what did occur (as best as we understand it) against the predictions of the model. For the archaeologist, for the historian, the strength of agent based modeling is that it allows us to explore the unintended consequences inherent in the stories we tell about the past. This isn’t easy. But it can be done. And compared to actually raising the dead, it is indeed practical.
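A minimal sketch of what such a sweep looks like in code (the agents and their single rule here are invented for illustration; a real model would have far richer rules and an environment to interpret them in):

```python
import random

def run_model(n_agents=50, steps=200, seed=None):
    """A toy agent-based model: each agent holds an 'allegiance' (0 or 1).
    Rule: each step, one random agent adopts the allegiance of another.
    Returns the fraction holding allegiance 1 at the end of the run."""
    rng = random.Random(seed)
    agents = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        agents[i] = agents[j]
    return sum(agents) / n_agents

# Sweep the behaviour space: the same rules, many starting positions.
outcomes = [run_model(seed=s) for s in range(100)]
print(min(outcomes), max(outcomes))  # the range of possible pasts, not one prediction
```

The interesting object is not any single run but the whole distribution of `outcomes` – the space of pasts consistent with the story encoded in the rule.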
[and here begins part II, which runs through some of my published ABMS, what they do, why they do it. All of this has to fit within an hour, so I need to do some trimming.]
[Postscriptum, March 23: the image of the book of random digits came from Mark Sample's 'An Account of Randomness in Literary Computing', and was meant to remind me to talk about some of the things Mark brought up. As it happens, I didn't do that when I presented the other day, but you really should go read his post.]
This week my HIST2809 students are encountering digital history, as part of their ‘Historian’s Craft’ class (an introduction to various tools & methods). As part of the upcoming assignment, I’m having them run some history websites through Voyant, as a way of sussing out how these websites craft a particular historical consciousness. Each week, there’s a two-hour lecture and one hour of tutorial where the students lead discussions given the lecture & assigned readings. For this week, I want the students to explore different flavours of Digital History – here are the readings:
- “Interchange: The Promise of Digital History” The Journal of American History. 95 (2008).
- van Dijck, J. (2010). Search engines and the production of academic knowledge. International Journal of Cultural Studies, 13(6), 574 -592. doi:10.1177/1367877910376582
- Kirschenbaum, Matthew. “The Remaking of Reading: Text Mining and the Digital Humanities”
- Graham, Shawn and Rob Blades Mining the Open Web with ‘Looted Heritage’.
- Kee, K., S. Graham, et al, “Towards a Theory of Good History through Good Gaming”. Canadian Historical Review 90. No.2 (2009): 303-326.
“Possible discussion questions: How is digital history different? In ten years, will there still be something called ‘digital history’, or will all history be digital? Is there space for writing history through games or simulations? How should historians cope with that? What kinds of logical fallacies would such approaches be open to?”
To help the TAs bring the students up to speed with using Voyant, I’ve suggested to them that they might find it fun/interesting/useful/annoying to run one of those papers through Voyant. Here’s a link to the ‘Interchange’ article, loaded into Voyant:
The TAs could put that up on the screen, click on various words in the word cloud, to see how the word is used over the course of a single article (though in this case, there are several academics speaking, so the patterns are in part author-related). Click on ‘scholarship’ in the word cloud, and you get a graph of its usage on the right – the highest point is clickable (‘segment six’). Click on that, and the relevant bit of text appears in the middle, as Bill Turkel talks about the extent to which historical scholarship should be free. On the bottom left, if you click on ‘words in the entire corpus’, you can select ‘access’ and ‘scholarship’, which will put both of them on the graph
and you’ll see that the two words move in perfect tandem, so the discussion in here is all about digital tools opening access to scholarship – except in segment 8. The question would then become, why?
….so by doing this exercise, the students should get a sense of how looking at macroscopic patterns involves jumping back to the close reading we’re normally familiar with, then back out again, in an iterative process, generating new questions all along the way. An hour is a short period of time, really, but I think this would be a valuable exercise.
(I have of course made screen capture videos walking the students through the various knobs and dials of Voyant. This is a required course here at Carleton. 95 students are enrolled. 35 come to every lecture. Approximately 50 come to the tutorials. Roughly half the class never comes… in protest that it’s a requirement? apathy? thinking they know how to write an essay, so what could I possibly teach them? That’s a question for another day, but I’m fairly certain that the next assignment, as it requires careful use of Voyant, is going to be a helluva surprise for that fraction.)
At my university, we’ve been asked to consider discipline-specific language for new tenure & promotion guidelines. I’ve been writing a response to our chair, and I thought, in keeping with how I regard this problem, it would be a good idea to share these thoughts.
The 1.4 edition of the Journal of Digital Humanities wrestles with the problem of evaluating digital scholarship for tenure http://journalofdigitalhumanities.org/volumes/ (or download as pdf: http://journalofdigitalhumanities.org/files/jdh_1_4.pdf )
Moving Goalposts & Scholarship as Processes
As far as discipline-specific guidelines are concerned, the problem from my perspective is that the goalposts are always going to be shifting. What was fairly technically demanding becomes easier with time, and so the focus shifts from ‘can we do x’ to ‘what are the implications of x for y’ – or, as Bethany Nowviskie put it, a shift like that from the 18th-century ‘Lunaticks‘ who laid the groundwork for 19th-century science and industrialization. Another problem is that in digital work, the lone scholar is very much the outlier. To achieve anything worthwhile takes a team – and who gets to be first author does not necessarily reflect the way the work was divvied up or undertaken. We should resist trying to shoehorn digital work into boxes meant for a different medium. Nowviskie writes,
“The danger here … is that T&P committees faced with the work of a digital humanities scholar will instigate a search for print equivalencies — aiming to map every project that is presented to them, to some other completed, unary and generally privately-created object (like an article, an edition, or a monograph). That mapping would be hard enough in cases where it is actually appropriate “
She goes on to say,
“…the new responsibility of tenure and promotion committees [is] to assess quality in digital humanities work — not in terms of product or output — but as embodied in an evolving and continuous series of transformative processes.”
This was the gist of Bill Turkel’s address to the Underhill Graduate Students Colloquium on ‘doing history in real time’ – that the unique value, in an increasingly digital world, of formal academic knowledge is not about things per se, but rather about method. You can look up any fact in the world in seconds. But learning how to think, how to query, how to judge between competing stories – that’s what we bring. That then is the problem for assessing digital work as part of tenure and promotion: how does this work change the process?
That suggests a hierarchy too, of importance. Merely putting things online, while important, is not necessarily transformative unless that kind of material has never been digitized before. Then the conversation also becomes about how that work was done, the decisions made, the relationship between the digital object and the physical one. I have a student working on a project, for instance, to put together an online exhibition related to Black History in Canada. This is important, but the exhibition itself is not transformative. The real scholarship, the real transformation, happens when she starts exploring those materials through text analysis, putting a macroscopic lens on the whole corpus of materials that she has collected.
Digital Work is Public Work
The other important point about process is that digital work almost always (99.9 times out of 100; my early agent modeling work had no internet presence, for instance) has a public, outward-looking face. Platforms like blogs allow for public engagement with our work – so digital work is a kind of public humanities. The structure of the internet – of how its algorithms find and construct knowledge and serve it up to us via Google – is such that work that is valuable and of interest creates a bigger noise, in a positive feedback loop. The best digital work is done in public. ‘Public’ should not be a dirty word along the lines of ‘popular’. The internet looks different to each person who goes online (our algorithms make sure that each person sees a personalized internet, because that’s how one makes money online), so hits on a blog post are not random, meaningless clicks but rather an engagement with a broader community. As far as academic blogging goes, that broader community is other academics and students. Print journals & peer-reviewed articles are just one way of engaging with our chosen communities. With post-publication models of peer review like Digital Humanities Now and the Journal of Digital Humanities (models that are making inroads in other disciplines), we should treat these on an equal footing with the more familiar models. I’d argue that post-publication peer review is a greater indicator of significance and value than the regular two-blind-reviewers-into-print model.
I’d like to see language, then, that regards digital work, or work in media other than print, as being on an equal footing with the more familiar forms – that is, as things that do not have equivalencies to what we traditionally expect and thus must be taken on their own terms. I appreciate that, for the time being, I’m pretty much the only person in this department that any of this might apply to. I would hate to see my work on topic modeling, though, get considered as ‘service’. Figuring out how to apply natural language processing to vast corpora of historical materials, figuring out the ways the code forces particular worldviews and hides others, and writing all of this up as a ‘how-to’ guide is indeed research. It’s akin to figuring out how gene-sequencing works, its limitations, etc., which needs to be well understood before a biologist can use it to link modern humans to Neanderthals. We understand both of those activities as research in biology; but if the analogous pair were working out the limits and potentials of topic modeling on the one hand, and using it to study discourses in 18th-century political thought on the other, we’d only count the second as research. I bring this up because of Sean Takats’ experience at George Mason:
Project Management & Project Outputs
In that particular case, Takats was also managing major development projects to develop various tools and approaches. He writes,
“I want to focus on the committee’s disregard for project management, because it’s here I think that we find evidence of a much broader communication breakdown between DH and just-H, despite the best efforts to develop reasonable standards for evaluating digital scholarship. Although the committee’s letter effectively excludes “project management” from consideration as research, I would argue that it’s actually the cornerstone of all successful research. It’s project management that transforms a dissertation prospectus into a thesis, and it’s certainly project management that shepherds a monograph from proposal to published book. Fellow humanists, I have some news for you: you’re all project managers, even if you only direct a staff of one.”
Which leads me to my next point. Digital work creates all sorts of outputs that are of use at many different stages to other researchers. These outputs should be considered valuable publications in their own right. An agent-based simulation of emergent social structures in the early Iron Age makes an argument in code about how that world worked. If I publish a discussion of the results of such a model, that is fine; but if I don’t make the code available for someone else to critique, extend, or transform, I am being academically dishonest. The time it takes to build a model that works, that is valid, that simulates something important – and the process it takes to build such a model – is considerable. The data that such a model produces is valuable for others looking to re-build a model of the same phenomena in another platform (which is crucial to validating the truth-content of models). All of these sorts of outputs can be made available online in digital archives built for the purpose of long-term storage. The number of times such models are downloaded or discussed online can often be measured; these measures should also be taken into account as a kind of citation (see
Experimentation and Risk Taking
Finally, I think that work that is experimental, that discusses what didn’t work, should be recognized and celebrated. Todd Presner writes,
” Digital projects in the Humanities, Social Sciences, and Arts share with experimental practices in the Sciences a willingness to be open about iteration and negative results. As such, experimentation and trial-and-error are inherent parts of digital research and must be recognized to carry risk. The processes of experimentation can be documented and prove to be essential in the long-term development process of an idea or project. White papers, sets of best practices, new design environments, and publications can result from such projects and these should be considered in the review process. Experimentation and risk-taking in scholarship represent the best of what the university, in all its many disciplines, has to offer society. To treat scholarship that takes on risk and the challenge of experimentation as an activity of secondary (or no) value for promotion and advancement, can only serve to reduce innovation, reward mediocrity, and retard the development of research.”
One of my blog posts, ‘How I Lost the Crowd’, discusses how one of my projects got hacked. That piece was read by some 400 people shortly after it was posted – and it later found its way into various digital history syllabi (for instance here). This post has been read over 700 times in the past 10 months. Failing in public is where research and teaching are the same side of the same coin (he said, to mangle a metaphor).
So what should one look for?
Work that is transformative; where multi-authored work is valued as much as the single-author opus; work that is outward-facing and is recognized by others through linking, reposting, and sharing (and other so-called ‘alt-metrics’; cf. for one attempt to pull these all together); data-as-publication; code-as-publication; experimentation and risk-taking and open discussion of what does and what does not work; software development & project management recognized as research; and any work that lays the groundwork for others to see further – the humble ‘how to’ (our lunatick moment; see for instance
For explicit guidelines on how to evaluate digital work, see Rockwell,
Considering any digital work, Rockwell suggests the following questions:
- Is it accessible to the community of study?
- Did the creator get competitive funding? Have they tried to apply?
- Have there been any expert consultations? Has this been shown to others for expert opinion?
- Has the work been reviewed? Can it be submitted for peer review? (things like Digital Humanities Now, & JDH are crucial here)
- Has the work been presented at conferences?
- Have papers or reports about the project been published? (whether online or print, born-digital or otherwise is not the issue here)
- Do others link to it? Does it link out well?
- If it is an instructional project, has it been assessed appropriately?
- Is there a deposit plan? Will it be accessible over the longer term? Will the library take it?
I’m not saying that we should build this checklist into any tenure and promotion language; rather, I’m offering it here to suggest that any such language, if it broadly considers such things, will probably be ok – in the hopes of finding an acceptable middle ground between the box-tickers and the non-box-tickers. Rockwell offers some best practices for carrying out digital work that speak to these questions:
- Appropriate content (What was digitized?)
- Digitization to archival standards (Are images saved to museum or archival standards?)
- Encoding (Does it use appropriate markup like XML or follow TEI guidelines?)
- Enrichment (Has the data been annotated, linked, and structured appropriately?)
- Technical Design (Is the delivery system robust, appropriate, and documented?)
- Interface Design and Usability (Is it designed to take advantage of the medium? Has the interface been assessed? Has it been tested? Is it accessible to its intended audience?)
- Online Publishing (Is it published from a reliable provider? Is it published under a digital imprint?)
- Demonstration (Has it been shown to others?)
- Linking (Does it connect well with other projects?)
- Learning (Is it used in a course? Does it support pedagogical objectives? Has it been assessed?)
This is of course a thinking-out-loud exercise, and will no doubt change. Thoughts?
I’m addressing the Underhill Graduate Students’ Colloquium tomorrow, here in the history department at Carleton U. Below are my slides for ‘Living the Life Electric: On Becoming a Digital Humanist’
update March 7: here are my speaking notes. These give a rough sense of what I intend to talk about at various points. Bolded titles are the titles of slides. Not every slide is listed, as some speak more or less for themselves.
I wanted to be an archaeologist - I graduated in 2002.
‘Digital Humanities’ wasn’t coined until 2004.
It emerges from ‘humanities computing’, which has been around since the 1940s.
In fact, computing wouldn’t be the way it is today without the Humanities, and the Jesuit, Father Busa.
Eastern Canada’s Only Stamped Brick Specialist -Roman archaeology
Eastern Canada’s only Stamped Brick Specialist, probably
….things were pretty lean in 2003…
Life from a suitcase
Comin’ Home Again
Youth development grant to study cultural heritage of my home township
Also a small teaching excavation based in Shawville
Which led to a teaching gig at the local high school.
A Year of Living Secondarily
What was it about my academic work that I really enjoyed?
Possibilities of Simulation
Random Chances and the virtues of ‘What the Hell’
Meanwhile, I enter business – 3 different startups, one of which has survived (so far!)
Heritage education – learned how to install my own software, LMS
Trying to monetize the information I uncovered in my cultural heritage study
Coronation Hall Cider Mills
What are the digital humanities – think about it: modern computers were developed in order to allow us to map and forecast the consequences of massive annihilation and death. Simulation is rooted in the desire to predict future death counts. My interest emerged from trying to simulate my own understandings of the past, to understand the unintended consequences of my understandings, to put some sort of order on the necessarily incomplete materials I was looking at. I call it ‘practical necromancy’.
Do your work in public – the blog was originally intended to chronicle my work on simulation, but it has become very much the driver of my online identity, the calling card that others see when they intersect my work – and because it’s been up for so long, with a sustained focus, it creates a very strong signal which our algorithms – Google – pick up. This is how academics can push the public discourse: interfere with the world’s extended mind, their entangled consciousness of cyberspace & meatspace.
Allows you to develop your ideas
Forces you to write in small chunks
Exposes your work to potential audiences
My blog posts have been cited in others’ academic monographs
Has improved the readership of my published work
A quarter million page reads over the last six years.
My book: maybe 40 copies, if I’m lucky.
Basic Word Counts
digital: 1082; research: 650; university: 577; experience: 499; library: 393; humanities: 386
History: 177 times
Broadly, not useful or surprising. But consider the structure of word use…
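Counts like these take only a few lines to produce. A sketch, using a made-up snippet of advert text (the text and counts here are hypothetical, not my actual corpus):

```python
from collections import Counter
import re

# Hypothetical stand-in for the scraped job adverts
text = """The digital humanities research centre invites applications...
digital research experience required; university library experience an asset."""

# Lowercase, pull out alphabetic word tokens, tally them
words = re.findall(r"[a-z]+", text.lower())
counts = Counter(words)
print(counts.most_common(5))
```

The interesting work begins only after this step, when you ask how the words pattern together.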
Group 1: gives you a sense of technical skills, but for the most part not the kinds of analyses that one would use them for. That’s an important distinction. The analysis should drive the skill set, not the other way around (to a man with a hammer, everything looks like a nail).
Group 2: European centres!
Group 3: Canada!
Job adverts – to – topics. Six broad groups based on how the adverts share particular discourses. Gives a sense of where academic departments think this field is going. If I’d done this according to individual researcher’s blogs, or the ‘about’ pages for different centres, you’d get a very different picture – game studies, for instance.
Important point: I wanted to show you how you can begin to approach large masses of material and extract insights – suss out the underlying structures of ideas. This is going to be big in the future, as more and more data about our every waking moment gets recorded. Google Glass? It’s not about the user: it’s about everything the user sees, which’ll get recorded in the googleplex. Governments. Marketers. University administrations. Learn to extract signals from this noise, and you’ll never go hungry again.
Keep in mind that in 1994 I wrote that the internet would never be useful for academics. My ability to predict the future is thus suspect.
So how to join this brave new world? Twitter, etc.
In this case, the two-mode network of jobs to top constituent topics provides much more clarity than the graph I posted at the end of part 2, the one-mode jobs-to-jobs network via shared topics. I used the Java GUI for MALLET, which arranges the output in a very nice hyperlinked folder, which you may explore here. You can grab the CSV and the Gephi files from this directory.
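For anyone wanting to rebuild such a two-mode network themselves, the edge list is straightforward to assemble from a jobs-to-topics table (the CSV below is a hypothetical, simplified stand-in for MALLET’s actual doc-topics output, with invented job titles and topic labels):

```python
import csv
import io

# Hypothetical simplified doc-topics output: one row per (job, topic) pair,
# with the topic's proportion in that advert.
data = """job,topic,proportion
Lecturer in DH,teaching,0.41
DH Centre Director,funding,0.38
Lecturer in DH,funding,0.22
"""

# Each row becomes a weighted edge between a job node and a topic node;
# because jobs only ever connect to topics, the network is two-mode.
edges = []
for row in csv.DictReader(io.StringIO(data)):
    edges.append((row["job"], row["topic"], float(row["proportion"])))

print(edges)
```

An edge list in this shape can be loaded directly into Gephi as a weighted bipartite graph.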