Professor Graham’s Museum of Eldritch Antiquities

The more I study computational creativity, the more I’ve come to the conclusion that whether something is perceived as creative or not says more about us as humans than about our algorithms. Or, to introduce another idiom: creativity is in the eye of the beholder.


At night, the statues would come alive at Professor Graham’s Museum of Eldritch Antiquities. “Come – come!” he gestures. You enter a room; the darkness in the air shimmers…

As you walk along, you become aware of a faint clicking noise. “Now, in this room – please don’t get too close to the glass, ha ha, it’s only one pane thick – we have some lovely Greek kraters…”
His voice drops to a whisper. “Best to just keep on walking past the next gallery. Don’t make eye contact, whatever you do.”
It turns a baleful…eye? towards you. The cold certain knowledge drops into your brain. There is no escape. A noise from the other gallery distracts it momentarily, and your feet decide to flee. “Wait!” he yells, “You can only exit through the gift shoppe! It is most curious!”
Bewildered, you run down one ever-shifting corridor into another, until you crash into a dead-end wall… and somehow push through to the outside world. Panting, you stare up at the monstrous edifice…
At least you got a postcard from Professor Graham’s Museum of Eldritch Antiquities. Looking closely, you see it was painted by someone W someone Bl
You squint at the words at the bottom, trying to frame your lips around the unfamiliar syllables, momentarily forgetting they’re likely cursèd. It is the last mistake you make. /fin.
Generative art. Yeah, the computer pumps out the pixels, but figuring out how to _drive_ the computer to somewhere interesting, somewhere you want to go: there’s the art.
The top image in each pair was the starting place for a VQGAN+CLIP generative experiment; the bottom image is a still from the resulting movie. You can see the way the images emerge starting at this post. I’m exploring the spaces that the machine(s) ‘know’. Below, a simple prompt, ‘an archaeologist in the field’, drawing on the wikiart image model.
No conclusions in this post; experiments ongoing.

A Stack of Text Files + Obsidian Interface = Personal Knowledge Management System

(See my previous post about Obsidian here. This post is written for my students, except for the bit at the end)

In recent years, I’ve found that a lot of my research materials are online: everything I read, everything I study. I use Zotero to handle bibliography and to push pdfs to an iPad with Zotfile; then I annotate as I read on the device, and eventually retrieve the annotations back into Zotero.

I read in a browser on my work computer too; I use Hypothesis with a private group where I’m the only member to annotate websites and pdfs in the browser when I can’t be bothered to send them to the iPad.

I use a notebook where I scribble out ideas and page numbers.

All of this material is scattered around my devices. I’ve long admired the work of Caleb McDaniel, and his open notebook history. His research lives online, in a kind of wiki. Links between ideas and observations can be made that way and eventually, ideas can emerge out of the notes themselves by virtue of the way they’re connected. This is the idea behind what is called the Zettelkasten Method (‘slip box’, where each idea, each observation is kept on a discrete slip of paper numbered in such a way that connections can be formed):

A Zettelkasten is a personal tool for thinking and writing. It has hypertextual features to make a web of thought possible. The difference to other systems is that you create a web of thoughts instead of notes of arbitrary size and form, and emphasize connection, not a collection.

On a computer, such a thing can be created out of a folder of simple text files. Everything I observe, every idea that I have: one idea, one file. Then I use the free ‘second brain’ software, Obsidian, to sit ‘on top’ of that stack of files to enable easy linking and discoverability. Obsidian helps you identify connections – backlinks and outgoing links – between your notes, and it visualizes them as a graph too, which can be helpful. Obsidian works as a personal knowledge base; seeing your notes develop and beginning to interlink – and being able to search those patterns – supercharges your note taking and the value of your reading. With time, you’ll start to see connections and patterns in your thoughts. You’re building up a personal knowledge base that becomes more valuable the more you use it. It can be extended with all sorts of plugins, written in javascript (since Obsidian is an electron app). Some of the plugins I use are Dataview (which lets you create new notes that query other notes’ metadata – keeping track of all notes relevant to a particular project, for instance) and Journey, which finds the shortest path between two notes (and you can then copy that path into a new note that embeds the contents of the notes along the journey: boom, instant article outline!)

I haven’t been playing with Obsidian for too long, but I do have a folder with over 400 separate markdown notes in it, the legacy of previous experiments with zettel-based systems. So far, Obsidian seems like a winner. When I get an idea, I search my vault for relevant notes, and then it is easy to embed them into a new document (type [[ and start writing; any existing notes with that title are found; close with ]]. Then put a ! in front of it: the note is embedded. Add a ^ after the note title, and you can embed specific blocks or paragraphs from the source note). I write around the existing notes, and then export to Word for final polishing and citations (copy and paste, or install the pandoc plugin). This accelerates my writing considerably, and helps me pull my research together into coherent form. (Of course, since I’ve developed these notes across several different systems in my perennial search for the one notetaking system to rule them all, my metadata usage isn’t consistent, which hampers things a bit.)
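That linking-and-embedding syntax, sketched out (the note title here is invented for illustration):

```markdown
[[Field Notes March]]            <- links to the note titled 'Field Notes March'
![[Field Notes March]]           <- the ! prefix embeds the whole note
![[Field Notes March#^ab12cd]]   <- embeds just the block labelled ^ab12cd
```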

Obsidian calls a folder with markdown files in it a ‘vault’. This vault can live in your icloud folder, your gdrive folder, your dropbox folder. You can initialize it with Git and push copies to a private git repo. Since the files are stored locally, the whole thing is very fast, and you’re not locked in to using Obsidian if you decide something else would be better. I’ve started making a number of what I am calling ‘starter’ vaults for my students, to use in my upcoming courses. Since my courses this year are mostly methods-based, I want the students to use Obsidian as a kind of empty lab notebook. My starter vaults are configured with a few basic templates, some basic important course information, and a script for retrieving their Hypothesis annotations:
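The retrieval script itself isn’t reproduced here, but the gist is simple: query the Hypothesis search API for a user’s annotations and write each one out as a little markdown note. A rough sketch in Python – the `to_markdown` layout is my own invention, not the script’s, and `fetch_annotations` assumes you have a developer token from your Hypothesis account:

```python
import json
import urllib.request

API = "https://api.hypothes.is/api/search"

def fetch_annotations(token, user, limit=20):
    """Pull a user's recent annotations from the Hypothesis search API."""
    url = f"{API}?user=acct:{user}@hypothes.is&limit={limit}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as response:
        return json.load(response)["rows"]

def to_markdown(row):
    """Render one annotation as the body of an atomic markdown note."""
    title = row.get("document", {}).get("title", ["Untitled"])[0]
    note = row.get("text", "")
    uri = row.get("uri", "")
    return f"# {title}\n\n{note}\n\n[source]({uri})\n"
```

Each returned row could then be written to its own file inside the vault, ready for linking.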

And for the ambitious student, a vault configured with every plugin and template I could find that I figured would be helpful for a student at

These include:

  • ocr templater extracts text from images and inserts into a new note
  • extract notes from hypothesis
  • citations from zotero bib
  • with mdnotes installed in zotero, you can export your zotero notes to this vault
  • with refactor commands, you can break big notes into atomic notes
  • with the ‘related’ templater, you can use text analysis to find notes that are similar, to enable linking
  • open the backlink pane, and use backlinks and outgoing links to find potential connections with other notes (compares the text of the note to the title of notes, finding matching strings for potential linking)
  • use the journey plugin to find paths through the vault, chains of ideas; these chains can be inserted into new notes
  • use transclusion (embedding) to create long overview notes that pull your ideas/observations from atomic notes into something that can become the core for a paper or article
  • queries to see how long you’ve got left for a writing project, and to see which resources you’ve used for what project
  • kanban boards for project management
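Under the hood, the journey plugin’s ‘chains of ideas’ are just shortest-path search over the note-link graph. A toy illustration (the note names and links are invented):

```python
from collections import deque

def shortest_path(links, start, goal):
    """Breadth-first search for the shortest chain of linked notes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in links.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no chain of links connects the two notes

# a tiny vault: note -> notes it links to
vault = {
    "kraters": ["greek pottery"],
    "greek pottery": ["trade networks"],
    "trade networks": ["social network analysis"],
}
```

Here `shortest_path(vault, "kraters", "social network analysis")` walks the chain through ‘greek pottery’ and ‘trade networks’ – exactly the kind of path you might paste into a new note as an outline.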

Finally, I am also using an Obsidian vault configured with Dataview to try to improve my project management. I have a few research projects with several moving parts, plus responsibilities coordinating a graduate programme and a minor programme. This vault looks like this:

On the left, the ‘preview’ (non markdown) version of the index note that links out to several project notes. At the bottom is a table that pulls together notes on each of my graduate students with things that I need to keep track of. In the right hand pane, another note is open, showing the markdown code block that invokes the dataview plugin. The plugin searches through the metadata of each note about my individual students, pulling out those students who are formally working on my various projects.
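The code block in that note looks something like this dataview sketch (the folder and field names here are invented for illustration; the real vault’s metadata differs):

```dataview
TABLE project, stage AS "Thesis stage"
FROM "students"
WHERE contains(project, "bone-trade")
```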

Anyway, so far, so good. Give it a play! Download it, and open the demo vault to get a sense of whether or not this approach to note-keeping works for you.

Hands On with ‘Hands-On Data Visualization’

I was pleased to receive a physical copy of Jack Dougherty and Ilya Ilyankou’s Hands On Data Visualization: Interactive Storytelling from Spreadsheets to Code not long ago. The complete online open access version is available behind this link. 

I’ve worked with Jack before, contributing essays to some of the volumes he’s edited on writing digital history or otherwise crafting for the web with students.

The Hands On Data Visualization (henceforth, HODV) book continues Jack’s work making digital history methods accessible to the widest variety of people. That’s one of the key strengths of this book; it addresses those students who are interested in finding and visualizing patterns in the past but who do not, as yet, have the experience or confidence to ‘open the hood’ and get deep into the coding aspects of digital history. I love The Programming Historian, and frequently use, refer to, and assign its tutorials; but there is so much material there that I find my students often get overwhelmed and find it hard to get started. Of course, that says more about my teaching and pedagogical scaffolding than perhaps I am comfortable with sharing. HODV, I think, will serve as an on-ramp for these students because it builds on things they already know – familiar point-and-click GUIs and so on – but much more important is the way it scaffolds why and how a student, from any discipline, might want to get into data visualization. (And of course, once you can visualize some data, you end up wanting to build more complex explorations, or ask deeper questions.)

Let’s talk about the scaffolding then. The book opens with a series of ‘foundational skills’, most important amongst them being ‘sketching out the data story‘. I love this; starting with pen and paper, the authors guide the student through an exercise moving from problem, to question, to eventual visualization; this exercise bookends the entire volume; the final chapter emphasizes that:

The goal of data visualization is not simply to make pictures about numbers, but also to craft a truthful narrative that convinces readers how and why your interpretation matters….tell your audience what you found that’s interesting in the data, show them the visual evidence to support your argument, and remind us why it matters. In three words: tell—show—why. Whatever you do, avoid the bad habit of showing lots of pictures and leaving it up to the audience to guess what it all means. Because we rely on you, the storyteller, to guide us on a journey through the data and what aspects deserve our attention. Describe the forest, not every tree, but point out a few special trees as examples to help us understand how different parts of the forest stand out.

The focus throughout is on truthfulness and transparency and why they matter. We move from part one, the foundational skills (from mockups, to finding, organizing, and wrangling the data), to building a wide variety of visualizations – charts, maps, and tables – and getting these things online at no cost in part two. Part three explores some slightly more complicated visualizations relying on templates that sometimes involve a wee bit of gentle coding, but these are laid out and illustrated clearly. This is the section I’ve directed my own students to the most, as many of them are public history students interested in map making, and it is one of the best resources on the web at the moment for building custom map visualizations (and geocoding, &tc.). I think students navigating this material will be reassured and able to adapt when these various platforms and templates change slightly (as they always do), given how carefully the various steps are documented and how they interrelate. In my own writing of tutorials, I rely too much on spelling out the steps without providing enough illustrated material, even though my gang like the reassurance of a screen that matches what the person in charge says should happen. (My reasoning for not providing too much illustrative material is that I’m also trying to teach my students how to identify gaps in their knowledge versus gaps in communication, and how to roll with things – you can judge for yourself how well that works. But I digress.)

The final section deals with truthfulness, with sections on ‘how to lie with charts’ and ‘how to lie with maps’, a tongue in cheek set of headings dedicated to helping students recognize and reduce the biases that using these various tools can introduce (whether intentionally or accidentally). The final chapter involves storyboarding, to get that truthful narrative out there on the web, tying us back to chapter one and trying to solve the initial problem we identified. I really appreciate the storyboarding materials; that’s something I want to try more of with my gang.

I’ve spent a lot of years trying to build up the digital skills of my history students, writing many tutorials, spending many hours one-on-one talking students through their projects, goals, and what they need to learn to achieve them. HODV fills an important gap between the dedicated tutorials for academics who know what it is they are after and have a fair degree of digital literacy, and folks who are just starting out, who might be overwhelmed by the wide variety of materials/tutorials/walk-throughs they can find online. HODV helps draw the student into the cultures of data visualization, equipping them with the lingo and the baseline knowledge that will empower them to push their visualizations and analyses further. Make sure also to check out the appendix on ‘common problems’, which gives a series of strategies to deal with the kinds of bugs we encounter most often.

My teaching schedule for the next little while is set, but I could imagine using HODV as a core text for a class on ‘visualizing history’ at, say, the second year level. Then I would rejig my third year ‘crafting digital history’ course to explicitly build on the skills HODV teaches, focussing on more complex coding challenges (machine vision for historians, or NLP, topic models, text analysis). Then my fourth year seminar on digital humanities in museums would function as a kind of capstone (the course works with undigitized collections, eventually publishing on the web with APIs, or doing reproducible research on already-exposed collections).

Anyway, check it out for yourself at (the website is built with bookdown and R Studio; that’s something I’d like to teach my students too! Happily, there’s an appendix that sketches the process out, so that’s a good place to start.) The physical book can be had at all the usual places. I don’t know what kind of plans Jack and Ilya have for updating the book, but I expect the online version will be kept fresh, and will become a frequent stop for many of us.


Law & The Buying or Selling of Human Remains

Tanya Marsh compiled the relevant US laws surrounding human remains in ‘The Law of Human Remains’ (2016). I tried to gain a distant read of this body of law by topic modeling, term frequency – inverse document frequency (tf-idf), and pairwise comparison of the cosine distance between the laws. This is only possible due to the care, consistency, and regularity with which Marsh compiled the various laws. I also added the relevant Canadian laws to my text corpus.

For the topic model, I took two approaches. The input documents are individual text files summarising each state’s laws. I then created a 23-topic topic model based first on unigrams (individual words) and then on bigrams (pairs of words).

For the unigram topic model, these are the topics and their probabilities:

1 body person burial permit dead remains death funeral disinterment director 0.356792543
2 act body person offence burial permit death liable fine cemetery 0.113741585
3 person burial cemetery monument tomb structure guilty remains class removes 0.102140763
4 person body licence specimen purpose crime possession offence deceased anatomical 0.047599170
5 remains funerary object native violation objects profit individual indian title 0.042126856
6 code corpse offense commits tex ilcs treats conduct supervision offensive 0.038992423
7 corpse intentionally site reburial medical admin sexual report coroner examiner 0.032800544
8 disturb destroy ground unmarked regs skeletal memorial knowingly material kin 0.032624217
9 dollars imprisonment thousand fine punished hundred exceeding fined duly conviction 0.030167542
10 death art burial ashes burials civil commune lands container dissection 0.029045414
11 remains disposition person act funeral heritage object operator deceased cremated 0.026407496
12 vehicle rev procession import export sites unmarked skeletal site historic 0.024186852
13 communicable metal disease casket sealed encased stillbirth coffin embalmed lined 0.021603532
14 cemetery corporation thereof monument purpose remove provided notice owner county 0.019534437
15 stat ann entombed intentionally excavates thereof stolen interred surviving directed 0.016960718
16 products criminal importation elements provisional cadavers tissues article organs including 0.016726574
17 services funeral business minister public paragraph prescribed information director person 0.016712443
18 town ch gen clerk agent board rsa city designated registered 0.012288657
19 church cons lot society trustees belonging company weeks deaths association 0.011458630
20 category violates punishable ii offences procedure vehicle field provincial shipment 0.008089604

When we look at the laws this way, we can see that the vast majority of law is related to regulating the funeral industry (topic 1), the regulation and care of cemeteries (topics 2 and 3), and then various offenses against corpses, including specific mention of Indigenous remains (topics 4, 5, and 7; there is a surprising number of statutes against necrophilia). Some topics deal with interfering with corpses for tissue and implantation (topic 16). Some topics deal with forms of memorialization (topics 10 and 11).

If we take pairs of words as our ‘tokens’ for calculating the topic model, these are the topics and their probabilities:

  1. funeral director dead body transit permit burial transit final disposition local registrar licensed funeral death occurred common carrier burial ground 0.16873108
  2. burial permit statistics act vital statistics death occurred health officer anatomy act death occurs religious service common carrier act offence 0.06060749
  3. burial permit summary conviction coroners act statistics act vital statistics archaeological object act offence palaeontological object public health chief coroner 0.05312818
  4. funerary object native indian monument gravestone mutilate deface fence railing tomb monument authorized agent duly authorized willfully destroy destroy mutilate 0.05259857
  5. funerary objects burial site registration district responsible person family sensibilities disposition transit interment site sepulcher grave ordinary family person knowingly 0.05135363
  6. cons stat person commits health safety code ann penal code safety code legal authority cemetery company historic burial authority knowingly 0.05095100
  7. thousand dollars burial remains surviving spouse burial furniture county jail means including original interment pretended lien skeletal burial disturb vandalize 0.05092606
  8. deceased person relevant material anatomy licence religious worship united kingdom ii relevant mortem examination post mortem summary conviction anatomical examination 0.04980786
  9. knowingly sells tomb grave marked burial degree felony sells purchases sexual penetration dead fetus subsequent violation aids incites removing abandoning 0.04823685
  10. tomb monument monument gravestone offences procedure procedure act provincial offences defaces injures mutilates defaces skeletal remains burial artifacts destroys mutilates 0.04754577
  11. burial grounds gross misdemeanor disposition permit historic preservation unlawfully removed conviction thereof grave artifact fence wall grave vault private lands 0.04716950
  12. stat ann dead body disinters removes intentionally excavates deceased person grave tomb disinterment removal excavates disinters dead person thousand dollars 0.04301240
  13. stat ann rev stat unmarked burial skeletal remains burial site funeral procession burial sites burial artifacts admin regs funeral home 0.04141044
  14. cremated remains death certificate grand ducal mortal remains sexual intercourse article chapter level felony title article civil registrar grave rights 0.04090582
  15. medical examiner final disposition admin code burial site cataloged burial code dhs death occurred cremation permit sexual contact death certificate 0.03936073
  16. burial permit designated agent historic resources responsible person disinterment permit medical examiner chief medical palaeontological resource historic resource lineal descendant 0.03757965
  17. cemetery corporation dead body local health awaiting burial pub health profit corp religious corporation tombstones monuments attached thereto burial removal 0.03335480
  18. cremated remains funeral provider heritage object public health funeral services damage excavate provincial heritage tissue gift gift act damage desecrate 0.03295103
  19. coffin casket tomb monument airtight metal disinter individual individual remains lined burial private burying historic preservation enforcement officer metal lined 0.02902297
  20. funeral services services business public health health director business licensee national public services provider transportation services classified heritage funeral operations 0.02134617

You see the same general relative proportions, but the bigrams give a bit more clarity to the topics (read each list as pairs of words). Either way you cut it, there’s not much language given over to dealing with the buying or selling of the dead, and a lot more space given over to regulating the funeral industry and graveyards.

Calculating tf-idf gives a sense of what differentiates the different jurisdictions, since it pulls out words that are comparatively rare in the complete body of text but prominent in a single document. I’m having trouble getting the visualizations to lay out cleanly (text overlaps; darn R). In terms of comparing the cosine similarity of the texts, there are some interesting patterns; here’s a sample:

1 Iowa Michigan 0.5987066
2 Michigan Iowa 0.5987066
3 Florida Michigan 0.5116013
4 Michigan Florida 0.5116013
5 Iowa Florida 0.5100568
6 Florida Iowa 0.5100568
7 District of Columbia Georgia 0.4800154
8 Georgia District of Columbia 0.4800154
9 Mississippi Georgia 0.4771568
10 Georgia Mississippi 0.4771568

…that is to say: Iowa & Michigan are about 60% similar; Florida and Michigan are about 51% similar; and so on. I had done this to see what the outliers are; I tried representing these relationships as a network:
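The analysis itself was done in R; the pairwise comparison boils down to something like this sketch in Python (toy ‘laws’, not the real corpus – and where the R workflow weighted terms by tf-idf, raw counts keep the sketch short):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two texts, using raw term counts."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    shared = set(a) & set(b)
    dot = sum(a[term] * b[term] for term in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Identical texts score 1.0; texts with no shared vocabulary score 0.0; everything else falls in between, which is how ‘Iowa & Michigan are about 60% similar’ should be read.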

So… what *are* the laws around buying and selling human remains? I went on an epic twitter thread yesterday as I read through Marsh 2016. Thread starts here:

And I managed to break the thread; it resumes with this one:

All of this will be summarised and discussed in our book about the human remains trade, in due course.

A museum bot

I wanted to build a bot, inspired by some of my students who made a jupyter notebook that pulls in a random object from the Canadian Science and Technology Museum’s open data, displaying all associated information.

The museum’s data is online as a csv file to download (go here to find it: ). Which is great; but not easy to integrate – no API.

Build an API for the data

So, I used Simon Willison’s Datasette package to take the csv table, turn it into a sqlite database, and then push it online –

First I installed sqlite-utils and datasette using homebrew:

brew install sqlite-utils datasette

then I turned the csv into sql:

sqlite-utils insert cstm.db artefacts cstmc-CSV-en.csv --csv
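Before publishing, it’s worth sanity-checking that the csv actually landed in the database; `sqlite-utils tables cstm.db --counts` will do it, or, as a quick sketch with Python’s built-in sqlite3 module (`cstm.db` and `artefacts` come from the command above):

```python
import sqlite3

def table_rowcount(db_path, table):
    """Count the rows that sqlite-utils loaded into the given table."""
    con = sqlite3.connect(db_path)
    try:
        # the table name can't be parameterized, so it is interpolated;
        # that's fine here, since we control it
        (count,) = con.execute(f"SELECT count(*) FROM {table}").fetchone()
        return count
    finally:
        con.close()
```

If `table_rowcount("cstm.db", "artefacts")` matches the number of rows in the csv, the import worked.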

I installed the commandline tools for vercel, where my museum data api will live, with

npm i -g vercel

vercel login

then I pushed the data online with datasette; datasette wraps the database in all its datasette goodness:

datasette publish vercel cstm.db --project=cstm-artefacts

You can see the results for yourself at (click on ‘artefacts’).

Now, a few days ago, Dan Pett posted the code for a bot he made that tweets out pics & data from the Portable Antiquities Scheme database – see his repo at . I figured it should be easy enough to adapt his code, especially since my new api will return data as json.

Build a Bot with R

So I fired up RStudio on my machine, and began experimenting. The core of my code runs an sql query on the API looking for a random object where ideally the general description and thumbnail fields are not null. Then it parses out the information I want, and builds a tweet:


library(jsonlite)
library(httr)

## query the datasette api for one random artefact that has both a
## description and a thumbnail; base_url is the deployed datasette
## endpoint (elided here), ending in '.json?sql='
search <- paste0(base_url, 'SELECT+*+FROM+artefacts+WHERE+NOT+GeneralDescription+IS+NULL+AND+NOT+thumbnail+IS+NULL+ORDER+BY+RANDOM%28%29+LIMIT+1%3B')
randomFinds <- fromJSON(search)

## grab the info, put it into a dataframe
df <- as.data.frame(randomFinds$rows)
artifactNumber <- df$V1
generalDescription <- df$V3
contextFunction <- df$V17
thumbnail <- df$V36

## write a tweet
tweet <- paste(artifactNumber,generalDescription,contextFunction, sep=' ')

## thank god the images have a sensible naming convention;
## grab the image data
imagedir <- randomFinds$results$imagedir
image <- paste0(artifactNumber,'.aa.cs.thumb.png')
imageUrl <- paste0('', URLencode(image))

## but sometimes despite my sql, I get results where there's an issue with the thumbnail
## so we'll test to see if there is an error, and if there is, we'll swap in
## an image of the Museum's lighthouse, to signal that, well, we're a bit lost here
if (http_error(imageUrl)){
  imageUrl <- paste0('')
  tweet <- paste(artifactNumber,generalDescription,contextFunction, "no image available", sep=' ')
}

## then we download the image so that we can upload it within the tweet
temp_file <- tempfile()
download.file(imageUrl, temp_file)

So all that will construct our tweet.


The next issue is setting up a bot on twitter, and getting it to… tweet. You have to make a new account, verify it, and then go to and create a new app. Once you’ve done that, find the consumer key, the consumer secret, the access token, and the access secret. Then, make a few posts from the new account as well just to make it appear like your account is a going concern. Now, back in our script, I add the following to authenticate with twitter:

findsbot_token <- rtweet::create_token(
  consumer_key = "THE-KEY-GOES-HERE",
  consumer_secret = "THE-SECRET-GOES-HERE",
  access_token = "THE-TOKEN-GOES-HERE",
  access_secret = "THE-ACCESS-SECRET-GOES-HERE"
)

# post the tweet
rtweet::post_tweet(
  status = tweet,
  media = temp_file,
  token = findsbot_token
)

And, if all goes according to plan, you’ll get a “your tweet has been posted!” message.

Getting the authentication to work for me took a lot longer than I care to admit; the hassle was all on the site, because I couldn’t find the right damned place to click.


Anyway, a bot that tweets when I run code on my machine is cool, but I’d rather the thing just ran on its own. Good thing I have Dan on speed-dial.

It turns out you can use Github Actions to run the script periodically. I created a new public repo (Github actions for private repos cost $) with the intention of putting my bot.R script in it. It is a very bad idea to put secret tokens in plain text on a public repo. So we’ll use the ‘secrets’ settings for the repo to store this info, and then modify the code to pull that info from there. Actually, let’s modify the code first. Change the create_token to look like this:

findsbot_token <- rtweet::create_token(
  app = "objectbot",
  consumer_key =    Sys.getenv("TWITTER_CONSUMER_API_KEY"),
  consumer_secret = Sys.getenv("TWITTER_CONSUMER_API_SECRET"),
  access_token =    Sys.getenv("TWITTER_ACCESS_TOKEN"),
  access_secret =   Sys.getenv("TWITTER_ACCESS_TOKEN_SECRET")
)

Save, and then commit to your repo. Then, click on the cogwheel for your repo, and select ‘Secrets’ from the menu on the left. Create a new secret, call it TWITTER_CONSUMER_API_KEY and then paste in the relevant info, and save. Do this for the other three items.

One thing left to do. Create a new file, and give it the file name .github/workflows/bot.yml; here’s what should go inside it:

name: findsbot

on:
  schedule:
    - cron: '0 */6 * * *'
  workflow_dispatch:
    inputs:
      logLevel:
        description: 'Log level'
        required: true
        default: 'warning'
      tags:
        description: 'Run findsbot manually'

jobs:
  build:
    runs-on: macOS-latest
    steps:
      - uses: actions/checkout@v2
      - uses: r-lib/actions/setup-r@master
      - name: Install rtweet package
        run: Rscript -e 'install.packages("rtweet", dependencies = TRUE)'
      - name: Install httr package
        run: Rscript -e 'install.packages("httr", dependencies = TRUE)'
      - name: Install jsonlite package
        run: Rscript -e 'install.packages("jsonlite", dependencies = TRUE)'
      - name: Install digest package
        run: Rscript -e 'install.packages("digest", dependencies = TRUE)'
      - name: Create and post tweet
        run: Rscript bot.R
        env:
          TWITTER_CONSUMER_API_KEY: ${{ secrets.TWITTER_CONSUMER_API_KEY }}
          TWITTER_CONSUMER_API_SECRET: ${{ secrets.TWITTER_CONSUMER_API_SECRET }}
          TWITTER_ACCESS_TOKEN: ${{ secrets.TWITTER_ACCESS_TOKEN }}
          TWITTER_ACCESS_TOKEN_SECRET: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}

If you didn’t call your script bot.R then you’d change that last line accordingly. Commit your changes. Ta da!

The line that says cron: ‘0 */6 * * *’ is the actual schedule – run at minute zero of every sixth hour, i.e. four times a day. You can decipher that with this:

which comes from here: . If you want to test your workflow, click on the ‘actions’ link at the top of your repo, then on ‘findsbot’. If all goes according to plan, you’ll soon see a new tweet. If not, you can click on the log file to see where things broke. Here’s my repo, fyi

So to reiterate – we found a whole bunch of open data; we got it online in a format that we can query; we wrote a script to query it, and build and post a tweet from the results; we’ve used github actions to automate the whole thing.

Oh, here’s my bot, by the way:

Time for a drink of your choice.


Individual objects are online, and the path to them can be built from the artefact number, as Steve Leahy pointed out to me: just slap that number after the php?id=. So I added that to the text of the tweet. But this also sometimes causes the tweet to fail because of the character length. I’m sure I could test for tweet length and then swap in alternative text as appropriate, but one thing at least is easy to implement in R – the use of a url shortener. Thus:


## isgd_LinksShorten comes from the urlshorteneR package
library(urlshorteneR)

liveLink <- paste0('', artifactNumber)
shortlink <- isgd_LinksShorten(longUrl = liveLink)

tweet <- paste(artifactNumber,generalDescription,contextFunction,shortlink, sep=' ')

Which works well. Then, to make sure this works with Github actions, you have to install urlshorteneR with this line in your yaml:

      - name: Install urlshorteneR package
        run: Rscript -e 'install.packages("urlshorteneR", dependencies = TRUE)'
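(An aside: as the number of packages grows, those separate install steps could be collapsed into one. An untested sketch, using the same packages as above:)

```yaml
      - name: Install R packages
        run: Rscript -e 'install.packages(c("httr", "jsonlite", "digest", "urlshorteneR"), dependencies = TRUE)'
```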

Ta da!

Why I Will Never Use My University’s LMS Again

There is a new LMS coming to Carleton. The switch has been flipped. We’re moving from Moodle to Brightspace. All of the things that we used to do for ourselves now depend on an office somewhere in Kitchener-Waterloo (24 hour support!).

This post is not a long reasoned argument about the use or not of LMS in higher ed. It’s just what I’m feeling right now about our particular circumstances. I’m imagining where things might lead. Let me share my worries.

I would be delighted to be wrong about all of this.

Every higher ed institution in Ottawa now uses Brightspace. And because of historical agreements, many students and programs end up taking classes across the different institutions. Can you imagine the pressure towards a ‘standardized’ experience that this would create? “It’s too confusing to students to navigate all these different course designs!” it will be said. It is said. I’ve heard it said.

So we’ll be encouraged to use the resources of our education support people to design our courses. At which point, it’d be good to check the fine print. Your course is your course, of course, unless you are contracted to design and teach a course, or perhaps you’ve used too many institutional resources to build it… at a certain point, your course might not be yours after all. Teaching from beyond the grave, anyone?

But we’ve got standardization. Multiple choice, short answer, essays, that’s standard. Easy to roll out. Everyone understands that game. You can’t really experiment or try to ungrade or empower your students, when everything’s standard. Not particularly good pedagogy; not really higher education, but by god it’s easy to churn out course shells fast. But oof, now there’s all this cheating. Better get eproctoring. Better get plagiarism detection. Examine how we got to this point where cheating is a rational response to a series of hoops that carry little pedagogical value? Perish the thought. And don’t point out the problems with these ‘solutions’, lest someone hit you with a SLAPP. Imagine if the money spent on contracts for all this was put into providing stability for contingent lecturers – you want to make a difference for student experience? That’s where I’d spend the money.

This fall, we move back to the classroom for the smaller classes; or at least, that’s the plan. But all the big classes – or the classes that could be made bigger – well, keep an eye out…

So – I won’t use the LMS because it reifies a model of teaching I don’t believe is good pedagogy. While I still have the position and privilege to resist, I will. The more one uses an institutional LMS (or is compelled to use it), the more all of our freedom to teach using a different model – exposing students to other ways of learning, to ungrade, to turn things inside out – is eroded. I keep control of my teaching materials by making them open on the web; I keep control by giving it all away. It’s out there, on my own terms, and for good or for ill, people know what I’ve done. It pushes in ways that are uncomfortable, it makes space for things that don’t work out and that makes students extremely uncomfortable: I want them to try things that just might not come together within a seminar. But I don’t grade the thing, I work with the student to understand their process. This ain’t standard. It doesn’t scale. By design. (I once argued in a meeting that a class of 600 students was unethical. The instructor for that class was present. You can imagine how that went over).

But the pressure is mounting.

Consider the scenario: all those rich juicy data points that come from using the LMS. ‘We’ use those for their own good! We can see how many times they log into the LMS, correlate that with their GPA, cross-reference with their demographic profile! But whoops, Dr. Graham’s class doesn’t use the LMS… that sure messes up those students’ analytics profiles, right? They’d be unfairly marked as ‘at risk’ (or suffer some other consequence) because they don’t have as many ‘touch’ points as the others. That’s just one scenario. Others can be imagined.

Look, I worked in for-profit online education. These things exist. Where I worked, they also used the same tools to turn the gaze onto the faculty. Not enough points of contact with the system in a defined time frame? You got the ax.

But also: monoculture. No ecosystem survives monoculture. If everything’s standardized, nothing’s special, so why do we have four higher ed institutions in Ottawa anyway…

My ability to predict the future has always been poor, so I look forward to being proven wrong about all of this. But right now…


My Opening Remarks for HeritageJam 2021

After I remembered to unmute my mic (d’oh) this is what I said…

Welcome to HeritageJam 2021! This is the fourth iteration of the jam, and the first to be located outside of the UK, if you will accept that my basement office where I am now sitting constitutes the location of the jam. I’m Shawn Graham, and I’m at Carleton University in Ottawa, Canada. My colleagues on the Jam are Sierra McKinney and Katherine Cook, from the University of Montreal, and Stuart Eve from Bournemouth University and L.P. Archaeology, and I am so grateful that they are on board for this mad enterprise! This jam would not be possible without funding from Carleton University and the Social Sciences and Humanities Research Council of Canada.

I want to begin by acknowledging that where I live and work is on the unceded traditional lands of the Algonquin Anishnaabeg. It is customary now at Carleton to begin events by making that land acknowledgement; but it seems to me that too often we just then continue on to do what we were going to do anyway. So one of my goals for this year’s Heritagejam is that we keep that land acknowledgement uppermost in our minds as we craft, create, and explore. The theme for this year’s HeritageJam is ‘sensation’, so one way to think about that theme is in the context of the land you are on, or the territory on which your work depends. What sensations in us, or in the public, should a land acknowledgement generate? How can we make that acknowledgement meaningful wherever we are, and however we might interpret ‘sensation’? Sensations can be troubling; they can be enchanting. Perhaps we encounter sensations when we are confronted by eruptions of deep time in the present: how can we convey that sensation?

Today you will have an opportunity to meet the entire HeritageJam team, to hear more about how the Jam will unfold, gain inspiration and encouragement from past examples of work, and to meet other jammers with whom you are welcome to collaborate. I have participated in each iteration of the Jam, and what excited me then – and continues to excite me now – is that this opportunity to be wholly creative for the sake of thinking differently about heritage refreshes me; it re-invigorates me and reminds me that there are so many ways other than essays, articles, and monographs to engage the past. Each time I’ve participated in the jam, it has redirected me into new avenues that simply enrich my daily life.

This is the first edition of the Heritagejam that is completely virtual. In years past, there has been an in-person two-day event where folks would come together, break into teams, and over the course of the two days make something – sometimes it was completely fully-formed; other times it was more like a design or prototype. There was always a virtual component where people working remotely could produce something in the month leading up to the in-person jam.

I’m going to share my screen now. Here’s the 2014 jam, the first of its name. One of my favourite entries is this comic, by Nela Scholma-Mason; as you explore the entries, take a look at each entry’s paradata document. If data are things we study/use, and metadata describe the data, then paradata describe our process. Scholma-Mason’s paradata is a wonderful piece of zine making in itself! Another favourite of mine from the first jam is ‘Buried’, by Tara Copplestone and Luke Botham, a piece of interactive literature made with the Twine text game engine. (‘Buried’ is available on the Internet Archive.)

In the 2015 jam, take a look at Jens Nortoff’s sketch. It’s a quick thing he put together while out in the field; Heritagejam entries don’t have to be ‘digital’. In the 2017 jam Andrew Reinhard invented a deck builder game around the archaeological idea of ‘assemblage’.

So you can do just about anything you set your mind to; it is entirely ok and appropriate to submit a _design_ idea, a mock up, a wireframe, a powerpoint that uses found imagery to give us an idea of what you have in mind. Just take a look at the rules page, and get in touch with us. EVERY entry must include a paradata document. We follow the London Charter, which calls paradata the “documentation of the evaluative, analytical, deductive, interpretative and creative decisions made in the course of … visualisation”, allowing a clear understanding of how the visualisation came into being. Your paradata can be in whatever format you’d like, although you’ll probably find that a page or two of text is most straightforward.

Now, to help you get started, I’m going to ask Sierra, Katherine, and Stuart to say a few words about their own creative processes and their experiences.

(Stu was working in the field and was not able to join, as it turned out the platform we were using did not support mobile, which was my fault for not checking first.)

Now, you’ll notice that there are different areas marked out in this room according to the canonical western senses. Feel free to move around into an area that captures something about what you might be interested in exploring with regard to our theme, ‘sensation’. When your avatars are in close proximity, you will be able to see and speak to each other. Introduce yourself and perhaps begin by wondering what ‘sensation’ might mean in terms of land acknowledgements. Sierra, Katherine, Stuart and I will circulate; after about 20 – 30 minutes we’ll wind up the session.

At this point, a kind of unconference unfolded, with conversations happening mostly in the ‘hearing’ and ‘taste’ circles. It’ll be interesting to see what emerges at the end of the month!

Thank you everyone! Our time for today is now up, but I am grateful that you were willing to spend it with us; I hope you’ve found new friends and collaborators here, and I encourage you to use our HeritageJam discord server for companionship while you heritagejam! Your creations and paradata can be submitted to our heritagejam email, and you can always contact me or ask for help in the discord as you need it.

HeritageJam 2021 is Go: The Sensation of the Past

HeritageJam 2021 is go!

How the past is conceptualised – how it is presented graphically, acoustically, haptically, olfactorily, vocally, and in other performative capacities – has a significant impact upon people’s understanding of themselves and the world around them. It is fundamental to influencing the degree of importance that individuals and communities assign to their environment, and how they care for that environment in the present and build upon it in the future. The artistry and enquiry that are invested into this creative work have known effects not only on public perception but on the whole trajectory of heritage study and practice – from research to policy-making to protection and conservation. The Heritage Jam is about showcasing the presentation of the past, and drawing together the many people invested in such presentation.

The Heritage Jam began in 2014 at the University of York in the UK as a way to bring people together to design and create forward-thinking pieces of heritage visualisation in a short space of time. This year, it’s hosted by Shawn Graham at Carleton U, Katherine Cook at the Universite de Montreal, and Stuart Eve of L-P Archaeology plc.

The Heritage Jam is open to anyone interested in the way heritage is visualised: we call to artists, animators, game designers, programmers, archaeologists, historians, conservators, museum professionals, heritage practitioners, and any interested members of the public to join forces and collaborate. The outcomes of the Jam are hugely varied – ranging from fine art pieces, 3D models and games through to stories, sketches and videos. The only limits on creation are the theme, time and your imagination!

The Jam will take place entirely online over the month of April, 2021. Submissions will be due at midnight (eastern) on April 30th.

We will host a kick-off event on March 31st at 1 pm eastern. Make sure to register your intent to participate in the jam via the sign-up form. We will also host a Discord server where you can casually drop in to have some company while you jam. Registered participants will be sent the invitation links for the kick-off event and the discord.

The theme for this year is ‘sensation’. We’ve all had to endure multiple lockdowns and isolation as a result of the COVID-19 pandemic. As the snow melts here in Ottawa, and we begin to feel the sun again on our faces, our senses perhaps are overwhelmed… what does ‘sensation’ mean to you in terms of heritage, history, and archaeology, as we approach another summer? (Some resources are available here to help you get started; for inspiration, see past HeritageJam entries!)

Entries are welcome in either English or French; there will be separate awards in both languages. Winning entries and Runners-Up will be invited to publish their work in Epoiesen: A Journal for Creative Engagement in History and Archaeology. A submission page for your entries will be made available on this site towards the end of April.

Even if you intend to create something as part of a team, please complete the sign-up sheet as an individual (so that we can send invitation links and so on to you); when you submit your entry, you will be able to indicate whether or not it’s part of a team entry (and who the team members are!) then. Even if you don’t sign up for the kick-off you can still participate in the jam – just submit your entry on April 30th! Make sure to also tweet about it using the #thj2021 tag.

The Dig: We Know Where the Bodies are Buried

Andrew Reinhard and I have been at it again. We wondered what the archaeology of Sutton Hoo might sound like. There are a lot of ways one could’ve approached this. We could’ve tried to recreate a soundscape – of the moment of the ship burial, or the moment of its excavation, for instance. We might have found tabular data from the various excavations and projects and maybe mapped the differing amounts of different kinds of artefacts by period they date to – or days they were found – or locations found in the earth (x,y,z making a chord, maybe). Maybe there is geophysics data (magnetometry, georadar, etc) and we could’ve approached things a la Soundmarks.

We instead looked at one piece of the public archaeology literature around Sutton Hoo: ‘The Sutton Hoo Ship-Burial: A Handbook’ by R. L. S. Bruce-Mitford, as reproduced on the Internet Archive. I copied the text, and then divided it up into ‘documents’ of one page each. These I fed into a topic modeling routine I use in my teaching (written in R; see the course website). A topic model is a way of asking the machine, ‘if there are 15 topics in this corpus of material, what are they about?’. The machine will duly decompose the material, looking at statistical patterns of word use across the ‘documents’ (here, individual pages) to sort those patterns into 15 buckets of words, which we as the humans involved can then look at and say, ‘oh yes, that’s clearly a topic about English myth-history’. The result was this:


Notice how each chunk adds up to 1. I then took the underlying proportions for each chunk for four separate topics that seemed interesting: ‘coins date time hoard merovingian’, ‘sutton hoo swedish jewellery’, ‘gold plate figure purse buckle’, and ‘burial pagan grave christian east’. Those raw numbers, ranging between 0 and 1 (ie, the proportion each topic contributes to each chunk of writing), I multiplied by 100 and then scaled against 1–88 for the 88-key piano keyboard. Think of each topic as now a voice in a choir, each one singing its note on the beat. Musically, a bit boring, but to the intellect, interesting; Andrew and I are still working with that data (mapping to instruments, remixing to bring out particular themes, and so on). I am also interested in coding music, though I am very bad at it; I turned to Sam Aaron’s Sonic Pi live-music-coding synth. Building on some sample code I wrote a little piece that kinda looks like this:

with_fx :reverb do
  in_thread do
    loop do
      # the ring values are the topic proportions, mapped to note numbers
      notes = (ring 20, 50, 21, 50)   # etc: the values for the first topic
      notes2 = (ring 6, 14, 59)       # etc
      # notes3, notes4 defined the same way
      use_synth :piano
      play notes, release: 0.1, amp: rand, pan: rrand(-1, 1)
      play notes2, release: 0.1, amp: rand, pan: rrand(-1, 1)
      # play notes3, notes4 likewise
      sleep 0.25
    end
  end
end

with_fx :wobble, phase: 2 do |w|
  with_fx :echo, mix: 0.6 do
    loop do
      sample :drum_heavy_kick
      sample :bass_hit_c, rate: 0.8, amp: 0.4
      sleep 1
    end
  end
end

and then I let that play; because it’s a live coding synth, you can make changes on the fly and layer those changes as you play. So not just sonification: a kind of digital instrument and performance. It’s not just the data you’re hearing, it’s my coding choices and my performance ability. I sent the result to Andrew and he immediately saw how the emotional impact of that music matched the latent horror of the film, and recut the trailer appropriately. Below, you can hear the result (and if a DMCA claim takes down the video, you can also see it on Twitter). This reconstitution of Bruce-Mitford’s writing is a kind of digital body horror on a corpus of thought, perhaps. The archaeological uncanny always eventually emerges.
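Incidentally, the proportion-to-piano-key scaling described above can be sketched in a line of shell. (A hedged sketch: the four proportions here are invented for illustration, not the real topic values; awk’s int() truncates toward zero.)

```shell
# Map topic proportions (0-1) onto the 88 piano keys:
# multiply by 100, scale against 1-88, floor at key 1.
printf '%s\n' 0.20 0.50 0.06 0.14 |
  awk '{ v = $1 * 100; n = int(v * 88 / 100); if (n < 1) n = 1; print n }'
```

which prints 17, 44, 5, and 12 for those sample values.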


PS: Youtube hit me with a copyright infringement the instant I uploaded that video. If it doesn’t play, you might be able to see it here:

From Hypothesis Annotation to Obsidian Note

Obsidian is a really nice interface for keeping zettelkasten-style notes (in individual markdown files, in a folder or ‘vault’). Hypothesis is a really nice interface for annotation on the web. Wouldn’t it be nice to be able to drop your annotations as unique files into your vault?

Well, this might work.

First, get ‘Hypexport’ from . Install it with

pip3 install --user git+

Then, create a new text file; put into it your Hypothesis username and your developer token (which is underneath your username when you have Hypothesis open), like so:

username = "USERNAME"
token = "TOKEN"

Now, you can grab all of your annotations with:

python3 -m hypexport.export --secrets > annotations.json

Now we need to turn that JSON into markdown. Incidentally, if you want to turn it into a CSV instead, get jq and run something like this:

jq -r '.annotations[] | [.text, .tags, .updated, .uri] | @csv' annotations.json > annotations.csv

So, here’s a JSON-to-markdown script: . Pip install that, but then find where it’s located on your machine and change this line

data ='ascii', 'ignore')

to just

data =

and then run 

torsimany annotations.json

at the command prompt, and after a bit you’ll have a file called annotations.markdown.
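A quick sanity check at this point (hypothetical, but it relies only on the ‘### Title’ header that torsimany puts at the top of each annotation):

```shell
# Count how many annotations made it into the converted file,
# using the '### Title' header that starts each one.
grep -c '^### Title' annotations.markdown
```

If that number matches what you see in your Hypothesis stream, all is well.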

Last thing – we want to split that up into separate markdown files, to drop into the Obsidian vault. csplit, split, awk, etc., all of those things will probably work; here’s some perl. Copy it into a text file, save it with a .pl extension, and if you’re on a mac, run

chmod +x

so you can run it. (Sourced from stackoverflow):


undef $/;
$_ = <>;
$n = 0;

for $match (split(/(?=### Title)/)) {
      open(O, '>temp' . ++$n);
      print O $match;
      close(O);
}

then run

./ annotations.markdown

and you’ll have a whoooole lotta files you can drop into your obsidian vault. Ta da!

Now, you’ll have to add the .md file extension, which can be done as a batch with this one liner on a mac:

for file in temp*; do mv "$file" "$file.md"; done

It’d be nice to have the correct file extension done in my split script, but whatever. Above, a portion of one of my recent annotations, exported and then turned into markdown through the above process.
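For what it’s worth, that wish is easy to grant – a hypothetical one-pass alternative to the perl-plus-rename dance, splitting on the same ‘### Title’ delimiter:

```shell
# Split annotations.markdown into temp1.md, temp2.md, ... in one pass,
# starting a new file at each '### Title' header. The 'n' condition
# skips anything before the first header.
awk '/^### Title/ { n++ } n { print > ("temp" n ".md") }' annotations.markdown
```

Same numbered files, but with the .md extension baked in.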

Now, Hypothesis allows any user to annotate in the public stream. Here’s a Zotero-Hypothesis importer that works through your Zotero library, checks whether there are any public annotations for a piece, and saves them to a Zotero note:

I haven’t tried it out, but if it works, and once your notes are in Zotero, you can use zotero-mdnotes to push ’em all into your Obsidian vault. Talk about distributed knowledge generation!

My own travel ban

So, back when it all began, I wrote ‘Why I’m Not Travelling to the US‘. Over the past four years, I went to zero conferences in the US (of course, the pandemic these last 10 months helped with that too). I went to the States three times during that period to work with students at Muhlenberg, Drew, and UMass Amherst.

So, no conference/self promotion travel, limited travel to support students.