I was going to release this as an actual album, but I looked into the costs and it was a wee bit too pricey. So instead, let’s pretend this post is shiny vinyl, and you’re about to read the liner notes and have a listen.
~o0o~
Track 1: Listening to Watling Street
To hear the discordant notes as well as the pleasing ones, and to use these to understand something of the unseen experience of the Roman world: that is my goal. Space in the Roman world was most often represented as a series of places-that-come-next; traveling along these two-dimensional paths replete with meanings was a sequence of views, sounds, and associations. In Listening to Watling Street, I take simple counts of the epigraphs in the Roman Inscriptions of Britain website discovered in the modern districts that correspond with the Antonine Itinerary along Watling Street. I compare these to the total number of inscriptions for each district. The algorithm then selects instruments, tones, and durations according to a kind of auction based on my counts, and stitches them into a song. As we listen to this song, we hear crescendos and diminuendos that reflect a kind of place-based shouting: here are the places that are advertising their Romanness, that have an expectation to be heard (Roman inscriptions quite literally speak to the reader); as Western listeners, we have also learned to interpret such musical dynamics as implying movement (emotional, physical) or importance. The same itinerary can then be repeated using different base data – coins from the Portable Antiquities Scheme database, for instance – to generate a new tonal poem that speaks to the economic world, and, perhaps, the insecurity of that world (for why else would one bury coins?).
Code: This song reuses Brian Foo’s Two Trains code.
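To make the auction concrete, here’s a minimal sketch of the idea in Python. The place names are real stops along the route, but the counts and instrument prices are hypothetical, and Foo’s actual code differs in detail:

```python
# A sketch of the 'auction': each instrument names a price, and at each stop
# along the itinerary, every instrument the normalized inscription count can
# 'afford' gets to play. Counts and prices here are hypothetical.
instruments = {'drone': 0.0, 'strings': 0.2, 'brass': 0.5, 'percussion': 0.8}
counts = {'Richborough': 41, 'Canterbury': 17, 'London': 95, 'Wroxeter': 60}

peak = max(counts.values())
for place, n in counts.items():
    loudness = n / peak  # more inscriptions, more place-based 'shouting'
    playing = [name for name, price in instruments.items() if loudness >= price]
    print(f'{place}: {playing}')
```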
~o0o~
Track 2: Mancis the Poet. (Original blog post.) A neural network trained on Cape Breton fiddle tunes in ABC notation; the output was then sonified. This site converts ABC notation to a MIDI file and makes a PDF of the score; this site converts to mp3, which I then uploaded to SoundCloud.
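The conversion can also be scripted; a minimal sketch, assuming the music21 library and a hypothetical tune.abc file holding one generated tune:

```python
# A sketch of converting neural-network output in ABC notation to MIDI,
# assuming the music21 library; 'tune.abc' is a hypothetical file holding
# one generated tune. The post itself used web-based converters instead.
from music21 import converter

score = converter.parse('tune.abc', format='abc')  # parse the ABC text into a score
score.write('midi', fp='tune.mid')                 # export as a MIDI file
```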
~o0o~
Track 3: John Adams’ 20 (original post here). Topic modeling does some rather magical things. It imposes sense (it fits a model) onto a body of text. The topics that the model duly provides give us insight into the semantic patterns latent within the text… What I like about sonification is that the time dimension becomes a significant element in how the data is represented, and how the data is experienced. So – let’s take a body of text, in this case the diaries of John Adams. I scraped these, one line per diary entry (see this csv we prepped for our book, the Macroscope). I imported these into R and topic modeled for 20 topics. The output is a monstrous CSV showing the proportion each topic contributes to each diary entry (so each row sums to 1). If you use conditional formatting in Excel, and dial the decimal places to 2, you get a pretty good visual of which topics are the major ones in any given entry (and the really minor ones just round to 0.00, so you can ignore them). I then used ‘Musical Algorithms’ one column at a time to generate a MIDI file. I’ve got the various settings in a notebook at home; I’ll update this post with them later. I then uploaded each MIDI file (all twenty) into GarageBand in order of their complexity – that is, as indicated by file size.
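The modeling step was done in R; here’s a minimal Python sketch of the same pipeline, assuming scikit-learn and a hypothetical adams_diaries.csv with one entry per line:

```python
# A sketch of the topic-modeling step, assuming scikit-learn (the post used R).
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# one diary entry per line, as described above ('adams_diaries.csv' is hypothetical)
with open('adams_diaries.csv', encoding='utf-8') as f:
    entries = [line.strip() for line in f if line.strip()]

dtm = CountVectorizer(stop_words='english').fit_transform(entries)
lda = LatentDirichletAllocation(n_components=20, random_state=42)
doc_topics = lda.fit_transform(dtm)  # one row per entry, one column per topic; rows sum to 1

# round to two decimals, echoing the conditional-formatting trick in Excel
pd.DataFrame(doc_topics).round(2).to_csv('topic_proportions.csv', index=False)
```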
~o0o~
Track 4: Jesuit Funk
The topic modeling approach seemed promising. I took the English translation of the complete Jesuit Relations, fitted a topic model, and then set about sonifying it. This time I explored the live-coding music environment Sonic Pi, but focused on one topic only.
Code: https://gist.github.com/shawngraham/7ea86a33471acaaa5063
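The gist above holds the actual Sonic Pi code; as a rough Python analogue of the single-topic idea, here’s a sketch that scales one topic’s proportions onto MIDI pitches (assuming the midiutil package; the proportions are hypothetical):

```python
# A rough Python analogue of the Sonic Pi idea (the real code is in the gist
# above): scale one topic's proportion per document onto a MIDI pitch range.
from midiutil import MIDIFile

proportions = [0.02, 0.15, 0.40, 0.33, 0.08]  # hypothetical topic weights, one per document

midi = MIDIFile(1)
midi.addTempo(track=0, time=0, tempo=120)
for beat, p in enumerate(proportions):
    pitch = int(48 + p * 36)  # map 0..1 onto the three octaves above C3
    midi.addNote(track=0, channel=0, pitch=pitch, time=beat, duration=1, volume=100)

with open('topic.mid', 'wb') as f:
    midi.writeFile(f)
```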
~o0o~
Track 5: PunctMoodie (original blog post here). There was a fashion, for a time, to create posters of famous literary works as represented only by their patterns of punctuation. I used this code to reduce Susanna Moodie’s ‘Roughing it in the Bush’ to its punctuation. Then I mapped each punctuation mark to its numeric ASCII value and fed the result into Sonic Pi.
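A minimal sketch of that reduction and mapping, assuming a hypothetical moodie.txt holding the novel’s plain text:

```python
# A sketch of the reduction: strip the novel to its punctuation, then map
# each mark to its numeric ASCII value for Sonic Pi to play.
# 'moodie.txt' is a hypothetical plain-text copy of the novel.
import string

with open('moodie.txt', encoding='utf-8') as f:
    text = f.read()

punct = [ch for ch in text if ch in string.punctuation]
values = [ord(ch) for ch in punct]  # e.g. ',' -> 44, '.' -> 46, ';' -> 59

print(''.join(punct)[:80])
print(values[:20])
```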
Bonus track! Disco version:
~o0o~
Track 6: Human Bone Song. I have scraped several thousand photos from Instagram for a study on the trade in human remains facilitated by social media. I ran a selection (the first 1000) through Manovich’s ImagePlot; the first voice is brightness, the second is saturation, the third (drums) is hue. Sonified with musicalgorithms.org.
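ImagePlot did the measuring; for the curious, a minimal sketch of the same measurements in Python, assuming Pillow and NumPy and a hypothetical photos/ folder of the scraped images:

```python
# A sketch of measuring hue, saturation, and brightness per image
# (the post used Manovich's ImagePlot; 'photos/' is hypothetical).
from pathlib import Path
import numpy as np
from PIL import Image

for path in sorted(Path('photos').glob('*.jpg')):
    hsv = np.asarray(Image.open(path).convert('HSV'), dtype=float)
    hue, sat, bright = hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean()
    print(path.name, round(bright, 1), round(sat, 1), round(hue, 1))
```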
~o0o~
Track 7: Song of Dust and Ashes. Same rough procedure as before, but with site photos from the Kenan Tepe excavations archived in Open Context (see http://smgprojects.github.io/imageplot-opencontext/). Sonified via musicalgorithms.com, mixed in GarageBand. First pass.
~o0o~
Track 8: Kenentepe Colours – same data as track 7, but I used a Fibonacci series (per musicalgorithms.com) to perform the duration mapping. Everything else was via the presets. Instrumentation via GarageBand.
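My reading of that duration mapping, as a minimal sketch (the site’s actual settings may differ): scale each data value to an index into a Fibonacci series, and use that term as the note’s duration in beats.

```python
# A sketch of Fibonacci duration mapping (one plausible reading of the idea;
# the musicalgorithms settings may differ): scale each data value onto an
# index into a Fibonacci series and use that term as the duration in beats.
fib = [1, 1, 2, 3, 5, 8, 13, 21]

def duration(value, lo, hi, series=fib):
    """Map a value in [lo, hi] onto a term of the series."""
    idx = int((value - lo) / (hi - lo) * (len(series) - 1))
    return series[idx]

data = [12.0, 47.5, 88.1, 63.2, 21.9]  # hypothetical image measurements
print([duration(v, min(data), max(data)) for v in data])
```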
~o0o~
Track 9: Bad Equity. I wondered if sonification could be useful for detecting patterns in bad OCR of historical newspapers. I grabbed 1500 editions of The Shawville Equity, 1883-1914 (http://theequity.ca), using wget from the provincial archives of Quebec. I then measured the number of special characters in each OCR’d txt file, taking the presence of these as a proxy for bad OCR, and summed the counts for each file. Those totals went to musicalgorithms, with the defaults. Then, because I’m a frustrated musician (and a poor one, at that), I threw a beat on it for the sake of interest. Read the full discussion and code over here.
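A minimal sketch of the counting step, assuming a hypothetical equity/ folder of the OCR’d text files (the real code is linked above, and its exact character class may differ):

```python
# A sketch of the bad-OCR proxy: count 'special' characters in each OCR'd
# text file. 'equity/' is a hypothetical folder of the downloaded .txt files.
import re
from pathlib import Path

special = re.compile(r"[^A-Za-z0-9\s.,;:'\"!?()-]")  # anything outside ordinary prose

for path in sorted(Path('equity').glob('*.txt')):
    text = path.read_text(encoding='utf-8', errors='ignore')
    print(path.name, len(special.findall(text)))
```

~o0o~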
Track 10: Lullaby
I wrote this one for my kids. It’s not algorithmic in any way. That’s me playing.
(featured image, British Library https://www.flickr.com/photos/britishlibrary/11147476496 )