A method and apparatus for observing the rhythmic cadence; or, an algorithmic alternative archaeology

Figure 1. A Wretched Garret Without A Fire (at least, according to Google Images)

A method and apparatus for observing the rhythmic cadence

ABSTRACT

A method and apparatus for observing the rhythmic cadence. The devices comprises a small shop, a wretched garret, a Russian letter, a mercantile house, a third storey

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates a wretched garret without a fire.

Figure 2 is a block diagram of a fearful storm off the island.

Figure 3 illustrates a mercantile house on my own account.

Figure 4 is a perspective view of the principal events of the Trojan war.

Figure 5 is an isometric view of a poor Jew for 4 francs a week.

Figure 6 is a cross section of a thorough knowledge of the English language.

Figure 7 is a block diagram of the hard trials of my life.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention is the son of a Protestant clergyman. The device is a wretched garret without a fire. The present invention facilitates the study of a language. The invention has my book in my hand. The invention acquires a thorough knowledge of the English language.

According to a beneficial embodiment, the invention such a degree that the study. The present invention shows his incapacity for the business. The device obtains a situation as correspondent and bookkeeper. The device understood a word of the language. The present invention established a mercantile house on my own account. The invention does not venture upon its study. The device devotes the rest of my life. The present invention realizes the dream of my whole life. The present invention publishes a work on the subject.

What is claimed is:

1. A method for observing the rhythmic cadence, comprising:
a wretched garret;
a small shop; and
a Russian letter.

2. The method of claim 1, wherein said wretched garret comprises a mercantile house on my own account.

3. The method of claim 1, wherein said small shop comprises the principal events of the Trojan war.

4. The method of claim 1, wherein said Russian letter comprises a fearful storm off the island.

5. An apparatus for observing the rhythmic cadence, comprising:
a mercantile house;
a small shop;
a third storey; and
a Russian letter.

6. The apparatus of claim 5, wherein said mercantile house comprises a wretched garret without a fire.

7. The apparatus of claim 5, wherein said small shop comprises a fearful storm off the island.

8. The apparatus of claim 5, wherein said third storey comprises a thorough knowledge of the English language.

9. The apparatus of claim 5, wherein said Russian letter comprises a thorough knowledge of the English language.

—————–
Did you recognize Troy and its Remains, by Henry (Heinrich) Schliemann, in that patent abstract? I took his ‘autobiographical notice’ from the opening of his account of the work at Troy, and ran it through Sam Lavigne’s Patent Generator. It’s a bit like the I-Ching. I have it in mind that this toy could be used to distort, reflect on, and draw something new from some of the classic works of archaeology – especially from that buccaneering phase when, well, pretty much anything went. What if, instead of publishing their discoveries, the early archaeologists had patented them instead? We live in such an era now, when new forms of life (or at least, its building blocks) can be patented; when workflows can be patented; when patents can be framed so broadly that a word-generator and a lawyer will bring you riches beyond compare… the early archaeologists were after fame and fortune as much as they were after knowledge of the past. This patent of Schliemann’s uses as its source text an opening sketch about the man himself, rather than his discoveries. Doesn’t a sense of him shine through? Doesn’t he seem, well, rather over-inflated? What is the rhythmic cadence, I wonder. If I can sort out the encoding, I’ll try this on some of his discussion of what he found.

(think also of the computational power that went into this toy: natural language processing, pattern matching… it’s rather impressive, actually, when you think what can be built by bolting existing bits together).

Here’s Chapter 1 of Schliemann’s account of Troy. Please see the ‘detailed description of the preferred embodiments’, below.

——————-
An apparatus and method for according to the firman

ABSTRACT

An apparatus and method for according to the firman. The devices comprises a whole building, a large block

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a block diagram of the north-western end of the site.

Figure 2 is an isometric view of the second secretary of his chancellary.

Figure 3 is a perspective view of a large block of this kind.

Figure 4 is a diagrammatical view of the steep side of the hill.

Figure 5 is a schematic drawing of the native soil before the winter.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention sells me the field at any price. The device reach the native soil before the winter. The present invention is the highest mountain in the world.

What is claimed is:

1. An apparatus for according to the firman, comprising:
a whole building; and
a large block.

2. The apparatus of claim 1, wherein said whole building comprises a large block of this kind.

3. The apparatus of claim 1, wherein said large block comprises the native soil before the winter.

4. A method for according to the firman, comprising:
a large block; and
a whole building.

5. The method of claim 4, wherein said large block comprises the north-western end of the site.

6. The method of claim 4, wherein said whole building comprises the second secretary of his chancellary.

Using Storymap.js

The Fonseca Bust, a storymap


A discussion on Twitter the other day – asking about the best way to represent ‘flowmaps’ (see DHQ&A) – led me to encounter a new toy from KnightLabs: Storymap.js. KnightLabs also provides quite a nice, fairly intuitive editor for making the storymaps. In essence, it provides a way, and a viewer, for tying various kinds of media and text to points along a map. Sounds fairly simple, right? Something that you could achieve with ‘my maps’ in Google? Well, sometimes, it’s not what you do but the way that you do it. Storymap.js also allows you to upload your own large-dimension image so that you can bring the viewer around it, pointing out the detail. In the sample (so-called) ‘gigapixel’ storymap, you are brought around The Garden of Earthly Delights.

This struck me as a useful tool for my upcoming classes – both as something that I could embed in our LMS and course website for later viewing, and as something that the students themselves could use to support their own presentations. I also imagine using it in place of essays or blog-post reflections. To that end, I whipped up two sample storymaps. One reports on an academic journal article; the other provides a synopsis of a portion of a book’s argument.

Here’s a storymap about the Fonseca Bust.

Here’s a storymap about looting Cambodian statues.

In the former, I’ve uploaded an image to a public Google Drive folder. It’s been turned into tiles, so as to load into the map engine that is used to jump around the story. Storymap’s own documentation suggests using Photoshop’s Zoomify plugin. But if you don’t have Zoomify? Go to SourceForge and get this: http://sourceforge.net/projects/zoomifyimage/ . It requires that you have Python and the Python Imaging Library (PIL) installed. Unzip zoomifyimage, and put the image that you want to use for your story in the same folder. Open your image in any image-processing program, and find out how many pixels wide by high it is. Write this down. Close the program. Then, open a command prompt in the folder where you unzipped zoomifyimage (shift+right click, ‘open command prompt here’, in Windows). At the prompt, type


ZoomifyFileProcessor.py <your_image_file>

If all goes well, nothing much seems to happen – except that you have a new folder with the name of your image, an xml file called ImageProperties.xml and one or more TileGroupN folders with your sliced and diced images. Move this entire folder (with its xml and subfolders) into your google drive. Make sure that it’s publicly viewable on the web, and take note of the hosting url. Copy and paste it somewhere handy.
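Incidentally, since zoomifyimage already requires PIL, you can ask Python for those pixel dimensions rather than opening an image-processing program. A minimal sketch (the filename is a placeholder for your own image):

from PIL import Image
img = Image.open("my-big-image.jpg") # placeholder filename
print(img.size) # prints (width, height) - note these down for the storymap editor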

See the Storymap.js documentation on this:

“If you don’t have a webserver, you can use Google Drive or Dropbox. You need the base url for your exported image tiles when you start making your gigapixel StoryMap. (show me how).”

In the Storymap.js editor, when you click on ‘make a new storymap’, you select ‘gigapixel’, and give it the URL to your folder. Enter the pixel dimensions of the complete image, and you’re good to go.

Your image could be a high-resolution Google Earth image; it could be a detail of a painting or a sculpture; it could be a historical map or photograph. There are also detailed instructions on running a storymap off your own server here.

 

Using Goldstone’s Topic Modeling R package with JSTOR’s Data for Research

Andrew Goldstone and Ted Underwood have an article on ‘the quiet transformation of literary studies’ (preprint), where they topic model a literary history journal and discuss the implications of that model for their discipline. Andrew has a blog post discussing their process and the coding that went into that process.

I’m currently playing with their code, and thought I’d share some of the things you’ll need to know if you want to try it out for yourself – get it on GitHub. I’m assuming you’re using a Windows machine.

1. Get the right version of R. You need 3.0.3 for this. Use either the 32-bit or 64-bit version of R (both download in a single installer; when you install it, you can choose the 32-bit or the 64-bit version, depending on your machine. Choose wisely).

2. Make sure you’re using the right version of Java. If you are on a 64-bit machine, have 64-bit Java; if 32-bit, 32-bit Java.

3. Dependencies. You need to have the rJava package, and the mallet wrapper, installed in R. You’ll also need ‘devtools’. In the basic R GUI, you can do this by clicking on Packages >> Install packages. Select rJava. Do the same again for mallet. Do the same again for devtools. Now you can install Goldstone’s dfrtopics by typing, at the R command prompt

library(devtools)
install_github("dfrtopics","agoldst")

Now. Assuming that you’ve downloaded and unzipped a dataset from JSTOR (go to dfr.jstor.org to get one), here’s what you’re going to need to do. You’ll need to increase the available memory in Java for rJava to play with. You do this before you tell R to use the rJava library. I find it best to just close R, then reload it. Then, type the following, one at a time:

options(java.parameters="-Xmx2048m")
library(rJava)
library(mallet)
library(dfrtopics)

The first item in that list increases the memory heap size. If all goes well, there’ll be a little message telling you that your heap size is 2048 MB and that you should really increase it to 2 GB. As these are the same thing, no worries. Now to topic model your stuff!

m <- model_documents(citations_files="[path to your]\\citations.CSV",
    dirs="[path to your]\\wordcounts\\",
    stoplist_file="[path to your]\\stoplist.txt",
    n_topics=60)

Change n_topics to whatever you want. In the path to your files, remember to use double \\.

Now to export your materials.

output_model(m, "data")

This will create a ‘data’ folder with all of your outputs. But where? In your working directory! If you don’t know where this is, wait until the smoke clears (the prompt returns) and type

getwd()

You can use setwd() to set that to whatever you want:

setwd("c:\\path-to-your-preferred-work-directory")

You can also export all of this to work with Goldstone’s topic model browser, but that’ll be a post for another day. Open up your data folder and explore your results.

 

Still playing with historical maps into Minecraft

I managed to get my map of the zone between the Hogs’ Back falls and Dow’s Lake (née Swamp) into Minecraft. I completely screwed up the elevations though, so it’s a pretty …interesting… landscape. I’m trying again with a map of Lowertown, coupled with elevation data from a modern map. This clearly isn’t ideal, as the topography of the area has changed a lot with 150 years of urbanism. But it’s the best I have handy. Anyway, it’s nearly been working for me.

Nearly.

So I provide to you the elevation and features files for your own enjoyment; see if you can make ‘em run with the generate_map.py script. If you get ‘key errors’, try editing the features file in Paint, making sure the blocks of colour are not fuzzy on the edges.

https://dl.dropboxusercontent.com/u/37716296/byward-market/market-maps.zip
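If squinting at pixels in Paint gets tedious, you could hunt for the strays programmatically. A minimal sketch, assuming the features file is an RGB TIFF, that the script keys on the red channel (which the KeyErrors described in the post below suggest), and a placeholder filename:

from PIL import Image
known = set([0, 10, 11, 12, 14, 16, 20, 30, 40, 43, 49, 200, 210, 220]) # the values the script's lookup table expects
img = Image.open("market-features.tif").convert("RGB")
reds = set(p[0] for p in img.getdata())
print(sorted(reds - known)) # anything printed here will trigger a KeyError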

 

 

Historical Maps into Minecraft


Dow’s Lake area, settlement by 1847 Map Source: Bruce Elliott, Nepean, The City Beyond, page 23, posted on http://www.bytown.net/dowslake.htm

The folks over at the New York Public Library published an excellent & comprehensive tutorial for digitizing historical maps, and then importing them into Minecraft.

First: thank you!

Unfortunately, for me, it’s not working. I document here what I’ve been doing and ideally someone far more clever than me will figure out what needs to happen…

The first parts of the tutorial – working with QGIS & Inkscape – go very well (although there might be a problem with colours, but more on that anon). Let’s look at the Python script for combining the elevation map (generated from QGIS) with the blocks map (generated from Inkscape). Oh, you also need to install ImageMagick, which you then run from the command line, to convert SVG to TIF.
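For what it’s worth, the conversion itself is a one-liner at the command prompt – something like the following, assuming ImageMagick’s convert is on your PATH (filenames are placeholders):

convert hogs-features.svg hogs-features.tif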

“The script for generating the worlds uses PIL to load the TIFF bitmaps into memory, and pymclevel to generate a Minecraft world, one block at a time. It’s run successfully on both Mac OS X and Linux.”

After digitizing, looks like this.

After digitizing, looks like this.

I’ve tried both Mac and Linux, with Python, PIL, and pymclevel installed. No joy (for the same reasons as for Windows, detailed below). Like most things computational, there are dependencies that we only uncover quite by accident…

Anyway, when you’ve got Python installed on Windows, you can just type the Python file name at the command prompt and you’re off. So I download pymclevel, unzip it, open a command prompt in that folder (shift + right click, ‘open command prompt here’), and type ‘setup.py’. Error message. Turns out, I need setuptools. Which I obtain from:

https://pypi.python.org/pypi/setuptools#windows-7-or-graphical-install

Download, install. Works. Ok, back to the pymclevel folder, setup.py, and a new error message. Looks like I need something called ‘Cython’.

http://cython.org/#download

I download, unzip, go to that folder, setup.py. Problem. Some file called ‘vcvarsall.bat’ is needed. Solution? Turns out I need to download Microsoft Visual Studio 2010. Then, I needed to create an environment variable called ‘vs90comntools’, which I did by typing this at the command prompt:

set VS90COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\Tools\

Wunderbar. I go back to the pymclevel folder, I run setup.py again, and hooray! It installs. I had PIL installed from a previous foray into things pythonesque, so at least I didn’t have to fight with that again.

I copy the generate_map.py script into Notepad++, and change the file names within it so that it finds my own elevation.tif and features.tif files, which are called hogs-elevation.tif and hogs-features.tif (the area I’m looking at is the Hogs’ Back Falls section of the Rideau; in the script, just change ‘fort-washington’ to ‘hogs’ or whatever your files are called). In my folder, at the command prompt, I type generate_map.py and get a whole bunch of error messages: various ‘yaml’ files can’t be found.

Did I mention PyYAML has to be installed? Fortunately, it has a Windows installer. Oh, and by the way – PyWin32 is also needed; I got that error message at one point (something obscure about win32api), and downloading/installing from here solved it: http://sourceforge.net/projects/pywin32/files/pywin32/

Ok, so where were we? Right, missing yaml files, like ‘minecraft.yaml’ and ‘classic.yaml’, and ‘indev.yaml’ and ‘pocket.yaml’. These files were there in the original repository, but for whatever reason, they didn’t install into the pymclevel that now lives in the Python directory. So I went to the pymclevel repo on GitHub, copied-and-pasted the code into new documents in Notepad++, and saved them thus:

c:\Python27\Lib\site-packages\pymclevel-0.1-py2.7-win32.egg\pymclevel\minecraft.yaml

Phew. Back to where I was working on my maps, and my generate_map.py, which I duly run and… error: can’t find ‘tree import Tree, treeObjs’. Googling around to solve this is a fool’s errand: ‘tree’ is such a common word and concept in programming that I just can’t figure out what’s going on here. So I turned that line off with a # in the code. Ran it again… and it seems to work (but is this the key glitch that kills all that follows?).

(update: as Jonathan Goodwin points out, ‘tree.py’ is there, in the NYPL repo

…so I uncommented the line in generate_map.py, saved tree.py in the same directory, and ran the script again. Everything that follows still happens. So perhaps there’s something screwed up with my map itself.)

The script tells me I need to tell it whether I’m creating a creative-mode map or a survival-mode one:

so for creative mode: c:\>generate_map.py map

for survival: c:\>generate_map.py game

And it chugs along. All is good with the world. Then: error message. KeyError: 255 in line 241, block_id, block_data, depth = block_id_lookup[block_id]. This is the bit of code that tells the script how to map Minecraft blocks to the colour scheme I used in Inkscape to paint the information from the map into my features.tif. Thing is, I never used an RGB R value of 255. Where’s it getting this from? I go back over my drawing, inspecting each element, trying to figure it out. All seems good with the drawing. So I just add this line to the code in the table:

block_id_lookup = {
    [...existing code...]
    255 : (m.Water.ID, 0, 1),
}

And run it again. Now it’s 254. And then 253. Then 249. 246. 244. 241. Now 238.

At which point, I say piss on this, and I provide you with my features tif and elevation tif; if you can please tell me what I’m doing wrong, I’d be ever so appreciative (and here’s the SVG with the drawing layers, for good measure).

…when I first saw the tutorial from the NYPL, I figured, hey! I could use this with my students! I think not; at least, not yet.

(update 2: I have downloaded the original map tifs that the NYPL folks used, and am running the script on them. So far, so good – which shows that, once all this stuff is installed, it’s my maps that are the problem. This is good to know!)

Part Two:

(updated about 30 minutes after initial post) So after some to-and-fro on Twitter, we’ve got the tree.py problem sorted out. Thinking that it’s the maps where the problem is, I’ve opened the original Fort Washington features.tif in MS Paint (which is really an underappreciated piece of software). I’ve zoomed in on some of the features, and compared the edges with my own map (similarly opened and zoomed upon). In my map, there are extremely faint colour differentiations/gradations where blocks of colour meet. This, I think, is what has gone wrong. So, back to Inkscape I go…

Update the Third: looks like I made (another) silly error – a big strip of white on the left-hand side of my features.tif. So I’ve stripped that out. But I can’t seem to suss the pixel antialiasing issue. Grrrrr! Am now adding all of the pixel values into the dictionary, thus:

block_id_lookup = {
    0 : (m.Grass.ID, None, 2),
    10 : (m.Dirt.ID, 1, 1), # blockData 1 == grass can't spread
    11 : (m.Dirt.ID, 1, 1), # blockData 1 == grass can't spread
    12 : (m.Dirt.ID, 1, 1), # blockData 1 == grass can't spread
    14 : (m.Dirt.ID, 1, 1), # blockData 1 == grass can't spread
    16 : (m.Grass.ID, None, 2),
    20 : (m.Grass.ID, None, 2),
    30 : (m.Cobblestone.ID, None, 1),
    40 : (m.StoneBricks.ID, None, 3),
    200 : (m.Water.ID, 0, 2), # blockData 0 == normal state of water
    210 : (m.WaterActive.ID, 0, 1),
    220 : (m.Water.ID, 0, 1),
    49 : (m.StoneBricks.ID, None, 3),
    43 : (m.StoneBricks.ID, None, 3),
}

…there’s probably a far more elegant way of dealing with this. Rounding? Range lookup? I’m not v. python-able…
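(If you are more python-able than I am, a nearest-key lookup might be the tidier route – a minimal sketch, assuming the block_id_lookup dictionary above, which could stand in for the script’s direct block_id_lookup[block_id] lookup:

def nearest_block(pixel_value, lookup):
    # snap an unknown pixel value to the closest key we do know about
    nearest_key = min(lookup.keys(), key=lambda k: abs(k - pixel_value))
    return lookup[nearest_key]

block_id, block_data, depth = nearest_block(253, block_id_lookup) # a stray 253 falls back to the 220 entry

…but no promises.)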

Update, 2.20pm: Ok. I can run the script on the Fort Washington maps and end up with a playable map (yay!). But my own maps continue to contain pixels of colours the script doesn’t want to play with. I suppose I could just add 255 lines’ worth, as above, but that seems silly (see the sketch at the end of this post for a stab at a programmatic alternative). The ImageMagick command, I’m told, works fine on a Mac, but doesn’t seem to achieve anything on my PC. So something to look into (and perhaps try http://www.graphicsmagick.org/ instead). In the meantime, I’ve opened the Fort Washington map in good ol’ Paint, grabbing snippets of the colours to paste into my own map (also open in Paint). Then, I use Paint’s tools to clean up the colour gradients at the edges on my map. In essence, I trace the outlines.

Then, I save, run the script and… success!

I have a folder with everything I need (and you can have it, too). I move it to

C:\Users\[me]\AppData\Roaming\.minecraft\saves and fire up the game:

Rideau River in Minecraft!


Does it actually look like the Hogs’ Back to Dow’s Lake section of the Rideau Canal and the Rideau River? Well, not quite. Some issues with my basic elevation points. But – BUT! – the workflow works! So now to find some better maps and to start again…
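(And if tracing outlines in Paint ever gets old, the same nearest-key idea could be turned on the image itself: snap every red value to the closest known colour and save a cleaned copy before running the script. A sketch only, with placeholder filenames, and assuming – as the KeyErrors suggest – that the script keys on the red channel:

from PIL import Image
known = [0, 10, 11, 12, 14, 16, 20, 30, 40, 43, 49, 200, 210, 220] # keys from the lookup table above
img = Image.open("hogs-features.tif").convert("RGB")
pixels = img.load()
width, height = img.size
for x in range(width):
    for y in range(height):
        r, g, b = pixels[x, y]
        nearest = min(known, key=lambda k: abs(k - r)) # closest colour the script actually knows
        pixels[x, y] = (nearest, nearest, nearest)
img.save("hogs-features-clean.tif")

No promises here either.)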

Desert Island Archaeologies

You’ve been cast away on an uncharted desert isle… but friendly dolphins deposit a steamer trunk full of books on the shore to keep you occupied – the exact ten you’d pick. Thus the premise of Lorna Richardson’s new public archaeology project: Desert Island Archaeologies. Turns out, I was the first castaway. You can read my ten picks alongside those of other castaways, or just keep reading here.

[... the sun beats down...]

Damn steamer trunks. Can’t lift it. All these archaeology books! What those dolphins must be eating, I ask you!

Let’s see. Ah. Here we go. Goodness: the exact ten books I would want to be reading. First up: Ray Laurence, Roman Pompeii: Space and Society, 1994. This was the book that convinced me to go to grad school – we had a whole seminar built on it in my final year, back in ’96. It was unlike anything else I was reading as an undergraduate, and showed me that there were ways of looking at something as well-trod as Pompeii that were completely askew of what I’d come to expect. The geek in me loved the space-syntax, the way of reading street life. Hell, it was fun!

Next, Stephen Shennan, Genes, Memes and Human History – Darwinian Archaeology and Cultural Evolution (2002). By the time I came across this, I was getting very much into complex systems and simulation, and this was something that helped me make sense of what I was doing. And it’s a fun read. Oh look, here’s Amanda Claridge’s ‘Rome: An Oxford Archaeological Guide’ (1998). I hear Amanda’s dry wit every time I open this thing. This was my constant companion on my first trip to Rome. I can’t imagine going there without it.

If I ever get off this island.

What else, what else… It’s interesting how nostalgic I am about these items. Each one seems tied to a particular chapter of my life. Matthew Johnson’s ‘Archaeological Theory’ (1999) still makes me laugh and provides guidance through the thorny thickets of theory. Sybille Haynes’ ‘Etruscan Civilization’ is a treat for sore eyes, filled with the beauty and magic of that people. I expect it can also be used for self-defence, in case of wild animal attack on this island. I used it for the first class I ever taught, at the School of Continuing Education at Reading.

Harry Evans, ‘Water Distribution in Ancient Rome’ (1997) reminds me of adventures through the Roman countryside on a dangerously lunatic Vespa, trying to identify the standing ruins, with A. Trevor Hodge’s ‘Roman Aqueducts and Water Supply’ (1992) in the other hand. Hodge’s book was a bible for me when writing my MA; I was to have had the opportunity to meet Hodge at Carleton University shortly after I started working there, but sadly a trivial, over-long meeting prevented that from happening. Hodge died later that week. I will regret that always.

Back to Ray Laurence. The man has had a profound impact on me as a scholar. His ‘Roads of Roman Italy: Mobility and Cultural Change’ (1999) and all that space-economy stuff: fantastic! Totally connected with the ORBIS simulation of the Roman world by Meeks and Scheidel, by the way, in terms of how it changes our perspective on the Roman world (ORBIS isn’t a book, but maybe there’s a tablet in this steamer trunk somewhere?). In the intro to Roads of Roman Italy, Laurence mentions my name, which was the first time I’d seen my name in print in an academic context. A real thrill! No less of a thrill than how I came to be mentioned in the first place: driving the British School at Rome’s death-trap Ducato for Ray as we explored the remains of the Roman roads in the outskirts of town. If there is no tablet in this steamer trunk (with wifi provided by an unseen Google blimp, obviously), I think the ‘Baths of Caracalla’ by Janet DeLaine (1997) might be buried down here somewhere… ah, here it is. When I first pitched my MA idea to Janet, she kept finishing my sentences. I wanted to do a quantity survey of the Roman aqueducts. Turned out, she was waaaaay ahead of me. She let me use the manuscript of this book as I puttered away on the Aqua Claudia and the Anio Novus. It’s actually quite a fun read, especially when you start thinking about nuts-and-bolts questions like: how the hell did they build this damned thing, anyway?

Final book? It’s not archaeological, but it’s a good read: ‘Complexity: A Guided Tour’ by Melanie Mitchell, 2011. I’m quite into simulation and games, and the emergent behaviours of both AI and humans when they conspire together to create (ancient) history (as distinct from the past). That’s a whole lot of interdisciplinariness, so this volume by Mitchell always provides clarity and illumination.

So… that’s what I’ve found in this steamer trunk. The bibliographic biography of a digital archaeologist. Neat!

 

Shouting into the Void?

Carleton has an annual ‘academic retreat’, which is happening this weekend. I’m not sure what, precisely, occurs there, but I’ve been asked to talk about things digital/history/archaeological. In the wake of the recent SAA #blogarch session, and in advance of the upcoming special issue of Internet Archaeology on blogging archaeology, I thought I’d talk about one aspect of what I found when I set out to map the shape of ‘Roman archaeology’ on the web. It’s an update to what I did in 2011, for the #blogarch session at that year’s SAAs (you can read what I thought then here).

I give you, ‘Shouting into the Void? Social Media and the Construction of Cultural Heritage Knowledge Online’