Setting the groundwork for an undergraduate thesis project

We have a course code, HIST4910, for students doing their undergraduate thesis project. This project can take the form of an essay, a digital project, code, or any of the manifold ways digital history/humanities research is communicated.

Hollis Peirce will be working with me this year on his HIST4910, which for now is called ‘The Evolution of the Digitization of History: Making History Accessible’. Hollis has been to DHSI twice now, once to take courses on digitization, once to work on the history of the book. Hollis’ interest in accessibility (as distinct from ‘open access’, which is an entirely different kettle of fish) makes this an exciting project, I think. If you’re interested in this subject, let us know! We’d like to make connections.

We met today to figure out how to get this project running, and I realized it would be handy to have a set of guidelines for getting started. We don’t seem to have anything like this around the department, so Hollis and I cobbled some together. I figured other folks might be interested, so here they are.

Play along at home with #hist3812a

In my video games and history class, I assign each week one or two major pieces that I want everyone to read. Each week, a subset of the class has to attempt a ‘challenge’, which involves reading a bit more, reflecting, and devising a way of making their argument – a procedural rhetoric – via a game engine (in this case, Twine). Later on, they’ll be building in Minecraft. Right now, we have nearly 50 students enrolled.

If you’re interested in following along at home, here are the first few challenges. These are the actual prompts cut-n-pasted out of our LMS. Give ‘em a try if you’d like, upload to philome.la, and let us know! Ours will be at hist3812a.dhcworks.ca

I haven’t done this before, so it’ll be interesting to see what happens next.

Introduction to #hist3812a

Challenge #1

Read:

  1. Fogu, Claudio. ‘Digitalizing Historical Consciousness’, History and Theory 2, 2009.
  2. Tufekci, Zeynep. ‘What Happens to #Ferguson Affects Ferguson: Net Neutrality, Algorithmic Filtering and Ferguson.’ Medium, August 14, 2014.

Craft:

A basic Twine that highlights the ways the two articles are connected.

Share:

Put your Twine build (the *.html file) into the ‘public’ folder in your Dropbox account (if you don’t have a public folder, just right-click and select ‘public link’ - see this help file). Share the link on our course blog:

  1. Create a new post.
  2. Hit the ‘html’ button.
  3. type:
  4. Preview your post to make sure it loads your Twine.

Play:

Explore others’ Twines and be ready to discuss this process and these readings in Tuesday’s class.

A history of games, and of video games

Challenge #2

Read & Watch:

Antecedents (read the intros):

Shannon, C. ‘A Mathematical Theory of Communication.’ Reprinted with corrections from The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July, October 1948. http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf

Turing, Alan Mathison. ‘On Computable Numbers, with an Application to the Entscheidungsproblem.’ Proceedings of the London Mathematical Society, 2nd series, 42 (1936–37): 230–265. http://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf

Cold War (watch this entire lecture): https://www.youtube.com/watch?v=_otw7hWq58A

1980s:

Dillon, Roberto. The Golden Age of Video Games: The Birth of a Multi-Billion Dollar Industry. CRC Press, 2011.

Christiansen, Peter. ‘Dwarf Norad: A Glimpse of Counterfactual Computing History.’ Play the Past, August 6, 2014. http://www.playthepast.org/?p=4892

Craft:

A Twine that imagines what an ENIAC developed to serve the needs of historians might’ve looked like; that is, explore Christiansen’s argument.

Share:

Put your Twine build (the *.html file) into the ‘public’ folder in your Dropbox account. Share the link on our course blog by:

  1. Create a new post.
  2. Hit the ‘html’ button.
  3. type:
  4. Preview your post to make sure it loads your Twine.

Play:

Explore others’ Twines and be ready to discuss this process and these readings in Tuesday’s class.

Historical Consciousness and Worldview

Challenge #3

Read:

Kee, Graham, et al. ‘Towards a Theory of Good History Through Gaming.’ The Canadian Historical Review, Volume 90, Number 2, June 2009, pp. 303-326. http://muse.jhu.edu/journals/can/summary/v090/90.2.kee.html

Travis, Roger. ‘Your practomimetic school: Duck Hunt or BioShock?’ Play the Past, Oct 21, 2011. http://www.playthepast.org/?p=2067

Owens, T. ‘What does Simony say? An interview with Ian Bogost.’ Play the Past, Dec 13, 2012. http://www.playthepast.org/?p=3394

Travis, Roger. ‘A Modest Proposal for viewing literary texts as rulesets, and for making game studies beneficial for the publick.’ Play the Past, Feb 9, 2012. http://www.playthepast.org/?p=2417

McCall, Jeremiah. ‘Historical Simulations as Problem Spaces: Some Guidelines for Criticism.’ Play the Past. http://www.playthepast.org/?p=2594

(Not assigned, but more of Travis’ work: http://livingepic.blogspot.ca/2012/07/rules-of-text-series-at-play-past.html)

Craft:

A Twine that exposes the underlying rhetorics of the game of teaching history.

Share:

Put your Twine build (the *.html file) into the ‘public’ folder in your Dropbox account. Share the link on our course blog by:

  1. Create a new post.
  2. Hit the ‘html’ button.
  3. type:
  4. Preview your post to make sure it loads your Twine.

Play:

Explore others’ Twines and be ready to discuss this process and these readings in Tuesday’s class.

Critical Play Week

Challenge #4

Remember: 

Keep notes on the discussions from the critical play session; move around the class, talk with people about what they’re playing and why they’re making the moves they make, and think about the connections with the major reading.

(NB: I’ve assigned all the students to bring in video games and board games to play in both sessions this week. We might decamp to the game lab in the library to make this work. This group will observe the play. I’ve also pointed them to Feminist Frequency as an example of the kind of criticism I want them to emulate.)

Craft:

Devise a Twine that captures the dynamic and discussions of this week’s in-class critical play. Remember, for historians, it may be all about time and space.

Share:

Put your Twine build (the *.html file) into the ‘public’ folder in your Dropbox account. Share the link on our course blog by:

  1. Create a new post.
  2. Hit the ‘html’ button.
  3. type:
  4. Preview your post to make sure it loads your Twine.

Play:

Explore others’ Twines and be ready to discuss this process and these readings in Tuesday’s class.

Material Culture and the Digital

Challenge #5

Read:

Montfort et al., ‘Introduction,’ 10 Print. http://10print.org/ (download the pdf)

Montfort et al., ‘Mazes,’ 10 Print. http://10print.org/ (download the pdf)

Bogost, Ian, and Montfort, N. ‘New Media as Material Constraint: An Introduction to Platform Studies.’ 1st International HASTAC Conference, Duke University, Durham, NC. http://bogost.com/downloads/Bogost%20Montfort%20HASTAC.pdf

Craft:

Make a Twine game that emulates Space Invaders; then discuss (within the Twine) the interaction between game, platform, and experience. Think also about ‘emulation’…

OR

Play one of these games, reviewing it via Twine, and thinking about it in a way that reverses the points made by Montfort & Bogost (i.e., think about the way the physical is represented in the software).

Share:

Put your Twine build (the *.html file) into the ‘public’ folder in your Dropbox account. Share the link on our course blog by:

  1. Create a new post.
  2. Hit the ‘html’ button.
  3. type:
  4. Preview your post to make sure it loads your Twine.

Play:

Explore others’ Twines and be ready to discuss this process and these readings in Tuesday’s class.

 

SAA 2015: Macroscopic approaches to archaeological histories: Insights into archaeological practice from digital methods

Ben Marwick and I are organizing a session for the SAA2015 (the 80th edition, this year in San Francisco) on “Macroscopic approaches to archaeological histories: Insights into archaeological practice from digital methods”. It’s a pretty big tent. Below is the session ID and the abstract. If this sounds like something you’d be interested in, why don’t you get in touch?

Session ID 743.

The history of archaeology, like that of most disciplines, is often presented as a sequence of influential individuals and a discussion of their greatest hits in the literature. Two problems with this traditional approach are that it sidelines the majority of participants in the archaeological literature, who are excluded from these discussions, and that it does not capture the conversations outside of the canonical literature. Recently developed computationally intensive methods, as well as creative uses of existing digital tools, can address these problems by efficiently enabling quantitative analyses of large volumes of text and other digital objects, and enabling large-scale analysis of non-traditional research products such as blogs, images and other media. This session explores these methods, their potentials, and their perils, as we employ so-called ‘big data’ approaches to our own discipline.

—-

Like I said, if that sounds like something you’d be curious to know more about, ping me.

Quickly Extracting Data from PDFs

By ‘data’, I mean the tables. There are lots of archaeological articles out there that you’d love to compile together to do some sort of meta-study. Or perhaps you’ve gotten your hands on pdfs with tables and tables of census data. Wouldn’t it be great if you could just grab that data cleanly? Jonathan Stray has written a great synopsis of the various things you might try, and has sketched out a workflow you might use. Having read that, I wanted to try ‘Tabula’, one of the options that he mentioned. Tabula is open source and runs on all the major platforms. You simply download it and double-click on the icon; it runs within your browser. You load your pdf into it, and then draw bounding boxes around the tables that you want to grab. Tabula will then extract that table cleanly, allowing you to download it as a csv or tab-separated file, or paste it directly into something else.

For instance, say you’re interested in the data that Gill and Chippindale compiled on Cycladic Figures. You can grab the pdf from JSTOR:

Material and Intellectual Consequences of Esteem for Cycladic Figures
David W. J. Gill and Christopher Chippindale
American Journal of Archaeology, Vol. 97, No. 4 (Oct. 1993), pp. 601-659
Article DOI: 10.2307/506716

Download it, and then feed it into Tabula. Let’s look at table 2.

[Screenshot: Gill and Chippindale’s Table 2 as it appears in the PDF]
You could just highlight this table in your pdf reader and hit ctrl+c to copy it; when you paste that into your browser, you’d get:
[Screenshot: the same table pasted from the clipboard]
Everything in a single column. For a small table, maybe that’s not such a big deal. But let’s look at what you get with Tabula. You drag the square over that same table; when you release the mouse button you get:
[Screenshot: Tabula’s extraction of Table 2]
Much, much cleaner & faster! I say ‘faster’, because you can quickly drag the selection box around every table and hit download just the one time. Open the resulting csv file, and you have all of your tables in a useful format:
[Screenshot: the resulting CSV opened in a spreadsheet]
But wait, there’s more! Since you can copy directly to the clipboard, you can paste directly into a Google Drive spreadsheet (thus taking advantage of all the visualization options that Google offers) or into something like Raw from Density Design.
Tabula is a nifty little tool that you’ll probably want to keep handy.
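
If the point of all this is a meta-study, the next stop after Tabula is probably a script. Here is a minimal sketch in Python, assuming pandas is installed and that the filename below stands in for whatever you called Tabula’s CSV export (the name is hypothetical):

```python
import pandas as pd

# Hypothetical filename: whatever you saved from Tabula's 'download as CSV'
df = pd.read_csv("tabula-gill-chippindale-table2.csv")

# Quick sanity checks before any meta-analysis
print(df.shape)    # rows, columns
print(df.columns)  # the header row as Tabula captured it

# Tabula sometimes leaves numeric columns as strings (commas, footnote
# marks), so coerce anything that should be a number, keeping NaN otherwise
for col in df.columns[1:]:
    df[col] = pd.to_numeric(df[col], errors="coerce")
print(df.head())
```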

Getting Historical Network Data into Gephi

I’m running a workshop next week on getting started with networks & Gephi. Below, please find my first pass at a largely self-directed tutorial. This may eventually get incorporated into the Macroscope.

Data files for this tutorial may be found here. There’s a pdf/pptx with the images below, too.

The data for this exercise comes from Peter Holdsworth’s MA dissertation research, which Peter shared on Figshare here. Peter was interested in the social networks surrounding ideas of commemoration of the centenary of the War of 1812, in 1912. He studied the membership rolls for women’s service organizations in Ontario both before and after that centenary. By making his data public, Peter enables others to build upon his own research in a way not commonly done in history. (Peter can be followed on Twitter at https://twitter.com/P_W_Holdsworth).

On with the show!

Download and install Gephi. (What follows assumes Gephi 0.8.2.) You will need the MultiMode Projection plugin installed.

To install the plugin, select Tools >> Plugins (across the top of Gephi you’ll see ‘File Workspace View Tools Window Plugins Help’. Don’t click on that ‘Plugins’; you need to hit ‘Tools’ first. Some images would be helpful, eh?).

In the popup, under ‘available plugins’, look for ‘MultimodeNetworksTransformation’. Tick this box, then click on Install. Follow the instructions, ignore any warnings, and click on ‘finish’. You may or may not need to restart Gephi to get the plugin running. If you suddenly see, on the far right of the Gephi window, a new tab beside ‘statistics’ and ‘filters’ called ‘Multimode Network’, then you’re ok.

[Slide: Getting the Plugin]

Assuming you’ve now got that sorted out,

1. Under ‘file’, select New Project.
2. On the Data Laboratory tab, select ‘Import Spreadsheet’, and in the pop-up make sure to select ‘EDGES table’ under ‘As table’. Select women-orgs.csv. Click ‘next’, then click ‘finish’.

(On the data table, have ‘edges’ selected. This is showing you the source and the target for each link (aka ‘edge’). This implies a directionality to the relationship that we just don’t know – so down below, when we get to statistics, we will always have to make sure to tell Gephi that we want the network treated as ‘undirected’. More on that below.)
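
If you like to double-check things outside of Gephi, here is a minimal Python/NetworkX sketch that loads the same edge list as an undirected graph. I’m assuming the csv has ‘Source’ and ‘Target’ columns, which is the layout Gephi expects; adjust the names to match the actual file.

```python
import pandas as pd
import networkx as nx

# Assumed column names: Gephi edge tables use 'Source' and 'Target';
# check the actual header of women-orgs.csv and adjust if needed.
edges = pd.read_csv("women-orgs.csv")

# Build an *undirected* graph, since we know nothing about directionality
G = nx.from_pandas_edgelist(edges, source="Source", target="Target")

print(G.number_of_nodes(), "nodes;", G.number_of_edges(), "edges")
```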

[Slide: Loading your CSV file, step 1]

[Slide: Loading your CSV file, step 2]

3. Click on ‘copy data to other column’. Select ‘Id’. In the pop-up, select ‘Label’
4. Just as you did in step 2, now import NODES (Women-names.csv)

(nb. You can always add more attribute data to your network this way, as long as you always use a column called Id so that Gephi knows where to slot the new information. Make sure to never tick off the box labeled ‘force nodes to be created as new ones’.)

[Slide: Adding new columns]

5. Copy ID to Label
6. Add new column, make it boolean. Call it ‘organization’

[Slide: Filtering & ticking off the boxes]

7. In the Filter box, type [a-z], and select Id – this filters out all the women.
8. Tick off the check boxes in the ‘organization’ column.

Save this as ‘women-organizations-2-mode.gephi’.

Now, we want to explore how women are connected to other women via shared membership.

[Slide: Setting up the transformation]

Make sure you have the Multimode networks projection plugin installed.

On the multimode networks projection tab,
1. click load attributes.
2. in ‘attribute type’, select organization
4. in left matrix, select ‘false – true’ (or ‘null – true’)
5. in right matrix, select ‘true – false’. (or ‘true – null’)
(do you see why this is the case? what would selecting the inverse accomplish?)

6. select ‘remove edges’ and ‘remove nodes’.

7. Hit ‘run’. Organizations will be removed from your bipartite network, leaving you with a single-mode network.

8. save as ‘women to women network.csv’

…you can reload your ‘women-organizations-2-mode.gephi’ file and re-run the multimode networks projection so that you are left with an organization to organization network.
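
If you’d rather script that projection than click through the plugin, NetworkX’s bipartite module can do the equivalent. A sketch, under the assumption that in women-orgs.csv the Source column holds the women and the Target column holds the organizations:

```python
import pandas as pd
import networkx as nx
from networkx.algorithms import bipartite

edges = pd.read_csv("women-orgs.csv")  # assumed 'Source'/'Target' columns
G = nx.from_pandas_edgelist(edges, source="Source", target="Target")

# Assumption: Source holds the women, Target holds the organizations
women = set(edges["Source"])
orgs = set(edges["Target"])

# Women-to-women network via shared membership; the edge 'weight'
# counts how many organizations each pair of women has in common
women_net = bipartite.weighted_projected_graph(G, women)

# The organization-to-organization network is simply the other projection
org_net = bipartite.weighted_projected_graph(G, orgs)

print(women_net.number_of_nodes(), "women;", women_net.number_of_edges(), "ties")
```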

! If your data table is blank, your filter might still be active. Make sure the filter box is clear. You should be left with a list of women.

9. You can add the ‘women-years.csv’ table to your Gephi file, to add the number of organizations each woman was active in, by year, as an attribute. You can then begin to filter your graph’s attributes…

10. Let’s filter by the year 1902. Under filters, select ‘attributes – equal’ and then drag ‘1902’ to the queries box.
11. In ‘pattern’ enter [0-9] and tick the ‘use regex’ box.
12. Click ‘ok’, then click ‘filter’.

You should now have a network with 188 nodes and 8728 edges, showing the women who were active in 1902.
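
Scripted, the same filtering might look like the sketch below. This assumes women-years.csv has an ‘Id’ column plus one column per year giving a count of memberships; the real file may well be laid out differently, so treat it as a sketch only. It carries on from the projection sketch above (women_net).

```python
import pandas as pd

# Assumed layout: an 'Id' column plus one column per year with a count
years = pd.read_csv("women-years.csv")

counts_1902 = pd.to_numeric(years["1902"], errors="coerce").fillna(0)
active_1902 = set(years.loc[counts_1902 > 0, "Id"])

# women_net is the one-mode projection from the earlier sketch
women_1902 = women_net.subgraph(active_1902).copy()
print(women_1902.number_of_nodes(), "nodes,", women_1902.number_of_edges(), "edges")
```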

Let’s learn something about this network. On the Statistics tab,
13. Run ‘avg. path length’ by clicking on ‘run’.
14. In the pop-up that opens, select ‘undirected’ (as we know nothing about directionality in this network).
15. Click ‘ok’.

16. Run ‘modularity’ to look for subgroups. Make sure ‘randomize’ and ‘use weights’ are selected. Leave ‘resolution’ at 1.0.
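
For comparison, the same sort of statistics can be computed in NetworkX, though the numbers won’t match Gephi exactly: the community routine below is greedy modularity maximization, not Gephi’s Louvain implementation. A sketch, continuing from the filtered network above:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# women_1902 is the filtered, undirected network from the previous sketch.
# Average path length is only defined on a connected graph, so measure it
# on the largest connected component.
giant = women_1902.subgraph(max(nx.connected_components(women_1902), key=len))
print("avg. path length:", nx.average_shortest_path_length(giant))

# Subgroups via greedy modularity maximization (not Louvain, so the
# communities will differ somewhat from Gephi's)
communities = greedy_modularity_communities(women_1902, weight="weight")
print(len(communities), "communities found")
```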

Let’s visualize what we’ve just learned.

17. On the ‘partition’ tab, over on the left hand side of the ‘overview’ screen, click on nodes, then click the green arrows beside ‘choose a partition parameter’.
18. Click on ‘choose a partition parameter’. Scroll down to modularity class. The different groups will be listed, with their colours and their % composition of the network.
19. Hit ‘apply’ to recolour your network graph.

20. Let’s resize the nodes to show off betweenness centrality (to figure out which woman was in the greatest position to influence flows of information in this network). Click ‘ranking’.
21. Click ‘nodes’.
22. Click the down arrow on ‘choose a rank parameter’. Select ‘betweenness centrality’.
23. Click the red diamond. This will resize the nodes according to their betweenness centrality.
24. Click ‘apply’.
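
And, continuing the same sketch, here is the betweenness ranking itself, so you can see which women the resizing will emphasize (NetworkX normalizes the scores by default, so the raw numbers may differ from Gephi’s):

```python
import networkx as nx

# Betweenness centrality on the filtered 1902 network from the sketches above
bc = nx.betweenness_centrality(women_1902)

# The ten best-placed brokers of information in this network
for name, score in sorted(bc.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{score:.4f}  {name}")
```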

Now, down at the bottom of the middle panel, you can click the large black ‘T’ to display labels. Do so. Click the black letter ‘A’ and select ‘node size’.

Mrs. Mary Elliot-Murray-Kynynmound and Mrs. John Henry Wilson should now dominate your network. Who were they? What organizations were they members of? Who were they connected to? To the archives!

Congratulations! You’ve imported historical network data into Gephi, manipulated it, and run some analyses. Play with the settings on ‘preview’ in order to share your visualization as svg, pdf, or png.

Now go back to your original gephi file, and recast it as organizations to organizations via shared members, to figure out which organizations were key in early 20th century Ontario…

Historian’s Macroscope: how we’re organizing things

[Image: ‘One of the sideshows was wrestling’, from the National Library of Scotland on Flickr Commons; found by running this post through http://serendipomatic.org]

How do you coordinate something as massive as a book project, between three authors across two countries?

Writing is a bit like sausage making. I write this, thinking of Otto von Bismarck, but Wikipedia tells me:

  • Laws, like sausages, cease to inspire respect in proportion as we know how they are made.
    • As quoted in University Chronicle. University of Michigan (27 March 1869) books.google.de, Daily Cleveland Herald (29 March 1869), McKean Miner (22 April 1869), and “Quote… Misquote” by Fred R. Shapiro in The New York Times (21 July 2008); similar remarks have long been attributed to Otto von Bismarck, but this is the earliest known quote regarding laws and sausages, and according to Shapiro’s research, such remarks only began to be attributed to Bismarck in the 1930s.

I was thinking just about the messiness rather than inspiring respect; but we think there is a lot to gain when we reveal the messiness of writing. Nevertheless, there are some messy first-first-first drafts that really ought not to see the light of day. We want to do a bit of writing ‘behind the curtain’, before we make the bits and pieces visible on our CommentPress site, themacroscope.org. We are all fans of Scrivener, too, for the way it allows the bits and pieces to be moved around, annotated, rejected, resurrected and so on. Two of us are Windows folks, the other a Mac user. We initially tried using Scrivener and GitHub, as a way of managing version control over time and providing access to the latest version simultaneously. This worked fine, for about three days, until I detached the HEAD.

Who knew that decapitation was possible? Then we started getting weird line breaks and dropped index cards. So we changed tack and moved our project into a shared Dropbox folder. We know that with Dropbox we absolutely can’t have more than one of us in the project at the same time. We started emailing each other to say, ‘hey, I’m in the project….now. It’s 2.05 pm’, but that got very messy. We installed yshout and set it up to log our chats. Now, we can just check to see who’s in, and leave quick memos about what we were up to.

Once we’ve got a bit of the mess cleaned up, we’ll push bits and pieces to our CommentPress site for comments. Then, we’ll incorporate that feedback back into our Scrivener project, and perhaps re-push it out for further thoughts.

One promising avenue that we are not going down, at least for now, is to use Draft.  Draft has many attractive features, such as multiple authors, side-by-side comparisons, and automatic pushing to places such as WordPress. It even does footnotes! I’m cooking up an assignment for one of my classes that will require students to collaboratively write something, using Draft. More on that some other day.

Announcing a live-writing project: the Historian’s Macroscope, an approach to big digital history

[Image: Robert Hooke’s microscope, from http://www.history-of-the-microscope.org]

I’ve just signed a book contract today with Imperial College Press; it’s winging its way to London as I type. I’m writing the book with the fantastically talented Ian Milligan and Scott Weingart. (Indeed, I sometimes feel the weakest link – goodbye!).

It seems strangely appropriate, given the Twitter/blog furor over the AHA’s statement recommending that graduate students embargo their dissertations online, for fear of harming their eventual monograph-from-dissertation chances. We were approached by ICP to write this book largely on the strength of our blog posts, social media presence, and key articles, many of which come from our respective dissertations. The book will be targeted at senior/advanced undergrads for the most part, as a way of unpeeling the tacit knowledge around the practice of digital history. In essence, we can’t all be part of, or initiate, fantastic multi-investigator projects like ChartEx or Old Bailey Online; in which case, what can the individual achieve in the realm of fairly-big data? Our book will show you.

One could reasonably ask, ‘why a book? why not a website? why not just continue adding to things like the Programming Historian?’. We wanted to write more than tutorials (although we owe an enormous debt to the Programming Historian team, whose example and project continue to inspire us). We wanted to make the case for the why as much as explore the how, and we wanted to reach a broader audience than the digital technosavvy. In our teaching, we’ve all experienced the pushback from students who are exposed to digital tools & media all the time; a book-length treatment normalizes these kinds of approaches so that students (and lay-people) can say, ‘oh, right, yes, these are the kinds of things that historians do’ – and then they’ll seek out the Programming Historian, Stack Overflow, and myriad other sites to develop their nascent skills. Another attraction of doing a book is that we recognize that editors add value to the finished product. Indeed, our commissioning editor sent our first attempt at a proposal out to five single-blind reviewers! This project is all the stronger for it, and I wish to thank those reviewers for their generous reviews.

One thing that we insisted upon from the start was that we were going to live-write the book, openly, via a CommentPress installation. I submitted a piece to the Writing History in the Digital Age project a few years ago. That project exposed the entire process of writing an edited volume. The number and quality of responses was fantastic, and we knew we wanted to try for that here. We argued in our proposal that this process would make the book stronger, save us from ourselves, and build a potential readership long before the book ever hit store shelves. We were astonished and pleased that ICP thought it was a great idea! They had no hesitation at all – thank you, Alice! We’ve had long discussions about the relationship of the online materials to the eventual finished book, and wording to that effect is in the final contract. Does that mean that the final typeset manuscript will appear online on the CommentPress site? No, but nor will the book’s materials be embargoed. None of us, including the Press, has tried this scale of thing before. No doubt there will be hiccups along the way, but there’s a lot of goodwill built up and I trust that we will be able to work out any issues that may (will) arise.

We’re going to write this book over the course of one academic year. In all truthfulness, I’m a bit nervous about this, but the rationale is that digital tools and approaches can change rapidly. We want to be as up-to-date as possible, but we also have to be careful in our writing not to date ourselves. That’s where all of you come in. As we put bits and parts up on The Historian’s Macroscope – Big Digital History, please do read and offer comments. Consider this an open invitation. We’d love to hear from undergraduate students. Some of these pieces I’m going to road-test on my ‘HIST2809 Historian’s Craft’ students this autumn and winter. Ian, Scott, and I will be reflecting on the writing process itself (and my students’ experiences) on the blog portion of the live-writing website.

I’m excited, but nervous as hell, about doing this. Nervous, because this is a tall order. Excited, because it seems to me that the real transformative power of the digital humanities is not in the technology, but in a mindset that peels back the layers, to reveal the process underneath, that says it’s ok to tinker with the ways things have been done before.

Won’t you join us?

Shawn

The George Garth Graham Undergraduate Digital History Research Fellowship

[Image: My grandfather, George Garth Graham, in the 1930s]

At Carleton University, we have a number of essay awards for undergraduate history students. We do not have any awards geared towards writing history in new media, or doing historical research using digital tools, or any of the various permutations that would broadly fall within big-tent digital humanities.

So I decided to create an award, using the University’s micro-giving (crowdfunding) platform, Futurefunder.

I’m establishing this fellowship in tribute to my grandfather and the values he represented. George Garth Graham did not have any formal education after Grade 8. He educated himself through constant reading. One of my fondest memories is going through his stack of Popular Science and Popular Mechanics magazines, and making things with him in his basement workshop. Digital History is often about making, tinkering, and exploring, and this was something that my grandfather exemplified. He had a great love of history, showing my brothers and me around the area, telling us the stories of the land. He was generous with his time and would also quietly help those in need, never asking for nor expecting recognition for his contribution.

I’m calling this a ‘research fellowship’ rather than a scholarship because I want it to encourage future work, rather than reward past work. I intend this fellowship to be available to any History student who has taken the required second-year HIST2809 Historian’s Craft course (a methods course). The student would have to have a certain GPA (appropriate to their year), and have a potential faculty member and project in mind (I would help facilitate that). A committee of the department would adjudicate applications.

One of the conditions of the fellowship would be for the student to maintain an active research blog, where she or he would detail their work, their reflections, their explorations and experiments. It would become the locus for managing their digital online identity as a scholar. I imagine that holders of this fellowship would be well set-up to pursue further work in graduate programs in the digital humanities or in the digital media sector. I imagine opportunities for the students to publish with faculty (as did the students who worked on my 2011 ‘HeritageCrowd’ project, writinghistory.trincoll.edu). I know of no other undergraduate fellowship like this, in this field. Students who held such a post would not just be assistants, but potential leaders in the field.

For more details about the Fellowship, and how you can contribute to it, please see the Fellowship’s page on Futurefunder.

Historical Friction

Edit June 6 – following on from collaboration with Stu Eve, we’ve got a version of this at http://graeworks.net/historicalfriction/

I want to develop an app that makes it difficult to move through historically ‘thick’ places – think Zombies, Run!, but with a lot of noise when you are in a place that is historically dense with information. I want to ‘visualize’ history, but not bother with the usual ‘augmented reality’ malarkey where we hold up a screen in front of our face. I want to hear the thickness, the discords, of history. I want to be arrested by the noise, to stop still in my tracks, be forced to take my headphones off, and really pay attention to my surroundings.

So here’s how that might work.

1. Find Wikipedia articles about the place where you’re at. Happily, inkdroid.org has some code that does that, called ‘Ici’. Here’s the output from that for my office (on the Carleton campus):

http://inkdroid.org/ici/#lat=45.382&lon=-75.6984
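
If Ici ever goes away, the same lookup can be done directly against Wikipedia’s geosearch API. A rough sketch; the radius and result limit below are arbitrary choices of mine, not anything Ici does:

```python
import requests

# Wikipedia's geosearch API: articles tagged with coordinates near a point.
# The radius and limit below are arbitrary choices, not anything Ici uses.
params = {
    "action": "query",
    "list": "geosearch",
    "gscoord": "45.382|-75.6984",  # my office on the Carleton campus
    "gsradius": 1000,              # metres
    "gslimit": 50,
    "format": "json",
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params)
for hit in resp.json()["query"]["geosearch"]:
    print(hit["title"], hit["dist"])
```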

2. I copied that page (so not the full Wikipedia articles, just the opening bits displayed by Ici). Convert these Wikipedia snippets into numbers. Let A=1, B=2, and so on. This site will do that:

http://rumkin.com/tools/cipher/numbers.php

3. Replace dashes with commas. Convert those numbers into music. Musical Algorithms is your friend for that. I used the default settings, though I sped it up to 220 beats per minute. Listen for yourself here. There are a lot of Wikipedia articles about the places around here; presumably if I did this on, say, my home village, the resulting music would be much less complex: sparse, quiet, slow. So if we increased the granularity, you’d start to get an acoustic soundscape of quiet/loud, pleasant/harsh sounds as you moved through space – a cost surface, a slope. Would it push you from the noisy areas to the quiet? Would you discover places you hadn’t known about? Would the quiet places begin to fill up as people discovered them?
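
The letters-to-numbers step, at least, is easy to script rather than pasting into rumkin.com. A sketch; the pitch mapping at the end is purely illustrative and is not what Musical Algorithms does:

```python
# Convert a text snippet into the A=1, B=2, ... sequence used above
def text_to_numbers(text):
    return [ord(c) - ord("a") + 1 for c in text.lower() if c.isalpha()]

snippet = "Carleton University"  # stand-in text; use the snippets Ici returns
numbers = text_to_numbers(snippet)
print(",".join(str(n) for n in numbers))

# One crude way to 'hear' it: map each number onto a MIDI-style pitch.
# Purely illustrative; Musical Algorithms has its own mappings.
pitches = [60 + (n % 24) for n in numbers]  # middle C upwards, two octaves
print(pitches)
```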

Right now, each Wikipedia article is played in succession. What I really need to do is feed the entirety of each article through the musical algorithm, and play them all at once. And I need a way to do all this automatically, and feed it to my smartphone. Maybe by building on this tutorial from MIT’s App Inventor. Perhaps there’s someone out there who’d enjoy the challenge?

I mooted all this at the NCPH THATCamp last week, which prompted a great discussion about haptics and other ways of engaging the senses for communicating public history. I hope to play at this over the summer, but it’s looking to be a very long summer of writing new courses, applying for tenure, y’know, stuff like that.

Edit April 26th – Stuart and I have been playing around with this idea this morning, and have been making some headway per his idea in the comments. Here’s a quick screengrab of it in action: http://www.screencast.com/t/DyN91yZ0