What Careers Need History?

We have a new website at CU today; one of the interesting things on it is a page under the ‘admissions’ section that describes many careers and the departments whose programs might prepare you for each of them.

I was interested to know what careers were listed as needing a history degree. Updated Oct 17: I have since learned that these career listings were generated by members of the department some time ago; I initially believed that the list was generated solely by admissions, and I apologize for the confusion. This paragraph has been edited to reflect that correction. See also the conclusion to this piece at bottom.

I used wget to download all of the career pages:

# -r: recurse; --no-parent: don't climb above /careers/; -w 2: wait two seconds between requests;
# -l 2: limit recursion to two levels; --limit-rate=20k: throttle the download
wget -r --no-parent -w 2 -l 2 --limit-rate=20k http://admissions.carleton.ca/careers/

I then copied all of the index.html files using the Mac finder (searched within the subdirectory for all index.html; copied them into a new folder).

Then, I used grep to figure out how many instances of capital-H History (thus, the discipline, rather than the generic noun) could be found on those career pages:

# count occurrences of '<h3>History' in each downloaded career page
grep -c \<h3\>History *.html >history-results.tsv

I did the same again for a couple of other keywords. The command counts all instances of History in the html files and writes the results (one filename:count line per file) to a file, which I open in Excel. But – I don’t know what index 1.html is about, or index 45.html, and so on. So in TextWrangler, I searched across all of the files for the text between the tags, using a simple regex:

[Screenshots: the multi-file search in TextWrangler and its results]

Copy and paste those results into a new sheet in the Excel file, search-and-replace all of the extraneous bits (index, .html, and so on) with blank spaces, sort by the file numbers, then copy and paste the names (now in the correct order) into a new column beside the original counts.
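For what it’s worth, the counting and the name-lookup could be done in one pass at the command line instead of the Excel shuffle; a minimal sketch, assuming each page’s <title> tag holds the career name and sits on a single line:

# For every downloaded page, pull the career name out of the <title> tag and
# pair it with the count of '<h3>History', writing a genuinely tab-separated file.
for f in *.html; do
  title=$(sed -n 's/.*<title>\(.*\)<\/title>.*/\1/p' "$f" | head -n 1)
  count=$(grep -c '<h3>History' "$f")
  printf '%s\t%s\n' "$title" "$count"
done > history-results.tsv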

Which gives us this, insofar as History is a degree option leading to particular careers (where the numbers indicate not the absolute importance of history, but more of an emphasis than anything else):

Career: count of 'History'
Teaching (Page 2 of 3): 4
Museums and Historical Sites (Page 2 of 2): 4
Heritage Conservation: 4
Tourism (Page 2 of 2): 2
Research (Page 2 of 3): 2
Journalism (Page 2 of 3): 2
Foreign Service (Page 2 of 3): 2
Education (Page 2 of 3): 2
Library and Information Science: 2
Design: 2
Archival Work: 2
Architectural History: 2
Archaeology: 2

And here’s ‘Global’ (we have some new ‘Globalization’ programmes):

Career: count of 'Global'
Tourism: 12
Teaching: 12
Research: 12
Public Service: 12
Polling: 12
Politics: 12
Policy Analysis: 12
Non-Profit Sector: 12
Non-Governmental Organizations: 12
Museums and Historical Sites: 12
Media: 12
Lobbying: 12
Law: 12
Journalism: 12
International Relations: 12
International Development: 12
Government: 12
Foreign Service: 12
Finance: 12
Education: 12
Diplomacy: 12
Consulting: 12
Conservation: 12
Civil Service: 12
Business: 12
Advocacy: 12
Administration: 12
Foreign Service (Page 2 of 3): 6
Finance (Page 2 of 3): 6
Teaching (Page 2 of 3): 4
Museums and Historical Sites (Page 2 of 2): 4
Tourism (Page 2 of 2): 4
Research (Page 2 of 3): 4
Journalism (Page 2 of 3): 4
Education (Page 2 of 3): 4
Public Service (Page 2 of 3): 4
Polling (Page 2 of 2): 4
Politics (Page 2 of 3): 4
Policy Analysis (Page 2 of 3): 4
Non-Profit Sector (Page 2 of 2): 4
Non-Governmental Organizations (Page 2 of 3): 4
Media (Page 2 of 2): 4
Lobbying (Page 2 of 3): 4
Law (Page 2 of 2): 4
International Relations (Page 2 of 2): 4
International Development (Page 2 of 3): 4
Government (Page 2 of 4): 4
Diplomacy (Page 2 of 3): 4
Consulting (Page 2 of 3): 4
Conservation (Page 2 of 2): 4
Civil Service (Page 2 of 3): 4
Business (Page 2 of 5): 4
Advocacy (Page 2 of 2): 4
Administration (Page 2 of 4): 4
Management: 2
International Trade: 2
International Business: 2
Humanitarian Aid: 2
Human Resources: 2
Broker: 2
Banking: 2

Interesting, non? 

Update October 17 - we shared these results with Admissions. There appears to have been a glitch in the system. See those 'Page 2 of 3' or 'Page 2 of 5' notes in the tables above? The entire lists were visible to wget, but not to the user of the site, leaving 'history' off the page of careers under 'museums and historical sites', for instance. The code was corrected, and now the invisible parts are visible. Also, in my correspondence with the folks at Admissions, they write "[we believe that] Global appears more than History because careers were listed under each of its 12 specializations. We will reconfigure the way the careers are listed for global and international studies so that it will reduce the number of times that it comes up."

So all’s well that ends well. Thank you to Admissions for clearing up the confusion, fixing the glitch, and for pointing out my error which I am pleased to correct.

 

Historical Maps, Topography, Into Minecraft: QGIS

Building your Minecraft Topography (an earlier version of this used Microdem, which is just a huge pain in the butt; I re-wrote this using QGIS for my HIST3812a students).

If you are trying to recreate a world as recorded in a historical map, then modern topography isn’t what you want. Instead, you need to create a blank, flat world in WorldPainter, and then import your historical map as an overlay. In WorldPainter, File >> New World. In the dialogue box, uncheck ‘circular world’. Tick ‘flat’ under topography. Then, on the main icon ribbon, select the ‘picture frame’ icon (‘image overlay’). In the dialogue box, tick ‘image overlay’. Select your file. You might have to fiddle with the scale and the x, y offset to get it exactly positioned where you want. Watch the video mentioned below to see all this in action. Then you can paint the terrain type (including water), raise or lower the terrain accordingly, put down blocks to indicate buildings… WorldPainter is pretty powerful.

If you already have elevation data as greyscale .bmp or .tiff

  • Watch the video about using Worldpainter.
  • Skip ahead to where he imports the topographic data and then the historical map imagery and shows you how to paint this against your topography.
  • You should also google for Worldpainter tutorials.

If you have an ARCGIS shapefile

This was cooked up for me by Joel Rivard, one of our GIS & Map specialists in the Library. He writes,

  • Using QGIS: In the menu, go to Layer > Add Vector Layer. Find the point shapefile that has the elevation information.
  • Ensure that you select point in the file type.
  • In the menu, go to Raster > Interpolation.
  • Select "Field 3" (this corresponds to the z or elevation field) for the Interpolation attribute and click on "Add".
  • Feel free to keep the rest as default and save the output file as an image (bmp, jpg or any other raster). (A command-line alternative follows this list.)
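That alternative, sketched with GDAL’s gdal_grid tool; the shapefile name and the elevation field name (‘ELEV’) are placeholders for whatever your data actually uses:

# Interpolate the elevation points to a raster DEM using inverse-distance weighting.
# Swap ELEV for the real name of the z/elevation field in your point shapefile.
gdal_grid -a invdist:power=2.0 -zfield ELEV -outsize 1000 1000 -of GTiff elevation_points.shp dem.tif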

If you need to get topographic data

In some situations, modern topography is just what you need.

  • Grab Shuttle Radar Topography Mission data for the area you are interested in (it downloads as a tiff). To help you orient yourself, click the ‘toggle cities’ option at the bottom of that page. Then click on the tile that contains the region you are interested in. This is a large piece of geography; we’ll trim it down in a moment.
  • Open QGIS
  • Go to Layer >> Add Raster Layer. Navigate to the location where your srtm download is located. You’re looking for the .tiff file. Select that file.

Add Raster Layer

  • You now have a grayscale image in your QGIS workspace, which might look like this

Straits of Hercules, Spain, Morocco

  • Now you need to crop this image to just the part that you are interested in. On the main menu ribbon, select Raster >> Extraction >> Clipper

Select Clipper Tool

  • In the dialogue box that opens, make sure that ‘Clipping Mode’ is set to ‘Extent’. With this dialogue box open, you can click and drag on the image to highlight the area you wish to crop to. The extent coordinates will fill in automatically.

  • Hit ‘Select…’ beside ‘Output File’. Give your new cropped image a useful name. Hit ‘Save’.

  • Nothing much will appear to happen – but on the main QGIS window, under ‘layers’ a new layer will be listed.

[Screenshot: the Layers panel showing the new cropped layer]

  • UNCHECK the original layer (which will have a name like srtm_36_05). Suddenly, only your cropped image is left on the screen. Use the magnifying glass with the plus sign (in the icons at the top of the window) to zoom so that your cropped image fills as much of the screen as possible.
  • Go to Project >> Save as image. Give it a useful name, and make sure to set ‘files of type’ to .bmp. You can now import the .bmp file to your Worldpainter file. (A command-line alternative for the crop and export follows below.)
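A sketch of that alternative with GDAL’s gdal_translate; the bounding-box coordinates and the file names here are placeholders for your own:

# Clip the SRTM tile to a bounding box (-projwin takes upper-left x, upper-left y,
# lower-right x, lower-right y in the tile's coordinate system) and write the result
# out as an 8-bit BMP that WorldPainter can read.
gdal_translate -projwin -6.0 36.5 -5.0 35.5 -of BMP -ot Byte -scale srtm_36_05.tif cropped.bmp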

Importing your grayscale DEM to a Minecraft World

Video tutorial again – never mind the bit where he talks about getting the topographic data at the beginning

At this point, the easiest thing to do is to use WorldPainter. It’s free, but you can donate to its developers to help them maintain and update it. Now, the video shown above shows how to load your DEM image into WorldPainter. It parses the black-to-white pixel values and turns them into elevations. You have the option of setting where ‘sea level’ is on your map (so elevations below that point are covered with water). There are many, many options here; play with it! Adam Clarke, who made the video, suggests scaling up your image to 900%, but I’ve found that that makes absolutely monstrous worlds. You’ll have to play around to see what makes most sense for you, but with real-world data of any area larger than a few kilometres on a side, I think 100 to 200% is fine.

So: in Worldpainter – File >> Import >> Height map. In the dialogue box that opens, select your bmp file. You’ll probably need to reduce the vertical scale a bit. Play around.

Now, the crucial bit for us: you can import an image into WorldPainter to use as an overlay to guide the placement of blocks, terrain, buildings, whatever. So, again, rather than me simply regurgitating what Adam narrates, go watch the video. Save as a .world file for editing; export to Minecraft when you’re ready (be warned: big maps can take a very long time to render. That’s another reason why I don’t scale up the way Adam suggests).

Save your .world file regularly. EXPORT your minecraft world to the saves folder (the link shows where this can be found).

Go play.

Wait, what about the historical maps again?

The video covers it much better than I could here. Watch it, but skip ahead to the map overlay section. See the bit at the top of this post.

Ps. Here’s Vimy Ridge, site of a rather important battle in WW1 fought by the Canadian Army, imported into Minecraft this way:
Vimy Ridge in Minecraft

Open Notebooks Part V: Notational Velocity and 1 superRobot

The thought occurred that not everyone wants to take their notes in Scrivener. You might prefer the simple elegance and speed of Notational Velocity, for instance. Yet, when it comes time to integrate those notes, to interrogate those notes, to rearrange them to see what kind of coherent structure you might have, Scrivener is hard to beat.

With Notational Velocity installed, go to ‘preferences’. Under ‘Notes’ change ‘Read notes from folder’ to point to the Scrivener synchronization folder. Then, change ‘store and read notes on disk as:’ to ‘rich text format files’. This will save every note as a separate rtf file in the folder. Now you can go ahead and use Notational Velocity as per normal. Notational Velocity uses the search bar as a way of creating notes, so start typing in there; if it finds existing notes with those keywords, it’ll bring them up. Otherwise, you can just skip down to the text editing zone and add your note.

When next you sync scrivener, all of these notes will be brought into your project. Ta da! A later evolution of Notational Velocity, nvALT, has more features, and can be used locally as a personal wiki (as in this post). I haven’t played with it yet, but given its genesis, I imagine it would be easy to make it integrate with Scrivener this way. (A possible windows option is Notation, but I haven’t tried it out yet).

~o0o~

I’ve combined all of my automator applications into one single automator app, a superrobot if you will, that grabs, converts, creates a table of contents in markdown, and pushes the results into github, whereupon it lives within my markdown wiki page. I found I had to insert 10-second pauses between stages, or else the steps would get out of order, making a godawful mess. Presumably, with more notecards, I’d have to build in more time? We shall see. No doubt there is a much more elegant way of doing this, but the screenshot gives you what you need to know:

[Screenshot: the combined Automator workflow]
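If you wanted to approximate the superrobot outside of Automator, the same chain could be sketched as a shell script. The app names are the robots described in Parts III and IV below, and the desktop paths are just an assumption about where you saved them:

# Launch each robot in turn, pausing ten seconds between stages so each one
# finishes writing its files before the next starts reading them.
open ~/Desktop/abm-project-move-to-conversion-folder.app; sleep 10
open ~/Desktop/convert-rtf-to-md.app; sleep 10
open ~/Desktop/Create-toc.app; sleep 10
open ~/Desktop/push-converted-files-to-github-repo.app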

Update with Caveat: Ah. Turns out that the Scrivener sync feature renames the notes slightly, which seems to break things in Notational Velocity. So perhaps the workflow should go like this:

1. Use notational velocity to keep notes, and for its handy search feature.
2. Have preferences set to individual files as rtf, as above, in a dedicated folder just for notational-velocity.
3. Create an automator app that moves everything into Scrivener sync, for your writing and visualizing of the connections between the notes. (A minimal shell sketch of this step follows the list.)
4. Sync scrivener, continue as before. OR, if you wish to dispense with scrivener altogether, just use the rtf to md script and proceed.
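That sketch, assuming Notational Velocity saves its rtf notes to a dedicated folder and that the Scrivener sync folder keeps its notes under ‘Notes’ (both paths are placeholders):

# Copy every rtf note Notational Velocity has written into the Scrivener
# external-sync folder, so the next sync pulls them into the project.
cp ~/Dropbox/nv-notes/*.rtf ~/Dropbox/scrivener-sync/Notes/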

Perhaps that’s just making life way too complicated.

Oh, and as Columbo used to say… “…one more thing”: Naming. Some kind of naming convention for notes needs to be developed. Here is some really good advice that I aspire to implement.

Open Notebooks Part IV – autogenerating a table of contents

I’ve got MDWiki installed as the public face of my open notebook.

Getting it installed was easy, but I made it hard, and so I’ll have to collect my thoughts and remember exactly what I did… but, as I recall, it was this bit I found in the documentation that got me going:

First off, create a new (empty) repository on GitHub, then;

git clone https://github.com/exalted/mdwiki-seed.git
cd mdwiki-seed
git remote add foobar <HTTPS/SSH Clone URL of the New Repository>
git push foobar gh-pages

 

Then, I just had to remember to edit the ‘gh-pages’ branch. Also, on github, if you click on ‘settings’, it’ll give you the .io version of your page, which is the pretty bit. So, I updated robot 3 to push to the ‘uploads/documents’ folder. Hooray! But what I needed was a self-updating ‘table of contents’. Here’s how I did that.

In the .md file that describes a particular project (which goes in the ‘pages’ folder) I have a heading ‘Current Notes’ and a link to a file, content.md, like so:

## [Current Notes](uploads/documents/contents.md)

Now I just train a robot to always make an updated contents.md file that gets pushed by robot 3.

I initially tried building this into robot 2 ('convert-rtf-to-md'), but I outfoxed myself too many times. So I inserted a new robot into my flow between 2 & 3. Call it 2.5, 'Create-toc':

[Screenshot: the ‘Create-toc’ Automator app]

It’s just a shell script:

cd ~/Documents/conversion-folder/Draft
ls *.md > nolinkcontents.md
sed -E -n 's/(^.*[0-9].*$)/ \* [\1](\1)/gpw contents.md' nolinkcontents.md 
rm nolinkcontents.md

Or, in human: go to the conversion folder. List out all the newly-created md files and write that to a file called ‘nolinkcontents.md’. Then wrap markdown links around each line, using each line as the text of the link, and write that out as ‘contents.md’. Then remove the intermediate ‘nolinkcontents.md’ file.

Ladies and gentlemen, this has taken me the better part of four hours.

Anyway, this ‘contents.md’ file gets pushed to github, and since my project description page always links to it, we’re golden.

Of course, I realize now that I’ll have to modify things slightly, structurally and in my nomenclature, once I start pushing more than one project’s notes to the notebook. But that’s a task for another night.

Now to lesson plan for tomorrow.

(update: when I first posted this, I kept saying robot 4. Robot 4 is my take-out-the-trash robot, which cleans out the conversion folder, in readiness for the next time. I actually meant Robot 3. See Part III)

Open notebooks part III

Do my bidding my robots!

I’ve sussed the Scrivener syncing issue by moving the process of converting out of the syncing folder (remember, not the actual project folder, but the ‘sync to external folder’). I then have created four automator applications to push my stuff to github in lovely markdown. Another thing I’ve learned today: when writing in Scrivener, just keep your formatting simple. Don’t use markdown syntax within Scrivener or your stuff on github will end up looking like this \##second-heading. I mean, it’s still legible, but not as legible as we’d like.

So – I have four robots. I write in Scrivener, keep my notes, close the session, whereupon it syncs rtf to the ‘external folder’ (in this case, my dropbox folder for this purpose; again, not the actual scrivener project folder).

  1. I hit robot 1 on my desktop. Right now, this is called ‘abm-project-move-to-conversion-folder’. When I have a new project, I just open this application in Automator, and change the source directory to that project’s Scrivener external syncing folder. It grabs everything out of that folder, and copies it into a ‘conversion-folder’ that lives on my machine.
  2. I hit robot 2, ‘convert-rtf-to-md’, which opens ‘conversion-folder’ and turns everything it finds into markdown. The conversion scripts live in the ‘conversion-folder’; the things to be converted live in a subfolder, conversion-folder/draft.
  3. I hit robot 3, ‘push-converted-files-to-github-repo’. This grabs just the markdown files, and copies them into my local github repository for the project. When I have a new project, I’d have to change this application to point to the new folder. This also overwrites anything with the same file name.
  4. I hit robot 4, ‘clean-conversion-folder’, which moves everything (rtfs, mds) to the trash. This is necessary because otherwise I can end up with duplicates of files I haven’t actually modified getting through my pipeline onto my github page. (If you look at some of my experiments on github, you’ll see the same card a number of times with 1…2…3…4 versions.) A shell sketch of what these four robots do follows this list.
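Roughly, those four robots boil down to this as plain shell; all of the paths are examples standing in for your own sync folder, conversion folder, and repo:

# Robot 1: grab everything from the Scrivener external-sync folder into the conversion folder.
cp ~/Dropbox/abm-project-sync/Draft/*.rtf ~/Documents/conversion-folder/Draft/
# Robot 2: convert the rtf files to markdown (the rtf2md script lives in the conversion folder).
cd ~/Documents/conversion-folder && ./rtf2md Draft/*.rtf
# Robot 3: copy just the markdown into the local github repository, overwriting older copies.
cp -f ~/Documents/conversion-folder/Draft/*.md ~/github/abm-project/
# Robot 4: clean out the conversion folder (here deleted outright rather than moved to the trash)
# so stale files don't sneak through the pipeline next time.
rm ~/Documents/conversion-folder/Draft/*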

Maybe it’s possible to create a meta-automator that strings those four robots into 1. I’ll try that someday.
[pause]
Ok, so of course, I tried stringing them just now. And it didn’t work. So I put that automator into the trash -
[pause]
and now my original four robots give me errors, ‘the application …. can’t be opened. -1712’. I found the solution here (basically, go to Spotlight, type in ‘activity’, then locate the application on the list and quit it).

Here are my automators:

[Screenshots: the four Automator workflows, Robot 1 through Robot 4]

Automator….

I think I love you.

 

An Open Research Notebook Workflow with Scrivener and Github Part 2: Now With Dillinger.io!

A couple of updates:

First item

The four scripts that sparkygetsthegirl crafted allow him to

1. write in Scrivener,

2. sync to a Dropbox folder,

3. Convert to md,

4. then open those md files on an Android tablet to write/edit/add,

5. and then reconvert to rtf for syncing back into Scrivener.

I wondered to myself, what about some of the online markdown editors? Dillinger.io can scan Dropbox for md files. So, I went to Dillinger.io, linked it to my Dropbox, scanned for md files, and lo! I found my project notes. So if the syncing folder is shared with other users, they can edit the notecards via Dillinger. Cool, eh? Not everyone has a native app for editing, so they can just point their device’s browser to the website. I’m sure there are more options out there.

Second Item

I was getting syncing errors because I wasn’t flipping the md back to rtf.

But, one caveat: when I went to run the md-to-rtf script, to get my changes back into Scrivener (and then sync), things seemed to go very wonky indeed. One card was now blank; the others were all in Scrivener’s markup, but Scrivener wasn’t recognizing it.

So I think the problem is me doing things out of order. I continue to play.

Third Item

I automated the running of the conversion scripts. You can see my Automator set-up in the screenshot below. Again, I saved it as an application on my desktop. The first step is to grab the right folder; the second, to open the terminal, input the commands, then close the terminal.

[Screenshot: the Automator workflow that runs the conversion scripts]
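For reference, the commands that Automator step feeds to the terminal would be the conversion scripts from the original post (below); a sketch, with an example sync-folder path:

# cd into the Scrivener external-sync folder (where the scripts live)
# and convert both the drafts and the notes to markdown.
cd ~/Dropbox/scrivener-sync
./rtf2md Draft/*.rtf
./rtf2md Notes/*.rtf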

Postscript

I was asked why on earth would I want to share my research notes? Many many reasons – see Caleb McDaniel’s post, for instance – but one other feature is that, because I’m doing this on Github, a person could fork (copy) my entire research archive. They could then use it to build upon. Github keeps track of who forks what, so forking becomes a kind of mass citation and breadcrumb trail showing who had an idea first. Moreover, github code (or in this case, my research archive) can be archived on figshare too, thus giving it a unique DOI *and* proper digital archiving in multiple locations. Kinda neat, eh?

An Open Research Notebook Workflow with Scrivener and Github

I like Scrivener. I *really* like being able to have my research and my writing in the same place, and most of all, I like being able to re-arrange the cards until I start to see the ideas fall into place.

I’m a bit of a visual learner, I suppose. (Which makes it ironic that I so rarely provide screenshots here. But I digress). What I’ve been looking for is a way to share my research, my lab notes, my digital ephemera in a single notebook. Lots of examples are out there, but another criterion is that I need to be able to set something up that my students might possibly be able to replicate.

So my requirements:

1. Visually see my notes, their layout, their possible logical connections. The ability to rearrange my notes provides the framework for my later written outputs.

2. Get my notes (but not all of the other bits and pieces) onto the web in such a way that each note becomes a citable object, with revision history freely available.

3. Ideally, that could then feed into some sort of shiny interface for others’ browsing – something like Jekyll, I guess – but not really a big deal at the moment.

So #1 is taken care of with Scrivener. Number 2? I’m thinking Github. Number 3? We’ll worry about that some other day. There are Scrivener project templates that can be dropped into a Github repository (see previous post). You would create a folder/repo on your computer, drop the template into that, and write away to your heart’s content, committing and syncing at the end of the day. This is what you’d get. All those slashes and curly brackets tell Scrivener what’s going on, but it’s not all that nice to read. (After all, that solution is about revision history, not open notebooks.)

Now, it is possible to manually compile your whole document, or bits at a time, into markdown files and to commit/sync those. That’s nice, but time consuming. What I think I need is some way to turn Scrivener’s rtf’s into nice markdown. I found this, a collection of scripts by Sparkygetsthegirl as part of a Scrivener to Android tablet and back writing flow. Check it out! Here’s how it works. NB, this is all Mac based, today.

1. Make a new Scrivener project.

2. Sync it to dropbox. (which is nice: backups, portability via Dropbox, sharing via Github! see below)

3. drop the 4 scripts into the synced folder. Open a terminal window there. We’ll come back to that.

4. open Automator. What we’re going to do is create an application that will open the ‘drafts’ folder in the synced project, grab everything, then filter for just the markdown files we made, then move them over to our github repo, overwriting any pre-existing files there. Here’s a screenshot of what that application looks like in the Automator editing screen:

Remember, you’re creating an ‘application’, not a ‘workflow’

You drag the drafts folder into the ‘Get specified finder items’ box, get the folder contents, filter for files with file extension .md, and then copy to your github repo. Tick off the overwrite checkbox.
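As a point of comparison, that filter-and-copy step is a one-liner at the command line; a sketch, with example paths standing in for the synced Draft folder and the local repo:

# Find the markdown files in the synced Draft folder and copy them into the
# local github repository, overwriting any copies already there.
find ~/Dropbox/scrivener-sync/Draft -name '*.md' -exec cp -f {} ~/github/open-notebook/ \;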

Back in scrivener, you start to write.

Write write write.

Here’s a screenshot of how I’m setting up a new project.

[Screenshot: setting up a new project in Scrivener]

In this screenshot, I’ve already moved my notecards from ‘research’ into ‘draft’. In a final compile, I’d edit things heavily, add bits and pieces to connect the thoughts, shuffle them around, etc. But right now, you can see one main card that identifies the project and the pertinent information surrounding it (like for instance, when I’m supposed to have this thing done). I can compile just that card into multimarkdown, and save it directly to the github repository as readme.md.

Now the day is done, I’m finished writing/researching/playing. I sync the project one last time. Then, in the terminal window, I can type

./rtf2md Draft/*.rtf

for everything in the draft folder, and

./rtf2md Notes/*.rtf

for everything in the notes folder. Mirabile dictu, the resulting md files will have the title of the notecard as their file name!

[Screenshot: the converted md files, named after their notecards]

Here, I’ve used some basic citation info as the name for each card; a better idea might be to include tags in there too. Hey, this is all still improv theatre.

Now, when I created that application using Automator, I saved it to my desktop. I double-click on it, and it strains out the md files and moves them over to my github repository. I then commit & sync, and I now have an open lab notebook on the web. Now, there are still some glitches; the markdown syntax I wrote in Scrivener isn’t being recognized on github because I think Scrivener is adding backslashes here and there, which are working like escape characters?

Anyway, this seems a promising start. When I do further analysis in R, or build a model in Netlogo, I can record my observations this way, create an R notebook with knitr or a netlogo applet, and push these into subfolders in this repo. Thus the whole thing will stick together.

I think this works.

~o~
Update Sept 18. I’ve discovered that I might have messed something up with my syncing. It could be I’ve just done something foolish locally or it might be something with my workflow. I’m investigating, but the upshot is, I got an error when I synced and a new folder called ‘Trashed Files’, and well, I think I’m close to my ideal setup, but there’s still something wonky. Stay tuned.

Update Sept 19 Don’t write in Scrivener using markdown syntax! I had a ‘doh’ moment. Write in Scrivener using bold, italics, bullets, etc to mark up your text. Then, when the script converts to markdown, it’ll format it correctly – which means that github will render it more or less correctly, making your notes a whole lot easier to read. Click on ‘raw’ on this page to see what I mean!

Open Notebooks

This post is more a reminder to me than anything you’d like to read, but anyway –

I want to make my research more open, more reproducible, and more accessible. I work from several locations, so I want to have all my stuff easily to hand. I work on a Mac (sometimes) a PC (sometimes) and on Linux (rarely, but it happens; with new goodies from Bill Turkel et al I might work more there!).

I build models in Netlogo. I do text analysis in R. I visualize and analyze with things like Voyant and Overview. I scrape websites. I use Excel quite a lot. I’m starting to write in markdown more often. I want to teach students (my students typically have fairly low levels of digital literacy) how to do all this too. What I don’t do is much web development type stuff, which means that I’m still struggling with concepts and workflow around things like version control. And indeed, getting access to a server where I can just screw around to try things out is difficult (for a variety of reasons). So my server-side skills are weak.

What I think I need is an open notebook. Caleb McDaniel has an excellent post on what this could look like. He uses Gitit. I looked at the documentation, and was defeated out of the gate. Carl Boettiger uses a combination of github and jekyll and who knows what else. What I really like is Mark Madsen’s example, but I’m not au fait enough yet with all the bits and pieces (damn you version control, commits, make, rake, et cetera et cetera!).

I’ve got ipython notebooks working on my PC, which are quite cool (I installed the Anaconda version). I don’t know much python though, so yeah. Stefan Sinclair is working on ‘voyant notebooks’ which uses the same general idea to wrap analysis around Voyant, so I’m looking forward to that. Ipython can be used to call R, which is cool, but it’s still early days for me (here’s a neat example passing data to R’s ggplot2).

So maybe that’s just the wrong tool. Much of what I want to do, at least as far as R is concerned, is covered in this post by Robert Flight on ‘creating an analysis as a package and vignette’ in RStudio. And there’s also this, for making sure things are reproducible – ‘packrat’.

Some combination of all of this I expect will be the solution that’ll work for me. Soon I want to start doing some more agent based modeling & simulation work, and it’s mission critical that I sort out my data management, notebooks, versioning etc first this time.

God, you should see the mess around here from the last time!

On Teaching High School

“Hey! Hey Sir!”

Some words just cut right to the cerebellum. ‘Sir’ is not normally one of them, but I was at the Shawville Fair, and ‘sir’ isn’t often used in the midway. I turned, and saw before me a student from ten years previously. We chatted; he was married, had a step daughter, another one on the way. He’d apprenticed, become a mechanic. He was doing well. I was glad to see him.

“So, you still teaching us assholes up at the school?”

No, I was at the university. “You guys weren’t assholes.”

A Look. “Yes, we were. But there were good times, too, eh?”

Ten years ago, I held my first full-time, regular, teaching contract, at the local high school. The year before that, I was a regular-rotation substitute teacher. Normally one would need a teaching certificate to teach in a high school, but strangely enough newly minted teachers never seem to consider rural or more remote schools. Everyone wants to teach in the city. Having at least stood in front of students in the past, I was about the best short-term solution around. Towards the latter part of that year, holes had opened up in the schedule and I was teaching every day. This transmuted into a regular gig teaching Grade 9 computing, Grade 9 geography (a provincially mandated course), and Grade 10/11 technical drawing.

And Math for Welders.

The school is formally a ‘polyvalente’, meaning a school where one could learn trades. However, our society’s bias against trades and years of cuts to the English system in Quebec (and asinine language laws which, amongst other things, mandate that only books published in Quebec can be used as textbooks. How many English textbooks are published for a community with only around a million people, full stop?) meant that all of the trades programs were dead. In the last decade this last-gasp program had been established in the teeth of opposition (which meant these students were watched very carefully indeed – and they knew it). Instead of taking ‘high math’ and other courses (targeted at the University bound) these students could take ‘welding’ math. They also worked in a metal shop. If they could pass my course, and pass the ticket exam for Welders, they could graduate High School and begin apprenticeships.

The welding program was conceived as a solution for students (typically boys) who had otherwise fallen through the cracks in the system. It was intense. These boys (though there have been maybe five or six girls in the program over the years) had never had academic success. They were older than their peers, having fallen behind. They had all manner of social issues, family issues, learning difficulties, you name it.

And they were all mine. Not only did I teach technical drawing and math (so right there, two or three hours of face to face time per day, every day), I was also their home room teacher. At our school, ‘home room’ was not just about morning attendance, but was also a kind of group therapy session. (I say ‘group therapy’, but really, in other classes there was a mix of years in these home rooms, so older students could work with younger on homework, personal stuff, whatever; in my class, it was just me and the welders. We didn’t mix.)

I learned a lot about teaching over those two years.

I could tell you a lot of stories of pain and stress. I’ve never been quite so near to quitting, to tears, to breaking down, to screaming at the world. I did a PhD! I was from the same town! I’d beaten the system! Did that not earn me some respect? Was I not owed?

No.

And that was the hardest lesson right there. In fact, although I thought myself humble when I started the job (after two years of slogging in the sessional world, hustling for contract heritage work, and so on), I still had a hard time disentangling my expectations of what students should be from my notion of the kind of student I was. Those first two months, up to Thanksgiving, might’ve been a lot easier if I had.

I also underestimated how hard it would be to earn respect. I figured ‘PhD’ meant I’d already earned it, in the eyes of the world. But I hadn’t counted on the ‘if you were any good you wouldn’t be working here’ attitude that infects so much of Canadian life (and rural life in particular).

Once, one of the students fell asleep in class. What do you do, as a novice teacher? You wake him up. You take him into the hallway to ‘deal’ with him. And then I sent him up to the office. What I didn’t know: his Dad was long gone. His mom was with a new beau, and had been spending every night at the bar. The oil bill had not been paid, and what with it being winter and all, there was no heat. He had been sitting up, every night to watch over his sisters whom he’d put in sleeping bags in the kitchen, in front of an open electric oven. He was afraid of burning down the house if he fell asleep.

And god help me, I was giving him shit for not drawing his perspective drawings correctly, for falling asleep.

With time, I began to earn their respect. It helped that at school functions I had no fear of standing up and making a fool of myself doing whatever silly activity the pep leaders had devised. “He’s a goof but he’s OUR goof!” seemed to be the sense. I learned that I had to stop being a ‘teacher’ and start being these guys’ advocate. Who else was going to stand up for them? Everyone else had already written them off.

In some corners of the school, there was a firmly held conviction that these guys were getting off easy, that somehow what they were doing was less mentally challenging. There were some ugly staffroom showdowns, sometimes. Welding math involves a lot of geometry and trigonometry, finances, and mental calculation. It’s not easy in any way shape or form. Tradesmen in Canada frequently work in Imperial units, while officialdom works in metric. Calculating, switching, tallying… these are all non-trivial things! “Sir, that’s the first time I passed a math test since Grade four” said one lad, around about October.

The first test since Grade four. My god, what have we done to ourselves? None of these students were dumb, in the sense that students use the word. When I lost most of the class to moose hunting season, I had them explain to me exactly what they did when they got back. Extremely complicated thinking about camouflage, fish and game laws & licensing, working with weapons and bullets… these guys were smart. They never hesitated to call me on it either when what I was saying to them was nonsense or not making sense.

“Sir”, a voice in the back would say, “what the fuck are you talking about?” You can’t get angry about language. This is how they’ve learned to speak. But imagine: a student in your class actually taking the time to explain that they don’t understand, and to show you where they lost you? These guys did that! Once I learned to take the time to listen, they had a lot to say.  Would that my university students had the bravery to do the same.

It was never easy, working with these guys. At the end of the year, I was completely drained. A tenured teacher came back from sick leave, and I was bumped from my position. Unemployed again.  Look at that from my students’ perspective. Here’s a guy, finished first in his high school, got a phd. Came back home without a job. Ends up working with us – us! – and then loses his job again afterwards. Maybe, just maybe, doing the whole ‘academic’ thing they push isn’t the thing. Maybe, maybe, working with my hands, welding, machining… I’ll always have work. If I can figure out how to plan the best cuts in this sheet of metal so that I don’t waste any money. If I can pass the welding exam. If I don’t get my girlfriend pregnant. If I maybe pass on the blow this weekend and go to work.

Did some of them think that? I’d like to think so. We bickered, we locked horns, but once I proved to them that I was on their side, I’d like to think the good stuff outweighed the bad. I certainly know that it did wonders for me as a teacher. First and foremost, it forced me to get over myself. I learned that:

  • nobody owes me anything
  • what I was like as a student is no guide to what my students are like as students
  • I need to ask: how do I make it safe for students to try something, and to admit that I’m making not an ounce of sense?
  • I need to not assume I know anything about my students’ backgrounds
  • I need to make my expectations crystal clear for what constitutes proof-of-learning
  • I need to be part of the life of my school/community so that my students see that I’m invested in them.

A few years later, I won a postdoc position at U Manitoba, and began teaching in distance education and online education. That helped me transmogrify into whatever this ‘digital humanities’/’digital archaeology’ thing is. That’s the final lesson right there. I have a PhD in the finer points of the Tiber Valley brick industry. Don’t be afraid to change: your PhD is not you. It’s just proof that you can see a project through to the end, that you are tenacious, and that you can put the pieces together to see something new. Without the PhD, I could never have worked with those boys.

I was glad to see Jeremy, at the fair this year.

 

 

 

Setting the groundwork for an undergraduate thesis project

We have a course code, HIST4910, for students doing their undergraduate thesis project. This project can take the form of an essay, it can be a digital project, it could be code, it could be in the form of any of the manifold ways digital history/humanities research is communicated.

Hollis Peirce will be working with me this year on his HIST4910, which for now is called ‘The Evolution of the Digitization of History: Making History Accessible’. Hollis has been to DHSI twice now, once to take courses on digitization, once to work on the history of the book. Hollis’ interest in accessibility (as distinct from ‘open access’, which is an entirely different kettle of fish) makes this an exciting project, I think. If you’re interested in this subject, let us know! We’d like to make connections.

We met today to figure out how to get this project running, and I realized, it would be handy to have a set of guidelines for getting started. We don’t seem to have anything like this around the department, so Hollis and I cobbled some together. I figured other folks might be interested in that, so here they are.