Open By Default

Recently, there’s been a spate of openness happening in this here government town. This post is just me collecting my thoughts.

First thing I saw: a piece in the local paper about the Canadian Science and Technology Museum’s policy on being ‘open by default’. The actual news release was back in April.

This is exciting stuff; I’ve had many opportunities to work with the folks from CSTM, and they are consistently in the lead around here in terms of how they’re thinking about the ways their collections (archival, artefactual, and textual) could intersect with the open web.

This morning, I was going over the Government of Canada’s ‘Draft Plan on Open Government’ and annotating it. (I’m using another tool, so I can’t use Kris Shaffer’s awesome new plugin that would pull these annotations into this post.)

There are a lot of positive measures in this plan. Clearly, there’s been a lot of careful thought and consideration, and I applaud this. There are a few things that I am concerned about, though (and you can click on the link above, ‘annotating it’, to see my annotations). Broadly, my concern is that access != openness. It’s not enough to simply put materials online, even if they have all sorts of linked open data goodness. There are two issues here.

  1. Accessing data is not something equitably available to all. Big data dumps require fast connections, good internet plans, and good connectivity. In Canada, if you’re in a major urban area, you’re in luck. If you live in a more rural area, a poorer area, or an area that is, broadly speaking, under-educated, you will have none of these. Where I’m from, there’s a single telephone cable that connects everything (although a cell phone tower was built in recent years; but have you looked at the farce that is Canadian mobile data?)
  2. Making data useful once you can access it depends on training. Even I often struggle to make good use of things like linked open data. Open Context, for instance (an open archaeological data publishing platform), provides example ‘API recipes’ to show people what’s possible and how to actually accomplish something.
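To make concrete what such an ‘API recipe’ looks like, here is a minimal, hypothetical sketch in Python: it assembles a keyword query for a JSON search endpoint and pulls human-readable labels out of the response. The base URL, parameters, and response fields below are my own illustrative assumptions, not Open Context’s actual API or schema.

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL for a JSON search endpoint (an assumption for
# illustration only, not a real Open Context address).
BASE_URL = "https://example.org/subjects-search/.json"

def build_query_url(keyword, rows=20):
    """Assemble a search URL for the (assumed) JSON search endpoint."""
    return BASE_URL + "?" + urlencode({"q": keyword, "rows": rows})

def summarise_results(payload):
    """Pull the label of each record out of an (assumed) response shape."""
    return [record["label"] for record in payload.get("results", [])]

# A stand-in for what the endpoint might return, so the recipe runs offline.
sample_response = json.loads("""
{"totalResults": 2,
 "results": [{"label": "Roman amphora sherd"},
             {"label": "Iron Age loom weight"}]}
""")

print(build_query_url("amphora"))
print(summarise_results(sample_response))
```

Even a toy recipe like this makes the training point: you need to know about query strings, JSON, and a scripting language before an ‘open’ dataset becomes usable.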

So my initial thought is this: without training and education (or funds to encourage the same), open data becomes a public resource that only a private few can exploit successfully. Which makes things like The Programming Historian and the emergence of digital humanities programs at the undergraduate and graduate level all the more important, if the digital divide (and the riches that being on the right side of it brings) is to be narrowed. If ‘open by default’ is to be for the common good.