At my university, we’ve been asked to consider discipline-specific language for new tenure & promotion guidelines. I’ve been writing a response to our chair, and I thought, in keeping with how I regard this problem, it would be a good idea to share these thoughts.
Volume 1.4 of the Journal of Digital Humanities wrestles with the problem of evaluating digital scholarship for tenure: http://journalofdigitalhumanities.org/volumes/ (or download as a pdf: http://journalofdigitalhumanities.org/files/jdh_1_4.pdf )
Moving Goalposts & Scholarship as Processes
As far as discipline-specific guidelines are concerned, the problem from my perspective is that the goalposts are always going to be shifting. What was fairly technically demanding becomes easier with time, and so the focus shifts from ‘can we do x’ to ‘what are the implications of x for y’, or, as Bethany Nowviskie put it, a shift akin to that from the 18th-century ‘Lunaticks’ who laid the groundwork to the 19th-century science and industrialization that followed. Another problem is that in digital work, the lone scholar is very much the outlier. To achieve anything worthwhile takes a team – and who gets to be first author does not necessarily reflect the way the work was divvied up or undertaken. We should resist trying to shoehorn digital work into boxes meant for a different medium. Nowviskie writes,
“The danger here … is that T&P committees faced with the work of a digital humanities scholar will instigate a search for print equivalencies — aiming to map every project that is presented to them, to some other completed, unary and generally privately-created object (like an article, an edition, or a monograph). That mapping would be hard enough in cases where it is actually appropriate ”
She goes on to say,
“…the new responsibility of tenure and promotion committees [is] to assess quality in digital humanities work — not in terms of product or output — but as embodied in an evolving and continuous series of transformative processes.”
This was the gist of Bill Turkel’s address to the Underhill Graduate Students Colloquium on ‘doing history in real time’ – that in an increasingly digital world, the unique value of formal academic knowledge lies not in things per se, but in method. You can look up any fact in the world in seconds. But learning how to think, how to query, how to judge between competing stories – that’s what we bring. That, then, is the problem for assessing digital work as part of tenure and promotion: how does this work change the process?
That suggests a hierarchy of importance, too. Merely putting things online, while important, is not necessarily transformative unless that kind of material has never been digitized before. The conversation then also becomes about how that work was done, the decisions made, the relationship between the digital object and the physical one. I have a student working on a project, for instance, to put together an online exhibition related to Black History in Canada. This is important, but the exhibition itself is not transformative. The real scholarship, the real transformation, happens when she starts exploring those materials through text analysis, putting a macroscopic lens on the whole corpus of materials she has collected.
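To give a sense of the machinery behind that kind of macroscopic text analysis, here is a deliberately bare-bones sketch of topic modeling: a collapsed Gibbs sampler for latent Dirichlet allocation, written from scratch in Python. The four tiny ‘documents’ are invented purely for illustration; no real corpus or published tool is implied.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, iters=100, seed=0):
    """Bare-bones collapsed Gibbs sampler for latent Dirichlet allocation.
    docs: list of documents, each a list of word tokens."""
    rng = random.Random(seed)
    vocab = {w for doc in docs for w in doc}
    V = len(vocab)
    doc_topic = [[0] * n_topics for _ in docs]                # topic counts per document
    topic_word = [defaultdict(int) for _ in range(n_topics)]  # word counts per topic
    topic_total = [0] * n_topics                              # total words per topic
    assignments = []
    # randomly assign every word token to a topic
    for d, doc in enumerate(docs):
        z_doc = []
        for w in doc:
            z = rng.randrange(n_topics)
            z_doc.append(z)
            doc_topic[d][z] += 1
            topic_word[z][w] += 1
            topic_total[z] += 1
        assignments.append(z_doc)
    # repeatedly resample each token's topic, conditioned on all the others
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                z = assignments[d][i]
                doc_topic[d][z] -= 1
                topic_word[z][w] -= 1
                topic_total[z] -= 1
                weights = [(doc_topic[d][k] + alpha) *
                           (topic_word[k][w] + beta) / (topic_total[k] + beta * V)
                           for k in range(n_topics)]
                r = rng.random() * sum(weights)
                for k, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        break
                assignments[d][i] = k
                doc_topic[d][k] += 1
                topic_word[k][w] += 1
                topic_total[k] += 1
    return doc_topic, topic_word

# An invented toy corpus: two 'maritime' and two 'political' documents.
docs = [["ship", "harbour", "cargo", "ship"],
        ["harbour", "cargo", "timber", "ship"],
        ["tax", "parliament", "debate", "tax"],
        ["parliament", "tax", "debate", "bill"]]
doc_topic, topic_word = lda_gibbs(docs)
```

Everything here (tokenization, the number of topics, the smoothing parameters, the sampler itself) is a decision that shapes what ‘topics’ emerge; documenting and interrogating those decisions is precisely where the scholarship lies.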
Digital Work is Public Work
The other important point about process is that digital work almost always (99.9 times out of 100; my early agent modeling work had no internet presence, for instance) has a public, outward-looking face. Platforms like blogs allow for public engagement with our work – so digital work is a kind of public humanities. The structure of the internet, of how its algorithms find and construct knowledge and serve it up to us via Google, is such that work that is valuable and of interest creates a bigger noise in a positive feedback loop. The best digital work is done in public. ‘Public’ should not be a dirty word along the lines of ‘popular’. The internet looks different to each person who goes online (our algorithms make sure that each person sees a personalized internet, because that’s how one makes money online), so hits on a blog post are not random, meaningless clicks but rather an engagement with a broader community. As far as academic blogging goes, that broader community is other academics and students. Print journals and peer-reviewed articles are just one way of engaging with our chosen communities. With post-publication models of peer review like Digital Humanities Now and the Journal of Digital Humanities (models that are making inroads in other disciplines), we should treat these on an equal footing with the more familiar models. I’d argue that post-publication peer review is a greater indicator of significance and value than the regular two-blind-reviewers-into-print model.
I’d like to see language, then, that regards digital work, or work in media other than print, as being on an equal footing with the more familiar forms: that is, as things that do not have equivalencies to what we traditionally expect and thus must be taken on their own terms. I appreciate that, for the time being, I’m pretty much the only person in this department to whom any of this might apply. I would hate to see my work on topic modeling, though, get considered as ‘service’. Figuring out how to apply natural language processing to vast corpora of historical materials, figuring out the ways the code forces particular worldviews and hides others, and writing all of this up as a ‘how-to’ guide is indeed research. It’s akin to figuring out how gene sequencing works, its limitations, and so on, which needs to be well understood before a biologist can use it to link modern humans to Neanderthals. In biology, we understand both of those activities as research; but if the analogous pair were the limits and potentials of topic modeling on the one hand and discourses in the political thought of the 18th century on the other, we’d only count the second as research. I bring this up because of Sean Takats’s experience at George Mason:
Project Management & Project Outputs
In that particular case, Takats was also managing major projects to develop various tools and approaches. He writes,
” I want to focus on the committee’s disregard for project management, because it’s here I think that we find evidence of a much broader communication breakdown between DH and just-H, despite the best efforts to develop reasonable standards for evaluating digital scholarship. Although the committee’s letter effectively excludes “project management” from consideration as research, I would argue that it’s actually the cornerstone of all successful research. It’s project management that transforms a dissertation prospectus into a thesis, and it’s certainly project management that shepherds a monograph from proposal to published book. Fellow humanists, I have some news for you: you’re all project managers, even if you only direct a staff of one.”
Which leads me to my next point. Digital work creates all sorts of outputs that are of use at many different stages to other researchers. These outputs should be considered valuable publications in their own right. An agent-based simulation of emergent social structures in the early Iron Age makes an argument in code about how the Roman world worked. If I publish a discussion of the results of such a model, that is fine; but if I don’t make the code available for someone else to critique, extend, or transform, I am being academically dishonest. The time it takes to build a model that works, that is valid, that simulates something important, and the process by which such a model is built, are considerable. The data such a model produces are valuable for others looking to re-build a model of the same phenomena on another platform (which is crucial to validating the truth-content of models). All of these outputs can be made available online in digital archives built for the purpose of long-term storage. The number of times such models are downloaded or discussed online can often be measured; these measures should also be taken into account as a kind of citation (see http://figshare.com/authors/Shawn_Graham/97736 ).
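To make the ‘argument in code’ idea concrete, here is a deliberately tiny sketch (not any real published model; the mechanism and parameters are invented for illustration) of an agent-based model in which one simple local rule, preferring well-connected exchange partners, produces an emergent hierarchy of ‘hub’ agents:

```python
import random

def run_model(n_agents=50, steps=300, seed=42):
    """Toy agent-based model: at each step a random agent forms a tie,
    choosing a partner with probability proportional to that partner's
    current number of ties (preferential attachment). Over many steps,
    a few highly connected 'hub' agents emerge from this local rule."""
    rng = random.Random(seed)
    ties = {a: set() for a in range(n_agents)}
    for _ in range(steps):
        a = rng.randrange(n_agents)
        # weight potential partners by degree + 1; an agent never ties to itself
        weights = [0 if b == a else len(ties[b]) + 1 for b in range(n_agents)]
        b = rng.choices(range(n_agents), weights=weights)[0]
        ties[a].add(b)
        ties[b].add(a)
    return ties

ties = run_model()
degrees = sorted(len(partners) for partners in ties.values())
```

Publishing something like this alongside the written discussion lets another researcher test whether the ‘hubs’ are a genuine finding or merely an artifact of the rule; without the code, that critique is impossible.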
Experimentation and Risk Taking
Finally, I think that work that is experimental, that discusses what didn’t work, should be recognized and celebrated. Todd Presner writes, (http://journalofdigitalhumanities.org/1-4/how-to-evaluate-digital-scholarship-by-todd-presner/ )
” Digital projects in the Humanities, Social Sciences, and Arts share with experimental practices in the Sciences a willingness to be open about iteration and negative results. As such, experimentation and trial-and-error are inherent parts of digital research and must be recognized to carry risk. The processes of experimentation can be documented and prove to be essential in the long-term development process of an idea or project. White papers, sets of best practices, new design environments, and publications can result from such projects and these should be considered in the review process. Experimentation and risk-taking in scholarship represent the best of what the university, in all its many disciplines, has to offer society. To treat scholarship that takes on risk and the challenge of experimentation as an activity of secondary (or no) value for promotion and advancement, can only serve to reduce innovation, reward mediocrity, and retard the development of research.”
One of my blog posts, ‘How I Lost the Crowd’, discusses how one of my projects got hacked. That piece was read by some 400 people shortly after it was posted – and it later found its way into various digital history syllabi (for instance here). The post has been read over 700 times in the past 10 months. Failing in public is where research and teaching are the same side of the same coin (he said, to mangle a metaphor).
So what should one look for?
Work that is transformative; where multi-authored work is valued as much as the single-author opus; work that is outward-facing and is recognized by others through linking, reposting, and sharing (and other so-called ‘alt-metrics’; cf. http://impactstory.org/ for one attempt to pull these all together); data-as-publication; code-as-publication; experiments, risk-taking, and open discussion of what does and does not work; software development and project management recognized as research; and any work that lays the groundwork for others to see further – the humble ‘how-to’ (our lunatick moment; see for instance http://programminghistorian.org ).
For explicit guidelines on how to evaluate digital work, see Rockwell, http://journalofdigitalhumanities.org/1-4/short-guide-to-evaluation-of-digital-work-by-geoffrey-rockwell/
Considering any digital work, Rockwell suggests the following questions:
- Is it accessible to the community of study?
- Did the creator get competitive funding? Have they tried to apply?
- Have there been any expert consultations? Has this been shown to others for expert opinion?
- Has the work been reviewed? Can it be submitted for peer review? (things like Digital Humanities Now, & JDH are crucial here)
- Has the work been presented at conferences?
- Have papers or reports about the project been published? (whether online or print, born-digital or otherwise is not the issue here)
- Do others link to it? Does it link out well?
- If it is an instructional project, has it been assessed appropriately?
- Is there a deposit plan? Will it be accessible over the longer term? Will the library take it?
I’m not saying that we should build this checklist into any tenure and promotion language; rather, I’m offering it here to suggest that any such language, if it broadly considers such things, will probably be ok, in the hopes of finding an acceptable middle ground between the box-tickers and the non-box-tickers. Rockwell offers some best practices for carrying out digital work that speak to these questions:
- Appropriate content (What was digitized?)
- Digitization to archival standards (Are images saved to museum or archival standards?)
- Encoding (Does it use appropriate markup like XML or follow TEI guidelines?)
- Enrichment (Has the data been annotated, linked, and structured appropriately?)
- Technical Design (Is the delivery system robust, appropriate, and documented?)
- Interface Design and Usability (Is it designed to take advantage of the medium? Has the interface been assessed? Has it been tested? Is it accessible to its intended audience?)
- Online Publishing (Is it published from a reliable provider? Is it published under a digital imprint?)
- Demonstration (Has it been shown to others?)
- Linking (Does it connect well with other projects?)
- Learning (Is it used in a course? Does it support pedagogical objectives? Has it been assessed?)
This is of course a thinking-out-loud exercise, and will no doubt change. Thoughts?