Authors on Issues: James E. Dobson on Critiquing Digital Humanities Methods

The following is a guest post from James E. Dobson, author of Critical Digital Humanities: The Search for a Methodology.

The sheer number of sessions on the digital humanities at the recent Modern Language Association meeting in Chicago demonstrates the increasing acceptance, within humanities scholarship, of computational and social scientific methods originating outside of the humanities. However, observing a number of these digital humanities sessions, I noticed an increasing separation between discussions of a presenter’s methods and the significance of the results. In response to an audience question, one presenter said that he had no interest in improving his results because he had found a large enough signal or result to confirm his hypothesis. This moment raised some major concerns about the continued use of computational methods within a discipline that is predominantly anti-empirical and suspicious of quantitative methods. If we merely use the results of some statistical or computational method to confirm what we already know, or to prop up an unsubstantiated hypothesis, without a thorough explanation and investigation of these methods, then digital humanists run the risk of appealing to the authority of the sciences without submitting their work to either a traditional humanist critique or the norms of contemporary computational science.

The humanities, and literary studies in particular, have a long history of importing methods from other fields. In many ways, this borrowing might be the hallmark of modern literary criticism. From psychoanalysis to structuralism to phenomenology, some of the major strains of literary criticism drew on the rich and varied resources provided by the methods and insights of other fields as critics applied those methods to their objects. Literary critics certainly might lean on the authority of historians when historicizing a text, but that authority was always somewhat restricted and remained available for critique. The norms, so to speak, of the humanities are such that these methods are understood as coexisting, without a sense that one method might, in the end, trump another.


Its detractors imagine computational criticism as a naïve scientism that is fundamentally incompatible with the other approaches used in the humanities. Recent work, including Andrew Piper’s Enumerations (Chicago, 2018), shows not only that humanist methods and statistical analysis can complement each other but that one of the most common and basic of these practices, close reading, is a necessary supplement to the presentation of data derived from textual sources.

What has been mostly missing within the digital humanities, and what was most troubling about the hand-waving dismissal of questions about methods and data during the MLA panel, was the notion that the critical examination of computation is not required or, potentially worse, that it is a problem for another field. There are numerous critiques of computation within the humanities from digital studies, feminist epistemology, science and technology studies, and elsewhere, but these tend to be directed at a higher level of abstraction, toward general notions and concepts such as data or algorithms as such. There has been some excellent recent work that continues this line of critique. Jacqueline Wernimont’s recent Numbered Lives (MIT, 2018) places contemporary discussions of the quantified self within the long history of the collection and use of data derived from human life, and Wendy Chun’s analysis of network science in the recent collection Pattern Discrimination (Minnesota, 2019) demonstrates the obfuscation or laundering, as she puts it, of the past and our biases in machine learning. Alexander Galloway’s ongoing investigation into whether algorithms—and not just their creators, their training data, or their applications, but the methods themselves—might be biased provides additional theoretical grounding for those interested in a critique of computation as such. While Ted Underwood suggests that the humanities should “join forces” with computational science, arguing that “any critique of contemporary culture needs to include a critique of machine learning,” a much stronger version of critique is needed, one that can historicize our field’s own computational turn.

The computational sciences have recently been undergoing a different type of critique. The so-called replication crisis in several fields, social psychology perhaps most notably, came to light in part because of the move from in vivo to in silico experiment and the increasing dependence on computational models and simulations: the use of computational models and the standardization of methods have made it much easier for other researchers to attempt to reproduce experiments. The open science movement emerged from the difficulties these researchers experienced in reproducing such experiments. By making methods and data open, and by sharing computational approaches (termed “workflows”), fields with a deep dependence on computational models acknowledge how important it is that others be able to understand and validate their experiments.

It is still an early moment for the computational turn in the humanities, but what we need is a critical digital humanities that refuses a split between methods and interpretation, between critique and results. Computational humanists cannot inhabit the site of this split through an act of disavowal that shifts attention away from their methods and toward their own interpretive arguments. To do so fails to acknowledge both the import of the already existing critiques of computation and the immanent critical methods available within computational science.

-James E. Dobson, Dartmouth College
