Autumn 2022, Congruence Engine | Discussion

Working at scale: what do computational methods mean for research using cases, models and collections?

Daniel C S Wilson

Keywords

AI for GLAM, Computational Humanities, History, Machine Learning, scale, STS

The ascent of scale

https://dx.doi.org/10.15180/221805/001

The keyword ‘scale’ has become as ubiquitous in the humanities and social sciences as it has long been polysemous. Beyond the senses of ‘climbing’ and ‘succession’ in its several etymologies, the sense in relation to measurement – a scale as a referent for counting – looms large among its meanings. When fused with the senses of ‘perspective’ and ‘representation’ (stemming from mapping and modelling), ‘scale’ begins to look like an ambiguous super-word, widely used in phrases such as ‘scale up’ or ‘at scale’; forms now used unproblematically as terms of art in a range of fields. The merit of doing things at scale, we often hear in public discourse, is that bigger means better: conclusions self-evidently become stronger on the basis of more and bigger data. The rise of big data now has its own big and burgeoning historiography. New histories of data and information science have responded to twenty-first-century digital practices and infrastructures associated with Silicon Valley, social media and the cloud to re-interpret earlier episodes through the lens of (but also often with the use of) these new technologies. The big data produced in the nineteenth century, influentially called an ‘avalanche of numbers’ by the historian Ian Hacking (Hacking, 1991), invite (and have begun to receive) re-interpretation in light of twenty-first-century developments (Wilson, 2018).

A sense of incongruity nonetheless persists between what might be called the human scale, on the one hand, and the incomparable scale of information produced by institutions and states, but also scientists, on the other. This tension – as we might expect – is not new and has surfaced, in particular, in relation to the public understanding of science. Writing in 1926, the statistically minded biologist J B S Haldane outlined the difficulties in encouraging citizens to make the leap in perspective from the human scale to that of the millions and the billions needed to make sense of scientific modernity: ‘The average man complains’, claimed Haldane (1927, p 2), ‘that he cannot imagine the eighteen billion miles which is the unit in modern astronomy’ that make up a single parsec (in fact, ‘trillions’, by today’s convention). ‘Beyond those limits space does not have the properties ascribed to it by common sense, and visual imagination does not help us’ (p 4). This breakdown in habitual forms of reasoning explains our reliance on scale models to stand in for phenomena of interest. When common sense deserts us, shifting scales allows elucidation, explication and understanding by mapping ungraspable phenomena onto a canvas rendered visible; not only for pedagogy or popularisation, but also as part of the core ratiocination involved in scientific work, namely: hypothesis generation, research design and the mental manipulation of concepts.

It is in such a marketplace of ideas that ‘scale’ has become a zeitgeist, in particular for the digital humanities, increasingly coupled to the field of data science, its methods, thought-style and knowledge claims. A data-driven approach to research in the humanities, however, remains a minority pursuit, often to the frustration of those who advocate it (and the relief of its critics). The ‘data-fication’ of the natural sciences is a process long underway, with profound implications for scientific career paths, infrastructure and institutions.[1] New national centres for data science (sometimes coupled with so-called ‘Artificial Intelligence’) abound, drawing in substantial funding from hopeful public and private backers. Some scholars concerned about the viability of the humanities tout court have sought to ground claims of significance in their willingness to adopt data-driven approaches: ‘change or die’. Nonetheless, data science institutes remain dominated by the natural sciences, which is unsurprising given the congruence between the statistical methods required for such work and the existing skills and training of the scientific workforce (as opposed, broadly speaking, to that of the humanities, quite apart from their differing research and publication conventions). Data-fication in the natural sciences – which has turned on notions of scale as much as on congenial tools and methods in computer science – has nonetheless found awkward kinship with the new field of data science, which has its genealogy in the culture and politics of Silicon Valley, rather than in research laboratories. ‘Data scientist’ is the moniker of a nascent profession rather than of a disciplinary field; it is, however, one whose perspective reduces all other fields of knowledge to the status of ‘domains’ to which its methods might subsequently be applied. Applications of data science are therefore as likely to relate to the work of the security services as they are to helping diagnose ovarian cancer.[2] It is this idea of a general-purpose technology that drives much of the interest in data science, whose strengths lie not only in operationalising enormous volumes of data at scale, but also in shifting between the scales involved in its myriad and versatile research (and commercial) applications.

The promise of working at scale introduces a panoptic quality to research questions: it gestures towards the longue durée as well as the exciting possibilities of new forms of breadth or depth. These can be useful claims for disciplines staking a claim to significance or tussling for the attention of funders.[3] Questions of scale have been the occasion for theoretical and methodological reflections on the part of historians, sometimes contentiously.[4] Is a study focusing on one historic day more or less significant than one on the scale of centuries? And likewise: is the global scale a necessary condition for certain forms of understanding? In which case, are local studies necessarily parochial? Studies involving a broad scale often include more information, perhaps requiring different methods and infrastructure to handle the seemingly bigger data, which is often analysed at a lower resolution of detail.

What do these trends and preoccupations mean for material history and research in museum collections? Scale has the potential to act as a useful prism through which to think about the way that – on the one hand – digital research methods might be brought to bear on collections and – on the other – the way that museums already operate with a sense of the problematics outlined above, with a theory of scale necessarily embedded in the work of curating and displaying objects. Referring to the Science Museum’s Making the Modern World as exemplary in this regard, Thomas Söderqvist remarks how its ‘big hall provides an immediate, almost intuitive, grasp of the longue durée’, and more generally how ‘one can move from the anecdotal features of singular objects to broader cultural and political themes and issues’ (Söderqvist, 2016, p 343). This shifting perspective experienced by museum visitors is, on reflection, a fundamental precept behind choosing to exhibit things in the first place. To see a fine example of a type of artefact can transport the viewer through the looking glass to a different time and/or place, whose significance is established by its curation. Söderqvist continues his paean to the inherently political nature of the museum by pointing to the way exhibits are now ‘accompanied by the visualization of digitalized historical data’. This is a vision of a new and radical practice of contextualisation at scale, made possible by the accessibility of metadata about objects created by librarians, archivists and others, as well as open-access statistics more generally. However, this is not the main aspect of the role currently being imagined for digital methods, machine learning (ML) and ‘AI’ in the GLAM sector (Galleries, Libraries, Archives and Museums).

An authoritative definition of data science lists among its features ‘the processing of large amounts of data in order to provide insights into real-world problems’.[5] The question therefore suggests itself: what are the real-world problems the GLAM sector needs help with? Setting aside the challenges of seemingly indefinite austerity, the depredations of becoming cannon fodder for culture warriors, as well as the murky issue of sponsorship – which even data science cannot address – an equally authoritative recent overview of what we might call ‘AI for GLAM’ highlighted four broad areas where data science ‘could have – or already is having – an impact’ (van Strien, 2022, p 1): ‘Cataloguing and other forms of metadata generation. Enabling search and discovery of collections. Supporting and carrying out research. Public engagement and crowdsourcing.’ These are core areas of what museums do, and they have long carried substantial costs in the form of human labour. One reason for excitement among museum managers therefore rests in the potential for doing more with less, and doing it by leveraging collections data ‘at scale’. One aspect of the rhetoric surrounding ML is its implicit ability to save money for organisations, a claim made more explicitly in other sectors, but whose implications for the work of curators have not been fully understood. One appeal of new technology has always been as a labour-saving device, as historians of science and technology are well placed to point out. Another feature of new technology has been its susceptibility to animal spirits and exaggeration, modelled formally in one instance as the ‘Gartner hype cycle’. A recent piece of research examining possible impacts of ML in this sector concluded that the mood of information professionals had reached Gartner’s notorious ‘trough of disillusionment’ (Cox, 2021, p 28). Cox found that there ‘remains scepticism that many of the products being labelled as AI are truly novel or can fully deliver on vendors’ promises. They are often perceived to be familiar technologies rebadged. If they do offer something novel it is more limited than the claim. How proprietary systems work is often a secret.’ The reliance on proprietary information systems creates many potential problems for institutions, especially in the areas of sustainability and access. These problems may nonetheless seem less serious than the alternative challenge of building (and maintaining) bespoke tools and systems in-house. A much greater understanding is needed of where genuine benefits can be obtained from the latest digital methods, and where the appropriate limits lie. Such judgments require a technical skillset often wanting among decision-makers, who risk being seduced by ‘shiny tech’, creating ‘the pattern of technical solutionism: technologies in search of a problem’ (Cox, 2021).
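By way of illustration, the following is a minimal sketch of what the first of these four areas – automated cataloguing and metadata generation – might look like in practice, using an off-the-shelf zero-shot classifier to suggest subject tags for a free-text catalogue record. The model, candidate labels and example record are invented for the purpose and are not drawn from any particular institution's workflow.

```python
# A minimal, illustrative sketch of ML-assisted cataloguing: suggesting candidate
# subject labels for a free-text catalogue description with a zero-shot classifier.
# The model, labels and example record are assumptions made for illustration only.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

record = (
    "Cast-iron beam engine component, manufactured in Leeds, "
    "used in a textile mill until 1902."
)
candidate_labels = ["textile industry", "steam power", "railways", "domestic life"]

result = classifier(record, candidate_labels=candidate_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")  # suggested tags, ranked by model confidence
```

Even in this toy form, the labour-saving appeal is evident: such suggestions can triage large backlogs of records, though they would still require curatorial review before entering a catalogue.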

GLAM organisations remain in the process of developing an understanding of how data science methods might productively be put to work in furthering their goals in research, curation and exhibition. To do this critically requires a more rounded and realistic sense of the processes and tendencies of data science, its potential, as well as its possible pitfalls. A large and ambitious scheme such as the AHRC’s ‘Towards a National Collection’ (TaNC) signals an intent from research councils to support the use of digital methods. To this end it may be useful to survey some salient experiences of related and neighbouring fields, whose earlier adoption of methods such as ML could offer useful object lessons. The following sections draw on some of these experiences in turn, paying special attention to the role of data science, its central claims about scale and its increasingly influential thought-style.

Scalar readings

https://dx.doi.org/10.15180/221805/002

English literature has been the site of perhaps the most robust discussion around the uses of ML as a tool, as a sub-set of debates about computational methods within the field more generally.[6] The writings which are the topos of literary studies are, in a sense, found objects which happen to be inherently amenable to computational analysis. This differs (on the whole) from museum objects and is an accident of their material property of being texts which, if not already directly machine-readable, can be digitised and thereby processed relatively easily into material for what has famously been called ‘distant reading’.[7] The relative availability of early modern textual corpora in digital form has led to the (perhaps incongruous) rise of the eighteenth century and Romanticism as the leading edge of literary digital humanities.[8] Although not caused by the rise of computational approaches, a certain degree of crisis in the humanities (and literary studies especially) has coincided with it. This has left digital scholars especially well placed to reflect on the nature of their discipline, and some have raised valuable questions about the very nature of the literary object and its study.

The issue of scale has been crucial in narrating a new disciplinary self-understanding, in a sense provoked by the rise of digital texts and their ubiquity in the wider culture. For James F English and Ted Underwood (2016) the history of the changing preoccupations of literary studies can be told in terms of scalar contraction and expansion over time.[9] Interwar critics in Cambridge (among other places) ‘sought to establish literary studies as a distinct and legitimate’ practice, in contrast to philology and belles-lettres, by reducing the scale of their object to ever smaller units of analysis and so making it ‘teachable, testable, rigorous’. This scientistic approach, they claim, proved ‘a winning strategy’ as ‘[l]iterary studies massively expanded its institutional footprint and widened its cultural power’ (p 278). Further waves of expansion and contraction have followed with respect to the object of study, culminating in the new historicism of the 1980s, which, for all its capacious inclusivity of cultural subject matter, could nonetheless take tiny units of time or text as high-resolution representatives of something bigger. They continue: ‘For all its expansive effect on the texts and topics deemed pertinent to literary studies, New Historicism was in this respect a ‘nanohistoricism’ (Liu, 2008). Today, by contrast, we confront something more like gigahistoricisms’ (English and Underwood, 2016, p 280). This expansion of scale has allegedly precipitated a ‘crisis of largeness’ as a result of the big data by which many scholars apparently feel overwhelmed. English and Underwood, however, reject any opposition between data on the one hand and theory on the other; an idea that might follow if we imagine that ‘big data’ involves a merely technical advance. On the contrary, new and sophisticated hermeneutical practices are developing alongside techniques based on machine learning, for example in relation to interpreting how inputs and variables relate to model outputs. These are more visible when deployed by natural and social scientists, whose methods are often more explicitly articulated, but are now in use by humanists as well. In any case, they point out, practically all scholars are now users of the algorithms underlying the search engines on which they depend, though without openly acknowledging their impact.

This issue is explored directly in a recent essay in relation to vector-space models of language, in which Dobson (2022) problematises the use of linguistic tools such as word2vec (widely used to construct arguments about patterns of word use and meaning). Dobson raises questions about the hidden interpretative work performed by such tools, in a discussion which shows scholars coming to terms with computational methods and generating a productive new critical discourse. Nonetheless, there remains a sense of discomfort about critical practice being conducted using black-boxed algorithms – which is the sub-text to much of the anxious commentary around literary studies, at times recalling the tired rhetoric of the ‘two cultures’ war. Sub-fields of literary studies concerned with what might be called more empirical questions – such as literary history – have been happier bedfellows for cutting-edge methods that ‘scale up’. Notable among these is the network analysis used by Ahnert and Ahnert (2019) to make inferences about history from the big metadata gleaned from literary and epistolary archives.[10]
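To make concrete the kind of tool Dobson has in view, the following is a minimal sketch of a vector-space model of language trained with the gensim implementation of word2vec; the toy corpus and query term are invented, whereas real studies train on millions of sentences drawn from digitised collections.

```python
# A minimal sketch of a vector-space model of the kind discussed above, using the
# gensim implementation of word2vec. The toy corpus and query term are invented;
# real studies train on millions of sentences from a digitised corpus.
from gensim.models import Word2Vec

corpus = [
    ["the", "spinning", "mill", "employed", "hundreds", "of", "workers"],
    ["the", "factory", "installed", "a", "new", "steam", "engine"],
    ["workers", "at", "the", "mill", "operated", "the", "looms"],
]

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50)

# Each word is now a point in a 50-dimensional space; 'similarity' is the cosine
# similarity between vectors, an interpretative choice baked into the tool itself.
print(model.wv.most_similar("mill", topn=3))
```

The point of Dobson's critique is visible even here: decisions about window size, dimensionality and the measure of similarity all shape what 'patterns of meaning' the model can be said to reveal.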

However, scaling up in fields whose interest lies adjacent to, or in some sense beyond, texts creates its own challenges. The move from internal or pure hermeneutics to a form of distant reading which uses texts as a gateway to understanding social reality faces questions about how, and in what way, those texts represent the reality that lies beyond. This problem is closer to the one faced by museum curators and researchers aspiring to work digitally: what additional forms of mediation (beyond the traditional act of curation) stand between a collection and some machine-readable representation, or simulacrum, of its contents, to be analysed ‘at scale’? Social historians have been trained in source criticism to interrogate this nexus between text and reality; but how does this translate to the digital realm, with its multiple stages and varieties of processing? For many researchers working with newly accessible big historical data, it has been enough to point to the size – or scale – of their source material.[11] However, this flies in the face of decades of practice in the social sciences around sampling and the question of representativeness. The facility of using tools such as Google’s N-Gram Viewer has dazzled too many scholars into overlooking the rather important question: but which books did Google digitise?[12]

The form of historicism in most need of updating relates to this issue of representativeness, which cannot simply be overcome by sheer volumes of data. Critical practices in relation to the assembly and exhibition of collections have not routinely been translated into the digital realm, leading to naïve claims being made on the basis of hastily assembled corpora.[13] Pioneering work has begun in relation to digitised newspaper collections (Beelen, 2023) that points towards more dependable forms of reasoning at scale and may prove a model for other fields. Introducing the notion of an ‘environmental scan’, Beelen et al argue that it is only by reconstructing a sense of the historical newspaper landscape (by painstaking work with contemporary reference sources) that one can credibly evaluate (one’s own, let alone others’) historical claims based on large digital collections. Without such source criticism at scale we cannot say what it is that our sources represent.
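A schematic sense of the comparison such an environmental scan involves can be given in a few lines: set the digitised collection against a contemporary reference source listing the titles known to have existed, and ask which parts of that landscape the digital sample covers. The file names, columns and categories below are hypothetical; the actual method in Beelen et al rests on painstaking reconstruction from period press directories.

```python
# A hedged sketch of the comparison underlying an 'environmental scan': setting a
# digitised newspaper collection against a contemporary press directory to ask what
# fraction of the known landscape the digital sample represents. File names, columns
# and categories are hypothetical.
import pandas as pd

directory = pd.read_csv("press_directory_1881.csv")   # all titles known to have existed
digitised = pd.read_csv("digitised_titles.csv")       # titles present in the collection

directory["digitised"] = directory["title"].isin(digitised["title"])

# Coverage broken down by a property of interest (here, declared political leaning):
coverage = directory.groupby("politics")["digitised"].mean().sort_values()
print(coverage)  # shows which parts of the landscape the sample over- or under-represents
```

Only with such a baseline can a claim made 'at scale' be weighed against what that scale actually contains.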

Scale models

https://dx.doi.org/10.15180/221805/003

Shifts in scale are a stock-in-trade for the allied disciplines of Science and Technology Studies and the History and Philosophy of Science (STS-HPS), acting as a useful heuristic for the scientific claims they have been concerned to understand, often based on some relation between micro- and macro-analyses. The classical locus of the laboratory and its controlled environment, away from the world outside, involves scientists in scalar modes of reasoning, whether implicitly or explicitly. Ubiquitous in the sciences, experiments, models and diagrams all involve some theory of scale on the part of their users. The study of metrology – of the literal practices of measuring and counting according to one scale or another – and how these have been indexed and calibrated over time, continues to be a fundamental issue in STS-HPS.[14] Scientific instruments, most visibly in relation to the observational practices dependent on the optics of microscopes and telescopes, are frequently scalar instruments for seeing. More recently, the problem of measuring global heating and climate change has been conceptualised in terms of the very possibility of making observations at the planetary scale.[15]

STS-HPS scholars have made rich use of case studies to represent bigger things, as well as often reflecting on the nature of the work done by case studies.[16] The small scale of the clinical case study was no barrier to wide generalisation, for example, in Sigmund Freud’s time. Despite the widespread understanding of sampling and statistics, certain cases took on an outsized medical and then cultural importance, to become what Monica Krause (2021) has called ‘model cases’. For the purposes of thinking about scale, the uses of case studies – and especially model cases – become pertinent when we think about the way that such cases stand in for other phenomena. Krause makes a useful distinction for us between the material object of research and the formal object of research, or epistemic target. The material research object is a concrete object, accessed through particular traces, or ‘data’, that are produced by specific tools and instruments. It stands in for the epistemic target of the study – what a given study aims to understand better, which is not usually available for direct observation.[17]

This practice will seem as familiar to researchers across many fields as it is unexamined. Some form of logic, according to which cases relate to broader knowledge claims, can be found everywhere from urban studies (in which the cases of ‘Chicago’ or ‘Los Angeles’ become the focus for different schools of thought, depending on their precepts) to biology (where frogs or fruit flies have played canonical roles). When the case in question is at the scale of the whole world – such as in climate science, and studies of its history – the problem of measurement can become acute: as Simon Schaffer once commented, where do you stick the thermometer? This problem explains the use of formal modelling – not to represent a thing, but as a model for a type of thing which can then be manipulated, the results of which ‘can then be compared to the world’ (Krause, 2021, p 27). It is this connection back to the real world of phenomena-to-be-explained that seems especially pertinent to digital and computational research in history. What role do datasets and the idea of scale play in mediating between historical sources and the reality beyond, which, in Krause’s terms, is the epistemic target? This is an issue we find addressed by theoretically sophisticated researchers who have wrestled with the question of ‘representativeness’ and the inevitable process of sampling that happens when using data to stand in for something else.[18] However, too often we see datasets treated uncritically as if simply equivalent to the target of study. For researchers in the digital humanities, ‘getting hold of the dataset’ can itself feel like the Holy Grail, which perhaps explains why so many words are often spilled in describing and interrogating the dataset, whose properties can end up becoming the rather insular target of the inquiry. Datasets that are well-curated and freely available are likely themselves to become model cases, as other researchers are invited to repeat aspects of the research. The increasing emphasis on reproducibility shows the influence of data science in setting new standards of best practice and is made possible by code-sharing platforms such as GitHub, now a ubiquitous part of research infrastructure (and privately owned by Microsoft since 2018).[19]

Historical scale

https://dx.doi.org/10.15180/221805/004

Historians have sought new metaphors to describe their accommodation to the panoptic possibilities of working across newly enlarged scales. Instead of the microscope or the telescope, we have the ‘macroscope’, hinting at a paradoxical combination of simultaneous depth and breadth (Graham, 2016). More recent work has highlighted the ways that big data can be leveraged to explore not only long-term trends, but also short, momentary episodes, and to suggest associated metrics of salience (Guldi, 2022). However, the use of quantitative methods in historical studies is not in itself new: working with large numbers has long been practised within a certain strand of historical work related (principally) to demography and economics, in which scale has been posited as a means of overcoming the cherry-picking of evidence and the problem of case studies altogether.[24] This ‘cliometric’ form of history reached its peak several decades ago but fell from favour (as popular history boomed instead) due to its remote, social-scientific idiom and lack of narrative and engagement.[20] Can a version of cliometrics bound to digital methods reconnect with the public? One form this has already taken is the use of history as a form of forecasting. In a strange inversion of teleological thinking, big data has been used to pursue forms of ‘retrojection’, in which the past itself is modelled to see how well the model ‘predicts’ the outcome we now know to have been true.[21] Such approaches may have a role where data and evidence are in fact lacking, but they do not appear to answer the call for history to become more relevant as articulated, for example, in The History Manifesto (Guldi and Armitage, 2014). A more promising use of history at scale may be in techniques such as data visualisation and the design of front-end interfaces, such as those being developed in heritage institutions and museums to encourage engagement with their collections.[22]
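The logic of such ‘retrojection’ can be stated schematically: fit a model on earlier observations, then check how well it ‘predicts’ later outcomes that are already known. The sketch below uses entirely invented data and a simple classifier; it illustrates only the shape of the procedure, not any historical claim.

```python
# A schematic sketch of 'retrojection': fit a model on earlier historical observations,
# then test how well it retrodicts outcomes we already know occurred. The variables
# and data are entirely invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # e.g. yearly prices, wages, harvest indices
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

train, test = slice(0, 150), slice(150, 200)   # earlier 'past' to fit, later 'past' held out
model = LogisticRegression().fit(X[train], y[train])

# How often does the model retrodict the outcomes we already know happened?
print("retrodictive accuracy:", model.score(X[test], y[test]))
```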

At the level of collections research, the use of big data (at scale) offers the prospect of discovering trends and patterns not observable even by expert curators. One refrain heard among data scientists is to ‘let the data speak’ without preconceptions. Notwithstanding the difficulty of accepting such precepts in the humanities (because the notion of ‘raw data’ is an oxymoron – Gitelman, 2013), we must ask what the nature of such trends and patterns might be, even in principle. The increasingly impressive track record of data science in the natural sciences is based on speculative attempts to detect meaningful (or useful) patterns that could not have been predicted from within disciplinary domains. This is hard to imagine in a discipline such as history; however, we can close our discussion with an example based on a type of source material for which ‘scale’ is an essential keyword combining several of its senses: namely, maps.

Cutting-edge work in historical geography has involved creating – on the one hand – new methods of working with these source materials and – on the other – wholly new categories of analysis. One example of such a procedure can be found in recent work using maps to generate new perspectives on the historical landscape (Hosseini, 2021; 2022). Thanks to the enormous investment in creating item-level metadata within its vast collection of historical Ordnance Survey maps, the National Library of Scotland (NLS) made it possible for researchers to create a machine-learning pipeline based on computer vision, to ‘see’ the historical landscape in entirely new ways. The NLS not only catalogued each of its hundreds of thousands of individual historical map sheets, it also digitally geo-referenced them (which situates each sheet in space). This has made it possible for members of the public to view the historical landscape in a smooth and well-ordered manner using the NLS’s online mapping platform, in which map sheets are imperceptibly stitched together in both space and time.[23] However, a further affordance of the underlying collection information was to make the map sheets machine-readable in ways that could not have been anticipated before the advent of the neural networks that allow computers, as it were, to ‘see’. After a long process of experimentation, a multidisciplinary team of researchers and curators developed a new typology for the historical landscape (different from anything in the existing literature) in relation – in this case – to industrial development. These (and other) features of research interest can consequently be detected by a neural network and then located ‘at scale’ across the entire collection of thousands of map sheets. This new data, the output of a machine learning model, can subsequently be added back into catalogue and collections systems as an enrichment of metadata, or stand as a dataset in its own right to be used in fresh combination with other datasets. The value of this procedure depends both on the scale of the map (meaning its resolution), insofar as the features must exist at a sufficient level of visual detail, and on the broad scale of coverage offered at the level of the (national) collection.[25]
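The shape of this procedure can be sketched schematically: slice a scanned, geo-referenced sheet into patches, classify each patch with a neural network, and keep the locations of patches predicted to contain the feature of interest. The sketch below is written in the spirit of this pipeline but does not reproduce the actual MapReader software or its API; the file name, patch size and binary ‘feature of interest’ label are assumptions, and in practice the classifier would first be fine-tuned on patches labelled by curators and researchers.

```python
# A schematic patch-classification sketch (in the spirit of the pipeline described
# above, not its actual implementation): tile a map sheet into patches, classify
# each, and record where the feature of interest is predicted to appear. File name,
# patch size and labels are assumptions for illustration.
import torch
from torchvision import transforms
from torchvision.models import resnet18
from PIL import Image

PATCH = 224
to_tensor = transforms.ToTensor()

model = resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # classes: background / feature
model.eval()  # in practice: load weights fine-tuned on expert-labelled patches

sheet = Image.open("os_sheet_001.png").convert("RGB")
hits = []
for top in range(0, sheet.height - PATCH + 1, PATCH):
    for left in range(0, sheet.width - PATCH + 1, PATCH):
        patch = sheet.crop((left, top, left + PATCH, top + PATCH))
        with torch.no_grad():
            logits = model(to_tensor(patch).unsqueeze(0))
        if logits.argmax(dim=1).item() == 1:
            hits.append((left, top))  # pixel coordinates; geo-referencing ties these to place

print(f"{len(hits)} patches flagged across the sheet")
```

It is the accumulation of such patch-level predictions across an entire national collection, rather than any single sheet, that produces the new data described above.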

This brief overview of one case suggests a model of how ML can be brought to bear productively on research with and on collections. The deployment of data science in the GLAM sector should not be restricted to the level of generic tools – such as for processing audience preferences, or algorithmic recommendation systems – which would be thin gruel indeed. Instead, research-led explorations of galleries, libraries, archives and museums which leverage the affordances of scale in ways that are both critical and well-judged point to much richer and more promising avenues of exploration. This flows from the way they combine innovation in tooling with the serendipity of combination and discovery on which computational data science, at its best, is based. GLAM institutions seeking to reap such benefits should put research at the heart of what they do, operating at whatever scale is most appropriate to the material in their possession, rather than at the scale being pushed by the purveyors of new techniques and technologies.


Footnotes

1. Sabina Leonelli has written widely on this issue; for a comprehensive overview, see her entry, ‘Scientific Research and Big Data’, in The Stanford Encyclopedia of Philosophy, edited by Edward N Zalta, Summer 2020 (Metaphysics Research Lab, Stanford University, 2020), https://plato.stanford.edu/archives/sum2020/entries/science-big-data/. Back to text
2. For suggestive examples, see https://www.turing.ac.uk/news Back to text
3. The controversy surrounding The History Manifesto turned, to an extent, on the equation of scale and significance; discussed further below. Back to text
4. In relation to the turn to micro-histories such as by Carlo Ginzburg, see for example, Jacques Revel, ed, Jeux d’échelles: la micro-analyse à l’expérience (Paris: Gallimard le Seuil, 1996). Back to text
6. See (Da, 2019) and the ensuing responses in Critical Inquiry and elsewhere. Back to text
7. See (Moretti, 2013), but others have attempted to differentiate between quantitative and digital approaches altogether, see (Underwood, 2017). The work of digitisation is always difficult, complex and expensive; nonetheless, literary texts tend to have presented fewer barriers to usability than other sources such as maps or newspapers, to take just two examples. Back to text
8. See the work of the Stanford Literary Lab, Ted Underwood, Peter de Bolla and the centrality and widespread availability, for example, of collections such as ECCO (Eighteenth Century Collections Online). Back to text
9. A recently published book stakes a claim for ‘Scale Studies’ as a sub-field of English, (Horton, 2021). Back to text
10. See (Ahnert et al, 2021) for an exploration of this approach. Back to text
11. Cf. (Bode, 2020) Back to text
12. See (Pechenick, 2015) Back to text
13. (‘Scientists pinpoint the year Britons were happiest’, 2019) Back to text
14. In specific relation to metrology and the shift from local to global, see Simon Schaffer, ‘Les cérémonies de la mesure’, Annales. Histoire, Sciences Sociales 70.2 (2015): 409–35. Back to text
15. See Lino Camprubí and Philipp Lehmann, ‘The Scales of Experience: Introduction to the Special Issue Experiencing the Global Environment’, Studies in History and Philosophy of Science Part A, 70 (2018): 1–5. For a discussion of the appropriate scale, in relation to both time and space, see Deborah R Coen and Fredrik Albritton Jonsson, ‘Between History and Earth System Science’, Isis 113, no. 2 (1 June 2022): 407–16) as well as Coen’s prize-winning monograph, Climate in Motion: Science, Empire, and the Problem of Scale (Chicago, IL: University of Chicago Press, 2020). Back to text
16. On the latter point see John Forrester’s classic discussion, which departs from the clinical case history, but goes much further (Forrester, 1996); on the former, Michel Callon’s study of scallops pushed ‘scaling down’ to new limits (Callon, 1986). Back to text
17. (Krause, 2021, pp 14–32) emphasis added. Back to text
18. See on this point, Long, 2021. Back to text
19. See The Turing Way, as an attempt to formalise such practices (https://zenodo.org/record/6909298) (accessed 10 September 2022). Back to text
20. This spirit continues in the work of the Seshat: Global History project, see (http://www.digitalhumanities.org/dhq/vol/10/4/000272/000272.html). Back to text
21. See this excellent, if alarming, overview: ‘History as a giant data set: how analysing the past could help save the future’ by Laura Spinney, The Guardian 12 November 2019 (https://www.theguardian.com/technology/2019/nov/12/history-as-a-giant-data-set-how-analysing-the-past-could-help-save-the-future) Back to text
22. See also the approach to scalar reading in history proposed by Clavert and Fickers, 2021. Back to text
23. See https://maps.nls.uk/geo/explore/side-by-side for an example of this functionality, created by Chris Fleet. Back to text
24. Such as in the work of the Cambridge Group for the History of Population and Social Structure, pioneered by the late Tony Wrigley and Peter Laslett (https://www.campop.geog.cam.ac.uk/). Back to text

References

Ahnert, R and Ahnert, S E, 2019, ‘Metadata, Surveillance and the Tudor State’, History Workshop Journal 87 (April): pp 27–51 Back to text
Ahnert, R, Ahnert, S E, Coleman, C E and Weingart, S, 2021, The Network Turn: Changing Perspectives in the Humanities (Cambridge: Cambridge University Press) Back to text
Armitage, D and Guldi, J, 2016, ‘Longing for the Longue Durée’, Isis 107 (2): pp 353–57 Back to text
Beelen, K, Lawrence, J, Wilson, D C S and Beavan, D, 2023, ‘Bias and Representativeness in Digitized Newspaper Collections: Introducing the Environmental Scan’, Digital Scholarship in the Humanities 38 (1): pp 1–22 https://doi.org/10.1093/llc/fqac037 Back to text
Bode, K, 2020, ‘Why You Can’t Model Away Bias’, Modern Language Quarterly 81(1): pp 95–124 Back to text
Callon, M, 1986, ‘Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St. Brieuc Bay’, in Law, J (ed), Power, Action and Belief: A New Sociology of Knowledge? (London: Routledge & Kegan Paul) Back to text
Camprubí, L and Lehmann, P, 2018, ‘The Scales of Experience: Introduction to the Special Issue Experiencing the Global Environment’, Studies in History and Philosophy of Science Part A, Experiencing the Global Environment, 70 (August): pp 1–5 Back to text
Clavert, F and Fickers, A, 2021, ‘On Pyramids, Prisms, and Scalable Reading’, Journal of Digital History 1.1 Back to text
Coen, D R, 2020, Climate in Motion: Science, Empire, and the Problem of Scale. (Chicago, IL: University of Chicago Press) Back to text
Coen, D R and Albritton Jonsson, F, 2022, ‘Between History and Earth System Science’, Isis 113 (2): pp 407–16 Back to text
Cohen, H F, 2016, ‘Editor’s Introduction’, Isis, June Back to text
Cox, A, 2021, ‘The Impact of AI, Machine Learning, Automation and Robotics on the Information Professions: A Report for CILIP’, Monograph, CILIP https://www.cilip.org.uk/page/researchreport Back to text
Dobson, J E, 2022, ‘Vector Hermeneutics: On the Interpretation of Vector Space Models of Text’, Digital Scholarship in the Humanities 37 (1): pp 81–93 Back to text
English, J F and Underwood, T, 2016, ‘Shifting Scales: Between Literature and Social Science’, Modern Language Quarterly 77 (3): pp 277–95 Back to text
Forrester, J, 1996, ‘If p, Then What? Thinking in Cases’, History of the Human Sciences 9 (3): pp 1–25 Back to text
Gitelman, L (ed), 2013, ‘Raw Data’ Is an Oxymoron (Cambridge MA: MIT Press) Back to text
Graham, S, 2016, Exploring Big Historical Data: The Historian’s Macroscope (London: Imperial College Press) Back to text
Guldi, J, 2022, ‘The Algorithm: Mapping Long-Term Trends and Short-Term Change at Multiple Scales of Time’, The American Historical Review 127 (2): pp 895–911 Back to text
Hacking, I, 1991, ‘How Should We Do the History of Statistics’, in Gordon, C, Miller, P and Burchell, G (eds), The Foucault Effect: Studies in Governmentality (Chicago: University of Chicago Press) Back to text
Haldane, J B S, 1927, ‘On Scales’, in Possible Worlds and Other Essays, 1–6 (London: Chatto & Windus) Back to text
Horton, Z K, 2021, The Cosmic Zoom: Scale, Knowledge, and Mediation (Chicago: The University of Chicago Press) Back to text
Hosseini, K, Wilson, D C S, Beelen, K and McDonough, K, 2022, ‘MapReader: A Computer Vision Pipeline for the Semantic Exploration of Maps at Scale’, in Proceedings of the 6th ACM SIGSPATIAL International Workshop on Geospatial Humanities, 8–19. GeoHumanities ’22, New York, NY, USA: Association for Computing Machinery, https://doi.org/10.1145/3557919.3565812 Back to text
Hosseini, K, Wilson, D C S, McDonough, K, van Strien, D and Vane, O, 2021, ‘Maps of a Nation? The Digitized Ordnance Survey for New Historical Research’, Journal of Victorian Culture 26 (2): pp 284–99 Back to text
Keller, E F, 2000, ‘Models Of and Models For: Theory and Practice in Contemporary Biology’, Philosophy of Science 67 (S3): S72–86 Back to text
Krause, M, 2021, Model Cases: On Canonical Research Objects and Sites (Chicago: University of Chicago Press) Back to text
Leonelli, S, 2020, ‘Scientific Research and Big Data’, in Zalta, E N (ed), The Stanford Encyclopedia of Philosophy, Summer 2020, Metaphysics Research Lab, Stanford University https://plato.stanford.edu/archives/sum2020/entries/science-big-data/ Back to text
Liu, A, 2008, Local Transcendence: Essays on Postmodern Historicism and the Database (Chicago: University of Chicago Press) Back to text
Long, H, 2021, The Values in Numbers: Reading Japanese Literature in a Global Information Age (New York: Columbia University Press) Back to text
Moretti, F, 2013, Distant Reading (London: Verso) Back to text
Morgan, M S and Boumans, M J, 2004, ‘The Secrets Hidden by Two-Dimensionality: Modelling the Economy as a Hydraulic System’, in de Chadarevian, S and Hopwood, N (eds), Models: The Third Dimension of Science (Stanford: Stanford University Press) Back to text
Revel, J (ed), 1996, Jeux d’échelles: la micro-analyse à l’expérience, Hautes études (Paris: Gallimard le Seuil) Back to text
Schaffer, S, 2015, ‘Les cérémonies de la mesure’, Annales. Histoire, Sciences Sociales 70 (02): pp 409–35 Back to text
Söderqvist, T, 2016, ‘The Muse(um) Is Political’, Isis 107 (2): pp 342–44 Back to text
Spinney, L, 2019, ‘History as a Giant Data Set: How Analysing the Past Could Help Save the Future’, The Guardian, 12 November, sec. Technology https://www.theguardian.com/technology/2019/nov/12/history-as-a-giant-data-set-how-analysing-the-past-could-help-save-the-future Back to text
The Independent, 2019, ‘Scientists Pinpoint the Year Britons Were Happiest’, 15 October Back to text
Underwood, T, 2017, ‘A Genealogy of Distant Reading’, Digital Humanities Quarterly 11.2 Back to text
Van Strien, D, Bell, M, McGregor, N R and Trizna, M, 2022, ‘An Introduction to AI for GLAM’, in Kinnaird, K M, Steinbach, P and Guhr, G (eds), Proceedings of the Second Teaching Machine Learning and Artificial Intelligence Workshop, 170: pp 20–24. Proceedings of Machine Learning Research. PMLR. https://proceedings.mlr.press/v170/strien22a.html Back to text
Wilson, D C S, 2018, ‘Babbage among the Insurers: Big 19th-Century Data and the Public Interest’, History of the Human Sciences 31 (5): pp 129–53 Back to text

Author

Daniel C S Wilson

Historian of Science, Technology and Data

Daniel C S Wilson is a historian of science and technology and a senior researcher at The Alan Turing Institute, where he works in the computational humanities on the history and politics of data and machines. He makes use of both paper and digital archives, such as large collections of 19th-century maps and texts; he spends his time collaborating daily with software engineers, librarians and curators, including at the Science Museum.

https://orcid.org/0000-0001-6886-775X

Imprint

Author:
Daniel C S Wilson
Published date:
17 January 2023
Cite as:
10.15180/221805
Title:
Working at scale: what do computational methods mean for research using cases, models and collections?
Published in:
Autumn 2022, Congruence Engine
Article DOI:
https://dx.doi.org/10.15180/221805