On February 10, 2016, the New York Academy of Medicine, in conjunction with NYCDH Week, held a workshop entitled "Digital Humanities: Visualizing Data," co-hosted by Miriam Posner and Heidi Knoblauch. The attendees were largely scholars and professionals who, through some aspect of their work, have a dataset, collection, or other resource that they felt would benefit from the use of DH visualization tools. The workshop provided a general overview of the field, its methods, and its roles and responsibilities in the larger academic realm, and focused largely on familiarizing attendees with large datasets and creating exploratory visualizations from them.
The event began with an overview of the audience: who we were, our fields of study, and why we were interested in DH methods for visualizing data. I think it's important to point out at the beginning of this event review that the audience for this workshop was NOT digital humanists, nor was that the target audience; it was rather professionals and scholars in traditional humanities, libraries, and cultural institutions who see DH methods as something that can benefit their scholarship, collections, or institutions. There was a good mix of people: librarians, scholars, graduate students, designers, museum professionals, and academics. Most of them were tangentially aware of "The Digital Humanities" and seemed eager to learn how the tools and methods of DH can apply to their work.
Miriam Posner, coordinator of the Digital Humanities program at UCLA, began the afternoon with a question that is ubiquitous in introductory DH workshops: “What is Digital Humanities?”, and the related “What do Digital Humanists do?” And the short answer that arose out of both group discussion and Posner and Knoblauch’s guidance was that DH is asking humanistic questions about the digital, as well as implementing digital methods for the humanities.
There is an interesting duality of practice in this definition. On the one hand, the humanities are the object that is acted upon by digital methods, which is perhaps the most pervasive one-liner about DH, but on the other hand, DH can also include humanities methods applied to digital technologies and resources. I feel like this creates a fuller picture of the potential and scope of DH, but (as I have noted before) runs the risk of becoming too broad, when anything can become DH if it is framed as a humanities question.
Having defined DH thus, the discussion moved into figuring out how DH manifests itself. Posner asked for examples of DH projects, and we began with an examination of the Histories of the National Mall project. It is easy to see how this type of work makes disparate collections of historical, geographic, and artistic materials easier to access, and how curating and displaying these items together creates a broader narrative about the subject at hand. But, Posner asked, how does it get made? What choices are made along the way, and how are those choices and methods reflected in the final project?
Posner refers to the process of answering these questions as "unpacking the black box," or reverse engineering a DH project. There are, she says, three major things to consider: the sources, the process, and the presentation. I won't go into the details of this process here, as there are already reviews of Posner's full talk on this subject available.
The thing I took away from this discussion is the importance of thinking about choices. What choices do the authors of the project make, and how do those choices highlight or obfuscate elements of the source material, or even of the subject area at large? An example that Posner returns to again and again is that of materiality. When physical objects are digitized, how does that separate the information from the context in which it was originally understood? What emotions and experiences are removed or transformed by turning paper into a digital form?
Having thus discussed the What and How of DH projects, Heidi Knoblauch, Digital Projects Coordinator for the Experimental Humanities Initiative at Bard College, stepped in to discuss more broadly DH's role and responsibility in the scholarly and public realms. She says that DH is at a juncture: no longer just about digitizing, it is becoming, and should be, about advancing the public good through its sources, tools, and ideas. Giving a brief overview of the history of the field, she says that "DH 1.0" used technology to answer humanities questions that had never been answerable before: looking at larger bodies of data, distant reading, and so on. "DH 2.0" was about decentering knowledge and authority. Now, she argues, we are moving beyond these easily defined waves of iteration; DH is evolving, and we have the potential to make an impact on communities outside the academy. A reframing, rather than a crossroads. Knoblauch emphasizes that DH has an obligation to advance the social good, by remaining grounded in the humanistic qualities of our history and by providing a counter to corporate uses of digital technologies.
Following this was the actual hands-on portion of the workshop. We worked with Palladio, a "web-based platform for the visualization of complex, multi-dimensional data," and went through, individually, a tutorial using photo collection data from the Charles W. Cushman Photograph Collection. We talked about what is and isn't present in the metadata, and how different methods of visualization can highlight different aspects of what is present in the raw data. I think the process of moving, collectively and then individually, from raw .csv files, to the site the data was scraped from (with more traditional item-level cataloging and metadata), through to seeing how tools such as Palladio and Google Fusion Tables can represent collection-level themes and ideas, was an effective way of showing how much can be drawn out of the same raw dataset.
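To make the starting point of that process concrete, here is a minimal sketch of the kind of exploratory first pass over a raw metadata export that the tutorial built on, written in Python with pandas rather than in Palladio itself. The filename and the "City" column are hypothetical placeholders of my own, not the actual Cushman collection schema.

import pandas as pd

# Load the raw metadata export (the path is a placeholder for whatever
# .csv file the collection provides).
df = pd.read_csv("cushman_photos.csv")

# See which metadata fields are present, and how completely each is filled in.
print(df.columns.tolist())
print(df.isna().mean().sort_values())  # share of missing values per field

# A simple collection-level view: counts per hypothetical "City" field,
# the kind of aggregate a tool like Palladio or Fusion Tables turns into a map or chart.
print(df["City"].value_counts().head(10))

Even this rough summary surfaces the gaps and patterns in the metadata that the workshop discussion kept returning to, before any visualization tool is opened.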
Of note here, in my opinion, were some of the pedagogical choices that Posner and Knoblauch made in leading this hands-on portion of the workshop, choices that reflected the principles of Digital Humanities as a field and balanced the wide spectrum of technological skills and backgrounds in the room.
The tutorial was worked through individually, but concurrently, by everyone in the workshop, and participants could flag whether they needed assistance via a set of multicolored post-it notes (green to signal "all's well," red for "I'm stuck," and blue for "I'm done"). On the surface it may seem trivial to focus on this aspect of the workshop, but I can't have been the only person who felt genuinely relieved that we were not going to be split into groups and forced to work through the tutorial that way. Posner was also careful to note that getting stuck with the tools or technology at any point in the process was not a reflection of any personal shortcoming, and that everyone has different skills in different areas. Making this kind of inclusive, supportive, and welcoming statement makes all the difference in creating genuine engagement with the activities at hand, and, I feel, embodies the social good, accessibility, and care that Posner and Knoblauch both highlighted as important aspects of DH as a field earlier in the workshop.
As for the tools themselves, Palladio and Google Fusion Tables are fairly straightforward data visualization tools, well known to DH scholars. Both have shortcomings, especially when it comes to creating very large or very specific visualizations and making them accessible to the public after their creation. However, the intention of this workshop was largely to introduce an audience of newcomers to the possibilities of data visualization for transforming how they view their collections, and to plant a seed of understanding about what DH tools and methods can do for data sometimes trapped in traditional cultural heritage and research dissemination models. In that goal, I feel it was a highly successful afternoon.