Symposium Review: “The Uncomfortable Archive”


On Friday, October 16, I attended a New York Archives Week symposium entitled “The Uncomfortable Archive” at the Center for Jewish History on West 16th Street. The symposium, co-sponsored by the CJH and the MetLife Foundation, was open to the general public and aimed to bring together archivists, librarians, museum professionals, scholars, and researchers around the subject of difficult and “dangerous” information in the digital age. Of particular interest to me was the early afternoon program entitled “Uncomfortable Powers: Archiving Dangerous Knowledge,” which promised talks on subjects ranging from cloistered Soviet-era archives to presidential records to WikiLeaks.

Omission and Obfuscation in the Private Soviet Archive

Katherine Tsan presented the first talk, “Omission and Obfuscation in the Private Soviet Archive.” It was structured around her research into the coded messaging that survives from that highly censored historical epoch. Tsan outlined the difficulty facing the contemporary archivists responsible for interpreting these incomplete records, which were obfuscated in order to circumvent the draconian provisions of Soviet-era oversight. Archives remained under state control until 1991, meaning abbreviations, incomplete names, and code words were the norm in information files.

Tsan discussed two concerns that arise when focusing on Soviet-era projects. She highlighted the ethical conundrum involved in archiving writings and information that were purposefully self-censored. Tsan also discussed the dilemma posed by Putin’s present-day deep freeze of national archives, which show strong evidence of private citizens having blotted out images and cultural memory. Tsan asked whether historical preservation should probe beyond these intentions or approach them from an ostensibly globalist, progressivist slant. Putin’s unwillingness to fund archival activities is in line with Soviet effacement, as indicated by the complete absence of KGB archives and the concealment of Russian presidential archives.

Tsan’s talk echoed concepts of power and the archive that we read in Schwartz and Cook’s article “Archives, Records, and Power: The Making of Modern Memory.” They write: “The point is for archivists to (re)search thoroughly for the missing voices, for the complexity of the human or organizational functional activities under study during appraisal, description, or outreach activities, so that archives can acquire and reflect multiple voices, and not, by default, only the voices of the powerful.”1 The near-totalitarian aspects of Soviet rule should be examined in the archival renegotiation of history. The key challenge, however, is this: how can archivists locate missing voices from a historical period in which those voices were silenced and redacted?

Tsan’s talk also recalled Drabinski’s article “Queering the Catalog: Queer Theory and the Politics of Correction.” Drabinski notes that “[q]ueer theory also found roots in a postmodernism that challenged the idea that truth could be final.”2 Is a more thoroughly accurate and truthful picture of Soviet Russia possible, given the degree of suppression and censorship prevalent in that era? Or is the fact that so much of Soviet history was censored the truest depiction of its archival history? Would further excavation create a muddled history? These are intriguing questions posed by Tsan’s presentation.

Watergate, Covfefe, and Presidential Records

Katherine M. Wisser followed with her presentation, “Watergate, Covfefe, and Presidential Records.” Wisser, an Associate Professor and Co-Director of the Archives/History Dual Degree Program at Simmons College in Boston, gave an entertaining talk that contemplated the implications of presidential records. Presidents Nixon and Trump were Wisser’s primary examples as she grappled with the debate over whether presidential records constitute the private personal property of the individuals in office.

The Presidential Records Act of 1978 served as Wisser’s primary point of orientation. She chronicled the various ways in which the executive branch has handled this Congressional decision, which mandates the preservation of presidential and vice presidential records and establishes public ownership of those records. Executive orders issued since the Act’s inception have alternately limited and broadened the scope of the PRA.

Wisser was quick to point out the Trump administration’s valuing of secrecy over transparency. She highlighted this by discussing Trump’s proclivity for tearing papers to shreds, which has resulted in government officials taping the documents back together to avoid egregious violations of the PRA.

SID Today and SID Tomorrow: Releasing an Archive of Leaked Government Documents

The final talk was given by Talya Cooper, Digital Archivist at The Intercept. The Intercept is home to the Snowden archive, which includes SIDtoday, the internal newsletter of the NSA’s Signals Intelligence Directorate (SID).

According to The Intercept’s website: “SIDtoday is the internal newsletter for the NSA’s most important division, the Signals Intelligence Directorate. After editorial review, The Intercept is releasing nine years’ worth of newsletters in batches, starting with 2003. The agency’s spies explain a surprising amount about what they were doing, how they were doing it, and why.”3 In August 2018 alone, The Intercept published 328 separate documents from a source inside the NSA. These documents covered a range of topics; one author, for instance, wrote about “how corporate the agency had become and rallied other frustrated spies to his cause; about the NSA’s environmentally-driven spying; and about some of the virtual private networks the agency cracked into, and why. Other highlights from this release, which covers the first half of 2006, touch on Iranian influence in Iraq, the attitudes of NSA staff toward the countries where they are stationed, and much more.”4

Cooper discussed the labor involved in redacting elements from these documents when they are sent to the NSA for review. Cooper also talked about how organizations like The Intercept work to counteract what she described as “surveillant anxiety,” in which no amount of data is ever seen as offering a complete picture of governmental activity. She concluded by stating that this anxiety is something that cannot be quelled, a dispiriting endnote that also served as a rallying cry.


Sources:

  1. Joan M. Schwartz and Terry Cook, “Archives, Records, and Power: The Making of Modern Memory,” Archival Science 2, no. 1–2 (2002): 4.
  2. Emily Drabinski, “Queering the Catalog: Queer Theory and the Politics of Correction,” The Library Quarterly: Information, Community, Policy 83, no. 2 (2013): 94-111. doi:10.1086/669547.
  3. https://theintercept.com/staff/talyacooper/
  4. https://theintercept.com/snowden-sidtoday/

FACETS Un-Conference on Art & Artificial Intelligence


On Saturday, October 28, 2017, the Goethe-Institut in New York City hosted the FACETS Un-Conference on Art & Artificial Intelligence. This year’s topics ranged from ethical machine learning to human-AI communication to data security. I attended the keynote panel discussions on (1) the future of AI research and practice and (2) current issues at the intersection of AI and data security. What I took away from these discussions is that there is little regulation of those who profit from the work of technologists, and a general overconfidence in the information that intelligent machines produce. This brief report explores the issues raised in these panel discussions in relation to ongoing conversations in the field of information science.


What is FACETS?

FACETS is an “interdisciplinary creative coding, interactive art, and videogames un-conference” that focuses on individuals underrepresented in STEM fields and art (http://www.facets-con.com/). This year’s theme was Art and AI. I attended the keynote panel discussions, “A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?” and “AI and Data Security.” The first of these discussions was moderated by Kenric McDowell (head of the Art + Machine Intelligence group at Google Research) and joined by Ida Momennejad (a postdoctoral fellow at Princeton University who uses AI to research human neuropsychology) and Madeleine Elish (a cultural anthropology researcher at the Data and Society Institute who focuses on cultural responses to automation and AI). The second discussion was moderated by Caroline Sinders (a machine-learning designer and digital anthropologist at Wikimedia who focuses on online harassment) and joined by Kate Crawford (co-founder/director of the AI Now Research Institute and 2016 co-chair of the Obama White House symposium on the impacts of AI), Jer Thorp (former Data Artist in Residence at The New York Times and co-founder of The Office for Creative Research), and Chelsea Manning (a former intelligence analyst in the U.S. Army who exposed human rights violations committed by the U.S. military in Iraq). The range of voices in these two discussions led to intriguing exchanges among diverse perspectives on the current and near-future states of AI.


A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?

This panel discussion started with a series of short presentations followed by an open discussion. The presentations covered AI-produced poetry and images, forms of human-like memory and agency in AI models, and the potential moral crises of virtually autonomous AI (e.g., self-driving cars). The discussion that followed focused primarily on the concept of human-centrism in technology since the rise of agriculture, and on who owns (i.e., holds copyright over) the output of seemingly autonomous AI or is responsible for its actions.

While the discussion on human-centrism in technology was interesting, it reached a fairly quick consensus: human-centrism may be useful for keeping technology in check, but what we define as human is often discriminatory within our own species (e.g., racism and sexism), so an alternative focus on the results of agency may be more beneficial to the work of ethical technologists. The discussion on the moral ownership of AI actions was a bit more novel, since it centered on what may be called the “moral crumple zone” (a phrase mentioned by Madeleine Elish during her presentation), which exists when the behavior of an artificial intelligence defies the predictions of its creators because they do not understand all of the possible outcomes of the machine-learning algorithms they have created. Current legislation firmly places the blame for AI actions on humans, i.e., either the creator or the user (through a contract, such as agreeing to terms and conditions). The moral crumple zone, however, is becoming more apparent as AI grows increasingly prevalent in our contemporary military and consumer landscapes, and as life-and-death decisions are placed in the “hands” of technologies that even their creators do not fully understand. The panelists expressed some fear and uncertainty about what new legislation could form in the U.S. under the current presidential administration, given the pressures of business and the military’s awareness of the moral crumple zone.


AI and Data Security

The panel discussion on AI and Data Security remained solely a discussion – no presentations were needed to dive right into the pressing issues. After the panelists’ introductions, and prompted by the curiosity of the moderator and her fellow panelists, Chelsea Manning started off the discussion by running through her top concerns about uses of AI in the military. She illustrated how machine learning used in the field is largely inaccurate: predictions with 50% probability are usually treated as sufficient, and even 20% is considered enough in an actual war zone. Even more disturbing than these inaccuracies, however, is how predictions in feedback loops are often interpreted to fit a specific narrative. Intelligence analysts might focus on specific predictions that would impress their commanders, or create stories or connections between predictions where none exist (as can the commanders themselves). In Manning’s experience, the breakdown of a prediction of a potentially hostile target was often misunderstood by commanders: an explanation that an AI-predicted target rests on a specific, and low, degree of certainty derived from potentially faulty training sets and field tests is likely to be ignored when another analyst can simply point out a target to the same commander.

This point was expanded on by other panelists, who expressed concern over how mass media outlets currently report on studies that utilize AI. One example was the recent virality of a study on “AI gaydar,” which was reported to determine whether someone is “gay or straight” using only facial recognition technology. The panelists were frustrated by how many articles failed to stress the experimental design of the study, which used a limited pool of images from a dating site and was constricted to the site’s inadequate dichotomy of human sexuality. Furthermore, because the images reflected a particular moment in time, the algorithms came to rely on visual stereotypes – a problem in facial recognition software that could help preserve a status quo that would otherwise change, while also boldly assuming that physical traits can reliably expose interiority.
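
Manning’s point about low confidence thresholds is, at bottom, an argument about base rates: when genuine threats are rare, flagging everything a model scores above 20% yields mostly false alarms. The sketch below illustrates that arithmetic; the population size, base rate, and toy scoring function are all invented for illustration and are not drawn from the panel.

```python
# Hypothetical illustration of acting on low-probability predictions.
# All numbers and the scoring function are invented for this sketch.
import random

random.seed(0)

N = 10_000        # people under surveillance in this toy scenario
BASE_RATE = 0.01  # assume only 1% are actually hostile
THRESHOLD = 0.20  # the low bar described as "enough" in a war zone

def model_score(is_hostile: bool) -> float:
    # A crude stand-in for a noisy classifier: hostile cases score higher
    # on average, but the two distributions overlap heavily.
    return random.betavariate(2, 5) if is_hostile else random.betavariate(1, 6)

population = [random.random() < BASE_RATE for _ in range(N)]
flagged = [h for h in population if model_score(h) >= THRESHOLD]

true_positives = sum(flagged)
print(f"flagged as targets: {len(flagged)}")
print(f"actually hostile:   {true_positives}")
print(f"precision at {THRESHOLD:.0%} threshold: {true_positives / len(flagged):.1%}")
```

Under these toy assumptions, only a few percent of the flagged “targets” are genuinely hostile, which is precisely the dynamic Manning described commanders preferring to ignore.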

It seems that there is an increasing drive to trust unreliable conclusions of AI in our current economic and political climate, especially when so few regulations and standards are placed on AI research and on the businesses and military agencies that utilize this technology. Panelists also expressed concern over the proliferation of black box algorithms (i.e., algorithms whose inputs and outputs are considered without an understanding of their internal workings) in public agencies, since they are at odds with due process.

The panelists explored some solutions to their concerns:

  1. Temperature checks during coding to gauge the impact of work, especially for those who work in media sharing (e.g., Facebook timeline engineers) and search engines.
  2. Culture/education change for AI creators so that they are less separated from the real-world consequences of their work.
  3. Paying closer attention to the quality of training sets used in machine learning, and putting a greater emphasis on communicating the quality of these training sets when sharing results (a small sketch of what this could look like follows this list).
  4. More regulation (through legislation) of AI research in the private and public sectors.
  5. More restrictions on what data can be collected and stored by businesses, and holding businesses accountable for that data’s security (note: the Equifax data breach).
  6. Promoting mass concern about the use of AI, especially in business – with regard to personal data – and in the military.
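
On point 3, here is a minimal sketch of what communicating training-set quality alongside results could look like; the class name, fields, and imbalance threshold are my own invention, not something proposed at the panel.

```python
# A toy "dataset report" meant to travel with any published model results.
# Field names and the 90% imbalance threshold are invented for illustration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class DatasetReport:
    name: str
    source: str   # provenance: where the examples came from
    labels: list  # one label per training example

    def summary(self) -> str:
        counts = Counter(self.labels)
        total = len(self.labels)
        lines = [f"Dataset: {self.name} (source: {self.source}, n={total})"]
        for label, count in counts.most_common():
            lines.append(f"  {label}: {count} ({count / total:.0%})")
        if counts and max(counts.values()) / total > 0.9:
            # Severe class imbalance is one of the quality issues raised above.
            lines.append("  WARNING: over 90% of examples share a single label")
        return "\n".join(lines)

# Example: a skewed training set whose limitations should be disclosed.
report = DatasetReport(
    name="toy-labels",
    source="single-site scrape",  # echoes the single-dating-site critique above
    labels=["negative"] * 950 + ["positive"] * 50,
)
print(report.summary())
```

Even this much disclosure (size, provenance, label balance) would surface the kinds of limitations that the panelists felt coverage of the “AI gaydar” study glossed over.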


What’s Next?

The FACETS Un-Conference on Art & Artificial Intelligence exposed many dire issues in the use and creation of contemporary artificial intelligence. However, the potential solutions that the panelists offered during the discussion periods of both panels were quite classic. The critical examination of information as it is collected, reconfigured, and then molded into new information should not be taken for granted in our current era of machine-learning technology. While all of the panelists shared how novel these situations are because of the incredible power of the new technologies they have worked with, their solutions (such as paying close attention to training set data and expressing results clearly and accurately) are very familiar to those in traditional information science professions. There exists a divide between some technologists and those in the humanities and traditional information professions that allows creators and users of AI to skip the critical steps necessary to ensure safe, secure, and accurate methods of archiving, producing, and sharing information. The panelists at FACETS did an exemplary job of exposing and exploring this divide, and so I recommend that readers of this post who are interested in the intersection of technology, information, and art check out the next FACETS event when it is announced. What I learned from this event makes the mission of Pratt’s School of Information seem even more urgent and timely: we need more information professionals who can critically examine how new technologies change our information landscape, in order to better prepare us for the challenges that are already arriving.


Works Cited

FACETS. http://www.facets-con.com/. Accessed 28 Oct. 2017.

McDowell, Kenric, et al. “A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?” FACETS Un-Conference on Art & Artificial Intelligence, 28 Oct. 2017, Goethe-Institut, New York, NY. Keynote Panel Discussion.

Sinders, Caroline, et al. “AI and Data Security.” FACETS Un-Conference on Art & Artificial Intelligence, 28 Oct. 2017, Goethe-Institut, New York, NY. Keynote Panel Discussion.