
On Saturday, October 28, 2017, the Goethe-Institut in New York City hosted the FACETS Un-Conference on Art & Artificial Intelligence. This year’s topics ranged from ethical machine learning to human-AI communication to data security. I attended the keynote panel discussions on (1) the future of AI research and practice and (2) current issues at the intersection of AI and data security. Two themes stood out from these discussions: a lack of regulation over those who profit from the work of technologists, and a general overconfidence in the information that intelligent machines produce. This brief report explores the issues raised in these panel discussions in relation to ongoing conversations in the field of information science.


What is FACETS?

FACETS is an “interdisciplinary creative coding, interactive art, and videogames un-conference” that focuses on underrepresented individuals in STEM fields and art (http://www.facets-con.com/). This year’s theme was Art and AI. I attended the keynote panel discussions, “A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?” and “AI and Data Security.” The first discussion was moderated by Kenric McDowell (head of the Art + Machine Intelligence group at Google Research) and joined by Ida Momennejad (postdoctoral fellow at Princeton University who uses AI to research human neuropsychology) and Madeleine Elish (cultural anthropology researcher at the Data & Society Research Institute who focuses on cultural responses to automation and AI). The second was moderated by Caroline Sinders (machine-learning designer and digital anthropologist at Wikimedia who focuses on online harassment) and joined by Kate Crawford (co-founder/director of the AI Now Research Institute and 2016 co-chair of the Obama White House symposium on the impacts of AI), Jer Thorp (former Data Artist in Residence at The New York Times and co-founder of The Office for Creative Research), and Chelsea Manning (former intelligence analyst in the U.S. Army who exposed human rights violations committed by the U.S. military in Iraq). The range of voices in these two discussions led to intriguing exchanges between diverse perspectives on the current and near-future states of AI.


A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?

This panel discussion started with a series of short presentations followed by an open discussion. The presentations covered AI-produced poetry and images, forms of human-like memory and agency in AI models, and potential moral crises of virtually autonomous AI (e.g. self-driving cars). The discussion that followed focused primarily on human-centrism in technology since the rise of agriculture, and on who owns (i.e. holds the copyright to) or is responsible for the actions of seemingly autonomous AI.

While the discussion on human-centrism in technology was interesting, it reached a fairly quick consensus: human-centrism may be useful for keeping technology in check, but what we define as human is often discriminatory within our own species (e.g. racism and sexism), so an alternative focus on the results of agency may be more beneficial to the work of ethical technologists. The discussion on the moral ownership of AI actions was more novel, since it centered on what may be called the “moral crumple zone” (a phrase used by Madeleine Elish during her presentation), which arises when the behavior of an artificial intelligence defies the predictions of its creators because they do not understand all of the possible outcomes of the machine-learning algorithms they have built. Current legislation firmly places the blame for AI actions on humans, i.e. either the creator or the user (through a contract, such as agreeing to terms and conditions). The moral crumple zone, however, is becoming more apparent as AI becomes increasingly prevalent in our contemporary military and consumer landscapes, and as life-and-death decisions are placed in the “hands” of technology that even its creators do not fully understand. The panelists expressed fear and uncertainty about what new legislation might form in the U.S. under the current presidential administration, given the pressures of business and the military’s awareness of the moral crumple zone.


AI and Data Security

The panel discussion on AI and Data Security remained solely a discussion – no presentations were needed to dive right into the pressing issues. After the panelists’ introductions, and prompted by the curiosity of the other panelists and the moderator, Chelsea Manning opened the discussion by running through her top concerns about uses of AI in the military. She illustrated how machine learning used in the field is largely inaccurate: 50% probabilities are usually considered enough, and even 20% can suffice in an actual war zone. Even more disturbing than these inaccuracies, however, is how predictions in feedback loops are often interpreted to fit a specific narrative. Intelligence analysts might focus on the predictions that would impress their commanders, or create stories or connections between predictions where none exist (as can the commanders themselves). In Manning’s experience, a careful breakdown of how an AI arrived at a potentially hostile target was often lost on commanders: explaining that a predicted target rests on a specific, and low, degree of certainty derived from potentially faulty training sets and field tests is likely to be ignored when another analyst can simply point out a target to the same commander.

This point was expanded on by other panelists, who expressed concern over how mass media outlets currently report on studies that utilize AI. One example was the recent virality of a study on “AI gaydar,” which was reported to determine whether someone is “gay or straight” using only facial recognition technology. The panelists were frustrated that so few articles stressed the experimental design of the study, which used a limited pool of images from a dating site and was constrained to the site’s inadequate dichotomy of human sexuality. Furthermore, because the images were tied to a particular time and culture, the algorithms relied on visual stereotypes – a problem in facial recognition software that could help preserve a status quo that would otherwise change, while also boldly assuming that physical traits can reliably expose interiority.
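Returning to Manning’s thresholds for a moment: the following minimal sketch in Python is my own illustration, not anything presented on the panel, and every number and name in it is invented. It runs a deliberately weak, hypothetical “hostility” classifier over a toy population and counts how many people are flagged – and how many of those are not hostile at all – at a 50% versus a 20% decision threshold.

```python
import random

random.seed(0)

# Toy population: 1,000 people, roughly 2% of whom are actually hostile.
population = [random.random() < 0.02 for _ in range(1000)]

def noisy_score(is_hostile: bool) -> float:
    """Hypothetical model score: hostile cases tend to score higher, but the
    two groups overlap heavily -- i.e. this is a weak model, by design."""
    base = 0.55 if is_hostile else 0.35
    return min(1.0, max(0.0, random.gauss(base, 0.15)))

scores = [noisy_score(h) for h in population]

for threshold in (0.5, 0.2):
    flagged = [(s, h) for s, h in zip(scores, population) if s >= threshold]
    false_alarms = sum(1 for _, h in flagged if not h)
    print(f"threshold={threshold}: {len(flagged)} flagged, "
          f"{false_alarms} of them not hostile")
```

Even in this toy setup, dropping the threshold from 0.5 to 0.2 sharply inflates the number of innocent people flagged; Manning’s point is that such outputs are then read as “targets” rather than as weak statistical guesses.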

It seems that there is an increasing drive to trust the unreliable conclusions of AI in our current economic and political climate, especially when so few regulations and standards are placed on AI research and on the businesses and military agencies that utilize this technology. Panelists also expressed concern over the proliferation of black box algorithms (i.e. algorithms whose inputs and outputs are considered without an understanding of their internal workings) in public agencies, since they are at odds with due process.
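The due-process tension is easy to dramatize in code. The sketch below is my own hypothetical example – the feature names, weights, and scenario are all invented – showing an agency that can query a vendor’s model but not inspect it: a score can be produced, but no reason can be offered to the person affected by it.

```python
import math

class BlackBoxModel:
    """Stand-in for a vendor risk model whose internals are contractually
    or technically off-limits to the agency that uses it."""

    def __init__(self):
        # Hidden parameters: in practice, neither the agency nor the
        # affected person gets to inspect these.
        self._weights = {"age": -0.03, "prior_flags": 0.9, "zip_risk": 0.4}

    def score(self, record: dict) -> float:
        s = sum(w * record.get(k, 0.0) for k, w in self._weights.items())
        return 1 / (1 + math.exp(-s))  # squash to a (0, 1) "risk" score

    def explain(self, record: dict) -> str:
        # This is the due-process problem: a decision exists, a reason does not.
        raise NotImplementedError("no explanation available for this score")

model = BlackBoxModel()
applicant = {"age": 40, "prior_flags": 1, "zip_risk": 2}
print("risk score:", round(model.score(applicant), 2))  # usable in a decision...

try:
    model.explain(applicant)
except NotImplementedError as err:  # ...but the score cannot be contested on its merits
    print("explanation:", err)
```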

The panelists explored some solutions to their concerns:

  1. Temperature checks during coding to gauge the impact of work, especially for those who work in media sharing (e.g. Facebook timeline engineers) and search engines.
  2. Culture/education change for AI creators so that they are less separated from the real-world consequences of their work.
  3. Paying closer attention to the quality of training sets used in machine learning, and putting a greater emphasis on communicating the quality of these training sets when sharing results (a minimal sketch of such an audit follows this list).
  4. More regulation (through legislation) on AI research in private and public sectors.
  5. More restrictions on what data can be collected and stored by businesses, and greater accountability for the security of that data (note: the Equifax data breach).
  6. Promotion of mass concern about the use of AI, especially in business – with regard to personal data – and in the military.
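On the third point, here is a minimal sketch of what a training-set audit might look like in practice. This is my own illustration, and the fields and data are hypothetical: before results are shared, the set’s size, label balance, source mix, and duplicate count are summarized so they can be published alongside the model’s claims.

```python
from collections import Counter

def audit_training_set(examples: list[dict]) -> dict:
    """Minimal training-set audit: size, label balance, source mix, and
    duplicate count -- a summary that can be published alongside results."""
    labels = Counter(ex["label"] for ex in examples)
    sources = Counter(ex["source"] for ex in examples)
    duplicates = len(examples) - len({ex["text"] for ex in examples})
    return {
        "size": len(examples),
        "label_balance": {k: v / len(examples) for k, v in labels.items()},
        "source_mix": dict(sources),
        "duplicate_texts": duplicates,
    }

# Hypothetical data: every example comes from a single site with skewed
# labels -- exactly the kind of limitation the panelists wanted reported.
data = [
    {"text": "profile A", "label": "positive", "source": "dating-site"},
    {"text": "profile B", "label": "positive", "source": "dating-site"},
    {"text": "profile C", "label": "negative", "source": "dating-site"},
    {"text": "profile A", "label": "positive", "source": "dating-site"},
]
print(audit_training_set(data))
```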


What’s Next?

The FACETS Un-Conference on Art & Artificial Intelligence exposed many dire issues in the use and creation of contemporary artificial intelligence. However, all of the potential solutions the panelists offered during the discussion periods of both panels were quite classic. The critical examination of information as it is collected, reconfigured, and molded into new information should not be taken for granted in our current era of machine-learning technology. While the panelists shared how novel these situations are because of the incredible power of the new technologies they have worked with, their solutions (such as paying close attention to training set data and expressing results clearly and accurately) are typical of traditional information science professions. There exists a divide between some technologists and those in the humanities and traditional information professions that allows creators and users of AI to skip the critical steps necessary to ensure safe, secure, and accurate methods of archiving, producing, and sharing information. The panelists at FACETS did an exemplary job of exposing and exploring this divide, so I recommend that readers of this post who are interested in the intersection of technology, information, and art check out the next FACETS event when it is announced. What I learned from this event makes the mission of Pratt’s School of Information seem even more urgent and timely: we need more information professionals who can critically examine how new technologies change our information landscape in order to better prepare us for the challenges that are already arriving.


Works Cited

FACETS. http://www.facets-con.com/. Accessed 28 Oct. 2017.

McDowell, Kenric, et al. “A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?” FACETS Un-Conference on Art & Artificial Intelligence, 28 Oct. 2017, Goethe-Institut, New York, NY. Keynote Panel Discussion.

Sinders, Caroline, et al. “AI and Data Security.” FACETS Un-Conference on Art & Artificial Intelligence, 28 Oct. 2017, Goethe-Institut, New York, NY. Keynote Panel Discussion.
