Event Review: AI Now 2019 Symposium

On October 2, 2019, the AI Now Institute hosted its fourth annual symposium. Titled “The Growing Pushback Against Harmful AI,” the symposium brought together lawyers, professors, community advocates, and organizers to discuss the ways that artificial intelligence has negatively affected their communities, their work, and their lives. The AI Now Institute is an interdisciplinary research institute based at New York University that focuses on the social implications of current and emerging AI technology. It brings together experts across fields, professions, and communities to identify and respond to the growing ubiquity of AI technology and the harmful effects it is proving to have.

Co-Founders and Co-Directors Kate Crawford and Meredith Whittaker introduced the symposium with a “Year in Review,” highlighting some of the major events involving AI in the past year, including San Francisco’s ban on facial recognition technology and Amazon abandoning its HQ2 in New York City. The symposium was then divided into four panels, which explored the use of AI technology by police and border patrol agents; the pushback by tenants in a Brooklyn apartment building who are fighting facial recognition technology; a class-action lawsuit against the state of Michigan for using an algorithm that falsely accused over 20,000 Michigan residents of unemployment fraud; and, lastly, the methods, successes, and goals of organizing tech workers across platforms to win gains in the workplace.

The first panel of the symposium, “AI and the Police State,” was chaired by Andrea Nill Sánchez, incoming Executive Director of AI Now. The panel featured Marisa Franco, Director and Co-Founder of Mijente; Ruha Benjamin, Professor of African American Studies at Princeton University; and Kristian Lum, Lead Statistician of the Human Rights Data Analysis Group. The panelists dove right into the ways that AI systems, technology, and information practices are used by border patrol agents and local police departments to target undocumented and marginalized people. On the same day as this panel, the New York Times published an article detailing how Donald Trump suggested border police “shoot migrants in the leg” if they threw rocks at border agents (Shear and Hirschfeld Davis), a chilling backdrop for the discussion.

Franco spoke to the fact that Immigration and Customs Enforcement (ICE) relies on local police and contracts with technology companies to meet its arrest and deportation goals. Amazon’s “Ring” and Palantir’s “FALCON Tipline” have been specifically exposed as aiding police departments and ICE in locating undocumented people for arrest and deportation. Franco directly pointed to Amazon and Palantir as targets of Mijente’s organizing against tech companies profiting from deportations (under the hashtag #NoTechForICE on social media).

Benjamin and Lum spoke to the use of AI and various algorithms to criminalize and target marginalized communities. Benjamin highlighted the specific threats that automated risk assessment technology poses to already vilified communities. Municipalities are increasingly turning to pre-trial risk assessment algorithms to determine a defendant’s risk of committing a future crime, a process deeply embedded with racial stereotypes and built on highly questionable, racially biased data. Because they draw on data from a deeply racist, sexist, and classist society, these algorithms perpetuate racist stereotypes and the criminalization of poverty. Benjamin powerfully argued that these algorithms aren’t actually “flawed”: they are working exactly as intended for police departments, legitimizing racially targeted policing by pointing to algorithms described as neutral and objective when they are anything but.

In her 2016 Personal Democracy Forum talk “Challenging the Algorithms of Oppression,” Safiya Noble makes clear that technology and algorithms reflect and reproduce the racism and prejudices of the society in which they are created. These algorithms maintain racist stereotypes precisely because of the perception that technology, data, and algorithms can be objective. The question becomes twofold: how do we challenge the racist society that produces the data these algorithms use, and how do we prevent algorithms from perpetuating racism in our virtual and physical lives?

Amid the sea of examples of the ways facial recognition technology is being used to target, criminalize, and further marginalize already vulnerable populations, the second panel focused on AI technology used to monitor tenants at the Atlantic Plaza Towers in Brownsville, Brooklyn. The panel, “Tenants Against Facial Recognition,” included two community activists from the Atlantic Plaza Towers Tenants Association, Tranae Moran and Fabian Rogers. Along with Mona Patel, an attorney from Brooklyn Legal Services, they spoke of the tenants association’s case against their landlord, who attempted to install facial recognition software in their building without informing the tenants or obtaining their consent. Their case highlights that there is no legislative precedent governing facial recognition technology in housing. It will be an important milestone in the fight against surveillance and attacks on privacy, and it speaks to the new ways people will have to fight back against invasions of their privacy and the collection of their data.

The session “Automating Judgement” was a conversation between Jennifer Lord, a lawyer from Michigan, and Kate Crawford. They discussed MiDAS, Michigan’s automated fraud-detection system, which upended over 20,000 people’s lives by falsely accusing them of unemployment fraud, sparking the question: when algorithms fail, who is responsible? Lord spoke to the dangers of outsourcing fraud detection, and any program that disburses social benefits, to machines and algorithms.

These three sessions highlighted that whether algorithms work as intended or fail at their task, they have the ability to ruin people’s lives. When these technologies do work, they pose serious threats to marginalized communities by drawing from data sets imbued with racist histories and stereotypes, and they act as unwanted tools of surveillance in those same communities. When algorithms don’t work as intended, they can act as the most brutal of bureaucrats, withholding necessary services from citizens and flagging them as criminals.

The final session, “Organizing Tech,” brought together organizers from the Awood Center and the New York Taxi Workers Alliance (Abdirahman Muse and Bhairavi Desai, respectively) and Veena Dubal, a lawyer focused on technological and social issues, in conversation with Meredith Whittaker. This panel highlighted the need for tech workers to connect with workers across sectors and class lines to make demands of employers. In response to the widespread struggles tech workers have been experiencing, groups such as the Tech Workers Coalition have emerged to connect various tech-related labor, social, and economic movements.

This symposium brought together key figures in ongoing struggles with AI. However, these issues are just the tip of the iceberg. As AI becomes more ubiquitous in our culture, and as the business model of tech companies continues to exploit both workers and the consumers of their products, the need to hold AI and tech companies accountable will only grow. Whittaker and Crawford concluded the symposium with calls to keep challenging the ways AI can be used to discriminate, exploit, and harm individuals and communities, and to do so by centering the voices and experiences of those most affected by these systems.

Works referenced 

AI Now Institute. “About.” Accessed October 2019. https://ainowinstitute.org/about.html

Felton, Ryan. 2016. “Michigan unemployment agency made 20,000 false fraud accusations – report.” The Guardian. Accessed October 2019. https://www.theguardian.com/us-news/2016/dec/18/michigan-unemployment-agency-fraud-accusations

Shear, Michael, and Julie Hirschfeld Davis. 2019. “Shoot Migrants’ Legs, Build Alligator Moat: Behind Trump’s Ideas for Border.” The New York Times. Accessed October 2019. https://www.nytimes.com/2019/10/01/us/politics/trump-border-wars.html

Noble, Safiya. 2016. “Challenging the Algorithms of Oppression.” Talk at Personal Democracy Forum (PDF) 2016. https://www.youtube.com/watch?v=iRVZozEEWlE

Observation: Visitors, artwork, and technology at the Museum of Modern Art

In 2013 the Guggenheim hosted a James Turrell retrospective that transformed the museum’s iconic rotunda into Aten Reign, a large-scale, site-specific work using light, changing colors, air and space, and the curves of the museum itself. Turrell turned the Guggenheim into a site for artful reflection for all who entered. Rather than an object to look at or a subject to contemplate, the experience of being in this transformed space was the work of art. “A lot of it is the idea of seeing yourself seeing, and how we perceive,” Turrell has said of his work, which lacks image, object, or “one place of focus” (Guggenheim).

For the duration of the exhibition, there was no art on the walls of the Guggenheim’s spiraled hallway. Visitors were encouraged to lie on the floor of the lobby and gaze at the ceiling, which, using light and color, had been transformed into overlapping ovals of bright fluorescent hues. It was ethereal, magical, meditative, sublime. In an attempt to preserve the sense of bliss and encourage quiet, unmediated reflection, no photography was allowed in the space.

Something I can say with complete certainty is that hearing a security guard shout “NO PHOTOGRAPHY” into an echoing rotunda every two minutes was not conducive to an ethereal, magical, meditative, or sublime experience. Something I learned during my visit to the Guggenheim that day was that visitors will do whatever they want. They will get the validation of their experience that they expect. They will share their experience no matter what.

That experience, six years ago, has stuck with me and prompted me to think about how the relationship between people and their smartphones has changed the experience of viewing art in a museum gallery. It has also, among other experiences, largely shaped my interest in museum studies, digital media, and tech theory.

Background

I used to always bring a journal into a gallery and take notes on the works I liked, the artists, themes, and books I should follow up with. Now I take photos of wall texts, of books I want to look into, and of other people taking photos in the galleries. Sometimes I feel wary and critical of my own increased use of technology in galleries, but I recognize that people, including myself, want to personalize, document, and share their experiences. In Finding Augusta, Cooley discusses Michel Foucault’s conception of “speaking the self,” noting that often “much of what we document of ourselves transpires at the nonconscious level of the proto-self, at the level of impulse” (Cooley, 2014). This interest in “speaking the self” extends into many, if not all, facets of our digitally connected world.

Prompted by this assignment to conduct an observation relating to information studies and our personal interests, I decided to do an observation at the Museum of Modern Art. My intention was to observe how people use technology, particularly smartphones, in a museum gallery setting.

Perspective, or, guiding questions for the assignment

General questions to guide my inquiry:

  1. How do people engage with art in a museum gallery setting?
  2. How do people engage with technology in a museum gallery setting (both their own devices and those provided by the museum)?
  3. How does technology, specifically the use of personal devices, mediate a viewer’s experience with art in a museum gallery setting?
  4. What do people do with their personal devices? Social media, digital scrapbooks, text messaging, etc.?
  5. How does the use of technology by others affect an individual’s experience in a museum gallery setting?

Specific questions for observations:

  1. How many people who walked through the gallery used a phone to take a photo of a work of art?
  2. How many people used their phone for non-art-related purposes, namely communication?
  3. What other phone use did people engage in?
  4. How many people used a camera to take a photo of a work of art or the gallery?
  5. What other technology was used (either personal devices or those provided by the museum)?

Observations and data

I sat between rooms 205 and 206 (marked on Fig 1) within an exhibition titled “1970s–Present.” Immediately upon entering this exhibition space on the second floor of the museum, visitors encounter large works by Keith Haring, Jenny Holzer, and Jean-Michel Basquiat. Further into the gallery I found a place to sit where I could observe visitors walking through the gallery in either direction. My focus was on visitors who walked into room 205 and then through rooms 205 and 206.

Fig 1. The area I observed, marked in pen.

I used a worksheet to tally how many people took photos of artwork in these gallery rooms with a smartphone, camera, or other device. Using the same tally system, I recorded how many people used their phone for communication (in some cases I couldn’t tell; in others I could see that people were texting, Snapchatting, emailing, on Instagram, etc.). When I could, I also recorded when people used their phone for purposes other than communication or photography. These recordings are only as accurate as what I could observe while casually sitting on a bench in the gallery (I did not walk up to people or move to try to see anyone’s screen).

Fig 2. Worksheet/observations

Breakdown of tally (from Fig 2):

Phones used to take pictures: 33
Phones used for communication: 22
Other phone use: 5
Cameras used to take photos: 5
Other tech seen*: 8

*AirPods, iPads, EarPods
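
For a concrete sense of this breakdown, here is a minimal sketch in Python (a hypothetical script of my own, not part of the observation protocol) that sums the tallies above and computes each category’s share of all recorded uses:

```python
# Tallies transcribed from the worksheet in Fig 2.
counts = {
    "phones used to take pictures": 33,
    "phones used for communication": 22,
    "other phone use": 5,
    "cameras used to take photos": 5,
    "other tech seen (AirPods, iPads, EarPods)": 8,
}

# A single visitor may appear in more than one category, so these are
# shares of recorded uses, not shares of visitors.
total = sum(counts.values())  # 73 recorded uses in roughly half an hour
for category, n in counts.items():
    print(f"{category}: {n} ({n / total:.0%})")
```

By this accounting, photography made up roughly 45% of all recorded uses, and communication about 30%.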

I realized while taking these observations that I had not included a section for audio guides. I added one to my worksheet, but studying audio guide use in the galleries could be an entirely separate set of observations and data, one that could easily incorporate analytics from the devices themselves.

A few things that stood out to me during these observations and while reflecting on the data:

  1. Within half an hour I had collected more data and notes than expected; I had intended to stay in the gallery for one hour but ended my observations at the half-hour mark.
  2. Nearly as many visitors used their phones to communicate (22) as to photograph the works (33), though photography remained the most common use.
  3. It was noticeable how many people idly held their phones in their hands. Rather than reaching for a phone from a pocket or bag to take a picture or send a text, the phone was constantly ready to be used.

This last phenomenon of visitors constantly holding their phones points directly to our class discussion of Steve Jobs saying of the iPhone that it “fits beautifully in the palm of your hand.” It also relates to Foucault’s conception of “speaking the self” mentioned earlier: we, as visitors in a cultural institution, see something we react to (emotionally, aesthetically, personally, etc.) and find ourselves compelled to document and/or share it. By focusing on cell phone use within the space of the gallery, I was in a unique position to notice a seemingly small detail that could have interesting implications for understanding how people connect to their technology in a museum setting.

Further research

Further research might include similar observations near higher-profile works of art. Had there been a place to sit and observe unobtrusively, I might have chosen the room with the Haring, Holzer, and Basquiat works. Immediately upon entering the 1970s–Present exhibition, visitors are confronted with large-scale, graphically engaging works by artists more recognizable than those in the rest of the gallery. I am confident that different observations would have been recorded in that space, given those particular works of art.

It would also be fruitful to understand what visitors do with their photos after they’re taken. If asked, “What are you going to do with that photo you just took?”, answers could range from “I want to post it to my Facebook page” to “I want to save this memory” to “I am sending it to my friend who loves this artist” to “I am working on a research paper.” I am interested in what the actual responses would be and how frequently each would occur.

Conclusion

The conversation about technology in art and museum spaces continues to unfold as our lives and relationships become more and more mediated by technology. Much thought is being put into how museums and cultural institutions should relate to visitors and their habits; many institutions have dramatically changed their photography policies in the past decade (Gilbert 2016), a direct result of the ubiquity of smartphones and visitors’ interest (and adamance) in documenting their experiences. As technology evolves in ways that reshape our habits and our attachment to the convenience and accessibility of social media, public institutions will need to grapple with how these developments affect their missions, rules, and expectations of visitors.

Works referenced

Cooley, Heidi Rae. Finding Augusta: Habits of Mobility and Governance in the Digital Era. Hanover: Dartmouth College Press, 2014.

Gilbert, Sophie. 2016. “Please Turn On Your Phone in the Museum.” The Atlantic. Accessed October 2019. https://www.theatlantic.com/magazine/archive/2016/10/please-turn-on-your-phone-in-the-museum/497525/ 

“Introduction to James Turrell.” The Guggenheim Museum. Accessed October 2019. https://www.guggenheim.org/video/introduction-to-james-turrell