Event Review: AI Now 2019 Symposium

On October 2nd, 2019, the AI Now Institute hosted its fourth annual symposium. Titled “The Growing Pushback Against Harmful AI,” the symposium brought together lawyers, professors, community advocates, and organizers to discuss the ways that artificial intelligence has negatively affected their communities, their work, and their lives. The AI Now Institute is an interdisciplinary research institute based at New York University that focuses on the social implications of current and emerging AI technology, bringing together experts across fields, professions, and communities to identify and respond to the growing ubiquity of AI and the harmful effects it is proving to have.

Co-Founders and Co-Directors Kate Crawford and Meredith Whittaker opened the symposium with a “Year in Review,” highlighting some of the major events involving AI in the past year, including San Francisco’s ban on facial recognition technology and Amazon’s abandonment of its planned HQ2 in New York City. The symposium was then divided into four panels, covering the use of AI technology by police and border patrol agents; the pushback by tenants of a Brooklyn apartment building who are fighting facial recognition technology; a class-action lawsuit against the state of Michigan for using an algorithm that falsely flagged over 20,000 Michigan residents for unemployment fraud; and, lastly, the methods, successes, and goals of organizing tech workers across platforms to win gains in the workplace.

The first panel of the symposium, “AI and the Police State,” was chaired by Andrea Nill Sánchez, incoming Executive Director of AI Now. The panel featured Marisa Franco, Director and Co-Founder of Mijente; Ruha Benjamin, Professor of African American Studies at Princeton University; and Kristian Lum, Lead Statistician of the Human Rights Data Analysis Group. The panelists dove right into the ways that AI systems, technology, and information practices are used by border patrol agents and local police departments to target undocumented and marginalized people. On the same day as this panel, the New York Times published an article detailing how Donald Trump suggested border police “shoot migrants in the leg” if they threw rocks at border agents (Shear and Hirschfeld Davis), a chilling backdrop for the discussion.

Franco spoke to the fact that Immigration and Customs Enforcement (ICE) relies on local police and contracts with technology companies to meet its arrest and deportation goals. Amazon’s Ring and Palantir’s FALCON Tipline have been specifically exposed as aiding police departments and ICE in locating undocumented people for arrest and deportation. Franco directly named Amazon and Palantir as targets of Mijente’s organizing against tech companies profiting from deportations, a campaign carried out on social media under the hashtag #NoTechForICE.

Benjamin and Lum spoke to the use of AI and various algorithms to criminalize and target marginalized communities. Benjamin highlighted the specific threats that automated risk assessment technology poses to already vilified communities. Municipalities are increasingly turning to pre-trial risk assessment algorithms to determine a defendant’s risk of committing a future crime, a process deeply embedded with racial stereotypes and built on highly questionable, racially biased data. Because these algorithms draw their data from a deeply racist, sexist, and classist society, they perpetuate racist stereotypes and the criminalization of poverty. Benjamin powerfully argued that these algorithms aren’t actually “flawed”: they are working exactly as intended for police departments, legitimizing racially targeted policing by pointing to systems described as neutral and objective when they are anything but.

As Safiya Noble outlined in her 2016 Personal Democracy Forum (PDF) talk, “Challenging the Algorithms of Oppression,” technology and algorithms reflect and reproduce the racism and prejudices of the society in which they are created. These algorithms maintain racist stereotypes precisely because technology, data, and algorithms are perceived as objective. The question becomes: how do we challenge the racist society that produces the data these algorithms use, and how do we prevent algorithms from perpetuating racism in our virtual and physical lives?

Amid the sea of examples of facial recognition technology being used to target, criminalize, and further marginalize already vulnerable populations, the second panel, “Tenants Against Facial Recognition,” focused on AI technology used to monitor tenants at the Atlantic Plaza Towers in Brownsville, Brooklyn. The panel included two community activists from the tenants association, Tranae Moran and Fabian Rogers, along with Mona Patel, an attorney from Brooklyn Legal Services. They spoke of the tenants association’s case against their landlord for attempting to install facial recognition software in the building without informing the tenants or obtaining their consent. Their case highlights the absence of any legislation or legal precedent governing facial recognition technology in housing. It will be an important milestone in the fight against surveillance and attacks on privacy, and it speaks to the new ways that people will have to fight back against invasions of their privacy and the collection of their data.

The session “Automating Judgement” was a conversation between Jennifer Lord, a lawyer from Michigan, and Kate Crawford. They discussed the Michigan Integrated Data Automated System (MiDAS), which upended over 20,000 people’s lives by falsely accusing them of unemployment fraud, sparking the question: when algorithms fail, who is responsible? Lord spoke to the dangers of outsourcing fraud detection, and indeed the disbursement of any social benefits, to machines and algorithms.

These three sessions highlighted that algorithms can ruin people’s lives whether they work as intended or fail at their task. When these technologies do work, they pose serious threats to marginalized communities, drawing from data sets imbued with racist histories and stereotypes and acting as unwanted tools of surveillance. When they don’t work as intended, they can act as the most brutal of bureaucrats, withholding necessary services from citizens and flagging them as criminals.

The final session, “Organizing Tech,” brought together organizers from the Awood Center and the New York Taxi Workers Alliance (Abdirahman Muse and Bhairavi Desai, respectively) and Veena Dubal, a lawyer focused on technological and social issues, in conversation with Meredith Whittaker. This panel highlighted the need for tech workers to connect with workers across sector and class to make demands of employers. In response to the widespread struggles tech workers have been experiencing, groups such as the Tech Workers Coalition have emerged to connect various tech-related labor, social, and economic movements.

This symposium brought together key figures in ongoing struggles with AI, yet these issues are just the tip of the iceberg. As AI becomes more ubiquitous in our culture, and as the business model of tech companies continues to exploit both workers and the consumers of their products, the need to hold AI and tech companies accountable will only grow. Whittaker and Crawford concluded the symposium with a call to keep challenging the ways that AI can be used to discriminate against, exploit, and harm individuals and communities, and to do so by centering the voices and experiences of those most affected by these systems.

Works referenced 

AI Now Institute. https://ainowinstitute.org/about.html

Felton, Ryan. 2016. “Michigan unemployment agency made 20,000 false fraud accusations – report.” The Guardian. Accessed October 2019. https://www.theguardian.com/us-news/2016/dec/18/michigan-unemployment-agency-fraud-accusations

Shear, Michael, and Julie Hirschfeld Davis. 2019. “Shoot Migrants’ Legs, Build Alligator Moat: Behind Trump’s Ideas for Border.” The New York Times. Accessed October 2019. https://www.nytimes.com/2019/10/01/us/politics/trump-border-wars.html

Noble, Safiya. 2016. “Challenging the Algorithms of Oppression.” Talk at Personal Democracy Forum 2016. https://www.youtube.com/watch?v=iRVZozEEWlE
