{"id":6875,"date":"2019-11-06T15:29:36","date_gmt":"2019-11-06T20:29:36","guid":{"rendered":"http:\/\/studentwork.prattsi.org\/foundations\/?p=6875"},"modified":"2019-11-06T15:29:38","modified_gmt":"2019-11-06T20:29:38","slug":"event-review-ai-now-2019-symposium","status":"publish","type":"post","link":"https:\/\/studentwork.prattsi.org\/foundations\/2019\/11\/06\/event-review-ai-now-2019-symposium\/","title":{"rendered":"Event Review: AI Now 2019 Symposium"},"content":{"rendered":"\n<p>On October 2nd, 2019, the AI Now Institute hosted its 4th annual symposium. Titled \u201cThe Growing Pushback Against Harmful AI,\u201d the symposium brought together lawyers, professors, community advocates, and organizers to discuss the ways that Artificial Intelligence has negatively affected their communities, their work, and their lives. The AI Now Institute is an interdisciplinary research institute based at New York University that focuses on the social implications of current and emerging AI technology. The institute brings experts across fields, professions, and communities together to identify and respond to the growing ubiquity of AI technology and the harmful effects it is proving to have.<br><\/p>\n\n\n\n<p>Co-Founders and Co-Directors Kate Crawford and Meredith Whittaker introduced the symposium with a \u201cYear in Review,\u201d highlighting some of the major events involving AI in the past year, including San Francisco\u2019s ban on facial recognition technology and Amazon abandoning its HQ2 in New York City. 
The symposium was then divided into four panels, which explored the use of AI technology by police and border patrol agents; the pushback by tenants in a Brooklyn apartment building fighting facial recognition technology; a class-action lawsuit against the state of Michigan for using an algorithm that falsely accused over 20,000 Michigan residents of employment fraud; and the methods, successes, and goals of organizing tech workers across platforms to win gains in the workplace.<br><\/p>\n\n\n\n<p>The first panel of the symposium, \u201cAI and the Police State,\u201d was chaired by Andrea Nill S\u00e1nchez, incoming Executive Director of AI Now. The panel featured Marisa Franco, Director and Co-Founder of Mijente; Ruha Benjamin, Professor of African American Studies at Princeton University; and Kristian Lum, Lead Statistician of the Human Rights Data Analysis Group. The panelists dove right into the ways that AI systems, technology, and information practices are used by border patrol agents and local police departments to target undocumented and marginalized people. On the same day as this panel, the New York Times published an article detailing how Donald Trump suggested border police \u201cshoot migrants in the leg\u201d (Shear and Hirschfeld Davis) if they threw rocks at border agents, a chilling backdrop for the discussion.&nbsp;<br><\/p>\n\n\n\n<p>Franco spoke to the fact that Immigration and Customs Enforcement (I.C.E.) relies on local police and contracts with technology companies to meet its arrest and deportation goals. 
<a href=\"https:\/\/www.vox.com\/2019\/9\/5\/20849846\/amazon-ring-explainer-video-doorbell\">Amazon\u2019s \u201cRing\u201d<\/a> and <a href=\"https:\/\/www.vice.com\/en_us\/article\/9kegq8\/activists-explain-how-palantirs-tech-is-used-in-ice-raids\">Palantir\u2019s \u201cFALCON Tipline\u201d<\/a> have been specifically exposed as aiding police departments and I.C.E. in locating undocumented people for arrest and deportation. Franco directly pointed to Amazon and Palantir as targets of Mijente\u2019s organizing against tech companies profiting from deportations (using the hashtag #NoTechForICE on social media).&nbsp;<br><\/p>\n\n\n\n<p>Benjamin and Lum spoke to the use of AI and various algorithms to criminalize and target marginalized communities. Benjamin highlighted the specific threats that automated risk assessment technology poses to already vilified communities. Municipalities are increasingly turning to pre-trial risk assessment algorithms to determine a defendant&#8217;s risk of committing a future crime, a process deeply embedded with racial stereotypes and built on highly questionable, racially biased data. These algorithms perpetuate racist stereotypes and the criminalization of poverty by drawing from data produced in a deeply racist, sexist, and classist society. Benjamin powerfully argued that these algorithms aren\u2019t actually \u201cflawed\u201d; they are working exactly as intended for police departments, legitimizing racially targeted policing by pointing to algorithms described as neutral and objective when they are anything but.<br><\/p>\n\n\n\n<p>In her 2016 PDF talk \u201cChallenging the Algorithms of Oppression,\u201d Safiya Noble makes clear that technology and algorithms reflect and reproduce the racism and prejudices of the society in which they are created. 
These algorithms maintain and entrench racist stereotypes precisely because of the perception that technology, data, and algorithms can be objective. The question becomes twofold: how do we challenge the racist society that produces the data these algorithms use, and how do we prevent algorithms from perpetuating racism in our virtual and physical lives?<br><\/p>\n\n\n\n<p>Amid the sea of examples of the ways that facial recognition technology is being used to target, criminalize, and further marginalize already vulnerable populations, the second panel focused on AI technology used to monitor tenants in the Atlantic Towers in Brownsville, Brooklyn. The panel, \u201cTenants Against Facial Recognition,\u201d included two community activists from the Atlantic Towers Tenants Association, Tranae Moran and Fabian Rogers. Along with Mona Patel, an attorney from Brooklyn Legal Services, they spoke of the Tenants Association\u2019s case against their landlord for attempting to install facial recognition software in the building without informing the tenants or obtaining their consent. Their case highlights the fact that there is no legislative precedent for facial recognition technology in housing units. It will be an important milestone in the fight against surveillance and attacks on privacy, and it speaks to the new ways people will have to fight back against invasions of their privacy and the collection of their data.<br><\/p>\n\n\n\n<p>The session \u201cAutomating Judgement\u201d was a conversation between Jennifer Lord, a lawyer from Michigan, and Kate Crawford. They discussed the automation system MiDAS, which upended over 20,000 people\u2019s lives by falsely accusing them of employment fraud, sparking the question: when algorithms fail, who is responsible? 
Lord spoke to the dangers of outsourcing fraud detection work, and any social benefit disbursement programs, to machines and algorithms.&nbsp;<br><\/p>\n\n\n\n<p>These three sessions highlighted that whether algorithms work as intended or fail at their task, they have the ability to ruin people\u2019s lives. The panels demonstrated that when these technologies do work, they pose serious threats to marginalized communities by drawing from data sets imbued with racist histories and stereotypes, and they act as unwanted tools of surveillance in those same communities. When algorithms don\u2019t work as intended, they can act as the most brutal of bureaucrats, withholding necessary services from citizens and flagging them as criminals.<br><\/p>\n\n\n\n<p>The final session, \u201cOrganizing Tech,\u201d brought together organizers from the Awood Center and the New York Taxi Workers Alliance (Abdirahman Muse and Bhairavi Desai, respectively), and Veena Dubal, a lawyer focused on technological and social issues, in conversation with Meredith Whittaker. This panel highlighted the need for tech workers to connect with workers across both sector and class to make demands of employers. Groups such as the Tech Workers Coalition, which aims to connect various tech-related labor, social, and economic movements, have emerged in response to the widespread struggles tech workers have been experiencing.&nbsp;<br><\/p>\n\n\n\n<p>This symposium brought together key figures in ongoing struggles with AI. However, these issues are just the tip of the iceberg. As AI becomes more ubiquitous in our culture, and as the business model of tech companies continues to exploit both workers and the consumers of their products, the need to hold AI and tech companies accountable will become ever more important. 
Whittaker and Crawford concluded the symposium with calls to continue challenging the ways that AI can be used to discriminate, exploit, and cause harm to individuals and communities, and to do so by centering the voices and experiences of those most affected by these systems.<br><\/p>\n\n\n\n<p>Works referenced&nbsp;<br><\/p>\n\n\n\n<p>AI Now Institute. <a href=\"https:\/\/ainowinstitute.org\/about.html\">https:\/\/ainowinstitute.org\/about.html<\/a><br><\/p>\n\n\n\n<p>Felton, Ryan. 2016. \u201cMichigan unemployment agency made 20,000 false fraud accusations \u2013 report.\u201d The Guardian. Accessed October 2019. <a href=\"https:\/\/www.theguardian.com\/us-news\/2016\/dec\/18\/michigan-unemployment-agency-fraud-accusations\">https:\/\/www.theguardian.com\/us-news\/2016\/dec\/18\/michigan-unemployment-agency-fraud-accusations<\/a><\/p>\n\n\n\n<p>Shear, Michael, and Julie Hirschfeld Davis. 2019. \u201cShoot Migrants\u2019 Legs, Build Alligator Moat: Behind Trump\u2019s Ideas for Border.\u201d The New York Times. Accessed October 2019. <a href=\"https:\/\/www.nytimes.com\/2019\/10\/01\/us\/politics\/trump-border-wars.html\">https:\/\/www.nytimes.com\/2019\/10\/01\/us\/politics\/trump-border-wars.html<\/a><br><\/p>\n\n\n\n<p>Noble, Safiya. 2016. \u201cChallenging the Algorithms of Oppression.\u201d PDF 2016 Talk. <a href=\"https:\/\/www.youtube.com\/watch?v=iRVZozEEWlE\">https:\/\/www.youtube.com\/watch?v=iRVZozEEWlE<\/a><br><\/p>\n","protected":false},"excerpt":{"rendered":"<p>On October 2nd, 2019 the AI Now Institute hosted its 4th annual symposium. Titled \u201cThe Growing Pushback Against Harmful AI,\u201d the symposium brought together lawyers, professors, community advocates, and organizers to discuss the ways that Artificial Intelligence has negatively affected their communities, their work, and their lives. 
The AI Now Institute is an interdisciplinary research [&hellip;]<\/p>\n","protected":false},"author":691,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-6875","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/posts\/6875","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/users\/691"}],"replies":[{"embeddable":true,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/comments?post=6875"}],"version-history":[{"count":2,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/posts\/6875\/revisions"}],"predecessor-version":[{"id":6877,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/posts\/6875\/revisions\/6877"}],"wp:attachment":[{"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/media?parent=6875"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/categories?post=6875"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/tags?post=6875"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}