{"id":1814,"date":"2015-11-18T02:16:04","date_gmt":"2015-11-18T02:16:04","guid":{"rendered":"http:\/\/listheory.prattsils.org\/?p=1814"},"modified":"2015-11-18T02:16:04","modified_gmt":"2015-11-18T02:16:04","slug":"algorithms-and-ethics-brainstorming-solutions-2","status":"publish","type":"post","link":"https:\/\/studentwork.prattsi.org\/foundations\/2015\/11\/18\/algorithms-and-ethics-brainstorming-solutions-2\/","title":{"rendered":"Algorithms and Ethics: Brainstorming Solutions"},"content":{"rendered":"<p>Algorithms are everywhere. Of particular interest, algorithms that are used &#8220;to select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions&#8221; can be extraordinarily powerful tools.[1.\u00a0Gillespie T. (2014). <em>The relevance of algorithms.<\/em>\u00a0<span class=\"c12\">Media Technologies: Essays on Communication, Materiality, and Society.<\/span>\u00a0Eds. T. Gillespie, P. Boczkowski, and K. Foot. Cambridge: MIT Press, 167\u2013194.] Such algorithms determine advertisements seen online or received in the mail, posts that appear prominently on social media feeds, even hiring and firing decisions. They impact innumerable aspects of many people&#8217;s daily lives. And, as one recent post from the University of Oxford&#8217;s &#8220;Practical Ethics&#8221; blog noted, the way algorithms &#8220;function and are used . . . whether in computers or as a formal praxis in an organization \u2013 matters morally because they have significant and nontrivial effects.&#8221; [2. Sandberg, A. (2015, Oct. 6). <em>Don&#8217;t write evil algorithms<\/em>. (Web log post). Retrieved from:\u00a0http:\/\/blog.practicalethics.ox.ac.uk\/2015\/10\/dont-write-evil-algorithms\/.]<\/p>\n<p>Many algorithms provide a great benefit to our society, helping human beings to organize and simplify a constantly-expanding and complicated universe of data. 
In some situations, however, they can also have adverse and inhumane effects\u00a0\u2013 for example, by invading individuals&#8217; privacy or producing results based on incomplete or otherwise flawed data. Accordingly, all involved parties\u00a0\u2013 information technology innovators who create algorithms, corporations that make use of algorithms for business gain, technology consumers whose use of algorithm-enhanced products has catalyzed the present ubiquity of such systems\u00a0\u2013 have an obligation to think about and develop ethical approaches to the current landscape. How do we even begin to approach this enormous task?<\/p>\n<p>In March 2015, the Centre for Internet and Human Rights and the Technical University of Berlin hosted a conference on &#8220;The Ethics of Algorithms,&#8221; at which academics and technology professionals from the United States and Europe grappled with these very issues. A background paper from that conference identified a subset of algorithms that are of the greatest ethical concern, and the specific attributes that require heightened scrutiny: &#8220;complexity and opacity, gatekeeping functions [determining &#8216;what gets attention, and what is ignored&#8217;], and subjective decision-making.&#8221; [3. Centre for Internet and Human Rights.\u00a0(March 2015). <em>The ethics of algorithms: from radical content to self-driving cars<\/em>. (Final draft background paper). Retrieved from\u00a0https:\/\/www.gccs2015.com\/sites\/default\/files\/documents\/Ethics_Algorithms-final%20doc.pdf.]<\/p>\n<p>That same paper also proposed a handful of appropriate regulatory responses to problematic algorithms, weighing the pros and cons of each. 
This provides an excellent starting point for any discussion of the ethical challenges of algorithms.<\/p>\n<p><span style=\"font-weight: 400\">The first proposed response is &#8220;algorithmic transparency and notification.&#8221; Transparency in algorithms is a challenging proposition\u00a0\u2013 in part because most algorithms are so complex that lay people would not be able to understand them even if they were opened up to scrutiny. In addition, many programmers and corporations keep the secrets of their algorithms close to the vest and would not give them up without a colossal fight. While some openness is a fantastic goal and is necessary for a dialogue about ethical algorithms, on its own this is not a realistic or adequate solution. An alternative to full transparency, however, is &#8220;notification,&#8221; which envisions more consumer engagement with the manner and extent of data provided to algorithms: <\/span><span style=\"font-weight: 400\"> \u201cConsumers can demand for control over their personal information that feeds into algorithms which might have a considerable effect on their lives. 
This includes the rights to correct information and demand their personal information to be excluded from the database of data vendors.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400\">A second response, \u201calgorithmic accountability,\u201d asks that we question how and why algorithms work as they do: \u201ccausal explanations that link our digital experiences with the data they are based upon [which] can empower individuals to better understand how the algorithms around them are influencing their life-worlds.\u201d Indeed, the conference paper\u00a0describes investigations as to how algorithms produce certain outcomes, even if such investigations do\u00a0not create a definitive explanation, as an \u201cessential precondition for the public scrutiny of algorithms.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400\">Finally, the paper approaches the possibility of \u201cgovernments directly regulating an algorithm,\u201d with regulation of algorithms in the financial sector as one appropriate example. This regulatory approach becomes more complicated, however, if applied to government regulation of search engines: \u201ceven deciding what would be in the \u2018public interest\u2019 is a complex and contested question, exactly because there is no right answer to how [a search engine] should rank its results.\u201d Such regulation would be controversial and difficult (if not impossible) to manage. It could also serve to discourage innovation in the development of algorithms, at a time when we should foster creativity and flexibility among programmers.<\/span><\/p>\n<p><span style=\"font-weight: 400\">None of these regulatory responses is perfect. What this discussion does make apparent, however, is that algorithms are valuable yet imperfect tools and, especially as they become increasingly central to our lives, they should be scrutinized through a lens of fairness and ethics. 
\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Indeed, as the previously referenced University of Oxford blog post puts it: \u201cWe cannot and should not prevent people from thinking, proposing, and trying new algorithms: that would be like attempts to regulate science, art, and thought. But we can as societies create incentives to do constructive things and avoid known destructive things.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400\">Some awareness of the impact of algorithms on humanity, both positive and negative, can go a very long way, along with consideration of our ethical obligations as the drivers of the algorithm environment. The most important thing is to not fall into a trap of thinking about algorithms \u2013 as autonomous as they may appear when designed skillfully \u2013 as something independent of their human creators, for which humans do not bear full responsibility.<\/span><\/p>\n<p><span style=\"font-weight: 400\">As Tarleton Gillespie recommends in the article <\/span><i><span style=\"font-weight: 400\">The relevance of algorithms<\/span><\/i><span style=\"font-weight: 400\">, we \u201cmust unpack the warm human and institutional choices that lie behind these cold mechanisms . . . to see how these tools are called into being by, enlisted as part of, and negotiated around collective efforts to know and be known.\u201d Such inquiry can help us make ethical choices about our use of algorithms. When used thoughtfully, algorithms can be an extraordinary tool for the common good.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Algorithms are everywhere. Of particular interest, algorithms that are used &#8220;to select what is most relevant from a corpus of data composed of traces of our activities, preferences, and expressions&#8221; can be extraordinarily powerful tools.[1.\u00a0Gillespie T. (2014). The relevance of algorithms.\u00a0Media Technologies: Essays on Communication, Materiality, and Society.\u00a0Eds. T. 
Gillespie, P. Boczkowski, and K. Foot. [&hellip;]<\/p>\n","protected":false},"author":315,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-1814","post","type-post","status-publish","format-standard","hentry","category-articles"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/posts\/1814","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/users\/315"}],"replies":[{"embeddable":true,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/comments?post=1814"}],"version-history":[{"count":0,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/posts\/1814\/revisions"}],"wp:attachment":[{"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/media?parent=1814"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/categories?post=1814"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/studentwork.prattsi.org\/foundations\/wp-json\/wp\/v2\/tags?post=1814"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}