FACETS Un-Conference on Art & Artificial Intelligence

 

On Saturday, October 28, 2017, the Goethe-Institut in New York City hosted the FACETS Un-Conference on Art & Artificial Intelligence. This year’s topics ranged from ethical machine learning to human-AI communication to data security. I attended the keynote panel discussions on (1) the future of AI research and practice and (2) current issues at the intersection of AI and data security. Two themes ran through both discussions: a lack of regulation over those who profit from the work of technologists, and a general overconfidence in the information that intelligent machines produce. This brief report explores the issues raised in these panel discussions in relation to ongoing conversations in the field of information science.

 

What is FACETS?

FACETS is an “interdisciplinary creative coding, interactive art, and videogames un-conference” that focuses on individuals underrepresented in STEM fields and art (http://www.facets-con.com/). This year’s theme was Art and AI. I attended the keynote panel discussions, “A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?” and “AI and Data Security.” The first was moderated by Kenric McDowell (head of the Art + Machine Intelligence group at Google Research) and joined by Ida Momennejad (postdoctoral fellow at Princeton University who uses AI to research human neuropsychology) and Madeleine Elish (cultural anthropology researcher at the Data & Society Research Institute who focuses on cultural responses to automation and AI). The second was moderated by Caroline Sinders (machine-learning designer and digital anthropologist at Wikimedia who focuses on online harassment) and joined by Kate Crawford (co-founder and director of the AI Now Institute and 2016 co-chair of the Obama White House symposium on the impacts of AI), Jer Thorp (former Data Artist in Residence at The New York Times and co-founder of The Office for Creative Research), and Chelsea Manning (former intelligence analyst in the U.S. Army who exposed human rights violations committed by the U.S. military in Iraq). The range of voices in these two discussions led to intriguing conversations between diverse perspectives on the current and near-future state of AI.

 

A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?

This panel discussion started with a series of short presentations followed by an open discussion. The presentations covered AI-produced poetry and images, forms of human-like memory and agency in AI models, and the potential moral crises of virtually autonomous AI (e.g., self-driving cars). The discussion that followed focused primarily on the concept of human-centrism in technology since the rise of agriculture, and on who owns (i.e., holds copyright over) or is responsible for the actions of seemingly autonomous AI.

While the discussion on human-centrism in technology was interesting, it reached a fairly quick consensus: human-centrism may be useful for keeping technology in check, but what we define as human is often discriminatory within our own species (e.g., racism and sexism), so an alternative focus on the results of agency may be more beneficial to the work of ethical technologists. The discussion on the moral ownership of AI actions was more novel, since it centered on what Madeleine Elish, in her presentation, called the “moral crumple zone,” which exists when the behavior of an artificial intelligence defies the predictions of its creators because they do not understand all of the possible outcomes of the machine-learning algorithms they have built. Current legislation firmly places the blame for AI actions on humans, i.e., either the creator or the user (through a contract, such as agreeing to terms and conditions). The moral crumple zone, however, is becoming more apparent as AI grows increasingly prevalent in contemporary military and consumer landscapes, and as life-and-death decisions are placed in the “hands” of technology that even its creators do not fully understand. The panelists expressed fear and uncertainty about what new legislation might take shape in the U.S. under the current presidential administration, given business pressures and the military’s awareness of the moral crumple zone.

 

AI and Data Security

The panel discussion on AI and Data Security remained solely a discussion; no presentations were needed to dive into the pressing issues. After the panelists’ introductions, and prompted by the curiosity of the moderator and the other panelists, Chelsea Manning opened the discussion by running through her top concerns about uses of AI in the military. She described how machine learning used in the field is largely inaccurate: 50% probabilities are usually considered enough, and even 20% is considered enough in an actual war zone. More disturbing than these inaccuracies, however, is how predictions in feedback loops are often interpreted to fit a specific narrative. Intelligence analysts might focus on the predictions that would impress their commanders, or create stories or connections between predictions where none exist (as can the commanders themselves). In Manning’s experience, a breakdown of how a potentially hostile target was predicted was often lost on commanders: explaining that an AI-predicted target rests on a specific, and low, degree of certainty derived from potentially faulty training sets and field tests is likely to be ignored when another analyst can simply point out a target to the same commander.

This point was expanded on by other panelists, who expressed concern over how mass media outlets currently report on studies that utilize AI. One example was the recent virality of a study on “AI gaydar,” reported as able to determine whether someone is “gay or straight” using only facial recognition technology. The panelists were frustrated that so few articles stressed the study’s experimental design, which used a limited pool of images from a dating site and was constrained by the site’s inadequate dichotomy of human sexuality. Furthermore, because the images were tied to a particular time and culture, the algorithms relied on visual stereotypes – a problem in facial recognition software that could help preserve a status quo that would otherwise change, while also boldly assuming that physical traits can reliably expose interiority.
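To make Manning’s point about low degrees of certainty concrete, here is a small calculation of my own (the prevalence, sensitivity, and specificity figures below are hypothetical and were not discussed at the panel). It sketches why even a classifier that sounds accurate flags mostly false positives when genuine targets are rare, which is exactly the nuance that gets lost between analyst and commander.

```typescript
// Illustrative arithmetic only -- these numbers are hypothetical, not from the panel.
// Positive predictive value: of the people an AI flags as hostile, how many actually are?
function positivePredictiveValue(
  prevalence: number,  // fraction of the population that is actually hostile
  sensitivity: number, // P(flagged | hostile)
  specificity: number  // P(not flagged | not hostile)
): number {
  const truePositives = sensitivity * prevalence;
  const falsePositives = (1 - specificity) * (1 - prevalence);
  return truePositives / (truePositives + falsePositives);
}

// With a 1% prevalence of genuine targets, 90% sensitivity, and 80% specificity,
// only about 4% of the people flagged are actually hostile.
console.log(positivePredictiveValue(0.01, 0.9, 0.8)); // ≈ 0.043
```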

There seems to be an increasing drive to trust the unreliable conclusions of AI in our current economic and political climate, especially when so few regulations and standards apply to AI research or to the businesses and military agencies that use this technology. Panelists also expressed concern over the proliferation of black-box algorithms (i.e., algorithms whose inputs and outputs are considered without an understanding of their internal workings) in public agencies, since they are at odds with due process.

The panelists explored some solutions to their concerns:

  1. Temperature checks during coding to gauge the impact of the work, especially for those who work on media sharing (e.g., Facebook timeline engineers) and search engines.
  2. A change in culture and education for AI creators so that they are less separated from the real-world consequences of their work.
  3. Closer attention to the quality of the training sets used in machine learning, and a greater emphasis on communicating that quality when sharing results (a rough sketch of what this could look like follows this list).
  4. More regulation (through legislation) of AI research in the private and public sectors.
  5. More restrictions on what data businesses can collect and store, and greater accountability for that data’s security (note: the Equifax data breach).
  6. Broader public concern about the use of AI, especially in business (with regard to personal data) and in the military.
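As a rough sketch of what item 3 could look like in practice, a shared result might travel with a structured description of the training data behind it. The field names and values below are my own invention, not a format proposed at the panel.

```typescript
// Hypothetical metadata that could accompany a shared model result, so that readers
// can judge the training data behind a prediction. All names and values are invented.
interface TrainingSetReport {
  name: string;                       // internal dataset identifier
  source: string;                     // where the data came from
  collectedBetween: [string, string]; // ISO dates bounding collection
  sampleSize: number;
  knownGaps: string[];                // populations or cases the data under-represents
  labelingMethod: string;             // how ground truth was assigned
}

interface SharedResult {
  prediction: string;
  confidence: number;                 // model confidence, 0..1
  trainingSet: TrainingSetReport;     // quality context travels with the result
}

// Example: a low-confidence prediction explicitly tied to a narrow dataset.
const example: SharedResult = {
  prediction: "possible match",
  confidence: 0.2,
  trainingSet: {
    name: "field-images-v1", // hypothetical
    source: "images collected from a single region",
    collectedBetween: ["2015-01-01", "2016-06-30"],
    sampleSize: 12000,
    knownGaps: ["poor lighting", "non-frontal faces"],
    labelingMethod: "manual annotation by two analysts",
  },
};
```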

 

What’s Next?

The FACETS Un-Conference on Art & Artificial Intelligence exposed many dire issues in the creation and use of contemporary artificial intelligence. However, the solutions the panelists proposed during the discussion periods of both panels were quite classic. The critical examination of information as it is collected, reconfigured, and molded into new information should not be taken for granted in our current era of machine-learning technology. While the panelists stressed how novel these situations are because of the incredible power of the technologies they have worked with, their solutions (such as paying close attention to training set data and expressing results clearly and accurately) closely resemble those of traditional information science professions. There exists a divide between some technologists and those in the humanities and traditional information professions that allows creators and users of AI to skip the critical steps necessary to ensure safe, secure, and accurate methods of archiving, producing, and sharing information. The panelists at FACETS did an exemplary job of exposing and exploring this divide, so I recommend that readers of this post who are interested in the intersection of technology, information, and art check out the next event when it is announced. What I learned from this event makes the mission of Pratt’s School of Information seem even more urgent and timely: we need more information professionals who can critically examine how new technologies change our information landscape in order to better prepare us for the challenges that are already arriving.

 

Works Cited

FACETS. http://www.facets-con.com/. Accessed 28 Oct. 2017.

McDowell, Kenric, et al. “A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?” FACETS Un-Conference on Art & Artificial Intelligence, 28 Oct. 2017, Goethe-Institut, New York, NY. Keynote Panel Discussion.

Sinders, Caroline, et al. “AI and Data Security.” FACETS Un-Conference on Art & Artificial Intelligence, 28 Oct. 2017, Goethe-Institut, New York, NY. Keynote Panel Discussion.

How Online Notices Obscure Privacy and Ownership on the Web

In an information age dominated by digital content, contemporary leisure is conducted on the very same machines once intended solely for work. Personal computers connected to the internet have turned many individuals into seemingly nonstop, unpaid information-producing machines. While we may consider the average internet user a consumer of digital information, we actually produce almost as much information as we consume, through Facebook posts, tweets, YouTube comments, and emails, and through the browsing data automatically generated every time a page is opened. This content is often produced on websites that cost their users nothing to access (beyond purchasing a computer and an internet connection), and users’ creations almost always earn them nothing in return. While some sites do reward their content producers with ad revenue (e.g., YouTube rewards users who post popular videos), the incredible amount of digital content produced without monetary reward creates an unsettling problem. Online notices are often used to rescind users’ rights to their digital content, eliminating privacy and transferring digital ownership, often without the user’s knowledge (McChesney 150-152).

 

Establishing Ownership: Agreements and Notices

There exists a simple device for signifying online ownership of information: the agreement. An agreement on a website usually includes a notice, in the form of a popup window or a page during profile setup, that establishes, usually in legal language, the terms and conditions of a user’s access to a web domain. These terms and conditions describe how the company that runs the web domain can use the information a user creates – even information the user is not aware of creating. On the same page as the notice there is typically an “agree or disagree” option: pressing “agree” grants the user access to the site, while pressing “disagree” blocks access. This type of online agreement is an opt-in/opt-out agreement; more specifically, a click-wrap agreement. These agreements include an exorbitant amount of text wrapped down a page that must be scrolled through to view in its entirety, and users typically respond to them without actually having read them. The offline predecessor to this agreement is the shrink-wrap agreement, whose terms and conditions take effect once the recipient breaks the shrink wrap on a box. Some sites today still use an online descendant of the shrink-wrap agreement called the browse-wrap agreement (“The Origin of Click-Wrap”). If a site includes a banner stating that your use of the site constitutes consent to its privacy policy, an example of which can be seen in Figure 1.0 below, you have consented to a browse-wrap agreement. In most cases, these click-wrap, shrink-wrap, and browse-wrap agreements are considered sufficient to inform users of how a company will use their information.

Figure 1.0: Example of a browse-wrap agreement notice (“The Origin of Click-Wrap”).
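To make the mechanical difference between these patterns concrete, here is a minimal sketch of both; the element IDs, storage key, and wording are invented for illustration and are not drawn from any particular site.

```typescript
// Minimal, hypothetical sketch of the two agreement patterns.

// Click-wrap: access is gated until the user explicitly presses "I agree".
function showClickWrap(onAgree: () => void): void {
  const agreeButton = document.querySelector<HTMLButtonElement>("#agree");
  const disagreeButton = document.querySelector<HTMLButtonElement>("#disagree");
  agreeButton?.addEventListener("click", () => {
    localStorage.setItem("termsAccepted", "true"); // explicit, recorded consent
    onAgree();                                     // only now is the site unlocked
  });
  disagreeButton?.addEventListener("click", () => {
    window.location.href = "/goodbye";             // refusal blocks access
  });
}

// Browse-wrap: a passive banner; continued use of the page is treated as consent.
function showBrowseWrap(): void {
  const banner = document.createElement("div");
  banner.textContent =
    "By continuing to use this site you agree to our privacy policy.";
  document.body.appendChild(banner);
  // No button is required: simply scrolling the page is taken as agreement.
  document.addEventListener(
    "scroll",
    () => localStorage.setItem("termsAccepted", "true"), // consent inferred, not requested
    { once: true }
  );
}
```

The difference is entirely in who acts: the click-wrap flow requires a deliberate click before the site unlocks, while the browse-wrap banner infers agreement from ordinary browsing.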

When scrolling through a timeline on Facebook or through your friends’ tweets on Twitter, you have already agreed to allow sites to track your interactions and use your digital creations. In short, you do not typically own the content that you produce on these sites because you either knowingly or unknowingly gave the companies that own the web domains ownership over your creations and shared data.

 

Why Do We Give Away Our Information?

It seems reasonable to assume that individuals would generally like to keep their digital privacy and ownership over their creations (McChesney 152). Giving away our digital labor for free, even if done through leisure, does not make much sense. It would appear counter-intuitive for so many people to give these things up every day, but this is exactly what happens. I see a few possible reasons for this: (a) existing online agreements include notices with poor discoverability or poor understandability, (b) users cannot properly assess the value of their privacy and ownership at the time of the agreement, and (c) users want instant access to sites regardless of what they are sacrificing.

Poor Discoverability & Understandability

Poorly placed notices are easy to find online. The position of the browse-wrap agreement notice from Figure 1.0 can be seen in Figure 2.0 below. The notice is placed at the bottom of the web page, with a gray background that closely matches the gray of the banners near the top of the page. It neither blocks the user from reading or interacting with the content on the page nor attracts the user’s attention. Because the notice is not easily discoverable, consent to its contents may be established without the user’s knowledge.

Figure 2.0: The browse-wrap agreement notice from Figure 1.0 positioned on a web page (“The Origin of Click-Wrap”).
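One way to quantify how little such a banner stands out is the relative-luminance contrast ratio used in web accessibility work. The two gray values below are hypothetical stand-ins for a notice that blends into its page; I do not know the exact colors used on the site in the figure.

```typescript
// Relative luminance and contrast ratio as defined for web accessibility (WCAG).
function relativeLuminance(hex: string): number {
  const channels = [0, 2, 4].map((i) => parseInt(hex.slice(i + 1, i + 3), 16) / 255);
  const [r, g, b] = channels.map((c) =>
    c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
  );
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(hexA: string, hexB: string): number {
  const [light, dark] = [relativeLuminance(hexA), relativeLuminance(hexB)].sort((a, b) => b - a);
  return (light + 0.05) / (dark + 0.05);
}

// Two similar grays barely differ: a ratio near 1.1 means the notice hardly registers
// against its surroundings, while accessibility guidelines generally expect at least
// 3:1 for interface elements that need to be perceived.
console.log(contrastRatio("#e8e8e8", "#dddddd")); // ≈ 1.1
```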

Oftentimes, notices are simply not understandable to the average user. Notices used for click-wrap agreements are typically long and include legal language. A frustrated user in a hurry might not have the patience to look up legal jargon and spend the time reading multiple pages of text about terms and conditions of use. Even when we know that we are consenting to an agreement online, we may not have the knowledge, or time, to fully understand it.

The Wrong Context

Similar to how lengthy notices are hard to understand because of the time it takes to read them, notices that appear early in a user’s interaction with a site (typically during registration) do not allow a user to properly assess their importance. For example, it is hard to understand why you might be concerned about how Facebook will use a video you post to its site before you have even learned how to create a Facebook post. Two possible solutions to this problem are just-in-time and visceral notices. A just-in-time notice, as seen in Figure 3.0 below, asks for an agreement at the moment when a user’s privacy would be intruded upon or when ownership of content may be transferred (Young, “The FTC Mobile Privacy Staff Report”).

Figure 3.0: Example of a just-in-time notice (Young, “The FTC Mobile Privacy Staff Report”).
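A rough sketch of the just-in-time pattern might look like the following; the function names, consent topic, message wording, and upload endpoint are all invented for illustration.

```typescript
// Hypothetical sketch: consent is requested at the moment the relevant action happens,
// not buried in terms accepted at registration.
type ConsentTopic = "share-video-publicly" | "use-location";

const grantedConsents = new Set<ConsentTopic>();

async function askJustInTime(topic: ConsentTopic, explanation: string): Promise<boolean> {
  if (grantedConsents.has(topic)) return true; // already agreed in this context
  const agreed = window.confirm(explanation);  // stand-in for a real notice UI
  if (agreed) grantedConsents.add(topic);
  return agreed;
}

async function postVideo(video: Blob): Promise<void> {
  const ok = await askJustInTime(
    "share-video-publicly",
    "Posting this video will make it visible to advertisers and other third parties. Continue?"
  );
  if (!ok) return;                                         // the action, not site access, is gated
  await fetch("/upload", { method: "POST", body: video }); // hypothetical endpoint
}
```

The design choice is that consent is attached to the action itself, so declining blocks only that action rather than access to the site as a whole.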

A visceral notice, which may be easier to include when too many just-in-time notices would otherwise be required, is a notice that allows a user to experience its contents (Hagan, “Visceral Notice Types”). The reasoning behind a visceral notice is that information may be better understood when it is experienced. For the earlier example of posting a video to Facebook, this might involve a walkthrough of creating a post, with clear descriptions and diagrams of how that information will be shared with other parties. Just-in-time and visceral notices provide context about privacy intrusions and ownership transfers of digital content where typical notices provide none.

Instant Access

When a user opens a web page, they intend to use that page immediately. A notice blocks that use; it is a constraint on experiencing the page. The internet, unlike a library or museum, seems to promise users quick access to information. When a notice warning of privacy intrusion and ownership transfer appears on a web page, the user will likely have no patience for it. In addition to this drive for instant access, social and professional pressures to access sites like Facebook, Twitter, YouTube, and LinkedIn further encourage users to click through agreement notices quickly. While this may seem like an individual’s fault, its prevalence points to a social trend, one that an aware domain owner can easily profit from. Paired with poor context, poor discoverability, and poor understandability, this allows most users to quickly sign away their rights to privacy and ownership of personal information.

 

Looking Forward

Enhancing digital privacy and users’ rights to ownership of their digital creations in the U.S. will require strengthening the legal protections for user data, especially with regard to Section 5(a) of the FTC Act, which states that “unfair or deceptive acts or practices in or affecting commerce…are…declared unlawful” (“A Brief Overview of the Federal Trade”). Profit-driven domain owners have no reason to design their notices better in the ways described above when they profit from a user’s inability to find or understand them. Another concern, however, is that free access to certain sites, such as Facebook and Twitter, could be lost if privacy and ownership agreements were better understood by their users: our free access to these sites often depends on a domain owner’s ability to sell our content and interactions to ad companies. Nevertheless, from a consumer standpoint, users should be able to decide for themselves whether sacrificing their privacy and ownership of digital content is worth access to a specific site – with full knowledge of the consequences of their actions.

 

Works Cited

“A Brief Overview of the Federal Trade Commission’s Investigative and Law Enforcement Authority.” Federal Trade Commission, Jul. 2008, https://www.ftc.gov/about-ftc/what-we-do/enforcement-authority. Accessed 24 Sep. 2017.

Hagan, Margaret. “Visceral Notice Types.” The Program for Legal Tech & Design, http://legaltechdesign.com/GoodNoticeProject/2014/01/22/visceral-notice-types/. Accessed 25 Sep. 2017.

McChesney, Robert W. Digital Disconnect: How Capitalism is Turning the Internet Against Democracy. New York, The New Press, 2014.

“The Origin of Click-Wrap: Software Shrink-Wrap Agreements.” WilmerHale, 22 Mar. 2000, https://www.wilmerhale.com/pages/publicationsandnewsdetail.aspx?NewsPubId=95543. Accessed 24 Sep. 2017.

Young, Michael. “The FTC Mobile Privacy Staff Report.” Data Privacy Monitor. BakerHostetler LLP, 11 Feb. 2013, https://www.dataprivacymonitor.com/behavioral-advertising/the-ftc-mobile-privacy-staff-report/. Accessed 25 Sep. 2017.