Event in Review: NYC Media Lab Summit ’19

Photo: Janet Liu 2019

My event in review is the NYC Media Lab Summit, which I attended on September 26, 2019. Organized by the NYC Media Lab, the summit brings together people from industries and universities across NYC to discuss the emerging technologies of today and the future. The event ran from 8:30 AM to 5:00 PM and was split into morning and afternoon sessions held at the New York City College of Technology (CUNY) and the NYU Tandon School of Engineering.

Media 2030 panel led by Justin Hendrix, Executive Director of NYC Media Lab.
Photo: Janet Liu 2019

The morning session began with an innovation panel discussing the challenges and future vision for Media 2030. The speakers included Yaël Eisenstat, R. Luke DuBois, Desmond Upton Patton, and Tony Parisi. It was inspiring to hear different professionals’ takes on what they thought would be the most critical challenges facing institutions in 2030. Even though the speakers come from different industries, it was striking that their responses converged on AI and algorithmic bias. This made me think about Posner’s discussion of the limitations of a binary mindset for making sense of the world, and how binary groupings in digital humanities projects further marginalize groups (Posner, 2016). It is concerning to learn of all the bias in our society today, and that it will remain a critical challenge ten years from now.

Following the panel were two keynote presentations on AI and storytelling. The first was from Amir Baradaran, an artificial artist and art-based researcher at Columbia University. The second was from Heidi Boisvert, CEO and founder of futurePerfect Lab and Director of Emerging Media Technology at CUNY. It was fascinating to see AI creating immersive storytelling experiences and artworks. This made me think about Norman’s characterization of machines as ‘rigid, inflexible, and fixed’ (Norman, 1998). We can see these traditional views shifting, as innovators like Baradaran and Boisvert show us a vision where artists, creatives, and AI technologies can work together. Perhaps, as Norman imagined, humans and machines will form a complementary team, taking on both a human-centered and a machine-centered approach to learning.

The afternoon session began with a Demo Expo that included 100 student prototypes. I was looking forward to this event the most, as I wanted to see what kinds of emerging technologies students were currently working on and excited about. It was immediately evident that there was a big trend in VR. I saw many VR-based prototypes, such as an online retail store, an immersive travel experience, and a chemical lab. One project that really stood out to me was ‘Mapper’s Delight,’ a hip-hop data visualization designed by Rap Research Lab. Instead of showing a list of lyrics, the lab explores the “global distances traveled by the lyrics contained in each rap artist’s career while exploring the secret flows of Hip-hop’s spacetime through a panoptic interface” (“Mappers Delight VR,” 2017). It was cool and clever to see over 2,000 lyrics connected by geography and transformed into a virtual platform, and it brought an emotional engagement as well, since I was able to find lyrics connecting me to Hong Kong. Projects like these make us think about new ways to provide meaning and context to big chunks of data.

Stuart Trafford’s workshop, “Magic Leap in the Enterprise: How Spatial Computing is Revolutionizing Education, Media, Entertainment and More.”
Photo: Janet Liu 2019.

The last part of the summit was a hands-on workshop; attendees could pick one of fourteen to attend. I decided to go with Magic Leap, a leading mixed-reality company presenting on extended reality, spatial computing, and how they are transforming industries. I wanted to attend this workshop to understand why there is such a big fascination with these types of products. Stuart Trafford, the Education Lead at Magic Leap, introduced its newest product, Magic Leap One, a mixed-reality headset that creates immersive experiences. One point that stuck with me was Trafford’s observation that the experience of information is changing, as technology now allows online experiences to be personalized instead of appealing to the masses. It was fascinating to see how MR products could be applied in future industries such as hospitals and construction sites. This workshop inspired me to write my research paper on VR and on whether there will be demand for such products in future museums, as I still find VR products very gimmicky.

Overall, I was very impressed with the structure of the summit. Given that tickets cost a hefty $200 while students could attend for $30, I expected more students to attend. I loved the order of presentations: it moved from broad discussions of the challenges and future uses of emerging technologies, to current work demonstrated by students, and then to workshops showing specific examples of how these technologies are used. It is also worth noting that the event relied on the Whova conference app, which let you keep track of the full agenda, learn more about sessions, take notes, chat, and, most importantly, sign up for workshops. Even though the app was really convenient, it made me think about the accessibility of information. How will the experience change for people who don’t have the app downloaded and can’t sign up for workshops? Will their experience be different, since the event relied so heavily on the app to connect attendees and speakers?

I appreciated how the summit not only showcased all the fancy, cool products but also emphasized the downsides and challenges technology brings. By doing so, the summit did a good job of providing transparency. One thing that really stuck with me was Boisvert speaking about her research findings at Limbic Lab, which show how technology is rewiring our brains. As Boisvert commented, it will be important for us to take a human-centered approach to reverse the harmful effects caused by technology. This seemed to be a central theme of the summit as well as of our discussions from class. As Norman argues, and as other speakers repeated throughout the summit, future designers and technologists will need training not only as technicians but also in what it means to be ‘human’ (Norman, 1998).

References

Mappers Delight VR. (2017). Retrieved from https://rapresearchlab.com/#portfolioModal2

Norman, Don A. (1998). Being Analog. Retrieved from http://www.jnd.org/dn.mss/being_analog.html

NYC Media Lab ’19. (n.d.). Retrieved from https://summit.nycmedialab.org/

Posner, Miriam (2016). What’s next: The radical, unrealized potential of digital humanities. Retrieved from http://miriamposner.com/blog/whats-next-the-radical-unrealized-potential-of-digital-humanities/

Event Review: NYC Media Lab Summit

On September 26, 2019, I attended the NYC Media Lab Summit held in downtown Brooklyn. The mainstage program took place at the New York City College of Technology (City Tech CUNY) for the first half of the day. The second half of the day was dedicated to interactive demos and workshops and took place at both City Tech and the New York University (NYU) Tandon School of Engineering.

NYC Media Lab describes itself as dedicated to “driving innovation and job growth in media and technology by facilitating collaboration between the City’s universities and its companies” (About – NYC Media Lab, n.d.). Pratt Institute is part of NYC Media Lab’s consortium, whose goals are “to generate research and development, knowledge transfer, and talent across all of the city’s campuses” (About – NYC Media Lab, n.d.) and which also includes The New School, School of Visual Arts, Columbia University, NYU, CUNY, IESE, and the New York City Economic Development Corporation. Member companies of NYC Media Lab include Bloomberg LP, Verizon, The New York Times, and NBCUniversal, to name a few.

The Media Lab Summit ran like a typical conference: you check in to receive your name badge upon arrival and are treated to coffee and pastries. Everyone then takes their seats in the auditorium, where the Executive Director of the program, Justin Hendrix, gives his welcome address and introductions before the main program begins.

Innovation Panel discussion

Up first was the Innovation Panel, which featured speakers Yaël Eisenstat, R. Luke DuBois, Desmond Patton, and Tony Parisi. The panel brought together a mix of academics and professionals who all addressed the topic of artificial intelligence, or AI. It was interesting to hear that everyone agreed AI is the future but held concerns about whether it will be accessible to all. Another potential issue raised in relation to AI is what seems like our current overdependence on data. One panelist raised serious concerns about this overdependence and worried that it could lead to the complete disregard of an innate human capacity: critical thinking. All the panelists agreed that critical thinking is essential and see it playing a key role in the use of AI and other technological advancements.

What I ultimately took away from the Innovation Panel was that critical thinking is needed now more than ever. I think we have always understood that critical thinking is crucial, as it is what keeps us human. AI is capable of making decisions for us, but the ability to think critically about the potential impacts of our decisions and to assess our judgments remains entirely human. This emphasis on critical thinking reminded me of the Phoebe Sengers reading, in which she also discusses machine culture but stresses that science and the humanities need “to be combined into hybrid forms” since “neither is sufficient alone” (Practices for Machine Culture, n.d.). Like the panelists, Sengers recognizes the strengths of both and how each can complement the other, especially in AI.

Next up were the showcases, which presented and demonstrated projects, prototypes, and startups created by students and faculty from NYC Media Lab programs. The two showcases that stood out to me the most were a subway accessibility app for the blind and a retina technology startup.

Access to Places presentation

Students from NYU’s Interactive Telecommunications Program created an app called Access to Places with the goal of making subway stations much more accessible for the blind. The app uses iOS’s text-to-speech VoiceOver technology to provide information such as the location of entrances and exits, service delays or changes, and arrival and departure times. Notifications also help blind riders navigate station layouts.
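To make the announcement flow concrete, here is a minimal sketch of how station information like this might be spoken aloud. It is purely illustrative: the station record and helper function are hypothetical, and the Python pyttsx3 text-to-speech library stands in for the iOS VoiceOver technology the app actually relies on.

```python
# Minimal sketch of an announcement flow (hypothetical data and helper;
# the real app builds on iOS VoiceOver, not this Python library).
import pyttsx3

# A hypothetical station record an app like Access to Places might expose.
station = {
    "name": "Jay St-MetroTech",
    "entrances": ["Jay St & Willoughby St, northeast corner"],
    "alerts": ["A trains are running with delays"],
    "next_arrivals": {"A": "3 minutes", "F": "7 minutes"},
}

def announce(station):
    """Speak each piece of station information aloud, one item at a time."""
    engine = pyttsx3.init()
    engine.say(f"You are at {station['name']}.")
    for entrance in station["entrances"]:
        engine.say(f"Entrance: {entrance}.")
    for alert in station["alerts"]:
        engine.say(f"Service alert: {alert}.")
    for line, eta in station["next_arrivals"].items():
        engine.say(f"Next {line} train in {eta}.")
    engine.runAndWait()  # block until all queued utterances are spoken

announce(station)
```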

Retina Technologies presentation

Retina Technologies was formed by medical students at the Icahn School of Medicine at Mount Sinai. The startup aims to change the way people access ophthalmologists in both urban and rural areas. Through the use of virtual reality headsets, the startup hopes to increase access to ophthalmologists for those who cannot easily visit one in rural areas while also improving the patient experience for those in urban areas.

Access to Places and Retina Technologies stood out to me the most because of the users they were designing for. Instead of creating a product that catered to the majority of the population, they reached out to those with specific needs who often get neglected in startup and tech conversations. I immediately thought of Sasha Costanza-Chock’s paper on “Design Justice” and its discussion of who designers are actually designing for. Most startups and apps tend to assume the average user can access or use a product without any accommodations, much as Costanza-Chock notes that designers “assume” that “a user has access to a number of very powerful privileges” (2018). Visiting an ophthalmologist or getting onto the subway without any trouble are privileges that most designers tend to assume users have. Access to Places and Retina Technologies instead focused on the needs of these specific user groups rather than creating another app or startup that assumed they were just like every other user.

Many innovative and creative projects were demonstrated, and I was in awe of it all, but it was the discussions that enlightened me most. What I took to be the overall theme of the Media Lab Summit was accessibility and the continued mission to make this collaboration between media and technology available to all. I still believe that technology has amazing potential to change and impact lives, but we must make it available to everyone to see that happen. The Media Lab Summit, along with our class discussions and readings, only continues to highlight this necessity and how we as information professionals cannot simply ignore it as technology advances.

References:

About – NYC Media Lab. (n.d.). Retrieved from https://nycmedialab.org/about.

Costanza-Chock, S. (2018). Design Justice: towards an intersectional feminist framework for design theory and practice. DRS2018: Catalyst. doi: 10.21606/drs.2018.679

Sengers, P. (n.d.). Practices for Machine Culture: A Case Study of Integrating Cultural Theory and Artificial Intelligence. Retrieved from http://www.cs.cmu.edu/afs/cs/usr/phoebe/mosaic/work/papers/surfaces99/sengers.practices-machine-culture.html.

Event Review: Museums and AI in the 21st Century

This event, held at Cooper Hewitt, Smithsonian Design Museum on September 16, 2019, discussed the applications of artificial intelligence now and in the future, and highlighted the role of museums in making people more self-aware. The event featured three talks given from three different perspectives (a curator, a computer, and a storyteller from the future), followed by an open Q&A session.

Curator: Andrea Lipps, an associate curator of Contemporary Design, Cooper Hewitt

The talk given by the curator from Cooper Hewitt first discussed the impact of AI on our lives right now. AI is already used in fields such as education, recreation, medical treatment, and marketing automation. It can analyze large amounts of data in a short period of time and help us make quick decisions. The benefits of AI are undoubted and visible. However, the curator also raised some questions that cannot be ignored:

1. How can we ensure diversity, inclusion, safety, and human rights are maintained with AI?

2. What role will AI play in our future?

3. How could museums use AI to represent new things?

There are no right or wrong answers to these questions, and we can interpret them from different angles. The curator also provided some frameworks we can use to think about AI:

1. Is it active or passive? If it is active, do you have a choice? If it is passive, is it being disclosed?

2. Is it linked to a real-world identity or only to an anonymous ID?

3. Which methods are being used when connecting AI with museums?

It is true that we can only predict the future influence and applications of AI, but what we should pay attention to right now are our own values and priorities, because AI is designed by human beings, and design is the externalization of our own desires. “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it… we had better be quite sure that the purpose put into the machine is the purpose which we really desire,” said Norbert Wiener in 1960.

Computer: Harrison Pim, a Data Scientist from Wellcome Trust

The data scientist, representing a computer, talked about his work: he uses machine learning to deal with large volumes of images, texts, and collections quickly, but not to analyze users or visitors, since AI at this stage is parasitic on data. He also pointed out that AI is not designed to replace human beings but to serve as a tool, so the main point is how to use these tools to better serve people’s needs. The talk given by “the computer” reminded me of what I read in What Is Computer Ethics?: we live in a world of conceptual and policy vacuums, and we need to re-examine the regulations of the past, from how we define technology-based concepts to how we create relatively neutral algorithms. It is impossible to create something absolutely neutral, but by creating diversity, the “fundamental vulnerability” can be somewhat relieved.
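To make Pim’s kind of collection-scale work concrete, here is a purely illustrative sketch: clustering catalogue descriptions so that similar items can be reviewed together. The sample records and cluster count are invented; this is not the Wellcome Trust’s actual pipeline.

```python
# Illustrative sketch: group similar catalogue descriptions with TF-IDF
# vectors and k-means. Sample data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

descriptions = [
    "Nineteenth-century anatomical drawings of the human heart",
    "Letters from a field hospital, 1916-1918",
    "Engravings of medicinal plants and herbs",
    "Watercolour studies of botanical specimens",
    "Wartime correspondence between army surgeons",
    "Hand-coloured plates of the circulatory system",
]

# Turn each description into a TF-IDF vector, then group similar vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(descriptions)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, descriptions)):
    print(label, text)
```

Note that nothing here analyzes visitors; the data is the collection itself, which is exactly Pim’s point about AI being parasitic on data.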

Creator: Karen Palmer, a storyteller from the future

The storyteller from the future first warned everyone that technology will take over everything, and that individuals will find themselves lacking privacy and security in the near future if we do nothing. We will be deprived of the right to tell our own stories, and the world will consist of auto-self-surveillance, weaponized technology, and biased networks.

She used the example of the criminal justice system to convince us that bias will be the biggest problem in AI applications. One supporting example was UK police using AI to inform custodial decisions, which can discriminate against the poor. Many of the assumptions made by AI right now are based on flawed theory, yet these assumptions are poised to take over our lives. Thus, she concluded that democratizing AI should be what we fight for in the near future.

What she highlighted was the need to turn the information age into an age of perception. “Those who tell the stories rule the world.” What museums should do is make people more self-aware and create more opportunities to awaken citizens’ insight into social issues.

Q&A session

Q: How to apply machine learning in the field of design?

A: To begin with, the interactions between users and products will be changed by new technologies, but the role of designers should not be overshadowed by AI. We can use AI to produce products or test prototypes faster. In a word, machine learning should serve us; we should not be enslaved by it.

Q: What would justice be like in the future and what is the role of machine learning in it?

A: Neither machine learning nor artificial intelligence can answer questions of future justice. Those concepts should be determined by human beings, not by computer technologies. What may happen in the future is that AI helps create living spaces in which people can better understand cultural issues in museums.

Conclusion

Though we have to admit humans’ dominant role in the applications of AI, there are other problems of surveillance, power, and constraint that cannot be ignored. “In an era of extractivism, the real value of that data is controlled and exploited by the very few at the top of the pyramid,” as Crawford and Joler put it. The event did not predict how regulations could be established; it only pointed out museums’ future role in raising people’s awareness, which I think lacks sufficient support and is somewhat overly optimistic. Still, emphasizing museums’ social responsibility is quite necessary right now, and all museum practitioners should be prepared for the transformation of exhibition modes.

References

Wiener, Norbert (1960). Some Moral and Technical Consequences of Automation. Retrieved from https://www.lesswrong.com/posts/2rWfmahhqASnFcYLr/norbert-wiener-s-paper-some-moral-and-technical-consequences

Moor, James H. (1985). What Is Computer Ethics?, pp. 1-2.

Gillespie, Tarleton (2014). The Relevance of Algorithms, p. 191. Retrieved from https://www.microsoft.com/en-us/research/wp-content/uploads/2014/01/Gillespie_2014_The-Relevance-of-Algorithms.pdf

Crawford, Kate & Joler, Vladan (2018). Anatomy of an AI System. Retrieved from http://www.anatomyof.ai

Emotionally Intelligent Design Workshop

UXPA@Pratt organized the ‘Emotionally Intelligent Design Workshop’ on February 16, 2019. It was conducted by Pamela Pavliscak, and the theme of the workshop was ‘Love.’

The aim of the workshop was to give participants a basic understanding of how emotion-sensitive artificial intelligence works and how to design for it. The session was broken into parts, like a four-course meal. The participants were divided into pairs to mimic the setting of a date. Each pair was given a topic and a situation for which they had to design an emotionally intelligent device.

Each pair conducted an interview in relation to the situation provided to them, where one played the part of the interviewer and the other the interviewee. The situations, or problems, were all in the context of love, such as cohabitation or being single. The devices to be made by the end of the workshop were to solve the problems faced by the participants.

The workshop was well-structured, and all the parts were laid out right at the beginning. All the problems and solutions were personal and unique because they drew on the participants’ own experiences. The workshop showed ways to uncover the emotions behind each design and prototyping step, and discussed methods for designing devices, not only mobile or web-based applications but physical products as well, so that they can read and adapt to human emotions.

The ways emotional intelligence shapes the future of technology were also discussed, with AI able to interact with humans on an emotional level. As Sengers describes it, “The hope is that rather than forcing humans to interface with machines, those machines may learn to interface with us, to present themselves in such a way that they do not drain us of our humanity, but instead themselves become humanized.”

There has always been debate about whether AI is a benefit or a risk to society. But this workshop emphasized how AI and emotional design could be used to impact society in a positive way. The participants were made to explore the world of ‘Emotional Intelligence’ in a much deeper sense, which resulted in creative and adaptive designs at the end.

References:

  • Sengers, Phoebe. (1999). “Practices for a machine culture: a case study of integrating cultural theory and artificial intelligence.” Surfaces VIII.

The Feeling of Technology

What makes us feel?

From a biological perspective, nerves located at integral parts of our bodies help us interpret external stimuli that come into contact with us. The amygdala in our brain is a limbic structure that helps us process emotions. The way our bodies have evolved has made us analog creatures that react well to external stimuli in the natural world, and this in turn has helped us become highly adaptable to earth’s different environments (Norman, 1998). From a technological standpoint, what happens when we begin to build machines to be more like us? What happens when we want our machines to replicate our innate emotions, our psyche, and perform for us?

These were questions I thought of while attending UXPA’s Emotionally Intelligent Design Workshop on February 16th. During this workshop, Pamela Pavliscak, a specialist who studies the relationship between our emotions and technology, asked us to partner up and design an app or piece of technology with human emotion in mind. We were required to use two themes as the basis of our invention; my partner and I had to create a dating app for people who are single. To help us, Pamela offered examples of how the tech industry has already begun using forms of emotion, like our gestures and tone of voice, to implement design features that build programs that react to us. Their reactions to our emotions then prompt the machine to respond in a way that’s human, but not quite.
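To show how simple the core loop of such a design can be, here is a toy sketch of an ‘emotion-aware’ response cycle. Everything in it is hypothetical: real systems like the ones Pamela described infer emotion from trained models over voice, face, and gesture data, not from keyword lists.

```python
# Toy sketch of an "emotion-aware" response loop (entirely hypothetical;
# real systems use trained models over voice, face, and gesture signals,
# not keyword lists).
NEGATIVE = {"sad", "lonely", "anxious", "tired"}
POSITIVE = {"happy", "excited", "great", "relaxed"}

def detect_emotion(utterance: str) -> str:
    """Crudely classify an utterance by counting emotion keywords."""
    words = set(utterance.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def respond(utterance: str) -> str:
    """Adapt the reply to the detected emotional state."""
    mood = detect_emotion(utterance)
    replies = {
        "positive": "Glad to hear it! Want to keep going?",
        "negative": "That sounds hard. Would you like a break?",
        "neutral": "Got it. What would you like to do next?",
    }
    return replies[mood]

print(respond("I feel anxious and tired today"))
```

A real product would swap the keyword counter for trained models, but the structure, detect an emotional state and adapt the response to it, stays the same.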

An example of this is SimSensei, a virtual human interviewer created to help health care professionals make more informed decisions about their patients based on responses to the virtual interviewer. SimSensei is represented by a virtual human named Ellie, who is programmed to conduct interviews that help “…create interactional situations favorable to the automatic assessment of distress indicators, defined as verbal and nonverbal behaviors correlated with depression, anxiety, or post-traumatic stress disorder” (DeVault et al., 2014, p. 1061). Essentially, by creating a virtual helper like Ellie, people at risk of certain mental health disorders can feel they can open up to her and, in turn, receive the right treatment. Patients are often misdiagnosed in the medical field, so I think SimSensei has the right programming to flag warning signs of a particular disorder (keep in mind that it is mainly being used in diagnosing mental health issues).

In my honest opinion, it almost feels like Ellie has been programmed to trick patients into thinking they can trust it. During the course of an interview, the patient is being monitored, and every question Ellie asks is designed to elicit a response from the patient, either through speech or through facial changes. Here is a YouTube video that shows the sort of questions Ellie is programmed to ask during her interviews and the type of facial tracking the machine uses.

Another great example offered to us was Toyota’s 2017 short film presenting a futuristic vision of how some cars may be developed (access it here). The car featured is a concept model, along with the AI named “You-ee” that is built into it. We see the car’s AI offer advice, act as “wing-man”, and, my personal favorite, give positive reinforcement. During the workshop, only the clip from 5:45 to 6:34 was shown. Seen in its entirety, the film gives us a glimpse into what an emotionally intelligent system can do for us. By giving something like “You-ee” human-like qualities (like its ability to make a joke out of Noah’s messy hair), it allows us to view the car as an extension of ourselves. More importantly, I think having a dependable AI is something that will allow individuals to flourish and establish better ties with their human counterparts.

Learning about the different types of emotion-based systems already on the market reminded me of Phoebe Sengers’s remarks on AI as “…autonomous agents, or independent artificial beings” (Sengers, 1999, p. 10). We can, at this point, say that Ellie is a step away from being an autonomous agent. Although SimSensei is currently only used to help doctors diagnose mental health patients, won’t this tool eventually be programmed to perform the diagnosis by itself and then also administer treatment?

After reading Sengers’s article, I now understand how implementing emotion into our programs can push our machines to the next level. Ellie is programmed with a voice and is made to connect with humans so that we can better understand our own species. We will always be building towards the future, but we will always want to keep our connections to one another close. After all, humans are empathetic, and this quality will be incorporated into the things we create. “You-ee” is a perfect example of how the relationship between human and AI can potentially be a harmonious union.

At the end of the workshop, all the groups presented their designs and prototypes. My partner and I decided to create a dating app that required all users to scan a full-body image of themselves and display it on their page. Since I’ve never used a dating app before, I was never subjected to the cruel reality of them. According to my workshop partner, dating apps can make finding a partner relatively uncomfortable and weird. Therefore, we believed that having your entire self displayed is a great way to circumvent feelings of discomfort and dishonesty and to create a more open dating world. But you may ask at this point: “Where’s the portion of your app’s design that makes your prototype emotionally intelligent?”

And I will answer: “We’re not at that point yet”.

References:

  • DeVault, David et al. (2014). SimSensei Kiosk: A Virtual Human Interviewer for Healthcare Decision Support. 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2014. 2. 1061-1068.
  • Norman, Don A. (1998). The Invisible Computer: Why Good Products Can Fail, the Personal Computer is So Complex, and Information Appliances are the Solution. MIT Press. Chapter 7: Being Analog
  • Sengers, Phoebe. (1999). “Practices for a machine culture: a case study of integrating cultural theory and artificial intelligence.” Surfaces VIII.


FACETS Un-Conference on Art & Artificial Intelligence


On Saturday, October 28, 2017, the Goethe-Institut in New York City hosted the FACETS Un-Conference on Art & Artificial Intelligence. This year’s topics ranged from ethical machine learning to human-AI communication to data security. I attended the keynote panel discussions on (1) the future of AI research and practice and (2) current issues at the intersection of AI and data security. What I found from these discussions is that there exists a lack of regulation over those who profit from the work of technologists and a general overconfidence in the information that an intelligent machine produces. This brief report will explore the issues raised in these panel discussions in relation to ongoing conversations in the field of information science.


What is FACETS?

FACETS is an “interdisciplinary creative coding, interactive art, and videogames un-conference” that focuses on underrepresented individuals in STEM fields and art (http://www.facets-con.com/). This year’s theme was Art and AI. I attended the keynote panel discussions, “A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?” and “AI and Data Security.” The first of these was moderated by Kenric McDowell (head of the Art + Machine Intelligence group at Google Research), joined by Ida Momennejad (postdoctoral fellow at Princeton University who uses AI to research human neuropsychology) and Madeleine Elish (cultural anthropology researcher at the Data and Society Institute who focuses on cultural responses to automation and AI). The second was moderated by Caroline Sinders (machine-learning designer and digital anthropologist at Wikimedia who focuses on online harassment), joined by Kate Crawford (co-founder/director of the AI Now Research Institute and 2016 co-chair of the Obama White House symposium on the impacts of AI), Jer Thorp (former Data Artist in Residence at the New York Times and co-founder of The Office for Creative Research), and Chelsea Manning (former intelligence analyst in the U.S. Army who exposed human rights violations committed by the U.S. military in Iraq). The range of voices in these two discussions led to intriguing conversations between diverse perspectives on the current and near-future states of AI.


A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?

This panel started with a series of short presentations followed by an open discussion. Presentations covered AI-produced poetry and images, forms of human-like memory and agency in AI models, and potential moral crises of virtually autonomous AI (e.g. self-driving cars). The discussion that followed focused primarily on the concept of human-centrism in technology since the rise of agriculture, and on who owns (i.e. holds the copyright over) or is responsible for the actions of seemingly autonomous AI.

While the discussion on human-centrism in technology was interesting, it reached a fairly quick consensus: human-centrism may be useful to keep technology in check, but what we define as human is often discriminatory within our own species (e.g. racism and sexism), so an alternative focus on the results of agency may be more beneficial to the work of ethical technologists. The discussion on the moral ownership of AI actions was a bit more novel, since it centered on what may be called the “moral crumple zone” (a phrase mentioned by Madeleine Elish during her presentation), which exists when the behavior of an artificial intelligence defies the predictions of its creators because they do not understand all of the possible outcomes of the machine-learning algorithms they have created. Current legislation firmly places the blame for AI actions on humans, i.e. either the creator or the user (through a contract, such as agreeing to terms and conditions). The moral crumple zone, however, is becoming more apparent as AI becomes increasingly prevalent in our contemporary military and consumer landscapes, while life-and-death decisions are placed in the “hands” of technology that even its creators do not fully understand. The panelists expressed some fear and uncertainty about what new legislation could form in the U.S. under the current presidential administration, given business pressures and military awareness of the moral crumple zone.


AI and Data Security

The panel discussion on AI and Data Security remained solely a discussion; no presentations were needed to dive right into the pressing issues. After the panelists’ introductions, and prompted by the curiosity of the other panelists and the moderator, Chelsea Manning started off the discussion by running through her top concerns about uses of AI in the military. She illustrated how machine learning used in the field is largely inaccurate: 50% probabilities are usually enough, and even 20% is considered enough in an actual war zone. Even more disturbing than these inaccuracies, however, is how predictions in feedback loops are often interpreted to fit a specific narrative. Intelligence analysts might focus on specific predictions that would impress their commanders, or create stories or connections between predictions where none exist (as can the commanders themselves). In Manning’s experience, the breakdown of a prediction of a potentially hostile target was often misunderstood: explaining to a commander that a target predicted by an AI is merely the product of a specific, and low, degree of certainty based on potentially faulty training sets and field tests is likely to be ignored when another analyst can simply point out a target to the same commander. This point was expanded on by other panelists, who expressed concern over how mass media outlets currently report on studies that utilize AI. One example was the recent virality of a study on “AI gaydar,” reported to determine whether someone is “gay or straight” using only facial recognition technology. The panelists were frustrated by how many articles failed to stress the experimental design of the study, which used a limited pool of images from a dating site and was constrained by the site’s inadequate dichotomy of human sexuality. Furthermore, the nature of these images made the algorithms rely on visual stereotypes, a problem in facial recognition software that could help preserve a status quo that would otherwise change, while boldly assuming that physical traits can reliably expose interiority.
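Manning’s point about low probabilities is worth working through with numbers. Here is a small worked sketch, with invented figures, purely for illustration:

```python
# Worked example with invented numbers: if a model's flags are right only
# 20% of the time, most flagged "targets" are false alarms.
flagged = 100        # hypothetical number of targets the model flags
precision = 0.20     # Manning's "even 20% is considered enough"

true_positives = flagged * precision
false_positives = flagged - true_positives

print(f"Of {flagged} flagged targets, about {true_positives:.0f} are real "
      f"and about {false_positives:.0f} are false alarms.")
# Of 100 flagged targets, about 20 are real and about 80 are false alarms.
```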

It seems there is an increasing drive to trust unreliable conclusions from AI in our current economic and political climate, especially when so few regulations and standards are placed on AI research and on the businesses and military agencies that utilize this technology. Panelists also expressed concern over the proliferation of black-box algorithms (i.e. algorithms whose inputs and outputs are considered without an understanding of their internal workings) in public agencies, since they are at odds with due process.

The panelists explored some solutions to their concerns:

  1. Temperature checks during coding to gauge the impact of work, especially for those who work in media sharing (e.g. Facebook timeline engineers) and search engines.
  2. Culture/education change for AI creators so that they are less separated from the real-world consequences of their work.
  3. Paying closer attention to the quality of training sets used in machine learning and putting a greater emphasis on communicating the quality of these training sets when sharing results (see the sketch after this list).
  4. More regulation (through legislation) on AI research in private and public sectors.
  5. More restrictions on what data businesses can collect and store, and holding businesses accountable for that data’s security (note: the Equifax data breach).
  6. Promoting mass concern about the use of AI, especially in business, with regards to personal data, and in the military.
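As a toy illustration of point 3 (the labels below are hypothetical, not from any real dataset), even a few lines of auditing code can surface the kind of training-set skew the panelists wanted creators to disclose alongside their results:

```python
# Report the class balance of a (hypothetical) training set before sharing
# any results built on it.
from collections import Counter

labels = ["hostile", "benign", "benign", "benign", "hostile",
          "benign", "benign", "benign", "benign", "benign"]

counts = Counter(labels)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n}/{total} ({n / total:.0%})")
# An 80/20 split like this matters: a model can reach 80% accuracy
# by always predicting "benign".
```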


What’s Next?

The FACETS Un-Conference on Art & Artificial Intelligence exposed a lot of dire issues present in the use and creation of contemporary artificial intelligence. However, the panelists’ potential solutions during the discussion periods of both panels were quite classic. The critical examination of information as it is collected, reconfigured, and then molded into new information should not be taken for granted in our current era of machine-learning technology. While the panelists shared how novel these situations are because of the incredible power of the new technologies they have worked with, their solutions (such as paying close attention to training-set data and expressing results clearly and accurately) are very familiar to those in traditional information science professions. There exists a divide between some technologists and those in the humanities and traditional information professions that allows creators and users of AI to skip the critical steps necessary to ensure a safe, secure, and accurate method of archiving, producing, and sharing information. The panelists at FACETS did an exemplary job of exposing and exploring this divide, so I recommend that readers of this post who are interested in the intersection of technology, information, and art check out the next event when it is announced. What I learned from this event makes the mission of Pratt’s School of Information seem even more urgent and timely: we need more information professionals who can critically examine how new technologies change our information landscape in order to better prepare us for the challenges that are already arriving.


Works Cited

FACETS. http://www.facets-con.com/. Accessed 28 Oct. 2017.

McDowell, Kenric, et al. “A (Realistic) Future of AI – What Can/Is AI Research Really Aiming for in the Near Future?” FACETS Un-Conference on Art & Artificial Intelligence, 28 Oct. 2017, Goethe-Institut, New York, NY. Keynote Panel Discussion.

Sinders, Caroline, et al. “AI and Data Security.” FACETS Un-Conference on Art & Artificial Intelligence, 28 Oct. 2017, Goethe-Institut, New York, NY. Keynote Panel Discussion.