Observation at the Whitney Museum of American Art – Programmed: Rules, Codes, and Choreographies in Art, 1965–2018

(Photo credits: Christopher Ku, Instagram: cawriskoo)

On March 29, 2019, I visited this exhibit and was transfixed by the main installation, a floor-to-ceiling panel of television sets. It is worth noting that when I visited, the museum was also showing the final weekend of its Andy Warhol exhibit, and an organized protest was taking place at 7 pm during the museum’s pay-as-you-wish period. The museum was jam-packed with ticket buyers, members, and security staff. Although it was a high-capacity evening, my access to the Programmed exhibit was smooth and calm.

The exhibit celebrates art made through programmable code (or instructions) and shows how that code can be used to manipulate the artist’s medium, whether a computer program or an image sequence. All of the pieces were created with various types of computer programs, which established each work’s structure and color. The works are grouped into one of two sections: “Rule, Instruction, Algorithm,” which focuses on rule-based conceptual art practices that predate digital art technologies, and “Signal, Sequence, Resolution,” which focuses on the coding and manipulation of the moving image. Walking through the exhibit, I found it hard to differentiate between the two groups, since nearly every piece has some kind of tech-based manipulation applied to it.

This exhibit is open and full of content. Navigating it can be overwhelming: music plays from the main attraction, Nam June Paik’s Fin de Siècle II (pictured above), and from the installations around it, and the sounds of visitors and employees add to the noise. I also noticed that many people gravitated toward the multimedia content more than toward pieces that did not openly appear to have a tech component.

Photo credits: Whitney Museum of American Art

Tilted Plane (pictured above) is a great example of one installation receiving more “people time” than others. I think a big part of why it attracted more visitors is how “instagrammable” it looked. Jim Campbell created the piece in 2011 to project a two-dimensional image into three-dimensional space. To do this, he placed modified LED lights at specific points in the space so that, from one vantage, they mimic the pixels of a low-resolution display. Viewers enter at the spot Whitney staff have sectioned off as the entrance, from which the initial image of birds taking off and landing is visible. As viewers move along, the image becomes distorted and random, creating the illusion that they are no longer looking at a specific thing but at something abstract.
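To make the low-resolution idea concrete, here is a minimal sketch of my own (not Campbell’s actual process) of how an image might be reduced to a coarse grid of brightness values, one value per “LED.” The grid size, the file name, and the use of the Pillow library are assumptions for illustration only.

```python
# A minimal sketch (not Jim Campbell's actual process): downsample an image
# to a coarse grid of brightness values, one value per "LED", the way a
# low-resolution display approximates a picture with very few pixels.
from PIL import Image  # assumes the Pillow library is installed

LED_COLS, LED_ROWS = 32, 18  # hypothetical grid size

def image_to_led_grid(path: str) -> list[list[int]]:
    """Return a LED_ROWS x LED_COLS grid of 0-255 brightness values."""
    img = Image.open(path).convert("L")     # convert to grayscale
    img = img.resize((LED_COLS, LED_ROWS))  # one pixel per LED
    pixels = list(img.getdata())
    return [pixels[r * LED_COLS:(r + 1) * LED_COLS] for r in range(LED_ROWS)]

if __name__ == "__main__":
    # "birds_frame.jpg" is a hypothetical frame from the bird footage
    grid = image_to_led_grid("birds_frame.jpg")
    print(f"{LED_ROWS * LED_COLS} LEDs, first row: {grid[0][:8]} ...")
```

Seen from the right spot, a grid this coarse still reads as birds in flight; step to the side and it dissolves into points of light, which is roughly the effect the installation produces.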

Being immersed in Tilted Plane not only gave me a serious case of dizziness (a warning sign about this was posted outside the installation) but also immediately reminded me of Bates (2006) and her discussion of natural and represented information. According to Bates, these forms of information allow for the organization of knowledge and its representation through other means. Applied to how the pieces in this exhibit were created, I can’t help but think about the process by which each artist came up with their initial concept (encoded information), their process of creating the piece (embodied information), and the completion or exhibiting of the piece (exosomatic information).

Another piece from the exhibit that I found interesting is The Interaction of Coloreds by Mendi + Keith Obadike. This installation was not as popular as Tilted Plane, but it raises important themes that deserve a closer look. The interactive piece invites viewers to explore the conceptual website created by the artists to see how skin color has affected online commerce and ad targeting. However, as the gallery attendant explained to me when I had trouble figuring out how to use the installation, the website built by the artists is not updated in real time and tends to lag. Their website can be accessed here.

Their “product” is a system that purports to help companies judge their customers or employees based on their hexadecimal color code (the HTML representation of color). To add an interactive component, the website includes a link to a Google Doc questionnaire that viewers fill out so a hex code can be compiled for them. Compiling this information is no different from Big Data firms collecting information from their users to better target ads, whether to sell a product or to sway a vote toward a specific party or person. On the darker side, the systems offered to us in the real world sometimes limit our access to important resources. Costanza-Chock (2018) describes these design injustices, in which dominant groups oppress underrepresented people who lack access to the resources that would help them voice their concerns.
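To make the hex-code idea concrete, here is a minimal sketch, entirely my own and not the Obadikes’ actual survey logic, of how three 0–255 color channel values could be packed into the kind of hexadecimal color code HTML uses. The sample channel values are hypothetical.

```python
# A minimal sketch (not the Obadikes' actual logic): pack three 0-255
# channel values into an HTML-style hexadecimal color code, where each
# pair of hex digits encodes the red, green, and blue channels.
def to_hex_color(red: int, green: int, blue: int) -> str:
    """Return an HTML hex color string such as #8d5524."""
    for channel in (red, green, blue):
        if not 0 <= channel <= 255:
            raise ValueError("each channel must be between 0 and 255")
    return f"#{red:02x}{green:02x}{blue:02x}"

# Hypothetical questionnaire responses already mapped to channel values.
print(to_hex_color(141, 85, 36))    # -> #8d5524
print(to_hex_color(224, 172, 105))  # -> #e0ac69
```

Reducing a person to a six-character code like this is exactly the flattening gesture the piece satirizes, which is what makes the “product” framing so unsettling.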

This installation reminds me of Sephora’s Color iQ, a “beauty service” tool that scans the surface of your skin to match makeup users with foundations appropriate for their skin tone and color. Each Sephora customer who has used the service is assigned a code of digits and letters linked to specific shades in the brands the store carries. From a consumer’s point of view, the tool is useful, since it gives me a curated look at products from brands that are likely to work for me. But from an information science student’s point of view, I wonder how that information has been used since.

Overall, Programmed is an exciting exhibition that looks at alternate forms of art through digital manipulation. While the pieces with the greatest aesthetic appeal drew the most attention from visitors, others carried more alluring underlying themes that provoked viewers to look at them more closely.

Tiffany Chan, Info 601 – 01

References:

  • Bates, Marcia J. (2006). “Fundamental forms of information.” Journal of the American Society for Information Science and Technology 57(8): 1033–1045.

  • Costanza-Chock, Sasha. (2018). “Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice.” Proceedings of the Design Research Society 2018.

The Feeling of Technology

What makes us feel?

From a biological perspective, nerves located throughout our bodies help us interpret the external stimuli that come into contact with us. The amygdala, a limbic structure in the brain, helps us process emotion, a capacity central to who we are. The way our bodies have evolved has made us analog creatures that react well to external stimuli in the natural world, which in turn has helped us become highly adaptable to Earth’s different environments (Norman, 1998). From a technological standpoint, what happens when we begin to build machines to be more like us? What happens when we want our machines to replicate our innate emotions, or our psyche, and perform for us?

These were the questions on my mind when I attended UXPA’s Emotionally Intelligent Design Workshop on February 16th. During the workshop, Pamela Pavliscak, a specialist who studies the relationship between our emotions and technology, asked us to partner up and design an app or piece of technology with human emotion in mind, using two assigned themes as the basis of our invention. My partner and I had to create a dating app for people who are single. To help us, Pamela offered examples of how the tech industry has already begun using signals of emotion, like our gestures and tone of voice, to build programs that react to us. Those reactions then prompt the machine to respond in a way that is human, but not quite.

An example of this is SimSensei, a virtual human interviewer created to help health care professionals make more informed decisions about their patients based on the patients’ responses to the virtual interviewer. SimSensei is represented by a virtual human named Ellie, who is programmed to conduct interviews that help “…create interactional situations favorable to the automatic assessment of distress indicators, defined as verbal and nonverbal behaviors correlated with depression, anxiety, or post-traumatic stress disorder” (DeVault et al., 2014, p. 1061). Essentially, by creating a virtual helper like Ellie, people at risk of certain mental health disorders may feel they can open up to her and, in turn, receive the right treatment. Patients are often misdiagnosed in the medical field, so I think SimSensei’s programming is well suited to flagging warning signs of a particular disorder (keeping in mind that it is mainly used in assessing mental health issues).

In my honest opinion, it almost feels like Ellie has been programmed to trick patients into thinking they can trust it. Over the course of an interview, the patient is being monitored, and every question Ellie asks is designed to elicit a response from the patient, either through speech or through changes in facial expression. Here is a YouTube video that shows the sort of questions Ellie is programmed to ask during her interviews and the type of facial tracking the system uses.
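To make the idea of “distress indicators” concrete, here is a toy sketch of how verbal and nonverbal measurements might be combined into a single flag for a clinician. The feature names, weights, and threshold below are entirely hypothetical and are not SimSensei’s actual model.

```python
# A toy illustration only: the feature names, weights, and threshold are
# hypothetical and do not come from SimSensei. The sketch shows the general
# idea of combining verbal and nonverbal measurements into one flag.
HYPOTHETICAL_WEIGHTS = {
    "long_pause_rate": 0.4,       # verbal: long pauses before answering
    "gaze_aversion_ratio": 0.3,   # nonverbal: fraction of time looking away
    "negative_word_ratio": 0.3,   # verbal: share of negative words used
}

def distress_score(features: dict[str, float]) -> float:
    """Weighted sum of normalized (0-1) behavioral features."""
    return sum(HYPOTHETICAL_WEIGHTS[name] * features.get(name, 0.0)
               for name in HYPOTHETICAL_WEIGHTS)

session = {"long_pause_rate": 0.7, "gaze_aversion_ratio": 0.5, "negative_word_ratio": 0.4}
score = distress_score(session)
print(f"score={score:.2f}", "-> flag for the clinician" if score > 0.5 else "-> no flag")
```

Even in this toy form, the score is only a prompt for a human clinician to look closer, which is how the researchers frame SimSensei’s role as well.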

Another great example offered to us is Toyota’s 2017 short film presenting a futuristic vision of how some cars may be developed (access it here). The car featured in the film is a concept model, along with the AI, named “Yui” (pronounced “you-ee”), that is built into it. We see the car’s AI offer advice, act as “wing-man,” and, my personal favorite, give positive reinforcement. During the workshop, only the clip from 5:45 to 6:34 was shown; seen in its entirety, the film offers a glimpse of what an emotionally intelligent system can do for us. Giving something like Yui human-like qualities (like its ability to joke about Noah’s messy hair) allows us to view the car as an extension of ourselves. More importantly, I think having a dependable AI could allow individuals to flourish and establish better ties with their human counterparts.

Learning about the different types of emotion-based systems already on the market reminded me of Phoebe Sengers’s remarks on AI as “…autonomous agents, or independent artificial beings” (Sengers, 1999, p. 10). We can, at this point, say that Ellie is a step away from being an autonomous agent. Although SimSensei is currently used only to help doctors assess mental health patients, won’t this tool eventually be programmed to perform the diagnosis by itself and then administer treatment as well?

After reading Sengers’s article, I better understand how implementing emotion in our programs can push our machines to the next level. Ellie is programmed with a voice and made to connect with humans so that we can better understand our own species. We will always be building toward the future, but we also want to keep our connections to one another close. After all, humans are empathetic, and that quality will be incorporated into the things we create. “Yui” is a perfect example of how the relationship between human and AI could be a harmonious union.

At the end of the workshop, all the groups presented their designs and prototypes. My partner and I designed a dating app that required every user to scan a full-body image of themselves and display it on their profile. Since I have never used a dating app, I had never been subjected to their crueler realities; according to my workshop partner, dating apps can make finding a partner fairly uncomfortable and weird. By building in a way to circumvent that feeling of discomfort and dishonesty, we believed that displaying your entire self is a great way to create a more open dating world. But you may ask at this point: “Where is the part of your app’s design that makes your prototype emotionally intelligent?”

And I will answer: “We’re not at that point yet”.

References:

  • DeVault, David, et al. (2014). “SimSensei Kiosk: A Virtual Human Interviewer for Healthcare Decision Support.” Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014): 1061–1068.
  • Norman, Don A. (1998). The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. MIT Press. Chapter 7: “Being Analog.”
  • Sengers, Phoebe. (1999). “Practices for a machine culture: a case study of integrating cultural theory and artificial intelligence.” Surfaces VIII.