On Saturday, February 16, 2019, about thirty Pratt students attended a three-hour workshop called "Emotionally Intelligent Design" at the Pratt Manhattan Campus, hosted by the school's chapter of the User Experience Professionals Association (UXPA). The event was led by Pamela Pavliscak: founder of Change Sciences, a design research studio focused on emotionally intelligent design; author of the fall 2018 book Emotionally Intelligent Design (O'Reilly); and a current faculty member at Pratt Institute.
According to the invitation posted on the Pratt School of Information listserv, the objectives of the workshop were to teach students how emotionally sensitive AI tools work, as well as methods to prototype, test, and evolve experiences with emotional intelligence. During the workshop, Pamela cited a projection that emotion-centered AI products and tools will be a $50 billion industry by 2025, integrated into most industries. Anecdotally, we are already seeing major trends in this direction, for example in online dating, facial recognition, voice assistants, and chatbots. Yvonne Rogers' "New Theoretical Approaches for Human-Computer Interaction" supports this claim, explaining that the rapid pace of technological development creates new opportunities to augment, extend, and support user experiences, interactions, and communications. Designers and technologists therefore have new methods and practices for conceptualizing and evaluating a fuller spectrum of interactive products: ones that support a broader range of goals (e.g., aesthetically pleasing, motivating, fun) and evoke an emotional response from users or participants.
The workshop alternated between short presentations on technologies imbued with elements of emotional intelligence and hands-on activities completed in groups of two or three. Examples of the technologies introduced included: social robots like Pepper; healthcare support tools like SimSensei, which use facial reading and other biomarkers to sense emotion; Crystal Knows, which uses social media and text (e.g., email) data to aid communication with coworkers, job candidates, and others; Affectiva, which enables facial emotion analysis in context; and the Toyota Concept-i car, which "anticipates" users' needs to create a better driving and riding experience.
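To make the "facial reading" thread running through these tools more concrete, here is a minimal sketch of what an emotion-from-face pipeline might look like in Python. It assumes OpenCV's bundled face detector; classify_emotion is a hypothetical stand-in for a trained model like the ones behind SimSensei or Affectiva, not any vendor's actual API.

```python
import cv2

def classify_emotion(face_pixels):
    """Hypothetical stand-in for a trained emotion model; a real
    implementation would run a neural network over the face crop."""
    return {"joy": 0.0, "sadness": 0.0, "anger": 0.0, "surprise": 0.0}

def analyze_frame(image_path):
    """Detect faces in an image and score each one for emotion."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # OpenCV ships a classic Haar cascade for frontal-face detection
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [classify_emotion(gray[y:y + h, x:x + w])
            for (x, y, w, h) in faces]

if __name__ == "__main__":
    print(analyze_frame("viewer.jpg"))
```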
We began with an icebreaker, asking each other some of the well-known "questions to fall in love" in our small groups. Once acquainted, each group was assigned a specific context (in our case, conflict) and a challenge (building empathy) from which to operate and ideate throughout the remaining activities. My partner and I completed an interview exercise in which we discussed a specific conflict. The scenario my partner shared was that she and a friend had tried to find an apartment together while the friend was based in New York and she was out of the city for the summer. Because only the friend could view apartments in person, communicating about desired features, and being completely transparent about priorities, proved difficult. The situation became so tense and uncertain that they ultimately decided not to find an apartment together.
This scenario framed our further explorations as we sketched and visualized what happened to the relationship over time and what sensory experiences were involved. By the end of the prototyping, my partner and I had sketched a mobile app that used both the rear-facing and self-view cameras with embedded sentiment analysis, so that a remote person could view a physical space while the person showing the space got a sense of how the viewer felt about it. In our pitch to the other groups, we argued this type of app could help in a number of scenarios: roommate to roommate, realtor to potential tenants, venue managers to clients, and more. It could save time, money, and hassle while offering communication tools and insights that help people make better decisions and become better communicators.
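As a thought experiment only, the core loop of our concept might look something like the sketch below, which would run on the remote viewer's phone: classify the viewer's reaction from the self-view camera and stream a summary score to the person showing the space. Every name here (EmotionScore, classify_reaction, send_to_host) is hypothetical, and the valence weighting is an illustrative assumption rather than an established metric.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class EmotionScore:
    """Hypothetical message streamed from the remote viewer's
    phone to the device of the person showing the space."""
    timestamp: float
    room_label: str   # e.g., "kitchen"; set by the person giving the tour
    valence: float    # -1.0 (strong dislike) .. 1.0 (strong like)

def classify_reaction(frame):
    """Stand-in for a facial-emotion model run on a self-view
    camera frame; returns hypothetical per-emotion scores."""
    return {"joy": 0.5, "surprise": 0.1, "sadness": 0.1, "anger": 0.0}

def valence(scores):
    # Collapse per-emotion scores into one like/dislike signal.
    # This weighting is an illustrative assumption, not a standard.
    return (scores.get("joy", 0) + scores.get("surprise", 0)
            - scores.get("sadness", 0) - scores.get("anger", 0))

def send_to_host(score: EmotionScore):
    # Stand-in for a network call; a real app might send this over
    # a WebRTC data channel alongside the video stream.
    print(json.dumps(asdict(score)))

def viewer_loop(self_view_frames, room_label):
    """Runs on the remote viewer's phone during a video tour."""
    for frame in self_view_frames:
        scores = classify_reaction(frame)
        send_to_host(EmotionScore(time.time(), room_label, valence(scores)))

if __name__ == "__main__":
    viewer_loop([object()], "kitchen")  # demo with one dummy frame
```

One design question a real version would have to answer is how much of the viewer's reaction to share: streaming a single valence number is less invasive than sending raw emotion labels or video of the viewer's face.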
My main takeaway from these somewhat abstract activities was to keep the people and the context centered in every part of the process, and to allow myself to be surprised while discovering solutions. This conclusion reminds me of Don Norman's "Being Analog" essay, in which he describes a false dilemma: we can keep trying to make people more like computers (precise, logical, and unemotional), or we can make computers more like humans (creative, resourceful, attentive, and able to change). In fact, humans and computers can elevate one another, ultimately helping humans deal with the ever-evolving complexity of life.
References
Norman, Don A. (1998). The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. MIT Press. Chapter 7: "Being Analog." https://jnd.org/being_analog/
Rogers, Yvonne. (2004). "New theoretical approaches for human-computer interaction." Annual Review of Information Science and Technology 38: 87-143.
Sengers, Phoebe. (1999). “Practices for a machine culture: a case study of integrating cultural theory and artificial intelligence.” Surfaces VIII.