Event: World Interaction Day hosted by IxDA

The Interaction Design Association (IxDA) celebrated World Interaction Day on September 24, 2019. World Interaction Day is an annual event hosted at locations around the globe where designers come together to show how interaction design improves the human condition. Presented in partnership with Adobe, this year's theme was Trust and Responsibility. I attended the event hosted by IxDA in New York.

The event kicked off with an introduction by Scott Belsky, the Chief Product Officer at Adobe, who spoke about how good interactions build trust. As designers, we influence a crucial part of the user’s experience, so we need to take responsibility for the interfaces we create. Users trust the design: whenever you click a button on a website you expect a result, and when you don’t get what you expect, you feel deceived. People are more likely to judge the trustworthiness of a website by its design than by reading its privacy policy. How likely are you to close a website because of the amount of clickbait on the home screen? Have you ever felt cheated when an ad doesn’t look like an ad? Belsky says that as designers our core obligation is to be the voice of the users: to understand them well, especially their needs and concerns, and to create interfaces that address both.

Mark Webster, Director of Product at Adobe, spoke about trust and responsibility in voice design. The adoption of voice technology in virtual assistants is growing rapidly, especially with the emergence of Alexa and Google Home. Most users of voice-enabled technology (94%) claim that voice improves their quality of life, yet nearly half of them (around 49%) find voice technology unintuitive. This reminded me of the last time I used Alexa and asked her to find the English equivalent of ‘dhania’, and she responded with “I don’t understand”. Did I not articulate clearly, or was she unable to find what I asked for? Webster talked about how designers can play an important role in eliminating problems like these. Voice technology can improve people’s lives in many ways: it can give people who cannot read access to information, it can help the elderly, and above all it can help people with motor disabilities in their day-to-day activities. The problem is that because voice interaction is so unintuitive, it creates uncertainty around all of these uses. How is voice going to help people with motor disabilities if the user doesn’t know what the virtual assistant has understood? Voice processing works in three parts: the technology senses speech, the speech is interpreted by various algorithms (natural language processing), and then an output is delivered. Current technology implements the first and last parts fairly efficiently, but natural language processing still has many limitations. This is where the designer’s role is crucial: a designer who understands these limitations, combined with knowledge of the user’s intent, can decide how to make the experience more intuitive and thus enable users to build trust in these technologies.
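The three-part pipeline Webster described can be sketched as a tiny program. This is purely an illustrative sketch, not Adobe’s or Amazon’s actual implementation: the function names, the toy keyword “model”, and the canned responses are all my assumptions, but the failure mode in the middle stage mirrors the ‘dhania’ exchange above.

```python
from typing import Optional

def sense(audio: str) -> str:
    """Stage 1: speech capture. Stubbed here as already-transcribed text."""
    return audio.strip().lower()

def interpret(utterance: str) -> Optional[str]:
    """Stage 2: the fragile middle step. A toy intent matcher that,
    like real assistants, fails on words outside its vocabulary."""
    known = {"weather": "weather_report", "timer": "set_timer"}
    for keyword, intent in known.items():
        if keyword in utterance:
            return intent
    return None  # an out-of-vocabulary word like 'dhania' falls through here

def respond(intent: Optional[str]) -> str:
    """Stage 3: output delivery."""
    if intent is None:
        return "I don't understand"  # the uncertainty users are left with
    return f"Running {intent}"

def assistant(audio: str) -> str:
    # sense -> interpret -> respond, the three parts from the talk
    return respond(interpret(sense(audio)))
```

Even in this sketch, stages 1 and 3 are trivial while stage 2 carries all the risk, which is why `assistant("what is dhania in english")` dead-ends at “I don’t understand”.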

What do we really mean when we say ethics? Dr. Molly Wright Steenson, Senior Associate Dean for Research in the College of Fine Arts at Carnegie Mellon University, elaborated on how designers can incorporate ethics into their design process. Since designers are good at investigating the context of a problem and use human-centric methods to understand the needs of users and stakeholders, they should be directly involved with data: framing the design problem is not the designer’s only task, because how and what data to collect and use is also a design question. Dr. Steenson emphasized that the ethics framework should stop looking like a checklist and become a vital part of the product life cycle. This reminded me of the PERCS chart we read earlier, which discussed the ethics of fieldwork and which, in my view, can be applied to the design process. Sasha Costanza-Chock’s reading Design Justice likewise argues that designers should work toward a more equitable distribution of a design’s benefits and burdens and be aware of the cultural and traditional implications of their designs.

This talk was followed by Milena Pribic, Advisory Designer of Artificial Intelligence Design at IBM, who addressed the issues of ethics in AI. She talked about what it means to build a healthy relationship between two parties when one of them is an AI, and presented a framework for AI ethics used at IBM that designers can incorporate while designing interactions for AI. Trust and transparency are central when designing for AI, and she offered guidelines on how to handle client data and insights to ensure they are protected. They include:

  • The purpose of AI is to augment human intelligence
  • Data and insights belong to their creator
  • New technology, including AI systems, must be transparent and explainable

The event concluded with questions from the audience on the topics discussed. It was eye-opening for me because I realized how many factors I should consider as a designer: design is not just about solving a problem but also about considering its impact. Am I protecting my client’s and users’ information? Am I being inclusive of the different communities affected by my design? Am I able to build trust with the users of my design? Have I successfully addressed the needs as well as the concerns of my users? This event made me realize what my responsibility is as a designer and what measures I should take to ensure my designs are trustworthy.


  1. Costanza-Chock, S. (2018, June 28). Design Justice: Towards an intersectional feminist framework for design theory and practice. Presented at the Design Research Society Conference 2018. https://doi.org/10.21606/drs.2018.679
  2. Pribić, M. (2018, September 6). Everyday Ethics for Artificial Intelligence. Retrieved October 15, 2019, from Medium website: https://medium.com/design-ibm/everyday-ethics-for-artificial-intelligence-75e173a9d8e8
  3. What We Really Mean When We Say “Ethics”—Molly Wright Steenson | Open Transcripts. (n.d.). Retrieved October 11, 2019, from http://opentranscripts.org/transcript/what-we-really-mean-when-we-say-ethics/
  4. How Good Interaction Design Builds Trust. (2019, September 18). Retrieved October 11, 2019, from Adobe Blog website: https://theblog.adobe.com/how-good-interaction-design-builds-trust/
  5. IBM’S Principles for Data Trust and Transparency. (2018, May 30). Retrieved October 15, 2019, from THINKPolicy website: https://www.ibm.com/blogs/policy/trust-principles/
  6. Voice Assistant Statistics & Trends, 2019—UX Survey. (2019, July 22). Retrieved October 11, 2019, from Adobe Blog website: https://theblog.adobe.com/voice-assistant-statistics-trends-2019

Emotionally Intelligent Design Workshop

UXPA@Pratt organised the ‘Emotionally Intelligent Design Workshop’ on 16 February 2019. It was conducted by Pamela Pavliscak, and the theme of the workshop was ‘Love’.

The motive of the workshop was to give participants a basic understanding of how emotion-sensitive artificial intelligence works and how to design for it. The session was broken into parts like a four-course meal. The participants were divided into pairs to mimic the setting of a date, and each pair was given a topic and a situation for which they had to design an emotionally intelligent device.

Each pair conducted an interview relating to the situation provided to them, with one playing the part of the interviewer and the other the interviewee. The situations, or problems, were all within the context of love, such as cohabitation or being single. The devices to be made by the end of the workshop were to solve the problems faced by the participants.

The workshop was well structured, and all of its parts were outlined right at the beginning. The problems and solutions were personal and unique because they drew on the participants’ own experiences. The workshop showed ways to uncover the emotions behind each design and prototyping step, and discussed methods for designing devices, not only mobile or web-based applications but physical products as well, so that they can read and adapt to human emotions.

The ways emotional intelligence shapes the future of technology were discussed, where AI would be able to interact with humans on an emotional level and as Sengers describes it “The hope is that rather than forcing humans to interface with machines, those machines may learn to interface with us, to present themselves in such a way that they do not drain us of our humanity, but instead themselves become humanized.”

There has always been a debate about whether AI is a benefit or a risk to society. This workshop emphasized how AI and emotional design could be used to impact society in a positive way. The participants were made to explore the world of ‘Emotional Intelligence’ in a much deeper sense, which resulted in creative and adaptive designs by the end.


  • Sengers, Phoebe. (1999). “Practices for a machine culture: a case study of integrating cultural theory and artificial intelligence.” 

Webinar: How Product Insights uses UserTesting

UserTesting is a platform used to receive rapid customer feedback across different interfaces. UserTesting is most commonly known for remote moderated and unmoderated usability studies. I recently watched a webinar by UserTesting titled “How Product Insights uses UserTesting” which essentially explained how their product insights team uses their own platform to scale research within teams as well as within the entire company. 

The webinar was essentially broken into three points: data science and user experience (UX), access to customers, and enabling others. I was particularly interested in the first segment, the relationship between data science and UX, which had several connections to topics we’ve discussed in class. The speaker, Josh Kunz, a senior UX researcher at UserTesting, placed most of the emphasis on user experience. He explained that they attempt to connect data science and UX research in order to ask and answer impactful questions that ultimately affect human-centered design. The discussion touched on research methods, human-computer interaction, and human-centered design.

It was interesting to see the different research approaches the UserTesting team takes, in relation to the methods we’ve discussed in class. The speaker did not make an explicit distinction between qualitative and quantitative research approaches, but it was apparent from his explanations. He elaborated on the UX research process, which greatly resembled a qualitative approach, using interviews and focus groups. Additionally, he briefly discussed the data science approach, which was more quantitative, using statistical modeling, prediction, and algorithms to ask and answer questions. It seemed, however, that the data science team only analyzed large sets of data already in their database, closely resembling secondary research, versus collecting and then analyzing data as primary research.

During the webinar he walked through a scenario in which UX researchers wanted to see whether their perception of how customers used UserTesting matched how customers actually use it. The question arose because UX researchers had observed that customers would make copies of tests. As discussed in McGrath’s article, “Methodology Matters: Doing Research in the Behavioral and Social Sciences,” observation is a qualitative research approach (1995). Data scientists then found, through a process of modeling and querying data (closely related to quantitative approaches), that about 80% of tests are copied, and about 80% of those copies are copied again, covering a large majority of their customers. The UX researchers then conducted both in-person interviews and focus groups with their users to understand why customers created these copies; interviews and focus groups are another qualitative approach we’ve both read about and discussed in class (McGrath, 1995). They ultimately found that customers create “chains of tests,” on which the data scientists ran further statistical modeling, resulting in a visualization that showed how all the tests were related. Finally, the UX researchers performed another round of interviews, which essentially acted as a final validation of the previous findings. This alternation between UX research and data science closely resembles a mixed-methods sequential exploratory design, in which one team essentially collects data and another analyzes or validates it (Creswell & Creswell, 2018).
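The copy statistics in that anecdote compose multiplicatively, and a quick back-of-the-envelope check shows why “chains of tests” describes most usage. The 80% figures below are the webinar’s rounded numbers as I heard them; everything else is just illustrative arithmetic.

```python
# Rough arithmetic on the webinar's rounded figures (assumptions, not exact data):
# ~80% of all tests get copied, and ~80% of those copies are copied again.
copied_once = 0.80                  # share of tests copied at least once
copied_again = copied_once * 0.80   # share whose copies are themselves copied

print(f"copied at least once: {copied_once:.0%}")
print(f"copies copied again:  {copied_again:.0%}")  # 64%, still a clear majority
```

So even after two rounds of chaining, roughly two-thirds of tests are involved, which is why the behavior warranted a redesign rather than being treated as an edge case.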

Ultimately, this research helped UserTesting redesign their interface. This relates to another set of topics we’ve touched on in class: human-centered design and human-computer interaction. The purpose of this iterative process is to figure out how the user is actually using the product. As I watched the webinar, I thought of Wilson’s article “Human Information Behavior,” in that the focus is not on the system but on the user (2000). I also feel this process as a whole pulls in principles from human-computer interaction: the research primarily observes human behavior and analyzes it in relation to the interface in order to design appropriately. At the end of his anecdote, the speaker explained that these findings helped them design with multi-study projects in mind, since these represent the majority of their audience. They also adopted Google’s HEART framework, which was an instrument I was unfamiliar with.

Google’s HEART framework does an excellent job of marrying UX and data science in that it covers five metrics that the two teams measure between them: happiness, engagement, adoption, retention, and task success. Engagement, adoption, and retention are metrics that data scientists are able to measure, while UX researchers are able to measure happiness and task success.
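The split described above can be captured in a small lookup table. The metric-to-team pairing follows the webinar’s description; the data structure itself and the parenthetical examples are only an illustrative sketch, not anything from the HEART framework’s own materials.

```python
# The five HEART metrics and which team measures each, per the speaker's
# description. The structure and example methods are illustrative.
HEART = {
    "happiness":    "UX research",   # e.g. satisfaction surveys
    "engagement":   "data science",  # e.g. usage frequency and depth
    "adoption":     "data science",  # e.g. new users of a feature
    "retention":    "data science",  # e.g. returning users over time
    "task success": "UX research",   # e.g. completion rate, errors, time
}

ux_metrics = sorted(m for m, team in HEART.items() if team == "UX research")
print(ux_metrics)  # the qualitative half of the framework
```

Laying it out this way makes the complementarity concrete: three of the five metrics come from behavioral logs, and the other two require talking to or observing users.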

I thoroughly enjoyed this webinar. It was really interesting to see how they use their own platform to perform research, and I had never thought of UX research and data science as so highly complementary. It makes sense to think of this as a mixed-methods approach in which the strengths of one team compensate for the weaknesses of the other. For example, data scientists found that a majority of their customers were creating copies of tests, but they could not figure out why. The UX team was able to take a more human-centric approach to understand that behavior. Another distinction one could make is that data science seems product-centered while UX seems human-centered.


Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative & mixed methods approaches. Thousand Oaks, CA: SAGE.

McGrath, J. E. (1995). Methodology matters: Doing research in the behavioral and social sciences. Readings in Human–Computer Interaction, 152-169. doi:10.1016/b978-0-08-051574-8.50019-4

Wilson, T. D. (2000). Human Information Behavior. Informing Science: The International Journal of an Emerging Transdiscipline,3, 049-056. doi:10.28945/576

The UX of Virtual/VR Tour of Museum

Virtual tours of museums have been around for a while, but they are far from widespread and popular, which is a pity because they could really benefit a lot of people if done right. Museums are also a perfect application for the hottest recent technology, VR. After browsing several virtual museum tour projects, I found some common issues and drawbacks along with a few shining points. As a UX designer, I would like to analyze these projects from a user experience perspective. Below are the key factors I found that matter most for a good virtual/VR museum tour experience.

Smithsonian Museum Virtual Tour

(Typical setting of virtual tour: map, arrow, controller)

To clarify, a virtual tour is a simulation of an existing location, often composed of a sequence of panoramic videos or still images. With still images, users usually control the pace and the location where they “stand” and look, but the scene is static and most of these projects are difficult to control. With video, the location is filmed at a walking pace while moving continuously from one point to another, so users have to follow the sequence and have no free control.

Virtual reality tours are the virtual tours that can be viewed and experienced by a VR viewer (headset). They are more immersive and have different controllers compared with viewing on a computer, tablet or phone, depending on which headset or app users use.


3D vs 2D

There are two ways these virtual tours present the work in the museum – 2D photos or 3D model.

The most common way is to use 2D still photos, where users can only see the work from certain angles. This presents the same scenes as the physical museum, but the experience is incomplete, as in the tour of the Smithsonian National Museum of Natural History.

Smithsonian National Museum of Natural History

(2D Virtual Tour – Smithsonian National Museum of Natural History)

Some projects instead use 3D models to rebuild a virtual museum in which each object is independent, so users can select, zoom in on, and rotate an object to observe it closely, as in the Ancient Sculptures of Vietnam project.


(3D Virtual Tour – Ancient Sculptures of Vietnam)

From a user experience perspective, 3D is much better than 2D because users can interact with objects. It also allows better storytelling, since narrative or commentary can be paired with each object and presented only when the user selects it. Sketchfab, a 3D model platform, also has some exquisite 3D models in different categories, including one for cultural heritage and history.

Sketchfab- Cultural Heritage & History


User-control and Interaction

Another issue I found when experiencing the virtual tours is the awkward user-control.

In the most common 2D virtual tours, users can only stand in one spot per scene. The only interactions are rotating the viewing angle and zooming in and out. Since users can’t move horizontally, they can’t clearly see objects placed far away, so what they can see in the virtual tour is only partial. The only movement users can make is to jump to the next scene by clicking the large arrow on the ground, and that transition isn’t smooth or continuous either. I think this poor user interaction is the most important reason why virtual tours don’t yet feel real.

Smithsonian National Museum of Natural History


Narrative and Commentary

The accessibility and convenience of narrative or commentary should be an advantage of a virtual tour, because users already have a device in hand. However, I couldn’t find well-presented narrative or commentary in most cases. This ties back to the issue with 2D photos, which can’t separate out the objects within them.

For cultural heritage like the works in museums, the stories behind them are too important to neglect. On the app and website of the Met (Metropolitan Museum of Art), there are great introductions and audio commentary for each work, yet in its virtual tour the narratives are still missing. Since these narrative and commentary resources already exist, adding them to the virtual tour should be the next step in improving the experience.

Met App

(App of The Met)

Searchability and Shareability

Other features that should be advantages of a digital tour but are missing are searchability and shareability. When people consume information, search and share are two vital parts of their behavior (Wilson, 2000): one happens at the beginning of the information behavior, and one happens at the end.

In a physical museum, people use a map to search for and locate the information they want, and they take photos or write notes to share with others. In the virtual tour, the map (often located in the top right corner of the view) is mainly for switching locations. There is no search bar or menu as in other digital products, and the map doesn’t list enough detail for users to easily locate the things they want.

Virtual Museum Tour Map

(2D Virtual Tour – Smithsonian National Museum of Natural History)

In terms of shareability, users who experience the tours on a computer, tablet, or phone may be able to take screenshots, though that is neither convenient nor personalized. If they watch with a VR headset, there is no way for them to keep a record and share it with others. Without shareability, virtual tours lose a free yet powerful marketing channel: word of mouth.

I believe virtual tours have great potential because they make the best works and cultural heritage of the world accessible to anyone. They have unique advantages over physical tours, though there are still some gaps to close before they match the physical experience. Hopefully that will happen soon as virtual reality and 3D modeling evolve.



[1] Dalbello, M. (2009). “Digital cultural heritage: Concepts, projects, and emerging constructions of heritage.” Proceedings of the Libraries in the Digital Age (LIDA) Conference, 25-30 May, 2009.

[2] Wilson, T. D. (2000). “Human information behavior.” InformingScience3(2): 49–56. http://ptarpp2.uitm.edu.my/ptarpprack/silibus/is772/HumanInfoBehavior.pdf.

[3] Virtual tour, Wikipedia https://en.wikipedia.org/wiki/Virtual_tour

[4] How VR Is Changing UX: From Prototyping To Device Design https://uxplanet.org/how-vr-is-changing-ux-from-prototyping-to-device-design-a75e6b45e5f8

[5] Smithsonian Nation Museum of Natural History http://naturalhistory.si.edu/VT3/NMNH/z_NMNH-016.html

[6] First 3D Virtual Museum with 3D scans of ancient relics – Ancient sculptures of Vietnam, http://vr3d.vn/trienlam/virtual-3d-museum-ancient-sculptures-of-vietnam