Event Review: NYC Media Lab Summit

On September 26, 2019, I attended the NYC Media Lab Summit held in downtown Brooklyn. The mainstage program took place at the New York City College of Technology (City Tech CUNY) for the first half of the day. The second half of the day was dedicated to interactive demos and workshops and took place at both City Tech and the New York University (NYU) Tandon School of Engineering.

NYC Media Lab describes itself as dedicated to “driving innovation and job growth in media and technology by facilitating collaboration between the City’s universities and its companies” (About – NYC Media Lab, n.d.). Pratt Institute is part of NYC Media Lab’s consortium, whose goals are “to generate research and development, knowledge transfer, and talent across all of the city’s campuses” (About – NYC Media Lab, n.d.). The consortium also includes The New School, School of Visual Arts, Columbia University, NYU, CUNY, IESE, and the New York City Economic Development Corporation. Member companies of NYC Media Lab include Bloomberg LP, Verizon, The New York Times, and NBCUniversal, to name a few.

The Media Lab Summit ran like a typical conference: you check in to receive your name badge upon arrival and are treated to coffee and pastries, then everyone takes their seats in the auditorium, where the program’s Executive Director, Justin Hendrix, gives his welcome address and makes introductions.

Innovation Panel discussion

Up first was the Innovation Panel, which featured speakers Yael Eisenstat, R. Luke Dubois, Desmond Patton, and Tony Parisi. The panel featured a mix of academics and professionals who all addressed the topic of artificial intelligence, or AI. It was interesting to hear that everyone agreed that AI is the future but that they all held concerns about whether it will be accessible to all. Another potential issue raised in relation to AI is what seems like our current overdependence on data. One panelist worried that this overdependence could lead to the complete disregard of an innate human characteristic: critical thinking. All panelists agreed that critical thinking is essential and see it playing a key role throughout the use of AI and other technological advancements.

What I ultimately took away from this Innovation Panel was that critical thinking is needed now more than ever. I think we have always understood that critical thinking is crucial, as it is what keeps us human. AI is capable of making decisions for us, but the ability to think critically about the potential impacts of our decisions and to assess our judgments remains entirely human. This emphasis on critical thinking reminded me of the Phoebe Sengers reading, in which she also discusses machine culture but stresses that science and the humanities need “to be combined into hybrid forms” as “neither is sufficient alone” (Practices for Machine Culture, n.d.). Like the panelists, Sengers recognizes the strengths in both and how each can complement the other, especially in AI.

Next up were the showcases. The showcases were meant to present and demonstrate projects, prototypes, and startups created by students and faculty from NYC Media Lab programs. Two of the showcases that stood out to me the most were a subway accessibility app for the blind and a retina technology startup.

Access to Places presentation

Students from NYU’s Interactive Telecommunications Program created an app called Access to Places with the goal of making subway stations much more accessible for the blind. The app utilizes iOS’ text-to-speech VoiceOver technology to provide information such as the location of entrances and exits, service delays or changes, and arrival and departure times. Notifications also help blind riders navigate station layouts.

Retina Technologies presentation

Retina Technologies was formed by medical students at the Icahn School of Medicine at Mount Sinai. The startup aims to change the way people access ophthalmologists in both urban and rural areas. Through the use of virtual reality headsets, the startup hopes to increase access to ophthalmologists for those who cannot easily visit one in rural areas while also improving the patient experience for those in urban areas.

Access to Places and Retina Technologies stood out to me because of the users they were designing for. Instead of creating a product that catered to the majority of the population, they reached out to those with specific needs who often get neglected in startup and tech conversations. I immediately thought of the Sasha Costanza-Chock paper on “Design Justice” and its discussion of who designers are actually designing for. The majority of startups and apps tend to assume the average user is able to access or use a product without any accommodations, much like how Costanza-Chock discusses that designers “assume” that “a user has access to a number of very powerful privileges” (2018). Visiting an ophthalmologist or getting onto the subway without any trouble are privileges that most designers tend to assume users have. Access to Places and Retina Technologies decided instead to focus on the needs of these specific user groups rather than create another app or startup that assumed they were just like every other user.

Many innovative and creative projects were demonstrated, and I was in awe over it all, but it was the discussions that were held that enlightened me. What I took to be the overall theme of the Media Lab Summit was accessibility and the continued mission to make this collaboration between media and technology available to all. I still believe that technology has this amazing potential to change and impact lives, but we must make it available to everyone to see it happen. The Media Lab Summit and our class discussions and readings only continue to highlight this necessity and how we as information professionals cannot simply ignore it as technology advances.

References:

About – NYC Media Lab. (n.d.). Retrieved from https://nycmedialab.org/about.

Costanza-Chock, S. (2018). Design Justice: towards an intersectional feminist framework for design theory and practice. DRS2018: Catalyst. doi: 10.21606/drs.2018.679

Sengers, P. (n.d.). Practices for Machine Culture: A Case Study of Integrating Cultural Theory and Artificial Intelligence. Retrieved from http://www.cs.cmu.edu/afs/cs/usr/phoebe/mosaic/work/papers/surfaces99/sengers.practices-machine-culture.html.

Event Review: Museums and AI in the 21st Century

The event, held at Cooper Hewitt, Smithsonian Design Museum on September 16, 2019, discussed current and future applications of Artificial Intelligence and highlighted the role of museums in making people more self-aware. There were three talks given from three different perspectives (a curator, a computer, and a future teller), followed by an open Q & A session.

Curator: Andrea Lipps, an associate curator of Contemporary Design, Cooper Hewitt

The talk given by the curator from Cooper Hewitt first discussed the impact of AI on our lives right now. AI is already used in fields such as education, recreation, medical treatment, and marketing automation. It can analyze large amounts of data in a short period of time and help make quick decisions. The benefits of AI are undoubted and visible. However, the curator also pointed out some questions that cannot be ignored:

1. How can we ensure diversity, inclusion, safety, and human rights are maintained with AI?

2. What role will AI play in our future?

3. How could museums use AI to represent new things?

There are no right or wrong answers to these questions, and we can interpret them from different angles. The curator also provided some frameworks we can use to think about AI:

1. Is it active or passive? If it is active, do you have a choice? If it is passive, is it being disclosed?

2. Is it being linked to a real-world identity or just used as an anonymous ID?

3. Which methods are being used when connecting AI with museums?

It is true that we can only predict the future influence and applications of AI, but what we should pay attention to right now are our own values and priorities, because the use of AI is designed by human beings, and design is just the externalization of our own desires. “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it… we had better be quite sure that the purpose put into the machine is the purpose which we really desire,” Norbert Wiener wrote in 1960.

Computer: Harrison Pim, a Data Scientist from Wellcome Trust

The data scientist, speaking for a computer, talked about his work: he uses machine learning to process large amounts of images, texts, and collections quickly, but not to analyze users or visitors, since AI at its current stage is parasitic on data. He also pointed out that AI is not designed to replace human beings but to serve as a tool to be used by people. So the main point is how to use these tools to better serve people’s needs. The talk given by “the computer” reminded me of what I read in What Is Computer Ethics?: we are in a world of conceptual vacuums and policy vacuums, and we need to reexamine the regulations of the past world, from how to define tech-based concepts to how to create a relatively neutral algorithm. It is impossible to create something absolutely neutral, but by creating diversity, the “fundamental vulnerability” could be somewhat relieved.

Creator: Karen Palmer, a storyteller from the future

The future teller first warned everyone that technology would take over everything, and that individuals would soon find themselves lacking privacy and security if we did nothing. We would be deprived of the right to tell our own stories, and the world would come to consist of automated self-surveillance, weaponized technology, and biased networks.

She used the example of the criminal justice system to convince us that bias will be the biggest problem in AI applications. One supporting example was UK police using AI to inform custodial decisions, which could discriminate against the poor. Many assumptions made by AI right now are based on flawed theories, yet these assumptions tend to take over our lives. Thus, she concluded that democratizing AI should be what we fight for in the near future.

What she highlighted was the necessity of turning the information age into an age of perception. “Those who tell the stories rule the world.” What museums should do is make people more self-aware and create more opportunities to spark citizens’ insight into social issues.

Q & A session

Q: How to apply machine learning in the field of design?

A: To begin with, the interactions between users and products will be changed by new technologies, but the role of designers should not be overshadowed by AI. We could use AI to produce products or test prototypes faster. In a word, machine learning should serve us; we should not be enslaved by it.

Q: What would justice be like in the future and what is the role of machine learning in it?

A: Neither machine learning nor artificial intelligence can answer future justice problems. Those concepts should be determined by human beings, not by computer technologies. What may happen in the future is that AI helps create living spaces in which people can better understand cultural issues in museums.

Conclusion

Though we have to admit humans’ dominant role in the applications of AI, there are other problems of surveillance, power, and constraint that cannot be ignored. “In an era of extractivism, the real value of that data is controlled and exploited by the very few at the top of the pyramid,” wrote Crawford and Joler. The event did not predict how regulations could be established but only pointed out museums’ future role in arousing people’s awareness, which I think lacks sufficient support and is overly optimistic to some extent. Still, emphasizing museums’ social responsibility is quite necessary right now, and all museum practitioners should be prepared for the transformation of exhibition modes.

References:

Wiener, N. (1960). Some Moral and Technical Consequences of Automation. Retrieved from https://www.lesswrong.com/posts/2rWfmahhqASnFcYLr/norbert-wiener-s-paper-some-moral-and-technical-consequences

Moor, J. H. (1985). What Is Computer Ethics? 1-2.

Gillespie, T. (2014). The Relevance of Algorithms (p. 191). Retrieved from https://www.microsoft.com/en-us/research/wp-content/uploads/2014/01/Gillespie_2014_The-Relevance-of-Algorithms.pdf

Crawford, K., & Joler, V. (2018). Anatomy of an AI System. Retrieved from http://www.anatomyof.ai

Event Review: Why No One Is Looking at Your Data


“Department heads painstakingly compile reports and analyses filled with data which are sent to executives every week. The pages are barely skimmed, if read at all. Network and security operation centers line the walls with giant screens, displaying dashboards powered by expensive big data analytics. No one ever takes more than a passing glance on the way to lunch. If data is so essential, why is it so easily ignored? Many data initiatives fail to make a real impact.”

On April 10, 2019, I attended “Why No One is Looking at Your Data,” an event hosted through Meetup featuring Clare Gollnick, the Director of Data Science at NS1. Clare Gollnick started her career as a neuroscientist and holds a PhD from Georgia Tech and a BS from UC Berkeley. As an expert on statistical inference and machine learning, she writes and speaks often on the intersection of data, philosophy, and entrepreneurship. She was previously Chief Technology Officer of Terbium Labs, where she led a diverse team of engineers and researchers that released novel data intelligence solutions to prevent credit card fraud while still protecting consumer privacy. Clare has published a number of academic papers on information processing within neural networks, validation of new statistical methods, and the philosophy of science.

The presentation focused on the difference between data and information, on designing data dashboards and data products, and on why no one is looking at your data.

Clare opened her presentation with a scene from Douglas Adams’ novel series The Hitchhiker’s Guide to the Galaxy to demonstrate how difficult it is to understand data without any context. In short, a race of hyper-intelligent, pan-dimensional beings builds an enormous supercomputer named Deep Thought, which calculates over a period of 7.5 million years to answer the ultimate question of life, the universe, and everything. After the 7.5 million years of calculation, the pan-dimensional people gather eagerly to watch Deep Thought finally announce the answer they have been waiting for.

Scene from Douglas Adams’ novel series, The Hitchhiker’s Guide to the Galaxy.

The answer was “42”. But what does “42” mean? Data can mean absolutely nothing if you do not provide it with context.  

The presentation offered practical suggestions, such as how to choose data initiatives worth investing in and how to deliver useful, interpretable information from valuable data. To illustrate the mental logic we follow when looking at data, Clare provided a demonstration and suggestions on how to make inferences.

The first and most obvious suggestion was to add units to the data. Using “42” as a random piece of data, adding kilograms (kg) turns 42 into 42 kg. The kilogram is the SI unit of mass, an international standard that scientists long defined by a shining platinum-iridium cylinder stored in a locked vault in France. Most people have never seen this cylinder and would be considered non-experts, yet they are able to develop a concept of what a kilogram means through shared experiences with other items labeled in kilograms throughout their lives. For example, people at the gym might have a concept of a kilogram based on the weights they have used for a specific workout in the past. This method can be applied to any form of information and can be built upon as well. “Cat” was then added to 42 kg, further illustrating the mental logic of understanding the context of 42 kg while visualizing a 42 kg cat. Clare further explained that a person with knowledge about cats (an expert) might visualize a larger cat, such as a leopard.
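This layering of context onto a bare number can be sketched in code. The `DataPoint` class below is my own hypothetical illustration, not anything shown at the talk: each added label (a unit, then a subject) narrows the range of plausible interpretations of the same raw value.

```python
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    """A raw value plus whatever context has been attached to it."""
    value: float
    context: dict = field(default_factory=dict)

    def with_context(self, **labels):
        """Return a new DataPoint with an extra layer of context added."""
        return DataPoint(self.value, {**self.context, **labels})

    def describe(self) -> str:
        """Render the value alongside its accumulated context."""
        if not self.context:
            return f"{self.value} (meaningless without context)"
        return f"{self.value} " + " ".join(str(v) for v in self.context.values())

raw = DataPoint(42)
print(raw.describe())            # 42 (meaningless without context)

with_unit = raw.with_context(unit="kg")
print(with_unit.describe())      # 42 kg

with_subject = with_unit.with_context(subject="cat")
print(with_subject.describe())   # 42 kg cat
```

Note that the code can only accumulate labels; whether “42 kg cat” conjures a house cat or a leopard still depends on the knowledge the reader brings, which is exactly Clare’s point about experts and non-experts.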

A single piece of data may effectively segment the market or audience into experts and non-experts. An expert is someone with a solid understanding of the given information, whereas a non-expert has considerably less knowledge.

Summary Statistics Inform Only Experts

The conclusions drawn from this mental model differ depending on the knowledge gap between the expert and the non-expert. Experts use data more effectively and therefore reach a “threshold” at which something becomes actionable. Given a single piece of data, the expert crosses that threshold and catapults into a whole stratosphere of other questions and issues they might want to investigate about the data. Meanwhile, non-experts cannot obtain even the minimum knowledge required to alter their actions. An expert with more familiarity with the subject might ask for the raw data because they know what to do with it, whereas the non-expert would be clueless. Clare emphasized that this dynamic creates challenges for those trying to design a product: the goal of a product should be to deliver repeatable, scalable value with consistent outcomes across the entire target market. When you end up in this middle ground, you are stuck with a wall of data that no one views. Experts find data at the source while others attempt to interpret the information on the dashboard.

Data Does Not Speak For Itself

The presentation’s takeaway was the difficulty of making sense of data when it does not speak for itself. This belief is addressed in Boyd and Crawford’s article “Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon,” where Big Data provides “destabilizing amounts of knowledge and information that lack the regulating force of philosophy” (Berry, 2011). To understand data, there needs to be context. As Andrew Iliadis and Federica Russo note in their article “Critical Data Studies: An Introduction,” data is apprehended through various levels of informational abstraction (Floridi, 2011). Big Data is framed within these levels of abstraction, which constrain and afford what can be done with data: abstractions may be adopted, manipulated, or repurposed for any number of aims. This is a crucial part of giving sense to data. Choosing a level of abstraction from which to view Big Data alters the types of conversations that can be had about data, its aims, and its functions (Iliadis & Russo, 2016).

Conclusion

When you fail to recognize that data is difficult to understand, you can wind up with mismatched expectations between what is promised and delivered within a data project. Overall, the main suggestion was the need for inferences, or making a comparison to existing knowledge. Data can only inform once you already know something. You need knowledge to gain knowledge.

References:

Berry, D. (2011). The computational turn: Thinking about the digital humanities. Culture Machine, 12. Retrieved from http://www.culturemachine.net/index.php/cm/article/view/440/470

Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662-679. http://dx.doi.org/10.1080/1369118X.2012.678878

Floridi, L. (2011). The Philosophy of Information. Oxford: Oxford University Press.

Gollnick, C. (2019, April 10). Why No One is Looking at Your Data. Lecture presented at Meetup: UX+Data, New York.

Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society. https://doi.org/10.1177/2053951716674238