Computers, A.I., and the Human Brain

Donald Norman, writer and computer science researcher, has emphasized throughout his writings that the key to good design is understandability. But what does understandability mean when we talk about the complex technology people increasingly rely on? In his 1998 book The Invisible Computer, Norman criticizes the complexity of technology, stating that “the dilemma facing us is the horrible mismatch between requirements of these human-built machines and human capabilities. Machines are mechanical, we are biological.” Humans are creative and flexible; we interpret the world around us, often with very little information. We deal in approximations rather than the accuracy of computers, and we are prone to error. Norman at first seems to pit technology and humans against each other: technology (digital) is precise and rigid, whereas humans (analog) are flexible and adaptive. He raises the question of who, or what, should dominate: the human or the machine? His conclusion is that computers and humans work well together and complement each other, but that we need to move away from the current machine-centered view toward a human-centered approach, making technology more flexible to human requirements. Now, almost twenty years later, we are witnessing an evolution in computers: the rise of Artificial Intelligence. To make way for A.I., large internet companies are starting to look toward human biology in the new makeup of computers. In his writing, Norman calls for strategies to make the relationship between humans and computers more cooperative. Does the current technological evolution mean that those calls have been answered?

Norman compares the computer and the human brain: computers are constructed to perform reliably, accurately, and consistently, all within one main machine, whereas the human brain carries out far more complex computations through the workings of vast numbers of neurons. As Cade Metz outlines in a recent article, “Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains,” tech companies are starting to realize that, with Moore’s law slowing, progress is no longer about upgrading the traditional “single, do-it-all chip – the central processing unit”; it is about needing more computing power, more computers. Traditional chips cannot handle the massive amounts of data required by new technological research such as A.I. Instead, specialized chips are being created to work alongside the C.P.U., offloading some of the computation to various smaller chips. Spreading the work across many tiny chips makes the machine more like the brain in its energy efficiency.
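Metz is describing a hardware shift, but the underlying idea, splitting one big job across many small workers instead of relying on a single do-it-all processor, can be sketched in software. The Python snippet below is only a loose, hypothetical analogy for that division of labor (heavy_task, the worker count, and the data are invented for illustration), not a depiction of how the specialized chips themselves operate:

```python
from concurrent.futures import ProcessPoolExecutor

# A deliberately simple stand-in for a heavy numeric workload.
def heavy_task(chunk):
    return sum(x * x for x in chunk)

def split(data, n_chunks):
    size = len(data) // n_chunks
    return [data[i * size:(i + 1) * size] for i in range(n_chunks)]

if __name__ == "__main__":
    data = list(range(1_000_000))

    # "One do-it-all chip": the entire job runs in a single process.
    single_result = heavy_task(data)

    # "Many small chips": the same job split across several workers,
    # each handling only its own slice of the data.
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel_result = sum(pool.map(heavy_task, split(data, 4)))

    assert single_result == parallel_result
```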

In 2011, a top Google engineer, Jeff Dean, started exploring the concept of neural networks, “computer algorithms that could learn tasks on their own” (Metz, C.), which elevated A.I. research in voice, image, and face recognition. These neural networks mirror our own human ability to make sense of the world and our surroundings, deciding what information to attend to and what to ignore. Gideon Lewis-Kraus’s December 2016 article, “The Great A.I. Awakening,” details in great depth the trial-and-error phase of training a neural network. The network learns to differentiate between things such as cats, dogs, and various inanimate objects, all while being supervised by the programmer or researcher, who corrects the machine until it starts producing the proper responses. Once trained, a neural network can potentially recognize spoken words or faces with more accuracy than the average human. Google Brain, the department that first began working on A.I. within the company, developed this approach to training under the notion that the machine might “develop something like human flexibility” (Lewis-Kraus, G.).
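The trial-and-error training Lewis-Kraus describes is, at bottom, supervised learning: the network guesses, a labeled answer corrects it, and its internal weights shift slightly toward the right response. The sketch below is a minimal, hypothetical illustration of that loop using a single artificial neuron in Python; the toy “cat”/“dog” feature numbers and the learning rate are invented for illustration and have nothing to do with Google Brain’s actual systems:

```python
import math
import random

# Toy labeled examples: two made-up numeric features per animal
# (say, ear pointiness and snout length); label 1 = cat, 0 = dog.
training_data = [
    ([0.9, 0.2], 1),  # cat
    ([0.8, 0.3], 1),  # cat
    ([0.2, 0.9], 0),  # dog
    ([0.1, 0.8], 0),  # dog
]

# A single artificial "neuron": a weighted sum squashed into a 0-1 guess.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.5

def predict(features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))  # probability that the input is a cat

# Trial and error: guess, compare against the supervised label, nudge weights.
for epoch in range(1000):
    for features, label in training_data:
        guess = predict(features)
        error = label - guess              # the supervisor's "correction"
        for i, x in enumerate(features):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

print(round(predict([0.85, 0.25])))  # expected: 1 (cat-like input)
print(round(predict([0.15, 0.85])))  # expected: 0 (dog-like input)
```

After enough corrections the neuron’s guesses line up with the supervisor’s labels, which is the sense in which the machine “learns on its own” once it has been given labeled examples.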

In his argument of humans versus computers, Norman discusses the evolution of human language, noting that “communication relies heavily upon a shared knowledge base, intentions, and goals,” resulting in a “marvelously complex structure for social interaction and communication.” But what if a machine could grasp language? A good example of machine evolution in language can be seen in Google Translate. Until its major update in November 2016, Google Translate was only useful for translating basic dictionary definitions between languages. Whole sentences or passages from books would lose their meaning because the words were translated separately rather than in the context of the entire passage. But once Google applied its neural network research to Google Translate, the service improved radically overnight. At a conference in London introducing the new, improved machine-translation service, Google’s chief executive, Sundar Pichai, provided this example:

In London, the slide on the monitors behind him flicked to a Borges quote: “Uno no es lo que es por lo que escribe, sino por lo que ha leído.”

Grinning, Pichai read aloud an awkward English version of the sentence that had been rendered by the old Translate system: “One is not what is for what he writes, but for what he has read.”

To the right of that was a new A.I.-rendered version: “You are not what you write, but what you have read.” (Lewis-Kraus, G.)

 

According to Lewis-Kraus, Translate’s overnight improvements under the A.I. system were “roughly equal” to the total improvements made throughout its entire previous existence.
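The difference Pichai’s example highlights can be caricatured in a few lines of code. The sketch below mimics the word-by-word approach the old service was limited to, using a tiny invented Spanish-English glossary; the real pre-2016 system was more sophisticated than a dictionary lookup, but the failure mode is the same, and the neural system, by contrast, considers the whole sentence at once, which is why its rendering reads naturally:

```python
# A tiny invented word-for-word glossary: one English gloss per Spanish word,
# chosen purely for illustration.
glossary = {
    "uno": "one", "no": "not", "es": "is", "lo": "what",
    "que": "that", "por": "for", "escribe": "writes",
    "sino": "but", "ha": "has", "leído": "read",
}

def word_by_word(sentence):
    """Translate each word in isolation, ignoring the rest of the sentence."""
    words = sentence.lower().replace(",", "").rstrip(".").split()
    return " ".join(glossary.get(word, word) for word in words)

print(word_by_word("Uno no es lo que es por lo que escribe, "
                   "sino por lo que ha leído."))
# -> "one not is what that is for what that writes but for what that has read"
```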

I would argue that the development of A.I. is taking a more human-centered approach to computers than we have ever seen. The method of using neural networks comes straight from one of our greatest human abilities: the ability to learn. A machine that can learn on its own is flexible, adapting to its environment. Norman brings up two themes in human-computer relationships. The first, which he believed society was following at the time of publication in 1998, is that of making people more like technology. The other is “the original dream behind classical Artificial Intelligence: to simulate human intelligence,” making technology more like people. I believe we are at the gates of that dream of Artificial Intelligence, but instead of trying to make one more like the other, humans and computers are taking an approach that builds on each other’s strengths: computer logic and human flexibility.

 

References:

Norman, D. A. (1998). The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. MIT Press. Chapter 7: “Being Analog.” http://www.jnd.org/dn.mss/being_analog.html

Metz, C. (2017, September 16). “Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains.” The New York Times. https://www.nytimes.com/2017/09/16/technology/chips-off-the-old-block-computers-are-taking-design-cues-from-human-brains.html?_r=0. Accessed September 25, 2017.

Lewis-Kraus, G. (2016, December 14). “The Great A.I. Awakening.” The New York Times. https://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html. Accessed September 25, 2017.
