Representation and Power on Wikipedia

Jewish Museum Art+Feminism Wikipedia Edit-a-thon

Introduction

On March 3, 2019, I attended the Jewish Museum’s second Wikipedia Edit-a-thon, co-presented with Art+Feminism. Held in celebration of Women’s History Month and on the final day of the exhibition Martha Rosler: Irrespective, the event included a gallery walk-through with catalog designers Mika McGinty and Rebecca Sylvers and Shira Backer, assistant curator at the Jewish Museum. The event was open to the public and offered an opportunity for people to learn how to edit and create Wikipedia articles, in an effort to improve the representation of cis and transgender women, feminism, and the arts on Wikipedia.

Martha Rosler: Irrespective

Martha Rosler: Irrespective was a survey of Martha Rosler’s work over her five-decade career. Rosler’s work is dynamic, continually evolving and reacting to the social and political issues of today, yesterday, and tomorrow. It largely addresses matters related to war, gender roles, and urban gentrification, and throughout her commentaries runs a reflection on feminism that doesn’t shy away from the feminine. As a result, it would be hard to categorize Rosler’s work according to any one theme. People often describe her work as “deeply political”, “feminist”, “intentional”, “outward”, and “intersectional”. Rosler fondly describes it as “hamfisted”.

The event kicked off with a walkthrough of the gallery led by Shira Backer, Mika McGinty, and Rebecca Sylvers. The three gave unique insight into the processes of exhibition curation and art-book making – where they meet and where they diverge. They stressed that the book and the exhibition were not made to be one-to-one; each could emphasize different projects according to the constraints or capabilities of its medium. In other words, the book was an opportunity to cover pieces not highlighted in the exhibition, and vice versa.

The exhibit tried to convey Rosler’s dynamism. There was a fully set dinner table with a voice-over of a woman discussing domesticity and the expectations of French women; a selection of five videos that examine the representation of women in pop culture and American imperialism; a large prosthetic leg swinging from the ceiling to a jaunty rendition of “God Bless America”.

It is interesting to consider the challenges of showcasing and preserving dynamic, ephemeral art like Rosler’s. Rosler continually changes and adds to her work, often including participatory elements in her pieces and installations. As a result, the designers had to consider questions such as: Is the first iteration the most important? Is repetition valuable? Does chronology take precedence? But no matter how carefully one tries to preserve a creation, there is no such thing as absolute permanence. As Cloonan proposes, “the paradox of preservation is that it is impossible to keep things the same forever. To conserve, preserve, or restore is to alter” (Cloonan, 2001). For that matter, it seems to be Rosler’s intention to create ‘mortal’ work – work that shifts, changes, and ultimately dies. It allows us to question preservation, even our own mortality.

The curators were evidently aware of their role as history-makers and story-tellers. They cautiously discussed Rosler’s work on her behalf, careful to distinguish between their own interpretations and Rosler’s intentions. In addition, the curators revealed that they frequently worked directly with Rosler. It is important to note that they worked with a contemporary artist who was able to be active in her own storytelling. However, regardless of their efforts, the curators ultimately could only tell a single story of Rosler – their own version – and not Rosler’s whole story.

Wikipedia Edit-a-thon

After the exhibition there was a Wikipedia training course led by Carlos Acevedo, Digital Asset Manager of the Jewish Museum, followed by an open-editing session. The goals of the edit-a-thon were for beginners to learn how to edit on Wikipedia, to improve citations of women artists, and to expand biographies of women artists on Wikipedia (Acevedo, 2019). No prior editing experience was necessary in order to participate in the event. The museum also provided a number of laptops for guests to use. For an event that aimed to increase editing accessibility and improve women’s presence on Wikipedia, providing laptops and promoting a “welcoming spirit” was significant.

The Wikipedia edit-training considered the power and responsibilities that editors have. For example, it was emphasized that articles should be written from a neutral point of view. This is arguably impossible. However, the effort to avoid overly opinionated articles and original thought in edits is a fair endeavor considering the point of a system like Wikipedia is to collect and share existing knowledge as accurately as possible.

Event Stats
  • 25 people attended
  • 2 complete articles created
  • 36 articles edited
  • 145 total edits made

Representation & Closing the Gender-Gap on Wikipedia

Gender bias on Wikipedia is not limited to the underrepresentation of women and nonbinary people on the site, but is also reflected in the fact that a vast majority of editors are cis-male. For that matter, the edit-a-thon was not only an effort to improve coverage of women on Wikipedia, but also an effort to help close the gap in contributions made by women. According to Art+Feminism, a Wikimedia survey showed that less than 10% of Wikipedia’s editors identify as cis or trans women. Moreover, editors who identify as women are far more likely than men to have their edits reverted (Acevedo, 2019). Therefore, encouraging women to participate in editing projects and creating more opportunities to do so are important efforts that may help improve coverage of cis and trans women on Wikipedia.

In Archives, Records, and Power: The Making of Modern Memory, Schwartz and Cook describe the power of archives to shape and direct historical scholarship and our collective memory. They urge archivists to consider the power they have to essentially write history, to privilege and to marginalize. These concepts of power and privilege are not specific to archivists; this power is shared by all who document, curate, store, and share information. The curators of Martha Rosler: Irrespective were aware of this power and worked to acknowledge it. Correspondingly, the Wikipedia training course clearly considered the power held by editors and by the source itself.

Just as history has been written in favor of the patriarchy at the expense of women, the future of representation of women and other marginalized members of society lies in reclaiming power over the documentary record and the institutions that share information. By recognizing the inherent power of archives, museums, Wikipedia, and other memory institutions, and using that power to tell and support each other’s stories, cis and trans women can hopefully close the gap in gender representation. As an open-access, open-source platform, Wikipedia may be the place to start – the power is literally in our hands.

By Tina Chesterman

References:

Acevedo, C. (2019). Jewish Museum Wikipedia Edit-a-thon co-presented with Art + Feminism. [PowerPoint Slides]. Retrieved from: https://docs.google.com/presentation/d/1F6s9logWLiRTrX9l5Tt9E4GW2VTauLbhSRZBgRQj8QE/edit#slide=id.g51b9607e8b_0_122.

Cloonan, M.V. (2001). W(h)ither Preservation? The Library Quarterly, 71(2).

Schwartz, J.M. & Cook, T. (2002). Archives, Records, and Power: The Making of Modern Memory. Archival Science, 2.


iFeel: Emotionally Intelligent Design Workshop

On Saturday, February 16, 2019, about thirty Pratt students attended a three-hour workshop at the Pratt Manhattan Campus called “Emotionally Intelligent Design”, hosted by the school’s chapter of UXPA (User Experience Professionals Association). The event was led by Pamela Pavliscak: founder of Change Sciences, a design research studio focused on emotionally intelligent design; author of the fall 2018 book Emotionally Intelligent Design (O’Reilly); and a current faculty member at Pratt Institute.

According to the invitation posted on the Pratt School of Information listserv, the objectives of the workshop were to teach students how emotionally sensitive AI tools work, as well as methods to prototype, test, and evolve experiences with emotional intelligence. During the workshop Pamela shared a projection that emotion-centered AI products and tools will be a $50 billion industry by 2025, integrated into most industries. Anecdotally, we are already seeing major trends in this direction – for example, in online dating, facial recognition technology, voice assistants, and chatbots. Yvonne Rogers’ “New Theoretical Approaches for Human-Computer Interaction” supports this claim by explaining that the rapid pace of technological development is creating new opportunities to augment, extend, and support user experiences, interactions, and communications. Designers and technologists therefore have new methods and practices to conceptualize and evaluate a fuller spectrum of interactive products that support a broader range of goals (e.g. aesthetically pleasing, motivating, fun) and evoke an emotional response from users or participants.

The workshop had students complete several activities in groups of two to three, interspersed with short presentations about tools that demonstrate technology imbued with elements of emotional intelligence. Examples included: social robots like Pepper; healthcare support technologies like SimSensei, which use facial reading and other biomarkers to sense emotion; CrystalKnows, which uses social media and text (i.e. email) data to aid in better communication with coworkers, candidates, and others; Affectiva, which enables facial emotion analysis in context; and the Toyota Concept-i car, which “anticipates” users’ needs to create a better driving/riding experience.

We began with an ice breaker, asking some of the “questions to fall in love” in our small groups. Once acquainted, each group was assigned a specific context (e.g. conflict) and a challenge (e.g. building empathy) from which we would operate and ideate throughout the remaining activities. My partner and I completed an interview in which we discussed a specific conflict. The scenario my partner shared was that she and a friend were attempting to find an apartment together while the friend was based in New York and she was out of the city for the summer. This posed a challenge because only the friend was able to view the apartments in person; communicating about desired apartment features was difficult, as was being completely transparent about priorities. The situation became so tense and uncertain that they eventually decided not to find an apartment together.

This scenario framed our further explorations into sketching and visualizing what happened to the relationship over time and what sensory experiences were involved. By the end of the prototyping, my partner and I had sketched a mobile app complete with front-view and self-view cameras and embedded sentiment-analysis software, so that a remote person could view a physical space while the person showing the space could get a sense of how the viewer feels about it. In our pitch to the rest of the groups, we said this type of app could help in a number of scenarios: roommate to roommate, realtor to potential tenants, venue managers to clients, and more. It would potentially save time, money, and hassle while offering communication tools and insights to help people make good decisions and become better communicators.

My main takeaway from these somewhat abstract activities was to keep the people and context centered in every part of the process and to allow myself to be surprised while discovering solutions. With this conclusion, I am reminded of Don Norman’s “Being Analog” essay, in which he describes a false dilemma: we can continue trying to make people more like computers – precise, logical, and unemotional – or we can make computers more like humans: creative, resourceful, attentive, and able to change. When, in fact, humans and computers can elevate one another, ultimately helping humans evolve and deal with the ever-growing complexity of life.

References

Norman, Don A. (1998) The Invisible Computer: Why Good Products Can Fail, the Personal Computer is So Complex, and Information Appliances are the Solution. MIT Press. Chapter 7: Being Analog https://jnd.org/being_analog/.

Rogers, Yvonne. (2004) “New theoretical approaches for human-computer interaction.” Annual Review of Information Science and Technology 38: 87-143.

Sengers, Phoebe. (1999). “Practices for a machine culture: a case study of integrating cultural theory and artificial intelligence.” Surfaces VIII.

(A Little) Messiness, (Some) Clutter, and (Ongoing) Revelation(s)

Viégas and Wattenberg, presenting in the clutter

As introduced by Miya Masaoka (Director of the Sound Art Program at Columbia’s School of the Arts and coordinator of the Artists Using Data series), Fernanda Viégas and Martin Wattenberg are “pioneers in data visualization and analytics” who have “shaped the field” and crafted tools and interactions that have been “used by millions of people.” (A similar description is on their collaborative site.)

Working toward a more succinct description, their business cards would formally present them as members of Google’s PAIR (People+AI Research) initiative and Big Picture team. As they simply defined themselves at this presentation, they are scientists and artists — a dichotomy that would give many pause, but one that is no doubt a complementary pairing for many in the field.

While the work of such individuals inevitably links or crosses from one title to another, their presentation on March 8, “Messiness, Clutter, and Revelation,” focused on their work outside of Google, wherein they have explored new ways to use and investigate data, even as they set constraints within which to play with it. In that spirit of limiting parameters (and an eye to brevity), this review will contain itself to their work and philosophies as shared at this event, “an informal talk” in a mid-renovation space — a fitting format and environment for a presentation that ultimately moved toward the playful and iterative natures of both data exploration and artistic expression.

Viégas and Wattenberg essentially took turns presenting a small selection of work, individual and collaborative, all initially produced between 2003 and 2012, but timeless in their foundational role for the artists and others. Collectively, they defined an arc where the titular elements contributed to insights for the makers but also produced pieces of art — or at least design that moonlights as art.

This is evident, perhaps, in a project like History Flow (2003), explored initially as a ‘scientific probe’ in response to the then-new(ish) Wikipedia and now residing at MoMA. This elegant visual investigation tracked the editing of Wikipedia entries, including the back and forth of dueling ideologies on matters adorable and political. In examples of internet favorites such as ‘cat’ and hot-button issues like ‘abortion,’ the patterns of editing and the relative passion behind each are unmistakable. Yet, when asked whether a visualization comparing the different kinds of impassioned debates had been attempted to establish a pattern, Wattenberg responded with a simple “No, but someone should do that.”

“Using data to know things” and questions of technological limitations or permutations were explored in The Art of Reproduction (2011); specifically, an understanding of “how the internet is lying to you” through the varied representations of a single artwork that can be found online — from the decaying gold of Klimt (26 Danaes) to the deceptive black-and-white of Mapplethorpe (8 Kens and Roberts). Which is the true reproduction of the work? Or aren’t they all?

This compilation of a single vision or a larger ‘truth’ from a messy table was also evident in Flickr Flow (2009), a commission that sought to visualize the city of Boston, starting with the particularly ‘dirty’ data set of Flickr images tagged simply as “Boston Common.” The duo let those limitations and what they brought guide the form, stating: “Let’s work with that messiness; see what we can find.” The result is an elegant abstraction of ribbons that looks nothing like (but also, somehow, very much like) Boston year-round. In the process of its creation, it investigates what we as a society have preserved.

Other works explored how we search (Web Seer, 2009), how “the alien mind” thinks (Thinking Machine, 2003), and how music might be visualized (The Shape of Song, 2002), with the last cited as “an example of clutter yielding something useful.”

The presentation culminated with Wind Map (2012–). Also in MoMA, this piece was the clearest presentation of the pair’s iterative approach to data visualization: a case study that showed explorations from the abstract to the psychedelic, with the ‘final’ version resulting from the addition of a single line of code to an earlier iteration. Starting with the simple, seemingly ethereal (and almost Ono-esque) question “What does the wind look like?” and the desire to “make complex data easily accessible,” they created — they would have us believe unwittingly — a practical and emotional tool.

The resulting real-time visualization was picked up by meteorologists and combined with other data in weather maps, but also caused Louisiana residents in the path of Hurricane Isaac to reach out to the artists as they tracked the storm in real time. It was also picked up by school teachers and other scientists who used it to teach and to study their own passions. None of these were intentions; they were all unexpected results.

The unexpected results of Wind Map

Indeed, through almost all of the works presented, Viégas and Wattenberg seemed to speak to the utility of data, while often deferring to others in taking on that utility. They were readily willing to accept their work in a continuum where others (for example) could then take the code and build something new or explore another facet, as the two moved on to a new subject. They were driven not by “What can this do?” but by “What happens if we do this?” and seemed to view the usefulness of a project like Wind Map with the same wonder that they viewed the initial question of “What does the wind look like?”

Here any observer could be forgiven for a little ambivalence. The work done by Viégas and Wattenberg is — to those who enjoy visualizing data — smart and well crafted. The pieces in MoMA deserve that recognition. They do make one think — but mostly about what can be built upon their efforts.

For those looking to draw the line, this may fall too much on the ‘art’ side. The pair seem decidedly more interested in the baserate questions over the relational ones. (McGrath 160) Before making things, they certainly aren’t asking where the project will be “in ten, twenty, or even fifty years,” and the word “persona” had no place in the conversation. However, they do “value ephemerality and even magic” and seem to believe that “Not everything about a project must be rationalized or demystified.” As seen in those Wind Map explorations, they’re not afraid to “Make a useless [drippy, psychedelic] or disinterested version” of a project. (Sayers)

Looking to Miriam Posner, the pair’s work (again, as presented here) shies away from critical engagement. They seem content to pose questions and make tools that others may then utilize — but the investigation stops at form and leaves impact to others.

To directly contrast Viégas and Wattenberg with some projects that Posner admires: How does a colorful visualization of Boston compare with Jacqueline Goldsby’s Mapping the Stacks that aims “to describe and arrange collections related to African American History in Chicago?” How does the composition of “8 Kens and Roberts” in the Art of Reproduction compare with David Kim’s ‘Data-izing’ the Images: Process and Prototypes, wherein Kim used the visualization to question the photographer’s categorization of his Native American subjects? Both speak to us about perceptions and ‘lies’ — one through form, the other through culture. (Posner)

Of course not all visualization (or art) must address the bigger issues — and even Posner knows this. Sometimes it just gets the conversation going, placing the data most clearly in front of those with the itch to investigate further. (And it’s another false dichotomy to decide we have to somehow judge one of these approaches over the other.) In the continuum of development, the world needs people who play in the messiness and clutter, those who ask questions, tinker around and leave something half-built on the workbench. Ultimately this research and discovery become tools of their own for those who want to build higher.

– Michael Kelly, Info 601, Professor Chris Alen Sula

References:
  • McGrath, Joseph. (1994). “Methodology matters: Doing research in the behavioral and social sciences.”
• Posner, Miriam (2016). “What’s Next: The Radical, Unrealized Potential of Digital Humanities.” Keystone DH Conference, University of Pennsylvania, July 22, 2015. http://miriamposner.com/blog/whats-next-the-radical-unrealized-potential-of-digital-humanities.
  • Sayers, Jentery (2018). “Before You Make a Thing: Some Tips for Approaching Technology and Society.” https://jentery.github.io/ts200v2/notes.html

Webinar: How Product Insights uses UserTesting

UserTesting is a platform for gathering rapid customer feedback across different interfaces, best known for remote moderated and unmoderated usability studies. I recently watched a UserTesting webinar titled “How Product Insights uses UserTesting,” which explained how the company’s product insights team uses its own platform to scale research within teams as well as across the entire company.

The webinar was broken into three parts: data science & user experience (UX), access to customers, and enabling others. I was particularly interested in the first segment – the relationship between data science and UX – as it held several connections to topics that we’ve discussed in class. The speaker, Josh Kunz, a senior UX researcher at UserTesting, placed most of the emphasis on user experience. He explained that they attempt to connect data science and UX research in order to ask and answer impactful questions that ultimately inform human-centered design. The discussion touched on research methods, human-computer interaction, and human-centered design.

It was interesting to see the different research approaches that the UserTesting team takes, in relation to the methods that we’ve discussed in class. The speaker did not make an explicit distinction between qualitative and quantitative research approaches, but the distinction was apparent in his explanations. He elaborated on the UX research process, which greatly resembled a qualitative approach – using interviews and focus groups. He also briefly discussed the data science side, a more quantitative approach using statistical modeling, predictions, and algorithms to ask and answer questions. However, it seemed that the data science team only analyzed the large sets of data already in their database – closely resembling secondary research – versus collecting and then analyzing new data as primary research.

During the webinar he walked through a scenario in which UX researchers wanted to see if their perception of how customers used UserTesting matched how customers actually use it. This curiosity came about when UX researchers observed that customers would make copies of tests. As discussed in McGrath’s article, “Methodology Matters: Doing Research in the Behavioral and Social Sciences,” observation is a qualitative research approach (1995). Data scientists then found that about 80% of tests are copied, and about 80% of those copies are copied again – essentially covering a vast majority of their customers. They uncovered this through a process of modeling and querying data, closely related to quantitative approaches. The UX researchers then performed both in-person interviews and focus groups with their users to understand why customers created these copies. Interviewing and focus groups are further qualitative approaches that we’ve both read about and discussed in class (McGrath, 1995). They ultimately found that customers create “chains of tests,” on which the data scientists later ran even more statistical modeling, resulting in a visualization that showed how all the tests were related. Finally, the UX researchers performed another round of interviews, which acted as a final set of validations for the previous findings. This switching between UX research and data science closely resembles a mixed-methods sequential exploratory design, where one team is essentially collecting data and another team is analyzing or validating it (Creswell & Creswell, 2018).

Ultimately, this research helped UserTesting redesign their interface. This relates to another set of topics that we’ve touched upon in class: human-centered design and human-computer interaction. The purpose of this iterative process is to figure out how the user is actually using the product. As I was watching the webinar, I thought of Wilson’s article, “Human Information Behavior,” in that the focus is not on the system but rather the user (2000). I also feel that this process as a whole pulls in principles from human-computer interaction: the research primarily observes human behavior and analyzes it in relation to the interface in order to design appropriately. At the end of the speaker’s anecdote, he explained that these findings helped them design with multi-study projects in mind, since these users make up the majority of their audience. They also adopted the Google HEART framework, which was an instrument that I was unfamiliar with.

The Google HEART framework does an excellent job marrying UX and data science in that it covers five metrics that both teams are able to measure: happiness, engagement, adoption, retention, and task success. Engagement, adoption, and retention are metrics that data scientists are able to measure, while UX researchers are able to measure happiness and task success.

I thoroughly enjoyed this webinar. It was really interesting to see how UserTesting uses its own platform to perform research. I also had never thought that UX research and data science would be so highly complementary. It makes sense to think of this as a mixed-methods approach, in that the strengths of one team offset the weaknesses of the other. For example, data scientists found that a majority of their customers were creating copies of tests, but they could not figure out why. The UX team was able to take a more human-centric approach to understand this behavior. I suppose another distinction that could be made is that data science seems product-centered while UX seems human-centered.

References:

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative & mixed methods approaches. Thousand Oaks, CA: SAGE.

McGrath, J. E. (1995). Methodology matters: Doing research in the behavioral and social sciences. Readings in Human–Computer Interaction, 152-169. doi:10.1016/b978-0-08-051574-8.50019-4

Wilson, T. D. (2000). Human information behavior. Informing Science: The International Journal of an Emerging Transdiscipline, 3, 49-56. doi:10.28945/576

Archiving Colonialism: The Politics and Ethics of the Archive

How does the archive become a space of engagement? What are the ethical obligations of the archive? How do we draw attention to otherwise invisible voices? How does raw data become material for surveillance? Who owns the past? These were the questions that guided “Archiving Colonialism,” a panel discussion hosted by Barnard College’s Center for Research on Women as part of the larger conference “The Politics and Ethics of the Archive.” According to keynote speaker Elizabeth Castelli, the theme was inspired by audio recordings of earlier feminist conferences and how the process of digitization led to larger questions of use and ownership. As the discussion progressed, it became clear that reaching a final answer to any of these questions cannot and should not be the goal. Rather, archives should be spaces where continuous discussion is encouraged and continuous access fostered.

The archive has long been a site of contention. Once perceived as a purely objective record of history, the archive has recently been reconsidered through a post-modernist lens—as a fluid space of ongoing debate and discussion, rather than a static site of fixed history and narrative. As Joan M. Schwartz and Terry Cook state in Archives, Records, and Power: The Making of Modern Memory, “…by treating records and archives as contested sites of power, we can bring new sensibilities to understanding records and archives as dynamic technologies of rule which actually create the histories and social realities they ostensibly describe” (Schwartz & Cook, 7).

Despite their differences in profession, this emphasis on the archive as a device with which to create history was shared by all three panel speakers. Moderated by acclaimed writer Saidiya Hartman, the panel included La Vaughn Belle, a multi-medium visual artist; Justin Leroy, a professor and historian; and Cameron Rowland, a visual artist. Notably, the panel featured no archivists, which I found compelling. How might the discussion be shaped by people who have a more dynamic relationship with the archive and don’t interact with it on a daily basis? What kind of direction could it go in?

The panel began with Justin, who discussed the relationship of the Black slave to the archive, and the collective cultural assumption that history moves in one direction. Similar to feminist scholarship, the slave’s relationship with the archive is historically one based on absence and the assumption that the voice of the slave carries no significance. He gave the example of a letter that philosopher Georg Wilhelm Friedrich Hegel wrote stating that Africa “is no historical part of the world.” Moving forward from this flawed ideology, Justin explained, the popular notion has been that the recovery of history is necessary to achieve social justice. But, Justin questioned, what is the benefit of being “unfit” for history? What new narratives are uncovered from the vantage point of being outside history?

Approaching the question as a historian rather than an archivist, Justin described the narratives of freed slaves as shaped by perpetual subjugation by history. In spite of the technical abolition of slavery, Black Americans would continue to be beholden to the oppressive structures of capitalism that underpin American progress. Capitalism and American history run in parallel to one another, with racialized conceptions of monetary value remaining constant. If things exist beyond the simple binary of life and death, it contorts our idea of time as linear. But, as Justin concluded, if we allow other trajectories of history to permeate the cultural understanding, we might be able to “find the language for more aspirational freedom.”

Justin's idea of taking a more aspirational approach to history, with an eye towards the future as well as the past, strongly echoed Roy Rosenzweig's Scarcity or Abundance? Preserving the Past in a Digital Era, which urged historians to "shift at least some of their attention from the past to the present and future and reclaim the professional vision that was more prevalent a century ago" (Rosenzweig, 739). It is a disservice to narrow the vision of history into one linear path.

The next speaker, Cameron, shared this idea of the archive, and what it represents, as intrinsically limited in Black narratives. His main example was the concept of reparations and how its discourse opposes historical constructions of time and monetary value. In his art, Cameron uses historical documents to oppose capitalism. He presented one of his most recent works, "Burden of Proof," which uses maps of 8060 Maxie Road, a property repossessed from former slaves during Reconstruction. The property was purchased in 2018 by a non-profit in order to implement a restrictive covenant stipulating that the land can never be used again; under the covenant, the land is valued at $0. How, Cameron asked, can this force us to rethink the notion of reparations as value-based and relegated to property? The lack of historical documents relating to this property shows us the value in a limited archive, Cameron argued. How can we look beyond history to rethink the role of capitalism in reparations?

Scarcity in the archive and the narrative freedom it allows for were the central interests of artist La Vaughn Belle, the next speaker. Primarily focused on the Danish colonization of the Virgin Islands, La Vaughn described the Virgin Islands’ archives as splintered, due to acquisition by the Danish government. Because of this archival scarcity, La Vaughn argued, the memory of the islands had to be reproduced in alternative ways, which she explores in her work. For example, Chaney are fragments of Crucian pottery that often wash up after storms. La Vaughn collected these fragments and used them to create “process paintings,” to fill in the gaps. The lack of completion in the archive allowed her to utilize her imagination, which presents a necessary challenge to colonialism. In order for the archive to be a tool of resistance and fluidity, some scarcity is essential, she argued.

During their discussion with one another, all the speakers challenged the idea of the archive as a place of necessary abundance. Justin presented the idea of a "reading practice," a method he uses in teaching, which emphasizes not what is present or absent in research, but what you do with what you find. La Vaughn emphasized the overlap between history and the visual arts, and the need to make metaphors in both fields. Cameron added that the idea of accumulation in history is a byproduct of capitalism that should be reconsidered. The archive, all agreed, should be a space where one can create their own metaphors for the past and future.

In the end, I appreciated that no archivists were included. I felt that by allowing for more creative perspectives, those with a vague understanding of archives could be exposed to a broader view of their purpose. As I left the panel though, I quite honestly felt like I had my work cut out for me. What authority do I have to fill in the blanks of history? As an archivist, do I have the right to incorporate creativity into my work? But as I considered it more, I thought of how archives can never truly be complete. We can never truly possess every artifact of history; why even try? As the speakers showed, archives must have an element of creativity to challenge dominant narratives. Perhaps the point of archives shouldn’t be to merely present history as it was, but to provide an idea of a better future.

By Sarah Goldfarb, Info 601, Professor Chris Alen Sula

  1. Schwartz, Joan M. and Terry Cook. “Archives, Records, and Power: The Making of Modern Memory.” Archival Science 2 (2002): 1-19.
  2. Rosenzweig, Roy. "Scarcity or Abundance? Preserving the Past in a Digital Era." The American Historical Review 108, no. 3 (2003): 735-762.

Data Stewards and the Conceptuality of Open Data

I attended an Open Data Week event about Data Stewards. I had heard about Open Data Week through the Pratt School of Information Google group, and this was an event in Manhattan at a decent time, 6pm, on my free day, so I answered the Eventbrite RSVP. When I arrived on 21st St in the Flatiron District, my iPhone's mail app began failing and displayed the subject line of the RSVP but not the email itself, despite refreshing and full reception. Ironic for an attendee of an information science event. However, about four other people were immediately chatty and introduced themselves when we realized we were all trying to get to the ninth floor of the wrong building. If not for them and their awareness that two locations were associated with the event or its sponsors, I probably would have been lost.

The correct building was a stone's throw down the block. I didn't pepper these fellow attendees with questions, but I did ask if they worked at the same place, as they clearly knew each other. They said yes and no, equivocally: probably yes at some point, no now, and yes of course to having similar interests in data. We all arrived at the correct floor in the correct building, I believe a WeWork space. My impression was that most of those present were software engineers of some kind, some with nonprofits, who showed up partly out of self-interest in access to data for their projects and in what the so-called 'open data' landscape looks like and is aspiring towards.

There were many free sandwiches and beers, which gave the event a specialized feel. I had sandwiches and a La Croix. In the ten minutes or so of chatting before the panel started, I talked with a guy standing near me, slightly older, who said he had a software project he was in the process of onboarding to a friend's private-sector company. I asked a few elementary questions informed by my first two months at Pratt, like whether it was a database, whether it was in the cloud, and whether it used SQL. He said it was NoSQL. I've found, even this early, that there's a consistency of ideas and themes once you start discussing projects in the data community.

Additionally, a survey was handed out at the beginning that, judging from its wording, was intended to be filled out at the panel's conclusion, though no one called for them and I didn't see a bin for returns. I still have mine. It isn't specific to this Data Stewards event but covers Open Data Week in general. Its most telling response option, I thought, was the most advanced answer to "What is your level of data expertise?", which reads, "I am a data expert with no fears, who is happiest when given a messy dataset to wrangle." Along with the rapport of the group, this suggests to me that the event, and the week in general, is consciously aimed at advanced information engineers.

The overall slant of the panel and attendees, I gathered, was about prying data from the private sector, and those attending had projects that could use it. However, as the panel went on, many comments cast the private sector as an efficient beast that is ready to sell and even compete with its data. Everyone there wanted 'data collaboratives' (private, nonprofit, government) to become more systematic and sustainable. They wanted more 'piloting' and prototyping, and predicted a 'reimagining of statistics in the 21st century.' Still, there were striking differences between the three sectors discussed, several of which were openly acknowledged.

The private sector had the most need to reflect on its biases, as its interests could change, and such a company would typically also have a desire to 'get its name out there.' Sometimes it's even tricky for these companies to get involved in a data agreement if the profit is more long-term than short-term. Cuebiq, a three-year-old startup for location intelligence from consumers, had a representative present named Brennan Lake, who spoke about its Data for Good program. The program uses opt-in smartphone app data to supplement natural disaster response, and he mentioned in particular a focus on giving data rights to natural disaster professionals who can use them appropriately.

However, it was also acknowledged across the board that access to data can sometimes come before genuine solutions or use protocols. Rules and a contract repository were mentioned as desired. Estonia, by contrast, already has legislation for data sharing, and Denmark, from which a statistician was present, pulls its census results from administrative data, employing just two people. Nick Eng from LinkedIn likewise noted that using information they already have requires only about two analysts, compared to an external project. Brennan from Cuebiq spoke about 'figuring out the ask' as a difficult part. Privacy, as a matter of upfront attention and cost, was highlighted in particular by Nick from LinkedIn. In these upfront negotiations, Lake mentioned a 'privacy by design paradigm,' and Eng emphasized the cost of producing a sharing agreement that is 'as hard as possible to abuse,' but said that was the only way they were willing to enter sharing agreements.

I can think of several connections to design, identity, and concept politics from the Foundations course readings. Talja & Hartel, in their look at the user-centered turn, favor turning more to the audience or user in an effort to reflect realistic demographics and situational contexts, and not just investigating how researchers are using a system and whether their 'needs' are met. This is similar to a turn toward individual researchers, or the so-called stewards at private companies, reflecting on the information they formulate and seek, and on their culture. The event did feel like a tech culture to me, although the most straightforward panelist, I thought, was Adrienne Schmoeker from the Mayor's Office of Data Analytics, a new office employing about eight people. The Mayor's Office has the advantage of being an 'enterprising organization,' she said, always minding to serve the city's 8.6M people. Nonprofits, by contrast, are more like government than private companies in this respect: they can be much less efficient in contract production and may be just trying to keep the lights on in their offices. A private company, on the other hand, may have more of a sense of 'giving back' for using city services and, frequently, census data.

It seems that in an imaginable future, more companies and even individuals may seek data; Schmoeker from the mayor's office anticipated eventually having an open help desk for data, but right now her office addresses matters like STEM (Science, Technology, Engineering, Math) funding for schools, free lunches for kids, ambulance speeds, and tenant abuse. However, as she said earlier, "there's no ideal dataset," and a live stream without history doesn't highlight much that is useful. Another panelist echoed that if it's less private, it's more futile. This seems to invoke a more conceptual turn in use evaluation; in other words, not just "task oriented" (Talja & Hartel, 2007) but turning to users with what seems to me like situational awareness and occasional cynicism.

Similarly, I can relate information needs, or a burgeoning 'outlook' methodology, to design needs and the idea of an axis that actually dishes out preference on multiple traits while representing only one, as Costanza-Chock describes in her piece on design justice. There are, it seems to me, mechanized intersectionalities, like looking more dryly at how people use a system or what biases are implicit in their needs (looking at private companies or individual researchers), versus conscious intersectionalities, on which Costanza-Chock mounts the identity of Black Feminism, like looking at how users have conceptualized or contextualized their information needs. Some of this may include parsing hidden intersections.

To me it seems there is an interest both in delineating information by designers, in the "supply chain" (Sayers, 2018), as it were, and in allowing researchers and groups to self-pool data and identity that is, one would hope, increasingly less intersected by an axis that addresses their need only in a shadow; Costanza-Chock notes that even some particular community centers are sites of both oppression and resistance.

Given the axes already in place, I agree that progress depends on a turn from looking at systems to looking at biases in groups, and from that to changes in design that deconstruct shadow interests. It was clear that even this Open Data Week event existed within a particular culture. I think we are at an excess of intersections, with everyone on the web, and there is a need, in myself at least, to locate earlier in timelines and to parse interests that are disadvantageously melded. In my experience this has to do with looking and working before and after points of apparent significance. Data professionals are already looking for granularity of information, as Nick Eng from LinkedIn mentioned in preference to surveys. A move toward reflection and granularity in interpreting users (or researchers) seems most important to me, as there may be as much to deconstruct there as in a 'system.' A heightening of design theory may logically follow. One of the panelists also mentioned the MIT Media Lab, which encourages "anti-disciplinary research" and already tracks mobility data to gauge housing inequality in and around Boston. It was clear and refreshing, at any rate, that all attending seemed geared toward outside-the-box thinking, at least as perceived by me.

References

  • Talja, Sanna & Jenna Hartel. (2007). “Revisiting the user-centered turn in information science research: an intellectual history perspective,” Information Research 12(4).
  • Costanza-Chock, Sasha. (2018). “Design Justice: Towards an Intersectional Feminist Framework for Design Theory and Practice.” Proceedings of the Design Research Society 2018.
  • Sayers, Jentery. (2018). "Before You Make a Thing: Some Tips for Approaching Technology and Society."


The Feeling of Technology

What makes us feel?

From a biological perspective, nerves located at integral parts of our bodies help us interpret the external stimuli that come into contact with us. The amygdala, a limbic structure in our brain, helps us process emotions and is a component of what makes humans unique. The way our bodies have evolved has made us analog creatures that react well to external stimuli in the natural world, and this in turn has helped us become highly adaptable to earth's different environments (Norman, 1998). From a technological standpoint, what happens when we begin to build machines to be more like us? What happens when we want our machines to replicate our innate emotions, or our psyche, and perform for us?

These were questions I thought of while attending UXPA's Emotionally Intelligent Design Workshop on February 16th. During this workshop, Pamela Pavliscak, a specialist who studies the relationship between our emotions and technology, asked us to partner up and design an app or piece of technology with human emotion in mind. We were required to use two themes as the basis of our invention; my partner and I had to create a dating app for single people. To help us, Pamela offered examples of how the tech industry has already begun using forms of emotion, like our gestures and tone of voice, to implement design features that build programs that react to us. A machine's reading of our emotions then prompts it to respond in a way that's human, but not quite.

An example of this is SimSensei, a virtual human interviewer created to help health care professionals make more informed decisions about their patients based on the patients' responses to the virtual interviewer. SimSensei is represented by a virtual human named Ellie, who is programmed to conduct interviews that help "…create interactional situations favorable to the automatic assessment of distress indicators, defined as verbal and nonverbal behaviors correlated with depression, anxiety, or post-traumatic stress disorder" (DeVault et al., 2014, p. 1061). Essentially, with a virtual helper like Ellie, people at risk of certain mental health disorders can feel able to open up, and in turn receive the right treatment. Patients are often misdiagnosed in the medical field, so I think SimSensei has the right programming to flag warning signs of a particular disorder (keep in mind that it is mainly being used in diagnosing mental health issues).

In my honest opinion, it almost feels like Ellie has been programmed to trick patients into thinking they can trust it. During the course of an interview, the patient is being monitored, and every question Ellie asks is designed to elicit a response from the patient, either through speech or through facial changes. Here is a YouTube video that shows the sort of questions Ellie is programmed to ask during her interviews and the type of facial tracking the machine uses.

Another great example offered to us was Toyota's 2017 movie presenting a futuristic vision of how some cars may be developed (access it here). The car featured in this short movie is a concept model, along with the AI named "You-ee" that is built into it. We see the car's AI offer advice, act as "wing-man", and, my personal favorite, give positive reinforcement. During the workshop, only the clip from 5:45 to 6:34 was shown; seen in its entirety, the movie gives a glimpse into what an emotionally intelligent system can do for us. Giving something like "You-ee" human-like qualities (like its ability to make a joke out of Noah's messy hair) allows us to view the car as an extension of ourselves. More importantly, I think a dependable AI is something that will allow individuals to flourish and establish better ties with their human counterparts.

Learning about the different types of emotion-based systems that are already on the market reminded me of Phoebe Sengers's remarks on AI as "..autonomous agents, or independent artificial beings" (Sengers, 1999, p. 10). We can, at this point, say that Ellie is a step away from being an autonomous agent. Although SimSensei is currently only being used to help doctors diagnose mental health patients, won't this tool eventually be programmed to perform the diagnosing by itself and then also administer treatment?

After reading Sengers's article, I now understand how implementing emotion in our programs can push our machines to the next level. Ellie is programmed with a voice and made to connect with humans so that we can better understand our own species. We will always be building towards the future, but we always want to keep our connections to one another close. After all, humans are empathetic, and this quality will be incorporated into the things we create. "You-ee" is a perfect example of how the relationship between human and AI can potentially be a harmonious union.

At the end of the workshop, all the groups presented their designs and prototypes. My partner and I decided to create a dating app that required all users to scan a full-body image of themselves and display it on their page. Since I've never used a dating app, I was never subjected to their cruel reality; according to my workshop partner, dating apps can make finding a partner relatively uncomfortable and weird. By building in a way to circumvent that discomfort and dishonesty, we believed displaying your entire self is a great way of creating a more open dating world. But you may ask at this point: "Where's the portion of your app's design that makes your prototype emotionally intelligent?"

And I will answer: “We’re not at that point yet”.

References:

  • DeVault, David et al. (2014). SimSensei Kiosk: A Virtual Human Interviewer for Healthcare Decision Support. 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2014. 2. 1061-1068.
  • Norman, Don A. (1998). The Invisible Computer: Why Good Products Can Fail, the Personal Computer is So Complex, and Information Appliances are the Solution. MIT Press. Chapter 7: Being Analog
  • Sengers, Phoebe. (1999). “Practices for a machine culture: a case study of integrating cultural theory and artificial intelligence.” Surfaces VIII.


Digital Afterlives Symposium

I have always been fascinated by what happens to our digital data after our lives end. I had many questions on this topic, and I found many of them answered when I attended a symposium on Digital Afterlives at the Bard Graduate Center. Four speakers presented papers on digital afterlives: Abby Smith Rumsey, Robin Davis, Tamara Kneese, and Margaret Schwartz. Each presenter explored the various ways we preserve, resurrect, and prolong the lifespan of digital data. They also delved into the challenges and complexities of these technologies and how we understand our mortality.

A Mere Shadow of the Past: How Memory Creates Identity

Abby Smith Rumsey is a historian and archivist, and she presented on the way memory defines us in regard to digital data. She explained that there are two types of memory: first, memory embedded in our DNA, which allows humans to survive by recalling information; and second, acquired memory, which we utilize in our day-to-day life. Rumsey stated that this second form of memory leads to predictions about what is going on around us and helps us function in the world.

Rumsey also stated that our imagination is memory in the future tense and that imagination forces us to think outside of our immediate surroundings and past behaviors. Rumsey then went on to discuss how books are the prosthetics of knowledge because they are extensions of our memories which we can return to over and over again. She also believes that the digital space can work in the same way.

She stated that the Web was initially created as a bulletin board, not a memory bank, and that if we want to utilize it as a placeholder for memories, each of us needs to be trained in digital literacy to curate our lives. Each of us has to learn the tools needed to preserve our digital memories so that they do not fade away. Rumsey believes we cannot assume that our digital memories will live on without our active participation in making it so.

I found that Rumsey's discussion of preserving memory relates to Joan Schwartz and Terry Cook's article Archives, Records, and Power: The Making of Modern Memory. In it, Schwartz and Cook state that without preservation, "memory falters, knowledge of accomplishments fades, pride in a shared past dissipates."1 Whether an individual is uploading travel photos to a website or blogging about a family reunion, preservation is paramount in order to retain that digital information so that the memory of it is not forgotten.

The Final Death(s) of Digital Scholarship: An Ongoing Case Study of DH2005 Projects

Robin Davis is the Emerging Technologies and Online Learning Librarian at John Jay College of Criminal Justice. Her presentation focused on the digital afterlife of digital scholarship. She first discussed how the final death of digital data leads to it dissolving and then disappearing. Davis stated that digital data decays quickly, and that digital scholarship requires ongoing, active management to keep a website from breaking down.

Davis provided several examples of digital scholarship projects from 2005 and tracked their digital decay over a period of years. She demonstrated how each project showed signs of decay and an unexpected afterlife: one website had simply disappeared, while another had become a fraudulent site where all of the text was copied and pasted from the original 2005 project website. This incident reminded me of a point Roy Rosenzweig makes in his article Scarcity or Abundance? Preserving the Past in a Digital Era about digital information's susceptibility to alteration and forgery. Rosenzweig writes, "Digital information, because it is so easily altered and copied, lacks physical markings of its origins, and, indeed, even the clear notion of an 'original'".2 A forged website can lead to confusion for users, especially when trying to distinguish an original document from a plausible counterfeit.

Davis then went on to explain the various reasons why a website for a digital scholarship project can go down such as project team changes, hosting issues, lack of reliable funding, and not updating the Content Management System.

At the end of her presentation, Davis discussed the importance of preservation, and argued that it should start at the beginning of a digital scholarship project. She stated that there is nothing worse than doing all that work just to let a website crumble, especially if future users want to utilize the information. She also offered preservation tips, such as web-recording the website or submitting its URL to the Internet Archive.

Death, Disrupted

Tamara Kneese is an Assistant Professor at the University of San Francisco, and her presentation focused on deceased individuals' social media accounts and the rise of death startups. Kneese began by stating that social media accounts such as Facebook and Instagram can become treasured family heirlooms: places where the relatives and friends of a deceased person can celebrate that individual's life.

Kneese then discussed how Facebook has become a ritual graveyard and that the dead outnumber the living on the social media site. She went on to explain that in the early years of Facebook, the social media site would deactivate a deceased person’s account. After the school shooting at Virginia Tech in 2007, there was a proliferation of memorial pages on Facebook which allowed family members and friends to remember those that were lost. It also allowed journalists to find information on Virginia Tech students who lost their lives in the shooting. In response to this, Facebook created a memorialization feature for users to reconnect with the dead.

Kneese then went on to discuss the rapid rise of digital death apps in the 2000s, such as Legacy Locker, DeathSwitch, and VitalLock. These apps were created so that people could plan what to do with passwords, social media accounts, and emails in the event of their death. She discussed how many of these apps quickly became defunct, as there wasn't much consumer interest in death apps.

At the end of her speech, Kneese went on to talk about how people should include their digital data when doing estate planning so that they can preserve and share their online accounts with family members. She discussed the importance of having final wishes when it comes to your passwords, emails, blogs, and websites so that family members can handle your digital remains properly.

The Haptics of Grief: A Taxonomy

Margaret Schwartz is a Professor at Fordham University, and her presentation was on the taxonomy of grief and its relationship to the digital space. Schwartz began her lecture with the spectacle of suffering, which she linked to deaths resulting from public executions, beatings, and hangings. She pointed to the open-casket viewing of Emmett Till's mutilated body as a striking exhibition of sorrow.

Schwartz went on to discuss the history of preparing a dead body and stated that throughout the centuries women usually cared for the dead before burial. She explained the meticulous process of how women would wash and wrap the body in cloth. Schwartz then discussed the embalming process and how this changed the body’s physicality by providing a glamour to the corpse.

Schwartz concluded her presentation by sharing her viewpoint that our digital presence should emulate the preparation of a dead body: we should take care when accessing the digital space and treat it with respect. She believes that touch lingers in technological spaces and that this is our mode of understanding. Schwartz also stated that the popular conception of the digital as non-physical is woefully inaccurate; the physicality of the digital takes the form of server farms, packets, and the computers we use. Everything online, she declared, is tactile.

Conclusion

Each speaker brought insights on the topic of digital afterlives that I found interesting and that made me reflect on the steps I should take with my own digital data. The key point continuously mentioned was the importance of preserving your digital data and memories. Several of the presenters stated that preservation of digital data should be done by everyone, because that information may have tremendous significance not only to you but also to your family members and friends after you have departed from this world.

References:

  1. Schwartz, Joan M. and Terry Cook. “Archives, Records, and Power: The Making of Modern Memory.” Archival Science 2 (2002): 1-19.
  2. Rosenzweig, Roy. "Scarcity or Abundance? Preserving the Past in a Digital Era." The American Historical Review 108, no. 3 (2003): 735-762.

“Advancing Racial Equity in Your Library” Event Response

For my event attendance, I am reviewing the webinar entitled "Advancing Racial Equity in Your Library: Case Studies from the Field," presented by Race Forward's Government Alliance on Race and Equity (GARE) on October 10, 2018.

The two speakers were Gordon Goodwin from the Government Alliance on Race and Equity and Andrew Harbison from the Seattle Public Library. To help capture the intention of the talk, the goal of GARE was given: "to develop the capacity of libraries to make racial equity a priority within our libraries, cities, communities, and national associations."

Racial equity is a hot topic in the information world, yet it is a problem that still has not been properly addressed. For many communities, libraries are the sole free and accessible information and cultural center, so racial equity needs to be made a priority. Children from poor and under-privileged areas rely on technology and aid from public libraries more than ever to receive the same quality of education and opportunities as more privileged communities.

If racial equity does not exist in libraries, it only adds to the problems the socially excluded face in society. Public libraries need to understand the magnitude of what social exclusion can do to a community. John Gehner points out that there are severe consequences to the "negative perceptions affecting the way that professional librarians approach those who face social exclusion for many different reasons," explaining what happens when libraries and their staff do not support equality.[1]

Realizing the importance of racial equity, specifically in public libraries, Goodwin and Harbison got right into it. Goodwin began by laying out the goals they wanted the webinar to achieve, first and foremost: to inform and to motivate the audience into action. I believe this is the purpose behind most, if not all, seminars and webinars dealing with social issues, especially those about the information world. They are tools to educate and spread awareness of topics that are typically ignored, and motivation is the key to enacting change. Goodwin stated right away that he wanted to "motivate listeners to take action," and to show them exactly how to do that.

The rest of the objectives were to understand the critical need for libraries to focus on race and improve equality for all communities, to learn about the ideas and tools that support racial equity, and to see how libraries are applying all of this to increase access and improve life for patrons and staff of color.

Goodwin next defines racial equity as "closing the gaps so that race does not predict one's success, while improving outcomes for all." He points out that race plays an important role in determining how well someone does in life, shaping the barriers that people of color face. With improved racial equity, Goodwin states, libraries can become places that help eliminate those barriers and reduce race as a predictor of an individual's success.

Putting race in front of equity focuses the strategies on improving quality of life and reducing the barriers that people of color face. Goodwin wants people to understand that race does not separate humans beyond the superficial level of appearance, and that the practices and enforcement of laws in American society remain discriminatory even though segregation and racist discrimination are no longer legal. Our cultural institutions still contribute to outcomes that disadvantage people of color, and awareness of this is an important first step in the right direction.

Briefly, Goodwin acknowledges the equal importance of addressing the inequities faced on the basis of gender, class, and disability. The intersectionality of these issues is key, as race plays a role even within those contexts. He then continues on to the topic of how to begin achieving racial equity.

When thinking about issues of race and racism, Goodwin notes, there is a common tendency to focus on individuals, to immediately ask who is racist and how. It is better to focus on the institutional basis of racism: changes within the structures of society matter more than individual cases or people. We need to ask what roots of racism and prejudice exist within the foundations of our government and institutions, the systems that determine how society functions. This is what will produce more equitable outcomes. Goodwin claims that "achieving racial equity requires us to target strategies to focus improvements for those worse off," and that there is a need to "move beyond service provision to focus on changing policies, institutions and structures."

Goodwin then turns to the history of libraries and how, during the creation of these institutions, people of color were not allowed any access. The roots of many laws and policies made at that time are still with us. Considering which laws and policies may still be inadvertently harming certain communities today is a way to begin identifying the changes that need to be made: there are rules that can unintentionally hurt certain people, such as policies that disproportionately burden low-income groups and prevent them from participating fully.

In the same vein, there are also invisible or unrecognized prejudices, ingrained in people by the institutionalized racism of our society, that can be very harmful, because they lead to racist actions that negatively impact people of color and the opportunities they receive. The example provided was a white librarian being more helpful to white patrons: waiving their fees more often and giving them more renewals and leeway than patrons of color receive. Collecting data on how practices and procedures affect people of color is important for finding out where changes are most needed.

Implementing a racial equity initiative is the proposed solution for changing things at the root of the problem rather than merely assigning individual blame. Harbison begins by polling the online audience on how many of them represent libraries that already have such an initiative. The presenters then walk through case studies, providing evidence of what these initiatives have successfully achieved and what impact they have had on racial equity. This wrapped up the webinar in an ideal way, since the first objective was to motivate action: examples of institutions that answered that call and created better opportunities and more equal treatment for people of color, in places so relevant to social exclusion and the information access gap, provide the best motivation for the audience, who can see that this action works.

[1] John Gehner. Libraries, Low-Income People, and Social Exclusion, p. 39.
-Brianna Martin, Sula Info 601

Webinar can be accessed here: http://www.ala.org/pla/education/onlinelearning/webinars/ondemand/racialequity

Slides provided to accompany the presentation can be found here: http://www.ala.org/pla/sites/ala.org.pla/files/content/onlinelearning/webinars/archive/Advancing-Racial-Equity-in-Public-Libraries_final.pdf

References:

Gehner, John (2010). "Libraries, Low-Income People, and Social Exclusion." Public Library Quarterly, 29(1), 39–47.