Conflicts with Cataloguing Structures

Emily Drabinski’s article, “Queering the Catalog: Queer Theory and the Politics of Correction,” demonstrates the challenges presented by our desire to open and unlock the classification and cataloging systems within library structures. Since the late 1960s, scholars and professionals of information studies have challenged the neutrality of the Library of Congress’s traditional classifications and subject headings, demanding that vocabularies be corrected to reflect current social and political contexts. While specific classification and cataloging decisions in library structures have been “fixed,” Drabinski’s queer theory demonstrates that any corrections made are only conditional, never final.


Libraries are stable spaces, controlled through traditional classification structures and vocabulary systems that provide standards and guides for both producers and consumers of information. Consequently, the static nature of libraries makes them resistant to change. As Drabinski argues, this is problematic because libraries are dependent on language. Language transforms over time; it is adapted into new contexts and given new meanings. The information acquired through libraries is therefore organized and identified through classifications and subject headings that become socially and politically incorrect over time. More simply, information and materials within libraries end up being misrepresented.


The root of this problem stems from the static quality of hegemonic library classification and cataloging systems. To combat this misrepresentation, Drabinski considers continuous revisions and additions to the library’s classifications and subject headings necessary. Yet even as she acknowledges the value of such corrections, they conform to the hierarchical power structures within the library’s catalog. If we break down this system, we can see that the cataloger who originally classified and cataloged a material within the organizational system, the critical cataloger who requested the revision, and the Library of Congress, which judges whether classifications and subject headings are suitable, all hold significant hegemony over how information is represented. To compensate for our inability to dismantle this hierarchy, Drabinski asserts that librarians and catalogers should open and engage in discourse with users about the limitations of our cataloging systems. However, this response is not sufficient. Libraries may not have enough staff or resources to fully dive into their specific cataloging and subject heading issues. Users may not seek out library professionals to voice their concerns, or even have the luxury of time to listen to the history of and reasons for the library’s current system. While Drabinski continues to approach the issues of hegemonic cataloging systems head on, I suggest we incorporate a sideways approach.


The purpose of forming knowledge organizations and structures within libraries is to enable both producers and consumers of information to navigate and access quality sources of information. How rich and extensive the records are in describing the various materials within the library determines how much quality information is communicated. As Christine Pawley states in her article, “Information Literacy: A Contradictory Coupling,” “The decisions that indexers, catalogers, and classifiers make in providing intellectual access to the contents of books and articles through subject headings, and index terms, and physically or virtually allocating works to particulars areas of the library collection, contribute to the ways in which researchers think” (Pawley, 2003). Pawley recognizes that the production of accessible knowledge does not end at the physical or virtual library shelf, nor does it move in one linear direction. It is a process that continues to recontextualize sources, perpetually moving, connecting, and growing. Rather than remain within the confines of controlled cataloging structures, we should widen and loosen our perspective. As Pawley notes of Hope Olson’s argument, we must relinquish control and create openings within these structures so that power can leak out as well as in. Therefore, when forming classifications and subject headings, we cannot use what Ross Todd identifies as a “one-size-fits-all” approach. We must engage in a more critical and collaborative approach that considers all aspects of a source: its content, the context of its production as well as its author, its history (specific to the material item and the larger picture), and its relationship to the works that inspired it and the ones it inspired. As these facets change and evolve with time, we must continue to engage in this process of reformation and discourse. Our classification systems should always be in flux, evolving, and changing in relation to one another.


While this method may be too much work for libraries to continuously manage, as well as financially burdensome, especially for large collections, such a model does exist and has been quite successful in dissolving the rigid structures of our current cataloging system. Artsy is an online art collection curated through its own classification system and technological framework, called “The Art Genome Project.” The Art Genome Project maps characteristics, or “genes,” that connect artists, artworks, architecture, and design objects throughout history; currently, over 1,000 genes exist within the project. While this system is similar to tagging and mapping local vocabularies, Artsy’s genes are more firmly rooted and cohesive: Artsy’s 1,000+ characteristics are weighted in proportion to one another. Categories within the Art Genome Project are displayed as a complete list and organized numerically and alphabetically. Within “B,” for example, we find “Bauhaus” (an artistic movement) listed below “Bathers” (a subject found within artworks). If we closely examine the art that relates to the gene “Bathers,” Artsy provides a description of this subject matter and its larger history, a searchable list of artworks that contain the subject, and a list of related categories and artists. This structure enables users to access and obtain information through a web of related knowledge. Additionally, Artsy’s widened, structured approach allows for collaboration with artists, galleries, museums, auction houses, scholars, institutions, and many others. Such collaboration and discourse ensures that the information within Artsy’s gene web is of quality and remains accessible. Furthermore, as Artsy collaborates with other leaders in the field, it continuously acquires new artworks and information, adds new genes, and restructures relationships.
Although Artsy is a virtual collection, I believe that we can apply the same techniques within the physical spaces of the library. As seen with Artsy, the actual space where the work resides is not crucial. Rather, what is important is the ways in which information is represented within these webs and consequently communicated back to users.
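To make the structure concrete, the kind of “gene web” described above can be sketched as a small graph of weighted characteristics. This is a toy illustration only: the artwork names, gene weights, and the overlap scoring below are my own assumptions, not Artsy’s actual data model or ranking algorithm.

```python
# Toy sketch of a "gene web": each work carries weighted characteristics
# ("genes"), and relatedness between works is the summed overlap of those
# weights. Illustrative only -- not Artsy's real data or algorithm.

genes = {
    "Seurat, Bathers at Asnieres": {"Bathers": 1.0, "Pointillism": 0.8},
    "Cezanne, The Large Bathers":  {"Bathers": 1.0, "Post-Impressionism": 0.9},
    "Gropius, Bauhaus Building":   {"Bauhaus": 1.0, "Architecture": 0.7},
}

def related(work, catalog):
    """Rank other works by the summed weight of the genes they share."""
    scores = {}
    for other, other_genes in catalog.items():
        if other == work:
            continue
        overlap = sum(
            min(weight, other_genes[gene])
            for gene, weight in catalog[work].items()
            if gene in other_genes
        )
        if overlap:
            scores[other] = overlap
    return sorted(scores, key=scores.get, reverse=True)

print(related("Seurat, Bathers at Asnieres", genes))
# → ['Cezanne, The Large Bathers']
```

Because every work is linked to every other work it shares genes with, a user who arrives at one painting can wander outward along related subjects, movements, and artists rather than down a single fixed hierarchy, which is exactly the navigation the gene web enables.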


Drabinski, Emily. “Queering the Catalog: Queer Theory and the Politics of Correction.” The Library Quarterly: Information, Community, Policy, Vol. 83, No. 2 (April 2013), pp. 94-111. http://www.jstor.org/stable/10.1086/669547

Pawley, Christine. “Information Literacy: A Contradictory Coupling.” The Library Quarterly: Information, Community, Policy, Vol. 73, No. 4 (Oct., 2003), pp. 422-452. http://www.jstor.org/stable/4309685

Rosenzweig, R. (1991). “Politics and anti-politics in librarianship.” The Progressive Librarian, No. 3 (Summer 1991), pp. 2-8. http://www.progressivelibrariansguild.org/PL_Jnl/pdf/PL3_summer1991.pdf

Artsy – The Art Genome Project. https://www.artsy.net/categories

Co-evolution of Humanity and Technology

As much as I enjoy reading Norman’s thoughts on the co-evolution of humanity and technology in his chapter from The Invisible Computer, I find his views on technology’s impact on humanity to be a bit pessimistic. He sees technology and machines as foreign entities beyond our control, while overlooking many aspects of technology such as its potential as a tool for creation and knowledge transfer.

While Norman makes a good point in suggesting people are “forgetful of details, with a poor sense of time, a poor memory for facts and figures, unable to keep attention on a topic for more than a short duration, reasoning by example rather than by logic, and drawing upon our admittedly deficient memories of prior experience,” I think comparing humans to analog technology is a questionable analogy. Analog technology was created for the purpose of storing and reproducing information in a systematic way. It is as much machine as any modern technology, only dated. Calling ourselves analog suggests we are still living in the past. In Norman’s view, “people are analog, insensitive to noise, insensitive to error. People extract meanings, and as long as the meanings are unchanged, the details of the signals do not matter. They are not noticed, they are not remembered.” I find it hard to agree with this statement. While we do have a high tolerance for errors, we are sensitive to noise and sensitive to error. Our accumulated knowledge and experience have taught us that noise is detrimental to decision-making. We spend a tremendous amount of time and effort minimizing errors by documenting, reproducing, and examining data to identify patterns. With the help of technology, we are constantly learning from our mistakes and trying to make sense of the world by analyzing new and historic information.

Norman points out that “human beings are the results of millions of years of evolution, where the guiding principle was survival of the species, not efficient, algorithmic computation.” Norman would be right if we still lived in prehistoric times as hunters and gatherers. When people started forming societies organized around agriculture and institutions, our priorities shifted. As populations grew, so did our need for stability and predictability. Our obsession with efficiency and predictability can be traced back to our need to make better forecasts to increase food production in order to sustain an ever-growing population. Our obsession with stability is needed for governance and establishing order. Human beings have “co-evolved with social interaction, cooperation and rivalry, and communication.” Society would not have thrived and progressed without the stability and improved efficiency in toolmaking and resource utilization made possible by technological progress.

Norman points out that technological progress and our obsession with efficiency in production have reduced us to machines on an assembly line: “hence too came the dehumanization of the worker, for now the worker was essentially just another machine in the factory, analyzed like one, treated like one, and asked not to think on the job, for thinking slowed down the action.” While this is somewhat true, technological progress has the potential to save us from dehumanization by automating low-skill jobs and giving us more time to focus on creative tasks that require cognitive skills. Although there have been numerous debates on the economic implications of job displacement by technology, improved efficiency through automation has made goods and services more affordable and accessible. Advances in technology have also created many job opportunities in the creative industries and for information professionals. Norman also raises the issue that technology has moved so fast we are unable to keep up: “The slow evolutionary pace of life is no longer up to the scale and pace of technological change. The accumulation of knowledge is enormous, for it increases with every passing year. Once upon a time, a few years of schooling — or even informal learning — was sufficient. Today, formal schooling is required, and the demands upon it continually increase.” While Norman’s statement resonates with many of us who are always in pursuit of new knowledge to stay competitive, I feel that knowledge does not accumulate perpetually. Knowledge becomes obsolete as we find better ways of doing things. For example, while it is helpful to understand machine code, few software engineers use machine code for programming. New knowledge supersedes old knowledge. Whatever knowledge we find relevant today may not be relevant a decade later.
Our pursuit of knowledge goes as far back as prehistoric times, when we sought ways to identify weather patterns and better farming techniques. Technological advances facilitate the transfer of information, helping us stay informed of nascent and relevant knowledge. There is no shortage of vast libraries of digital information and self-guided online education. The sufficiency of education is subjective and highly dependent on individual need. It is up to each of us to decide whether to take advantage of these resources and adapt to an ever-changing world.

Norman brings up some interesting points, but I find his views a bit dated. I agree that technology should be created to complement us, but I find it questionable that “we are compliant, flexible, tolerant. Yet we people have constructed a world of machines that requires us to be rigid, fixed, intolerant.” Machines do not require us to be rigid and intolerant of errors. Machines are programmable and follow rules set by humans; they are as flexible as we build them to be. The way machines are built is a reflection of our capabilities in applying knowledge to build tools that advance our cause. With advances in digital technology, electronic devices have become portable and computer processors much more powerful. The costs of producing and storing information have become much cheaper, and access to information much easier. Without accuracy and precision, much of the technological progress we have come to appreciate would not exist today. Although “digital signals are limited in values,” they have enabled much creativity and information freedom. As complex as computing devices have become, they are still largely single-purpose tools that cannot make decisions and are only capable of performing tasks repetitively. Machines are tools that help us create better tools. While I agree that “We have constructed a world of machinery in which accuracy and precision matter. Time matters. Names, dates, facts, and figures matter. Accurate memory matters. Details matter,” I don’t think we have forgotten that we are still good at experimenting and inventing through trial and error.


Norman, D. A. (1998). The Invisible Computer: Why Good Products Can Fail, the Personal Computer is So Complex, and Information Appliances are the Solution. MIT Press. Chapter 7: Being Analog

Trickle-down Information: The Enlightenment Model and Information Dissemination in the Modern Library

Note: I believe this subject has the potential for expansion and further investigation. Any feedback, criticism, and questioning would be greatly appreciated as I am considering expanding this essay into a full research topic.

The Library is an establishment intended for the dissemination of information, the modern foundation of which is historically rooted in the Age of Enlightenment. As literacy and readership increased, foundations of knowledge and governing bodies began to invest in the construction and design of libraries. [1] The intellectual and wealthy elite of the Enlightenment spurred these modes of knowledge delivery, placing themselves as creators and controllers of information. The library and the university were established as means to circulate created information through a top-down structure. At one point, access was highly restricted, often denied to women, people of color, and those in poverty. [2] Today these are no longer strictly enforced laws of conduct, but the established system continues to place the same types of people at a disadvantage.

Many critics note the power dynamics established in the creation and distribution of knowledge based on the Enlightenment model. The distribution of information from the creator to the consumer continues to enforce this model of dissemination and the related top-down power structure. [3] The researcher, the student, and the public library patron are only able to access the resources their institution can afford or will allow. Libraries emphasize obtaining and providing collections that will meet the needs and expectations of their community. However, the community, as consumers, is not in a position to greatly influence the collection and distribution of information.

The Digital Age is believed to provide greater opportunity for disseminating information; however, most scholarly articles remain visible only through glass walls. The practice of open access is not, by itself, a solution to inaccessibility, since publishers and institutions often hold most republication rights to any scholarly production. “Library access to electronic resources is another widely acknowledged economic barrier.” [4] Classification and distribution reinforce information as a commodity available for commercialization. [5] Copyright holders limit distribution to specific journals, repositories, and databases. The biggest databases, often with the most diverse range of publications, are only accessible through educational institutions, including libraries. The consumer is dependent on which institutions they may access and what those institutions choose to make available.

Furthermore, bills such as the Stop Online Piracy Act (SOPA), the Protect IP Act (PIPA), and the Research Works Act have often run the risk of further hindering an open access system of information. [6] Opponents of open access often view information as a risk in the wrong hands. Peter Schmidt of The Chronicle of Higher Education notes criticisms of the potential for “the publication of inferior and unreliable journals” and “the risk that research in fields such as medicine will fall into the hands of people who might misuse it.” [7] Although these bills have not become law, their proponents echo the power structures and control of information exemplified by the Enlightenment Age.

The Library places great emphasis on obtaining and distributing materials of authority. We continue to see institutions of knowledge, universities and bodies of government, as the authorities on particular forms of information. Information produced and distributed through these institutions is considered the voice of scholarly authority. Minority groups are often underrepresented in academic institutions, and sometimes banned from shelves and curricula. [8] The continued movement toward open access creates new opportunities for equitable information distribution. In a consumer-based society, it is not surprising that information is treated as a commodity for trade. Publishers and institutions manage how users access information by selecting exclusive databases for distribution. The duty of the modern library is to move away from a neutral stance and defend accessibility, free speech, and the freedom of information. The Library as a disseminator is the door between creator and consumer. The ethical librarian should provide open access that will benefit and improve the lives of library patrons. The Library, as an institution of authority, should be a voice of dissent against political campaigns aimed at restricting information access. [9] The dissemination of information via a top-down power structure places those at the bottom at a significant disadvantage. The purchase and exchange of information is designed to benefit the publisher and the distributor, enforcing their authority as the all-knowing elite. The modern Library holds an institutional responsibility to involve the consumer in the process of information dissemination, providing greater opportunity for information creation and understanding.


References

  1. Dahlkild, N. (2011). The Emergence and Challenge of the Modern Library Building: Ideal Types, Model Libraries, and Guidelines, from the Enlightenment to the Experience Economy. Library Trends, 60(1), 11-42.
  2. Pawley, C. (2003, October). Information Literacy: A Contradictory Coupling. The Library Quarterly, 73(4), 422-452.
  3. Ibid.
  4. Pribesh, S., Gavigan, K., & Dickinson, G. (2011). The Access Gap: Poverty and Characteristics of School Library Media Centers. The Library Quarterly, 81(2), 143-160.
  5. Pawley, C. (2003, October). Information Literacy: A Contradictory Coupling. The Library Quarterly, 73(4), 422-452.
  6. Chadwick, R. (2012, December). Protecting Open Access to Taxpayer-Funded Research: The Rise and Defeat of the Research Works Act. The Serials Librarian, 63(3-4), 296-304.
  7. Schmidt, P. (2010, February 14). New Journals, Free Online, Let Scholars Speak Out. The Chronicle of Higher Education. http://www.chronicle.com/article/open-access-journals-break/64143
  8. Reichman, H. (2012, March). Opposition grows to Tucson book removals and ethnic studies ban. Newsletter on Intellectual Freedom, 61, 1-84.
  9. Rosenzweig, R. (1991). Politics and anti-politics in librarianship. Progressive Librarian, 3 (Summer 1991), 2–8. http://www.progressivelibrariansguild.org/PL_Jnl/pdf/PL3_summer1991.pdf

Librarianship for Social Justice

Personal note: in this blog post, I am trying to think my way through an issue on which I know I need to educate myself more. I am white, with a legacy that includes Southern slaveholders on my father’s side and German Nazis on my mother’s. It is my intention not to center Black Lives Matter around white people or the predominantly white professional fields discussed here, nor to suggest that White Saviors can step in to fix things, nor to pass the buck of responsibility to black activists, but instead to develop some kind of context for using this library degree in a transformative way. I don’t know if I’ve done this well, but I hope it’s better than not addressing the question at all.

The Neutrality Illusion and How to Combat it

Robert Jensen makes an interesting point in his 2004 article, “The Myth of the Neutral Professional,” when he states that an intellectual in any society is not neutral. Intellectual professionals, such as librarians, serve a function: to solidify the position of the elite. They do this by validating what they choose as important for the masses. Jensen discusses how librarians take on the agenda of the elite through things like acquisitions and programming, but something he does not acknowledge is the tagging system, which also confirms that agenda. Librarians are the gatekeepers of information. Today patrons have access to sources not kept by librarians for almost any information they like; however, the most valid sources of intellectual information are still housed in some form of library. Libraries get their funding from somewhere, which makes them an extension of the elite as well. A library may house many voices, but a higher structure chooses those voices. Accessibility has changed how patrons interact with information, and librarians can use this to create a more open library system, one that acknowledges its bias.
Intellectuals cannot ignore the interconnectedness of institutions in the United States. Institutional libraries do not stand alone in a web of power structures; a government unit of some kind funds them, and the rich and powerful elite, to some extent, control those government units. Libraries reach much farther than career academics and intellectual professionals, especially academic libraries. Today the average millennial has to go to college to be financially secure; therefore the impact of an academic library reaches more minds than ever before. So many people forming their own perspectives in the social sciences could alter the future of how Americans think. The question is, with so many sources of information accessible, how will the average American react?
Just because there is an option for someone to immerse themselves in new ideas does not mean they will not simply narrow their field of view to focus on what matters to them. Whether to embrace knowing a little bit about everything, or to accept that knowing everything about one thing is impossible, seems to be the intellectual conundrum of the 21st century. It is in this paradox that the excuse of neutrality is most dangerous. The idea of neutrality allows those desiring to narrow their field of view to continue doing so without recognizing the bias they are gaining. By not advocating for new voices, libraries can enable this behavior: “[…] to take no explicit position by claiming to be neutral is also a political choice, particularly when one is given the resources that make it easy to evaluate the consequences of that distribution of power and potentially affect its distribution.” (Jensen, 2004) If you look at the structure of cataloging, there is a particular field where this distribution of power is transparent: tagging. In the tags field, the goal is to describe a book in keywords findable by the patron. In a sense the librarian has the freedom to tag something however they like, but at the same time that person is limited to an acceptable “neutrality,” in which they must tag the item with terms society recognizes as associated with the object. Using conventional tags is good for someone seeking out that information, but it limits the ability of someone to stumble upon the material and be exposed to something new, or to a new viewpoint on the subject matter. If it became convention to tag items as related to fields that oppose them or offer a new view on them (less direct tagging), that could be a solution to this small-scale problem. The internalized habits of patrons, as well as their own biases, are something the intellectual professional should acknowledge.
Another solution could be to add a new field to the tagging system that identifies the source’s lens before the patron interacts with the source.
For example, if someone has limited themselves to knowing only about the deforestation of the Amazon, they might limit their keyword search to “Deforestation” and “Amazon,” which will educate them on that specific topic. The materials that person gains access to could include animals placed on extinction lists because of the deforestation, the parties actively causing it, and what governments might be doing to stop it. On WorldCat there is a field where that person can limit further by ‘Topic’: they can look at their subject of interest through a sociological lens, an agricultural one, an anthropological one, and many more. This field is the best existing answer to the lack of neutrality in the library field. There are still limited sources available under the ‘medicine’ topic as a lens on the deforestation of the Amazon (one, to be exact), but the patron can at least recognize a different lens on the same subject they have interest in.
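The idea of a separate lens field alongside conventional tags can be sketched in a few lines. This is a hedged illustration only: the records, field names, and search function are invented for the example and do not reflect WorldCat’s actual data model.

```python
# Sketch of lens-aware tagging: each record keeps conventional subject tags
# plus a separate "lens" field naming its disciplinary viewpoint. All records
# and field names here are hypothetical.

records = [
    {"title": "Vanishing Canopy", "tags": {"Deforestation", "Amazon"}, "lens": "ecology"},
    {"title": "Forest Pharmacy",  "tags": {"Deforestation", "Amazon"}, "lens": "medicine"},
    {"title": "Cattle and Soy",   "tags": {"Deforestation", "Amazon"}, "lens": "agriculture"},
]

def search(tags, lens=None):
    """Return titles matching all requested tags, optionally narrowed to one lens."""
    hits = [r for r in records if tags <= r["tags"]]  # subset test: all tags present
    if lens is not None:
        hits = [r for r in hits if r["lens"] == lens]
    return [r["title"] for r in hits]

print(search({"Deforestation", "Amazon"}))
# → ['Vanishing Canopy', 'Forest Pharmacy', 'Cattle and Soy']
print(search({"Deforestation", "Amazon"}, lens="medicine"))
# → ['Forest Pharmacy']
```

The point of the design is that the unfiltered search surfaces every lens on the subject before the patron narrows down, making the catalog’s range of viewpoints, and its limits, visible up front.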
The concept of neutrality in a library setting is, at best, an excuse for legitimacy. It needs to be clear to patrons that there are necessarily biases involved when dealing with any body of information, whether in a physical library or an online catalog. As library professionals, there are steps we can take to identify our catalog’s limits and create transparency with patrons. Informing the public that they are exposed to a limited collection of viewpoints at any given time could make them more open to new voices. It may help them realize that there will always be another way to view something, which is the true issue of the neutrality illusion: it creates an authority out of something that can only honestly claim to be a small collection of intellectual thought.

Jensen, R. (2004). The Myth of the Neutral Professional. Progressive Librarian, 24, 28-34.

Information Deserts

Access to information is widely viewed as a core principle of democratic society. But what if there are populations who don’t know how to find what they need, or even know that it is available to them? This thought occurred to me as I read Chapter 1 of “The Wealth of Networks” by Yochai Benkler. Benkler, an optimist who believes deeply in the potential power of the internet as a force for good, argues that “From a more substantive and global perspective focused on human development, the freedom to use basic resources and capabilities allows improved participation in the production of information and information-dependent components of human development.” [1] While this is almost certainly true, Benkler’s reasoning relies on the assumption that potential users (and producers) of information know how to access and use it.

As we discussed this topic in class, I thought of the library in my neighborhood, the people who use it, and what they might use it for. The library, obviously, houses a wealth of information, and also provides practical services like help with becoming a citizen and registering to vote. But how do people learn how to access that information? How do people even know where their library is? What if they don’t have one in their neighborhood, or town? I believe that, in fact, there may be vast “information deserts” here in our own city, as well as around this country and the world, where most people are not able to access the resources that are, in theory, available to them.

The idea of an “information desert” is based on the “food desert” concept, defined by the USDA as “…parts of the country vapid of fresh fruit, vegetables, and other healthful whole foods, usually found in impoverished areas…largely due to a lack of grocery stores, farmers’ markets, and healthy food providers.” [2] An information desert, therefore, might refer to both geographic areas without libraries or perhaps internet access, as well as groups of people – the elderly, possibly, or non-English speakers, or people without cell phones or home computers – lacking the ability to access available resources.  

A specific example of the latter concept is discussed by Jeff Cohen in his 2013 article, “Living in a College Information Desert.” Cohen responds to a piece in the New York Times, “Better Colleges Failing to Lure Talented Poor,” which highlights a disturbing statistic: “Only 34 percent of high-achieving high school seniors in the bottom fourth of income distribution attended any one of the country’s 238 most selective colleges.” [3] Cohen argues that “This phenomenon is largely due to a lack of information and access to cultural capital (i.e., knowledge about college and the associated application and financial aid processes)” and that “there are entire neighborhoods and even regions where nobody knows about or has attended selective colleges or, more importantly, that there are meaningful differences between the colleges that one might attend with respect to support, learning environments and graduation rates.” [4]

The effects of this situation are far-reaching. As the Times article points out, the graduation rate for low-income students attending local colleges is only 50 percent, versus 89 percent at selective colleges. [5] This fact alone limits the future prospects of these students, without factoring in that graduates of selective colleges will likely have better job opportunities than those who graduate from local colleges. When high-achieving students don’t attend universities with high academic standards, they are denied opportunities for success – and the world is denied their potential contribution.

The Times article suggests that the onus is on universities to address this issue. [6] Cohen has a number of suggestions, including funding more college counselors and programs that bring graduates from selective colleges to high schools in low-income communities. [7] I think a combination of efforts could, in this case, have a significant effect. I also think there is a role for the government, especially in ensuring that all public high school students know how to apply for financial aid (which may open up more possibilities for them).

More broadly, information deserts affect a variety of populations, especially those in low-income communities. How, for example, do the unemployed search for jobs? If they have a home computer with internet access, we might say that it’s easy enough to use employment websites. But what if they don’t have a computer or internet at home? They can certainly use the library. But what if their community doesn’t have a library, or it’s too far or difficult to reach? Their options then narrow considerably. (And even if they do have internet access, we are assuming that they know what sites to use and how to use them, and that they know how to write a resume and cover letter; this is a different kind of information desert, perhaps – an information literacy desert.)

Benkler’s fantasy of the internet as a great equalizer has merit. But we still live in a time when not everyone can access the internet, and not all of those who can know how to use it to their advantage. This will surely change organically over time as our culture becomes more and more “plugged in.” But in the meantime, we must work to ensure that all populations have ways of accessing information that is critical to their lives. This may mean bringing computers into senior centers; providing free wifi in public spaces; running advertising campaigns that tell people where to find the information they need; and any number of other case-specific solutions. Awareness of the issue is the first step towards finding a remedy.

[1] Benkler, Y. (2006). “Introduction: a moment of opportunity and challenge” in The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 1–18.

[2] http://americannutritionassociation.org/newsletter/usda-defines-food-deserts

[3] Leonhardt, Dave. “Better Colleges Failing to Lure Talented Poor.” The New York Times, March 16, 2013.

[4] https://www.fsg.org/blog/living-college-information-desert

[5] Leonhardt, Dave. “Better Colleges Failing to Lure Talented Poor.” The New York Times, March 16, 2013.

[6] Ibid.

[7] https://www.fsg.org/blog/living-college-information-desert

Can Our Stance Toward Facebook Be Critical Enough?


“Information, knowledge and culture are central to human freedom. How they are produced and exchanged in our society critically affects the way we see the state of the world.” (1) In our electronic world, a huge amount of our information comes from the Internet, and the production of that information is a complex thing that requires a critical approach from the consumer. Benkler is very optimistic about the way we use the public Internet space to inform our reading of this information. This shared public space allows many different voices to be heard, and Benkler claims that because so much information is available, there emerges “a more critical and self-reflective culture” (Benkler, pg. 15). Whether this optimism is justified is another matter.

Recent issues with Facebook over trending topics and the algorithms that monitor site content both support and challenge the idea of the user’s critical view of information from Facebook. How important is Facebook’s influence? The New York Times reports that Facebook has 1.71 billion members worldwide, and that half of American adults get their news from Facebook. (2) This is a new narrowing of news sources: certainly no single newspaper or single TV news show is relied upon by half the adult population. With that many people relying on Facebook as their single source of information, everything that Facebook does matters.

Kincheloe describes electronic media as providing us with a “secondhand culture, filtered and preformed in the marketplace and constantly communicated via cultural and mass media.” (3) This is what Facebook is doing in a very concentrated way: feeding us our culture and information, “filtered” as it chooses, and giving us a filtered monoculture.

In May 2016, The Wall Street Journal wrote about bias in the trending topics posted by Facebook, claiming that conservative news was downplayed while liberal news stories were chosen and emphasized. Facebook replied that these stories are chosen through algorithms, and so are neutral, and that guidelines are “in place to support consistency and neutrality.” (4)

While Facebook claimed neutrality, it nonetheless responded to the accusations of bias by changing its in-house training program. In June 2016, Facebook added political bias to its standard training sessions, alongside racial, gender, and other biases. (5)

In September, Facebook removed a 1972 Vietnam War photograph depicting a naked child from its site. However, member response (thousands of people reposted the picture) was such that Facebook reinstated the picture. In both these instances Facebook responded to user concerns.

What we see here is a constant back and forth between members and Facebook, a fight over control of the content. Members are not passive recipients of filtered information; they do question what they are seeing and reading. This supports Benkler’s idea that “individuals are less susceptible to manipulation by a legally defined class of others- the owners of communications infrastructure and media” (Benkler, pg. 9), and Kincheloe’s observation that control is never won once and for all: “hegemonic consent is always in flux” (Kincheloe, pg. 93).

But what do these two examples of member pushback really mean? We have two instances of Facebook changing its content because of outside input. Are they exceptions, moments when members were critical and aware that they were being manipulated? Is the norm the opposite? Is it true that most of the time Facebook’s readers are not critical? The bias in trending topics was reported by a former employee (a whistleblower), not by a user; only once it was exposed did members push back. Would even the most critical of readers have noticed a bias? Or would members be more likely to think that just because it was on Facebook it was authoritative? To draw a parallel: “The information encapsulated in an article stands alone, authoritative by virtue only of its presence in the volume. Legitimacy is conferred by its place on the Library shelf” (6) – or, to paraphrase, news on Facebook is legitimized by being there. How many users will be able to take the critical stance described by Kincheloe: “critical hermeneutics traces the ways the cultural dynamics (of popular media) position audiences politically in ways that not only shape their political beliefs, but formulate their identities” (pg. 103)? How able are we to “trace” the way Facebook “positions” us? Do we have the knowledge to do so? It does not seem that the open common space of the internet, as described by Benkler, is routinely able to give us a “more critical and self-reflective culture.” Perhaps we can only hope that whistleblowers (like Snowden) will continue to come along, as we users might not be able to maintain a critical stance educated and robust enough to unearth these manipulations.


(1) http://www.benkler.org/Benkler_Wealth_Of_Networks_Chapter_1.pdf, p. 1

(2) www.nytimes.com/…/facebook-vietnam-war-photo-nudity.html

(3) https://www.researchgate.net/publication/261773451_Rethinking_Critical_Theory_and_Qualitative_Research

(4) http://www.wsj.com/articles/five-things-to-know-about-facebooks-trending-controversy-1462915385

(5) http://www.ibtimes.com/facebook-introduces-political-bias-training-after-trending-topics-controversy-2385911

(6) http://www.jstor.org/stable/4309685, pg. 435

Oral Histories

I recently visited the Brooklyn Historical Society to conduct an interview with Brett Dion, whose official title is Oral History Project Archivist. Dion received his library science degree from Pratt, and began working at the New York Transit Museum during his final semester in school. He was there for seven years, first in the archives and then as a registrar (which involved maintaining the collection and handling loan paperwork). During his last couple of years at the Transit Museum, Dion began taking workshops at the New York Writers Coalition, and ended up volunteering with StoryCorps. As Dion describes it, StoryCorps is a mutated version of oral history because of its editing and streamlining process: often, when relating a story, people jump around and meander in the telling. While from the oral history perspective the narration should be left in its original state, StoryCorps condenses and edits to create a more straightforward version.

When Dion heard of an opening at the Brooklyn Historical Society in the oral history department last October, he jumped at the chance to interview. He wanted to expand his experience in collections, as he had never worked with audio recordings before. He was hired to stay on for two grant-financed projects; he is still working on the first, which must be 70% complete by this upcoming spring for the funding to continue. Much of the work done to preserve or archive collections at BHS is funded by grants, so much so that there is a staff member whose sole job is to apply for funds and write proposals. The project Dion is working on now involves legacy oral histories, which date back to 1973 and are stored on audio cassette. He is digitizing them, doing conservation work on the tapes that are falling apart, and making the histories more available than they have ever been before. Using a system called the Oral History Metadata Synchronizer (OHMS), developed at the University of Kentucky libraries, Dion will put them online and make them accessible in a novel way: the audio will be synced with the transcripts. When uploading an MP3 and a file of the transcript, it is possible to lock them in place and match up the words that occur at every one-minute mark. This entails Dion (or one of his two interns) listening to each oral history within the system. At every fifty-second mark, a bell sounds to tell the listener to pay attention; another bell sounds at sixty seconds, at which point Dion or the intern must highlight the word being spoken at that exact moment. The system then jumps to the next fifty-second mark, and so on. Dion says that after an adjustment period, the process can go pretty quickly. Before matching up the audio, there must first be an evaluation of the transcripts themselves, which were created when the histories were recorded, back in the 1970s and 80s. Dion and his interns listen to each recording while carefully going over the accompanying transcript, to make sure that the initial transcription was accurate and to correct any misspellings. Often the narrators will share their birthdate, but that information must be bleeped out as a protection against identity theft. The Oral Historian at BHS, Zaheer Ali, wants the year left in so that listeners have context about what was happening in the world at the time, so only the day and month are removed (in both the transcripts and the audio).
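The minute-mark synchronization Dion describes can be modeled as a simple lookup structure. The sketch below is purely illustrative: OHMS is a real tool from the University of Kentucky Libraries, but the `SyncPoint` structure, the `locate_in_transcript` function, and the linear-interpolation logic are my own assumptions about how such an index might work, not OHMS’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical model of a minute-mark sync index. One transcript word is
# anchored to the audio at each one-minute mark, as described above.
@dataclass
class SyncPoint:
    minute: int      # one-minute mark in the recording
    word_index: int  # index of the transcript word highlighted at that mark

def locate_in_transcript(sync_points, seconds):
    """Estimate which transcript word is being spoken at a playback time,
    interpolating linearly between the two nearest minute marks."""
    minute = seconds / 60.0
    # Latest mark at or before the playback time (fall back to the start).
    prev = max((p for p in sync_points if p.minute <= minute),
               key=lambda p: p.minute, default=SyncPoint(0, 0))
    # Earliest mark after the playback time, if any.
    nxt = min((p for p in sync_points if p.minute > minute),
              key=lambda p: p.minute, default=None)
    if nxt is None:  # past the last anchored mark
        return prev.word_index
    frac = (minute - prev.minute) / (nxt.minute - prev.minute)
    return round(prev.word_index + frac * (nxt.word_index - prev.word_index))

# A tiny example index: word 150 is spoken at 1:00, word 290 at 2:00.
points = [SyncPoint(0, 0), SyncPoint(1, 150), SyncPoint(2, 290)]
locate_in_transcript(points, 90)  # halfway between the 1:00 and 2:00 marks
```

Because only one word per minute is anchored, positions between marks can only be estimated; interpolating linearly between the surrounding anchors, as above, is one plausible choice.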

The tapes comprise sixty-seven interviews (starting in 1973) of Puerto Ricans, most of whom were born in the 1880s and emigrated to Brooklyn in the 1920s and 30s. As a result, many were trying to get jobs just as the Great Depression was hitting; not only did they have this going against them, but they were also badly mistreated by other immigrants at the time. Dion says the tapes have survived remarkably well, but that as he does the transcription and audio work, he is often struck by the scripted quality of the interviews. He notes that it seems clear the interviewers had some purpose in mind and were guiding the narrators to tell a specific story. This, of course, made me think of “The Ethics of Fieldwork,” by PERCS, and its specification not to ask leading questions (p. 6). I was not able to hear the recordings Dion has been working on, but his description led me to believe that the interviewers at the time were not being as ethical as one might have hoped (of course, at that time, the ethics of fieldwork were much less defined than they are now). We can also only hope that the participants were not chosen to be interviewed because they were seen as “exemplary or exotic” (p. 8), but simply because they were new residents of Brooklyn. Zaheer Ali’s philosophy is much more within the ethical framework PERCS lays out: he usually has a certain theme in mind, which dictates whom he interviews, but his strategy consists of letting those he speaks with tell him about their lives in whichever way they choose. One such theme revolves around the neighborhood of Crown Heights, where a riot broke out in 1991 between the Hasidic Jews and African Americans who lived there. Fatalities occurred, followed by days of violence and unrest. BHS went to the area two years later to find out how people were healing and recovering. As it is now the 25th anniversary of those riots, BHS is collecting new oral histories and making the 1993 recordings available to the public.

I see oral histories as giving voice to those who are not always heard, and undermining what Kincheloe and McLaren describe as discursive practices, which they write are “defined as a set of tacit rules that regulate what can and cannot be said; who can speak with the blessings of authority and who must listen; and whose social constructions are valid and whose are erroneous and unimportant” (2002, p. 94). This might be a romanticized view, as the decision of who to interview and why is up to the organization collecting the oral histories; as a result, one person (or, more likely, a committee) is deciding who is worth interviewing. This means that many groups who may greatly benefit from being heard may never get that chance.

References:

Elon University. Program for Ethnographic Research & Community Studies. The ethics of fieldwork module. Retrieved from http://www.elon.edu/docs/e-web/org/percs/EthicsModuleforWeb.pdf

Kincheloe, J. L. & McLaren, P. (2002). Rethinking critical theory and qualitative research. In Y. Zou & E. T. Trueba (Eds.), Ethnography and schools: Qualitative approaches to the study of education (87-127). Lanham, MD: Rowman & Littlefield Publishers, Inc.

Library As A Political Arena

Ever since their onset as public institutions, libraries have been political in nature. After the death of Alexander the Great and the subsequent breakup of Macedonia in the Hellenistic age, there was a boom in the creation of libraries as institutions of the state, whereas previously collections of merit had been kept privately by the elite. Kings recognized the value information held in a world continuously vying for power and control. In many instances they would go to great lengths to obtain the scrolls and works of prominent thinkers, which in turn would draw scholars and elites to their libraries, further building their prestige and power. Most notable of these first state-funded libraries was the Library of Alexandria, which became a “comprehensive repository of Greek writings as well as a tool for research” under the Ptolemaic dynasty (http://www.atlasobscura.com/articles/the-fierce-forgotten-library-wars-of-the-ancient-world). Threatened by the new and growing status of the Library of Pergamum, the Ptolemaic kings adopted strategies of war to ensure that the Library of Alexandria would remain at the top of the pedestal, cutting off the trade of papyrus and imprisoning scholars who wished to change sides. While by no means as violent or even as overt in its undertakings, I would argue that libraries and the fields of librarianship remain arenas of political advocacy today, whether used as such or not. What is born political remains political. As Birdsall puts it, modern “libraries are the creation and instrument of public policy derived from political processes” (Birdsall, 2). It would be more advantageous to embrace this sentiment than to attempt a detached stance of neutrality or impartiality.

In the vein of the political, libraries have long been heralded as institutions embodying democratic values. Ideals of intellectual freedom, free and open access, literacy, and inclusion have been championed with the library specifically in mind by public figures like Thomas Jefferson, James Madison and even Keith Richards (Bushman, 3-4). Many of these ideals are ones this country claims to be founded on, but they have not been fully realized or enacted until quite recently. In keeping with this tradition, and continuing to break with the structure of white, male, elitist hegemony, librarians are in a position to enact change in professional and academic settings. Whether by advocating for more politically correct classifications and subject headings through the Library of Congress or by using displays in local libraries to address the social and cultural issues of a specific community, librarians have a great opportunity to channel democratic values, expand perspectives, and seek social justice in seemingly small but penetrating ways. There are many in this field who wish to remain apolitical and would like to keep politics out of libraries altogether. This is exemplified in a fairly recent comment by Debbie Wasserman Schultz, a U.S. Representative of Florida’s 23rd congressional district and a member of the Democratic party. Regarding the discussion of the Library of Congress updating the subject heading of ‘illegal alien’ to ‘noncitizen’ or ‘unauthorized immigration,’ she is quoted as saying that the Library of Congress should choose “subject headings without political influence” (http://www.theestablishment.co/2016/07/15/the-surprising-political-power-of-libraries/). I would argue, however, that it is impossible to create an appropriate subject heading for people who wish to be, but are not yet, citizens of this country without any allusion to politics, just as I would argue that removing politics from the library altogether is impossible. Following both Desmond Tutu and Robert Jensen, I would contend that neutrality in any professional environment simply does not exist. There will always be a distribution of power, and to “either overtly endorse or reject that distribution is, of course, a political choice” (Jensen, 3). Furthermore, to remain detached from an issue by claiming neutrality is essentially to agree with the powers that be and the current state of affairs – itself a specific stance and a political undertaking. Instead of tiptoeing around this issue or keeping our political nature dormant, I propose we embrace it as a catalyst and a much more constructive and productive way to bring about change.

To show how this can be accomplished, I would like to showcase Jenna Freedman, a blue-haired reference librarian at Barnard College, and the work she has done to actively bolster politics in the field of librarianship through her creation of a feminist zine collection at Barnard in 2003. Zines are themselves an anomaly, in that they are an unconventional medium for library cataloging. They are do-it-yourself magazines that run the gamut from handwritten and stapled to professionally printed, serve as a unique form of personal expression on an array of topics, and can be considered primary source material about contemporary popular culture. As a self-proclaimed anarchist and punk, it is Freedman’s nature to “critique privileges and challenge social hierarchies” (Eichhorn, 126), “in favor of egalitarianism and environmentalism and against sexism, racism, and corporate hierarchies” (Eichhorn, 126). It is through this lens that she founded the feminist zine collection at Barnard, which currently consists of more than 1,500 zines in the open stacks and over 4,000 zines in the adjoining archive. All of the zines in the open stacks collection are duplicated in the archive for preservation and cataloged in WorldCat, so they are visible not just to Barnard but to the library community at large, and available through interlibrary loan (Eichhorn, 128-29). The political and activist nature of this collection is twofold, encompassing the “actual space of the library and the more conceptual space of the library catalog” (Eichhorn, 129). The fact that Freedman is not just a reference librarian but crosses the boundaries of special collections librarian, archivist, cataloger, and scholar makes her a defier of professional library tradition within the space of the library. Her decision to catalog the zines was a “way to change the status of the zines” (Eichhorn, 129), giving them validation and making them as important as any other published material. Additionally, by adding the zines to WorldCat she has given researchers greater access to contemporary feminist material, a “discourse on feminism that, at least until the late 1990s, was still primarily accessible in private collections” (Eichhorn, 130). There are only a few other collections of this nature, including the Riot Grrrl collection at New York University and the collection at the Sallie Bingham Center at Duke University. As if this weren’t enough, Freedman keeps an open dialogue with the producers of the zines in her collection because of their highly personal content, in case a zinester wants their name removed or a female-to-male transgender zinester no longer wants their work to be part of a feminist collection (Eichhorn, 130-31).

In an age when we have just appointed Carla Hayden as Librarian of Congress – the first woman and the first African American to hold the post in its 214-year history, after an unbroken line of Caucasian men – it is high time we followed Freedman’s example. Her willingness to cross the boundaries of librarianship and assert her tenacious beliefs in order to provide greater access to knowledge is most definitely political, and something to be admired.

A Place for Google Books in Critical Information Studies?

Having worked at the New York Public Library since January 2010, I was only around to see the end of the library’s active partnership with Google on their ambitious Books project. It would be difficult, if not impossible, to recount the entire history and controversy surrounding the Google Books project, from its informal beginnings in 2002 to today. However, I will try to place the project and the NYPL’s participation in context, but more importantly consider whether such a project could be a beneficial tool for use in Siva Vaidhyanathan’s concept of Critical Information Studies, as well as whether it would raise his concerns about copyright issues.

Google Books began with the idea of doing nothing less than scanning all of the world’s books and making them available and searchable online. Work on the project began in secret in 2002. A small group visited some of the more notable, existing digitization projects, including the Library of Congress’ American Memory Project, Project Gutenberg, and the Universal Library at Carnegie Mellon University, to see how they worked and the challenges they faced.[1] Google co-founder Larry Page also reached out to his alma mater, the University of Michigan, site of digitization projects such as JSTOR. When told it would take an estimated 1,000 years to digitize the university library’s seven million volumes, Page said he believed “Google can help make it happen in six.”[2]

It wasn’t until December 2004 that the NYPL became involved with the Books project as an initial partner in what was then known as the Google Print for Libraries project. The Print for Libraries project was the next big step in the Books project, attempting to scan and make searchable the collections of several major research libraries, including Oxford, Harvard, the University of Michigan, and Stanford.[3]

However, it might have been the announcement of this next big step that led to the controversy and lawsuits that would mire the project for years to come. While the agreements Google made with the research libraries would only allow it to “publish the full text of only those library books old enough to no longer be under copyright,” they also stated that for copyrighted works, “Google would scan in the entire text, but make only short excerpts available online.”[4] Even though only short excerpts would be available online, the fact that Google was scanning the entire text of copyrighted works without first obtaining the permission of the author or copyright holder led to lawsuits. The most notable of these were filed on behalf of authors by the Authors Guild[5] and on behalf of publishing companies by a group consisting of McGraw-Hill, Pearson Education, Penguin Group, Simon & Schuster and John Wiley & Sons.[6]

What, then, would Vaidhyanathan make of the Google Books project, particularly as it relates to his concept of Critical Information Studies (CIS) and his concerns about copyright? Just as it would be difficult, if not impossible, to recount the entire history and controversy surrounding the Books project, it would be just as difficult to consider Vaidhyanathan’s possible reaction to each point of history and controversy. Instead, I will try to imagine Vaidhyanathan’s opinion of the Books project around the time of the NYPL’s initial involvement and the ensuing lawsuits.

On the surface, a project to scan all of the world’s books and make them available and searchable online would be a tremendous tool related to the first field of CIS as described by Vaidhyanathan in his article “Critical Information Studies: A Bibliographic Manifesto.” Vaidhyanathan believes that people should have “the abilities and liberties to use, revise, criticize, and manipulate cultural texts, images, ideas, and information.”[7] Although not a complete fulfillment, Google Books would certainly have allowed people to use and criticize cultural texts in previously unimaginable ways and scale.

However, in a June 23, 2006 conference hosted by the Library and Information Technology Association entitled “Contracting for Content in a Digital World,” then Andrew W. Mellon Director and Chief Executive of the Research Libraries at the NYPL David Ferriero described two troubling anecdotes that seem antithetical to the second field of CIS, quickly ending any possible embrace of Google Books by Vaidhyanathan. Ferriero described Google as “very private about their scanning operations; we’re not allowed to take pictures; they developed their own equipment, their own software for the OCR, etc.”[8] Such a secretive, proprietary process to digitize the books seems at odds with Vaidhyanathan’s support of open source software as a way to “facilitate access to and use of scholarship and information.”[9]

More damning, however, is Ferriero’s explanation that “according to the terms of the agreement, the data cannot be crawled or harvested by any other search engine; no downloading or redistribution is allowed.”[10] While he did note that “the partners and a wider community of research libraries can share the content,” it still goes against not only Vaidhyanathan’s previously mentioned support of open source software but, more importantly, the second field of CIS.[11] Vaidhyanathan believes that users should have the “rights and abilities…to alter the means and techniques through which cultural texts and information are rendered, displayed, and distributed.”[12] With Google limiting access to their search engine and prohibiting downloading or redistribution, the Books project fails to be an appropriate tool for use in Vaidhyanathan’s CIS.

But what about the issues of copyright raised in the lawsuits filed against Google by groups representing authors and publishers? Vaidhyanathan spends time in his article describing everything from the history of copyright in Europe and the US, famous writers’ interest in copyright, theoretical concepts of copyright, to more recent, controversial copyright laws and cases. Despite that, it doesn’t seem that Vaidhyanathan would have as much of an issue with Google Books, at least regarding the copyrights of authors and publishers. It’s not that he wouldn’t want authors to be fairly compensated for or have no control over their work. But in his commitment to the dynamic, interdisciplinary nature of CIS, Vaidhyanathan exhorts that “every scholar committed to CIS should insist on retaining some of her or his rights to publications and making them available as widely and cheaply as possible.”[13]

Where the question of copyright and the Google Books project might become more concerning for Vaidhyanathan is what Tim Wu described in his 2015 New Yorker article as a possible “monopoly in online, out-of-print books.”[14] To that I would add that through its partnership with many of the world’s leading research libraries, Google would have access to books that, while no longer under copyright, are rare or difficult enough to access that Google would have an effective monopoly on them as well. Combined with the walled-garden approach to access described above, this would almost give Google an implicit copyright on, and the ability to make money from, any out-of-print or rare out-of-copyright books it can scan.

A plan to scan all of the world’s books and make them available and searchable online at first seems like an ideal tool for the scholarly analysis and debate of such dynamic, interdisciplinary fields as those found in Critical Information Studies. However, the restrictions on access placed on the project by Google itself are antithetical to those fields of study, and they edge toward the very abuse of copyright with which CIS often concerns itself.

[1] “Google Books History.” Accessed September 9, 2016. https://books.google.com/googlebooks/about/history.html.

[2] Ibid.

[3] Markoff, John, and Edward Wyatt. “Google Is Adding Major Libraries to Its Database.” New York Times, December 14, 2004. Accessed September 9, 2016. http://www.nytimes.com/2004/12/14/technology/google-is-adding-major-libraries-to-its-database.html.

[4] Ibid.

[5] “Authors Guild Sues Google, Citing ‘Massive Copyright Infringement’.” The Authors Guild. September 20, 2005. Accessed September 9, 2016. https://www.authorsguild.org/industry-advocacy/authors-guild-sues-google-citing-massive-copyright-infringement.

[6] Wyatt, Edward. “Arts, Briefly; Major Publishers Sue Google.” October 20, 2005. Accessed September 9, 2016. http://query.nytimes.com/gst/fullpage.html?res=9C02E4DA123FF933A15753C1A9639C8B63.

[7] Vaidhyanathan, Siva. “Critical Information Studies: A Bibliographic Manifesto.” Cultural Studies 20, nos. 2-3 (March/May 2006): 292-315. http://www.tandf.co.uk/journals.

[8] Stuivenga, Will. “Contracting for Content in a Digital World.” LITA Blog. July 11, 2006. Accessed September 9, 2016. http://litablog.org/2006/07/lita-preconference-contracting-for-content-in-a-digital-world.

[9] Vaidhyanathan, Siva. “Critical Information Studies: A Bibliographic Manifesto.” Cultural Studies 20, nos. 2-3 (March/May 2006): 292-315. http://www.tandf.co.uk/journals.

[10] Stuivenga, Will. “Contracting for Content in a Digital World.” LITA Blog. July 11, 2006. Accessed September 9, 2016. http://litablog.org/2006/07/lita-preconference-contracting-for-content-in-a-digital-world.

[11] Ibid.

[12] Vaidhyanathan, Siva. “Critical Information Studies: A Bibliographic Manifesto.” Cultural Studies 20, nos. 2-3 (March/May 2006): 292-315. http://www.tandf.co.uk/journals.

[13] Ibid.

[14] Wu, Tim. “What Ever Happened to Google Books?” The New Yorker. September 11, 2015. Accessed September 9, 2016. http://www.newyorker.com/business/currency/what-ever-happened-to-google-books.