Why more people should be critical of WhatsApp

One of the theses presented in the Pew Research Center’s report on the Future of the Internet is that by 2025 the Internet will become “invisible” and we will no longer think about “going online”. One clear example of how that has already happened is WhatsApp and how common it has become in Pakistan. Its ubiquity, combined with the arrival of 4G network coverage, means people now expect to be instantly reachable on WhatsApp; there is no more “going online” because the app simply runs in the background. Is this a good thing for everyone? For this field research, I sought out people who have actively avoided using WhatsApp in order to understand whether it can have negative effects on its users. The findings show that some people have serious concerns about using WhatsApp, yet it has become increasingly difficult for them to avoid it. The conclusion is that it is important to be critical of the role WhatsApp plays in our public and private conversations.

This research involved four semi-structured interviews. All participants are current students or alumni of my former school. Participants were recruited through a social media group and shortlisted on two criteria: they had to be smartphone users, and they had to have actively deleted or uninstalled WhatsApp from their phones. The goal of this research was to answer the following questions:

  • Are there people who have deleted their WhatsApp accounts? What were the reasons that drove them to this point?
  • What was it like for them to quit WhatsApp? What challenges did they face? How did they deal with those challenges?
  • Did they go back to using WhatsApp? Why or why not?

The common reason all four of my participants had deleted their WhatsApp accounts was that they didn’t want to be “accessible”. They felt that as long as they were active on WhatsApp, they were considered “always available for a chat”. However, the reasons for seeking a break from this constant availability varied with each person’s context. One participant shared that she was struggling with social pressure and anxiety, and WhatsApp had become one of the triggers for her panic attacks. She could see a direct correlation between how anxious she felt and whether or not she was using WhatsApp. Another participant felt that keeping up with conversations on WhatsApp consumed too much of her time. She was left with no time for her non-work interests, which kept her from finding the right work-life balance and ultimately led to resentment and stress.

“…I just don’t like being that accessible. Unread messages bother me, I have to reply immediately, otherwise I start feeling terrible…it’s just something that never ends” – M, one of the participants of this study

Despite having such serious concerns about using WhatsApp, none of the participants have been able to stay away from it for long. In fact, all four of them now have a sporadic, on-and-off relationship with WhatsApp: they uninstall it from their phones every few months and eventually end up coming back. This is primarily because it has become prevalent, and practically necessary, in professional settings. One participant described her longest hiatus from WhatsApp, around three months, which ended when she had to create her account again because one of her professors was using the app to communicate with her class! Another participant said it was needed to communicate with their international team at work. When asked why their team could not use any other tool for this communication, they said the culture of using WhatsApp was already built into their organization and it just wasn’t possible to convince everyone to stop using it.

All of the participants also talked about how they were pressured into coming back to WhatsApp by family and friends. WhatsApp has become the default place where people coordinate their social engagements and share links, files and photos with each other. The participants tried to move their connections to other solutions, such as Telegram (a similar but far less common app), file-sharing websites like FileBin or Google Drive, or simply plain email, but none of these lasted. Other people did not consider the downsides of using WhatsApp serious enough to justify switching. Consequently, the participants faced a difficult trade-off between their own privacy and peace of mind on the one hand and keeping in touch with their social connections on the other.

Nearing the end of each discussion, I asked the participant to think about what they would change in WhatsApp to make it easier to use for themselves. This led to some interesting ideas for potential features. The justification for each feature reflects the kinds of problems the participants faced and underpins our conversation about the limitations of the app. The most interesting feature ideas are listed below:

  1. “Ghost mode” – Move through the app like a ghost: you can peacefully access all of the conversations, media and documents saved in the app, but no one can see you online or message you;
  2. Archive forever – Archive a conversation or a group permanently, so that it disappears from your immediate chat list while the person or group on the other end never knows they have been archived;
  3. One-on-one – Conversations work more like real life; for instance, users can only send one message at a time and must wait for the other person to respond before they can say something again;
  4. Chat requests – People have to send you a request before they can chat with you on WhatsApp; they cannot message you automatically just because they have your number, and you have the power to turn requests off.

This report highlights how the widespread use of WhatsApp, and the way it is designed, can negatively affect some of its users and contribute to anxiety and stress in their lives. It is therefore important that we adopt a critical view of WhatsApp: become aware of its drawbacks, seek people’s consent before we engage them on it, and carefully consider whether it is the best platform for our next public or private conversation.

A Digital Sounding Board: The Internet and Filter Bubbles

Most of us have some sort of daily routine with the Internet. For example, every morning, I get up, take a shower, and then settle down in front of my computer for fifteen to twenty minutes of Internet browsing before I get ready for the day. I check my Facebook, I read the webcomics I follow, I look at my e-mail, I peruse some blogs, and I scan through viral images. Instead of morning coffee, I start my day with a blast of information. But does that blast of information contain a wide range of material from across the Internet or is it made up of content that’s been tailored just for me?

The fact is that the Internet that we see is not pure, unaffected information, but information that has gone through a variety of filters that have been placed in order to ensure that we, as users, will receive the kind of information that we most want to see. These filters involve advertisers, social media companies, search engine developers, and even self-imposed filters that we might not even be conscious of. Together, all of these filters form what author Eli Pariser refers to as “the filter bubble.” In his 2011 TED Talk, Pariser defines the filter bubble as:

Your own personal, unique universe of information that you live in online. And what’s in your filter bubble depends on who you are, and it depends on what you do. But the thing is that you don’t decide what gets in. And more importantly, you don’t actually see what gets edited out. [1. Pariser, E. (Feb. 2011) Eli Pariser: Beware online “filter bubbles”. (Video file). Retrieved from http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles/]

Sites like Google, Facebook, and even various news sites are in the business of making sure that the content that appears on your screen is exactly the content you want to see. The reason is simple: the more they display content you want, the more time you spend on their websites, and the more they profit from the ad revenue that comes from each page you click on. They have no interest in providing you with a diverse array of information, only the information that will keep you on their site. The result is that many users receive limited information, or information that merely supports what they already believe or want to hear, while under the impression that they are receiving unfiltered information. Robert W. McChesney writes that filter bubbles “keep us in a world that constantly reinforces our known interests and reduces empathy, creativity, and critical thought.”[2. McChesney, R. W. (2013). Digital disconnect: How capitalism is turning the internet against democracy. New York: The New Press.]

Filter bubbles are constructed in a variety of ways, and are designed to keep users reliant on a certain service. Pariser’s initial example is Facebook, where your News Feed is tailored based on the people you interact with on the site. Pariser tells the story of how he started to notice that his friends who posted links to politically conservative information started to vanish from his News Feed, while the friends who posted links to politically liberal information remained.[3. Pariser, E. (Feb. 2011) Eli Pariser: Beware online “filter bubbles”. (Video file). Retrieved from http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles/] This is because Facebook employs an algorithm that tracks how often you interact with certain people (clicking links, liking posts, commenting, etc.) and prioritizes your News Feed based on those interactions. For Pariser, who is a self-described liberal, this meant that his liberal friends stayed on his News Feed because he interacted with their posts more often than he did with those of his conservative friends. He was exposed less and less to opposing viewpoints, meaning he was offered fewer chances for debate and fewer opportunities to learn something from an unfamiliar source. This does not mean that everything people say online has value, or that all of the conservative information would even be worth Pariser’s time. However, the idea of the Open Internet, where all information can be accessed equally, is not the Internet we have if sites and advertisers put more and more filters on our content.
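To make that mechanism concrete, here is a minimal sketch of engagement-weighted feed ranking, written in Python. This is not Facebook’s actual algorithm; the interaction types, weights, and data structures are assumptions invented purely to illustrate how ranking posts by past interactions quietly pushes low-engagement friends out of view.

```python
# Illustrative sketch only -- NOT Facebook's real News Feed algorithm.
# All weights, names, and numbers are invented for demonstration.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    content: str

# Hypothetical record of how often the user has interacted with each friend.
interaction_counts = {
    "liberal_friend":      {"clicks": 40, "likes": 25, "comments": 10},
    "conservative_friend": {"clicks": 2,  "likes": 1,  "comments": 0},
}

# Assumed weights: each interaction type contributes to an "affinity" score.
WEIGHTS = {"clicks": 1.0, "likes": 2.0, "comments": 3.0}

def affinity(author: str) -> float:
    """Score a friend by the user's past interactions with them."""
    counts = interaction_counts.get(author, {})
    return sum(weight * counts.get(kind, 0) for kind, weight in WEIGHTS.items())

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts so high-affinity friends appear first; friends the user
    rarely engages with sink toward the bottom and effectively disappear."""
    return sorted(posts, key=lambda p: affinity(p.author), reverse=True)

posts = [
    Post("conservative_friend", "Link to a conservative op-ed"),
    Post("liberal_friend", "Link to a liberal op-ed"),
]

for post in rank_feed(posts):
    print(post.author, "->", post.content)
```

Under these assumed weights, the conservative friend’s post drops to the bottom of the feed even though neither friend has changed what they post; the ranking simply mirrors past clicks, which is exactly the self-reinforcing loop Pariser describes.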

According to Facebook, the News Feed is designed this way because the large number of Facebook friends that users have would otherwise make the News Feed unwieldy.[4. Hicks, M. (2010, August 6). Facebook tips: What’s the difference between top news and most recent? (Web log post). Retrieved from https://www.facebook.com/notes/facebook/facebook-tips-whats-the-difference-between-top-news-and-most-recent/414305122130] The downside, however, is that users are exposed to less information that might challenge their way of thinking, and to more information that supports what they already believe. Similar algorithms and personalization techniques are used on Google, and on news sites like the Huffington Post, Yahoo News, the Washington Post, and the New York Times.[5. Pariser, E. (Feb. 2011) Eli Pariser: Beware online “filter bubbles”. (Video file). Retrieved from http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles/] With all of these filters, how can we consider opposing viewpoints? How can we engage in discussion? How can we learn anything? Pariser argues that because filter algorithms respond to what a user clicks on, users eventually get only content that satisfies their immediate wants and whims online, rather than content that pushes them to think further. He says:

The best editing gives us a bit of both [thoughtful content and fun content]. It gives us a little bit of Justin Bieber and a little bit of Afghanistan. It gives us some information vegetables; it gives us some information dessert. And the challenge with these kinds of algorithmic filters, these personalized filters, is that, because they’re mainly looking at what you click on first, it can throw off that balance. And instead of a balanced information diet, you can end up surrounded by information junk food.[6. Ibid.]

The problem is that content providers, search engines, and advertisers don’t necessarily see a reason to provide users with the sort of Internet that offers them both ‘vegetables’ and ‘dessert’. The system currently in place makes money, and as long as these companies are profiting, they have little investment in what content their users are consuming.

The more that individuals are exposed to views and information that validate and reinforce their current world view, the harder it becomes to converse with others about those views, especially in a digital format. Filters help convince users that their opinions are more valid than the opinions of others, and people start to create online communities where little debate is welcome and users mostly share the same opinions. People who don’t share those opinions might be engaged in debate, but a debate that happens face-to-face is a very different kind of debate than the kind that often happens online. So many Internet debates boil down to people slinging insults, shutting other people down, or overusing the caps lock to make their point. Online discourse is so commonly difficult that if you search “arguing on the internet” you get a slew of images mocking the idea of online debate.[7. Though to be fair, this is possibly affected by Google’s filters on my search.] If online users were exposed to a wider variety of content that challenged their world views, would the nature of online debate change?

Webcomic artist Cameron Davis’ interpretation of online debates. From my own experiences, this certainly doesn’t apply only to men.

The good news is that while content providers and advertisers may not be interested in popping the filter bubble, there are ways that Internet users can lessen the effects that filter bubbles have on their online experience. Pariser’s website, The Filter Bubble, has a list of ten ways to reduce the effect of the filters. These techniques include deleting cookies and browser histories, setting stricter privacy settings, using browsers and sites that allow users to access the Internet without providing their IP addresses, and depersonalizing browsers.[8. Pariser, E. (2011) Ten ways to pop your filter bubble. Retrieved from http://www.thefilterbubble.com/10-things-you-can-do] The other helpful thing is to make users aware of the filter bubble. We may be stuck with filters, but if we are aware that they are there and what they are doing to our online experience, then we can compensate for those effects and seek out information that we might not otherwise find. The Internet may be a fantastic source of information, but if we do not use it properly, what is the point of having that information source in the first place?