OCTOBER 24, 2013

THE PSYCHOLOGY OF ONLINE COMMENTS


Several weeks ago, on September 24th, Popular Science announced that it would banish comments from its Web site. The editors argued that Internet comments, particularly anonymous ones, undermine the integrity of science and lead to a culture of aggression and mockery that hinders substantive discourse. “Even a fractious minority wields enough power to skew a reader’s perception of a story,” wrote the online-content director Suzanne LaBarre, citing a recent study from the University of Wisconsin-Madison as evidence. While it’s tempting to blame the Internet, incendiary rhetoric has long been a mainstay of public discourse. Cicero, for one, openly called Mark Antony a “public prostitute,” concluding, “but let us say no more of your profligacy and debauchery.” What, then, has changed with the advent of online comments?

Anonymity, for one thing. According to a September Pew poll, a quarter of Internet users have posted comments anonymously. As the age of a user decreases, his reluctance to link a real name with an online remark increases; forty per cent of people in the eighteen-to-twenty-nine-year-old demographic have posted anonymously. One of the most common critiques of online comments cites a disconnect between the commenter’s identity and what he is saying, a phenomenon that the psychologist John Suler memorably termed the “online disinhibition effect.” The theory is that the moment you shed your identity the usual constraints on your behavior go, too—or, to rearticulate the 1993 Peter Steiner cartoon, on the Internet, nobody knows you’re not a dog. When Arthur Santana, a communications professor at the University of Houston, analyzed nine hundred randomly chosen user comments on articles about immigration, half from newspapers that allowed anonymous postings, such as the Los Angeles Times and the Houston Chronicle, and half from ones that didn’t, including USA Today and the Wall Street Journal, he discovered that anonymity made a perceptible difference: a full fifty-three per cent of anonymous commenters were uncivil, as opposed to twenty-nine per cent of registered, non-anonymous commenters. Anonymity, Santana concluded, encouraged incivility.

On the other hand, anonymity has also been shown to encourage participation; by promoting a greater sense of community identity, it frees users from worrying about standing out individually. Anonymity can also boost a certain kind of creative thinking and lead to improvements in problem-solving. In a study that examined student learning, the psychologists Ina Blau and Avner Caspi found that, while face-to-face interactions tended to provide greater satisfaction, in anonymous settings participation and risk-taking flourished.

Anonymous forums can also be remarkably self-regulating: we tend to discount anonymous or pseudonymous comments to a much larger degree than commentary from other, more easily identifiable sources. In a 2012 study of anonymity in computer interactions, researchers found that, while anonymous comments were more likely to be contrarian and extreme than non-anonymous ones, they were also far less likely to change a subject’s opinion on an ethical issue, echoing earlier results from the University of Arizona. In fact, as the Stanford computer scientist Michael Bernstein found when he analyzed the /b/ board of 4chan, an online discussion forum that has been referred to as the Internet’s “rude, raunchy underbelly” and where over ninety per cent of posts are wholly anonymous, mechanisms spontaneously emerged to monitor user interactions and establish a commenter’s status as more or less influential—and credible.

Owing to the conflicting effects of anonymity, and in response to the changing nature of online publishing itself, Internet researchers have begun shifting their focus away from anonymity toward other aspects of the online environment, such as tone and content. The University of Wisconsin-Madison study that Popular Science cited, for instance, was focussed on whether comments themselves, anonymous or otherwise, made people less civil. The authors found that the nastier the comments, the more polarized readers became about the contents of the article, a phenomenon they dubbed the “nasty effect.” But the nasty effect isn’t new, or unique to the Internet. Psychologists have long worried about the difference between face-to-face communication and more removed ways of talking—the letter, the telegraph, the phone. Without the traditional trappings of personal communication, like non-verbal cues, context, and tone, comments can become overly impersonal and cold.

But a ban on article comments may simply move them to a different venue, such as Twitter or Facebook—from a community centered around a single publication or idea to one without any discernible common identity. Such large group environments, in turn, often produce less than desirable effects, including a diffusion of responsibility: you feel less accountable for your own actions, and become more likely to engage in amoral behavior. In his classic work on the role of groups and media exposure in violence, the social cognitive psychologist Albert Bandura found that, as personal responsibility becomes more diffused in a group, people tend to dehumanize others and become more aggressive toward them. At the same time, people become more likely to justify their actions in self-absolving ways. Multiple studies have also illustrated that when people don’t think they are going to be held immediately accountable for their words they are more likely to fall back on mental shortcuts in their thinking and writing, processing information less thoroughly. They become, as a result, more likely to resort to simplistic evaluations of complicated issues, as the psychologist Philip Tetlock has repeatedly found over several decades of research on accountability.

Removing comments also affects the reading experience itself: it may take away the motivation to engage with a topic more deeply, and to share it with a wider group of readers. In a phenomenon known as shared reality, our experience of something is affected by whether or not we will share it socially. Take away comments entirely, and you take away some of that shared reality, which is why we often want to share or comment in the first place. We want to believe that others will read and react to our ideas.

What the University of Wisconsin-Madison study may ultimately show isn’t the negative power of a comment in itself but, rather, the cumulative effect of a lot of positivity or negativity in one place, a conclusion that is far less revolutionary. One of the most important controls of our behavior is the established norms within any given community. For the most part, we act consistently with the space and the situation; a football game is different from a wedding, usually. The same phenomenon may come into play in different online forums, in which the tone of existing comments and the publication itself may set the pace for a majority of subsequent interactions. But the Wisconsin experiment, led by Ashley Anderson, Dominique Brossard, and their colleagues, lacks the crucial element of setting, since the researchers created fake comments on a fake post, where the tone was simply either civil or uncivil (“If you don’t see the benefits … you’re an idiot”).

Would the results have been the same if the uncivil remarks were part of a string of comments on a New York Times article or a Gawker post, where comments can be promoted or demoted by other users? On Gawker, in the process of voting a comment up or down, users can set the tone of the comments, creating a surprisingly civil result. The readership, in other words, spots the dog at the other end of the keyboard, and puts him down.

As the psychologists Marco Yzer and Brian Southwell put it, “new communication technologies do not fundamentally alter the theoretical bounds of human interaction; such interaction continues to be governed by basic human tendencies.” Whether online, on the phone, by telegraph, or in person, we are governed by the same basic principles. The medium may change, but people do not. The question instead is whether the outliers, the trolls and the flamers, will hold outsized influence—and the answer seems to be that, even protected by the shade of anonymity, a dog will often make himself known with a stray, accidental bark. Then, hopefully, he will be treated accordingly.


One Comment

  1. The readership, in other words, spots the dog at the other end of the keyboard, and puts him down.

    Sure, and they put down anyone who challenges the conformist view. In a world of ‘nice’ lies produced by mothers with dark secrets to hide, speaking truth is deemed to be rude and offensive by most. In a world where God is believed to be good, everyone is going to be bad.

    The problem isn’t “mean words”. Children are being raised to imagine words can hurt like sticks and stones inside their imaginations. The answer to the problem isn’t censorship. It is inconsiderate to consider the feelings of those who have a problem with reality. The answer, if one is to be found in time to save this sorry species, will be found in a global reappraisal of the value existent in truth.

    “What is truth?”

    Only those who value lies could be confused. The truth is a question of intent rather than content. One could know nothing worth knowing or be completely insane and never once deceive a soul. But in a world where lies are valued and authority / expertise is measured by the capacity to deceive the credulous, all sanity is forfeit.

    All life soon will be.
