Twitter Responds to Hateful Tweets

It seems like trolls are hiding under every bridge.

On November 13, 2014, the New England Patriots told their 1 million Twitter followers that, as a thank you for following the team, they could re-tweet a special thank you message. Doing so would generate a new post from the Patriots’ Twitter account with a picture of a customized Patriots jersey with the user’s Twitter handle on the back.

The Patriots never imagined that this thank you gesture would lead to a firestorm of accusations of racism.

However, that is exactly what happened. One follower with the handle “IHATE[N***ERS]” re-tweeted the message. The Patriots’ Twitter account automatically posted the message, “@IHATE[N***ERS] Thanks for helping us become the first NFL team with 1 million followers! #1MillionPatriots.”

The Patriots quickly took down the image of a jersey with that handle on the back and apologized. But by then, it was too late: the post had already been re-tweeted over 1,000 times.

While the Patriots clearly did not intend to condone the racist username, the incident rekindled a long-standing debate over Twitter users’ limited ability to block abusive accounts.

Online hate speech is prevalent not only on Twitter, but also on other websites such as Facebook, on anonymous sites like 4chan, Reddit, and ask.fm, and in the comment sections beneath many controversial articles.

Hateful comments online are not limited to racism; they can target any part of a person’s identity.

Teens who identify as lesbian, gay, bisexual, or transgender (LGBT) are particularly at risk of experiencing online harassment. A 2012 study by The Gay, Lesbian, and Straight Education Network (GLSEN), which surveyed teens ages 13-18 about their experiences with cyberbullying, found that 42% of LGBT teens had experienced online harassment, compared to only 15% of heterosexual teens.

Women are also disproportionately targeted online. A study published on September 6, 2012 by Emily Matthew, which collected 874 responses from multiple online gaming communities, both on social media and on other sites, found that 63.3% of female respondents had experienced taunting or harassment online, including being called the most commonly reported slurs: “slut,” “whore,” “c*nt,” and “bitch.” Survivors of domestic violence are at especially high risk. According to the National Network to End Domestic Violence, a 2012 survey of local domestic violence programs revealed that 89% reported survivors being intimidated or threatened by their perpetrators through technology (such as e-mail, social networking sites, and phone messages).

One particularly publicized incident of sexist online hate speech involved gaming vlogger Anita Sarkeesian, who was scheduled to speak at Utah State University about sexism in video games. After receiving numerous graphic sexual assault and death threats, including one that promised a mass shooting if she gave her speech, she was forced to cancel her talk and call the police.

Provocative and crude online behavior is not uncommon. A study released by YouGov on October 20, 2014, which surveyed 1,125 adults in the United States, found that 28% of respondents admitted to “trolling” someone online (a troll is defined in YouGov’s October 20 press release as “someone who is deliberately provocative, upsetting others by starting arguments or posting inflammatory messages on online comment sections”).

Despite this high rate of online trolling, only 12% of respondents said they had ever had a comment removed by a site moderator.

On December 2, 2014, in response to ongoing complaints about harassment and abuse on Twitter, Twitter’s Director of Product Management, Shreyas Doshi, published a post on Twitter’s blog announcing, “Starting today we’re rolling out an improved way to flag abusive Tweets.”

The new feature will allow users to choose whether they are reporting an account for being “annoying,” “spam,” or “abusive.” It will also give users the ability to block accounts they find abusive.

While some will likely see these improved reporting measures as an infringement on the unfiltered free speech that has perhaps fueled Twitter’s immense popularity, the statistics regarding online harassment on Twitter and other websites are immensely concerning, and those experiencing it deserve to be able to use the internet safely.

Doshi concurs, calling the improvements part of Twitter’s “continuing effort to make your Twitter experience safer.”


About Amelia Roskin-Frazee

Amelia Roskin-Frazee is a senior at Lick-Wilmerding High School. She is the co-Managing Editor of The Paper Tiger. Outside of school, she is the Founder and President of The Make It Safe Project and is on the National Advisory Council for The Gay, Lesbian, and Straight Education Network. In her remaining free time, Amelia writes novels, plays steel drums, and contemplates how strange it feels to write about herself in the third person.
