Twitter's Struggles With Moderating Online Abuse Continue
Twitter has thrown a lot at its rampant harassment problem over the years, from slapping on more abuse filters to purchasing an entire company to help it wrangle its online hellscape. But apparently abiding by that old adage “see something, say something” is one step too far for good ol’ Jack.
A Twitter user reported Thursday that, after flagging an abusive tweet, he received a message from Twitter explaining that it couldn’t finish reviewing the content for possible ToS violations until “the person directly affected” by the abuse reported it. In short, if you see someone call your friend a slur, Twitter won’t act on it until your friend—and not you—reports the tweet. Which seems to imply, as user Jamie McGonnigal tweets, that “it’s cool for someone to use hate speech, so long as they don’t use it toward you.”
According to its rules, Twitter defines abuse as “an attempt to harass, intimidate, or silence someone else’s voice,” while anything that threatens people based on “their perceived inclusion in a protected category,” e.g., race, sexual orientation, or gender, is considered hateful content. Both are banned on the platform. In the first half of this year, users reported 4.5 million “unique accounts” for abuse and another 5 million for hateful content, according to Twitter’s latest biannual transparency report.
Twitter did not immediately respond to Gizmodo’s inquiries, so it’s unclear whether this is a platform-wide policy or a one-off response to the reported tweet. Twitter has been rolling out plenty of changes lately on the advertising side of things, and it doesn’t take a great leap to think this may apply across the board: When you report a tweet for abusive content, you’re prompted to select whether it’s directed at yourself, someone you legally represent, or others. So Twitter already has the infrastructure in place to filter reports based on whether they directly affect the person reporting.
Unfortunately, its terms of service offer some confusing guidance on the subject. Take this paragraph, for instance:
“Some Tweets may seem to be abusive when viewed in isolation, but may not be when viewed in the context of a larger conversation. When we review this type of content, it may not be clear whether it is intended to harass an individual, or if it is part of a consensual conversation. To help our teams understand the context of a conversation, we may need to hear directly from the person being targeted, to ensure that we have the information needed prior to taking any enforcement action.”
It’s ridiculously vague, which may be intentional given that it mandates one extra hoop for users to jump through in order to successfully flag abuse or harassment. Twitter’s entire M.O. is built around shouting into the void of the internet, so by that definition everything posted therein could be considered part of a larger conversation. This policy also seems patronizing to the user who reported the content, as if Twitter’s arguing that they’re just not in on the joke or reference.
Additionally, later on that same ToS page Twitter states that “we review both first-person and bystander reports of such content,” which appears to contradict all this. Though I wouldn’t be surprised if Twitter made the argument that technically even an unfinished review still counts as a review, so it still fulfilled its end of the bargain in regard to the above tweet.
Ironically, subheads throughout Twitter’s regulations make the issue sound simple with titles like, “How to help someone with online abuse,” “Don’t be a bystander,” and “Report content to us.”
“When an account is particularly harassing or threatening, tell us about it by reporting the account or Tweets to us. It will take a few steps, and your report will help us make Twitter a better place,” Twitter’s ToS reads. I’ll be waiting with bated breath for them to add a rather large asterisk after that.
Source: Gizmodo.com