Making Twitter safer
November 26, 2016 1:01 AM

So you may have noticed Twitter has a new option to report users for hate speech. How can I use this to help make Twitter safer by getting abusive users banned?

I've been testing this feature by reporting 10 accounts a day for the maximum (5 tweets) when possible, focusing on stuff that is unambiguously hate speech or hate content. Slurs, violent images, threats, etc. Really not nice stuff.

So far I've got a list of 50 accounts, with links to a sample offensive tweet for each. It's been between 24 hours and six days since I reported these users, and only 6 have been suspended so far. I'm waiting to see what happens within a week for these accounts, but my hopes aren't super high.

My question #1 is, if getting hate accounts banned is a question of multiple users reporting the content before anything gets done, how would one do that without spreading the hateful content further and exposing oneself to attacks? Make a burner email account and put it up on a blog or a Google spreadsheet or something? Is someone already working on this?

My question #2 is, who should I contact about this? I was thinking of contacting the SPLC with the list of users and Twitter's responses. Would that just be wasting their time?

For reference: I am already regularly donating a chunk of my income monthly & am focusing on hate speech on Twitter because it seems like something I can do even from abroad. I would love to hear about other online activities that could help.
posted by sacchan to Society & Culture (4 answers total) 5 users marked this as a favorite
Okay - I am right there with you when it comes to clearing the perpetrators of hate speech off Twitter, but I also am a believer in free speech, and so this question is making me a little uneasy. Because think about it - the kind of things that you're talking about doing are all things that Gamergaters or Trump supporters could use against you, and that wouldn't be very good, would it?

And that could explain why you're seeing such a "slow" rate - Twitter is taking the time to investigate, and that's what we want them to do. If Twitter just auto-bans people based on one person's say-so, then there is no longer anything to stop people from making a report against you if they don't like you.

Maybe that's a good way to think about it - if the action you're considering would be to your own detriment if someone else did it to you (i.e., someone reports you and then makes a bunch of burner email accounts and reports you from all of them), then don't do it. If the action you're taking would still preserve your own rights if it was turned against you (i.e., someone reports you, then lets Twitter take time to investigate), then... go for it.
posted by EmpressCallipygos at 5:51 AM on November 26, 2016 [1 favorite]

The speech I've been reporting is so offensive I was unsure whether I could even repeat it here.
"Die n*****r"
"Gas the k****s"
How long would you give Twitter to investigate speech like that? I'm giving them a week. It wouldn't make me uneasy to hope for a shorter time than that.
And what if nothing happens after a week?
posted by sacchan at 7:09 AM on November 26, 2016 [1 favorite]

The best way to go about this would probably be to do three things:

1. Focus more on stuff that is actually illegal (hate speech is awful, and Twitter has taken a stance against it, but it's targeted threats that can actually be against the law) and on accounts whose sole purpose is to do this stuff.
2. Create and amplify a blocklist that is just the people you've reported to Twitter. This gives people an opportunity to block the accounts you've identified, and sends a message about how you think people should be acting on Twitter.
3. Add everyone to a list using a burner (i.e. not your personal) Twitter account so you can keep an eye on what they are doing/saying.

I don't think you'd be wasting the SPLC's time at all, particularly if these are strong examples of racist/bigoted hate speech. The issue you run into is that things that seem hateful to some people may not to others, and the free speech issue resonates strongly with people, in the US in particular. Being intolerant of intolerance is different from targeted harassment directed at certain individuals or classes of people.

Keep in mind that Twitter's policies are not free-speech-absolutist policies, at all. People can have a free speech argument with Twitter, but they've outlined their terms on this page:
Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.
So yes, it will be helpful to have a group of people doing this. I agree it makes sense for Twitter not to auto-ban people. Keep track of the worst offenders and highlight Twitter's lack of action over time, which is what WAM did, quite effectively.
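The "keep track over time" step could be as simple as a spreadsheet, but here is a minimal script sketch of the same idea. Everything in it is an assumption for illustration (the column names, the one-week window the asker mentions, the yes/no suspension field); it is not anything Twitter or WAM actually provide:

```python
# Hypothetical tracker for reported accounts. Assumes you maintain rows
# (e.g. from a CSV via csv.DictReader) with these made-up fields:
#   handle, reported_on (ISO date), suspended ("yes"/"no")
from datetime import date, timedelta

WINDOW = timedelta(days=7)  # the one-week grace period discussed in the thread

def overdue_accounts(rows, today):
    """Handles reported more than a week ago that still aren't suspended."""
    out = []
    for row in rows:
        reported = date.fromisoformat(row["reported_on"])
        if row["suspended"].lower() != "yes" and today - reported > WINDOW:
            out.append(row["handle"])
    return out

def suspension_rate(rows):
    """Fraction of reported accounts that have been suspended so far."""
    done = sum(1 for r in rows if r["suspended"].lower() == "yes")
    return done / len(rows) if rows else 0.0
```

Running `overdue_accounts` once a day gives you the "lack of action over time" list to highlight publicly, and `suspension_rate` gives a single number to cite.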
posted by jessamyn at 7:10 AM on November 26, 2016 [5 favorites]

You can also look for the leaders, the people with lots of followers who instigate waves of harassment against targeted individuals. Those people are much more dangerous, and there are far fewer of them.
posted by Winnie the Proust at 8:23 AM on November 26, 2016
