What happens when “free speech” requires censorship?

Twitter’s announcement that it’s doubling its character limit to give users up to 280 characters per tweet made headlines this week. It was a PR campaign to show off new features that the company hopes will entice new users onto the platform. What it doesn’t address is the challenges facing current users and their safety on Twitter. Underneath the attention-grabbing, cosmetic changes, the platform is grappling with a much more fundamental question: When and why should it regulate hateful speech?

At its inception, Twitter was designed to be a kind of radical experiment in free speech. Users could tweet anonymously, could contact high-profile users without first getting their permission, and rarely had to worry about Twitter censoring their content. In the preamble to Twitter’s original rules, the company stated, “each user is responsible for the content he or she provides … we do not actively monitor and will not censor user content, except in limited circumstances described below.”

That anything-goes approach to building a speech platform made Twitter attractive for political dissidents and citizen journalists, but it also made it a breeding ground for abuse, giving internet trolls a powerful tool to harass their targets.

High-profile users have left the platform over Twitter’s inability to deal with its harassment problem.

That’s put pressure on Twitter to become more involved in moderating its users’ content, including introducing new rules to combat hate speech and targeted abuse.

But those rules raise broader questions about Twitter’s identity as a platform. If Twitter is willing to start censoring certain users’ content, is it still the free speech platform it set out to be? And if not, what is it? What kind of community is Twitter trying to create, and how confident is it in its ability to protect that community?