Twitter tries, fails to explain policy following Trump retweets

Earlier this week, President Donald Trump retweeted a series of disturbing anti-Muslim videos, one of which appears to show a teenage boy being beaten to death. The posts predictably sparked outrage, leading users to question why Twitter would allow them to remain on the feeds of the president’s 43 million followers.

Now Twitter has explained why the videos were left up—and it’s only causing more anger.

Twitter previously justified letting Trump post content—even if it clearly broke the company’s poorly-enforced rules—because of a convenient internal policy that protects “newsworthy” posts.

Twitter has changed that stance and now says a video showing a boy getting murdered doesn’t fall under those guidelines. Instead, and just as alarmingly, the company is referring people to its standard media policy.

The policy offers only a vague definition of what constitutes “graphic violence” or “adult content,” and appears carefully worded to avoid requiring the removal of images like those retweeted by Trump. While users are encouraged to report graphic violence, the rules explain that an offending post won’t always be taken down, nor will the account that posted it necessarily be punished.

“Some forms of graphic violence or adult content may be permitted in Tweets when they are marked as sensitive media,” the policy says. “However, you may not include this type of content in live video, or in profile or header images.”

Twitter does say it will remove media with “excessively graphic violence out of respect for the deceased and their families” but only “if we receive a request from their family or an authorized representative.”

All other violent images posted in timelines (excluding live videos) are given a “sensitive tag.”

The media rules also begin with a confusing and seemingly unnecessary note: “Please note: this policy will be updated later this year to include hate symbols and hateful imagery.”

By posting the notice, Twitter has already included those categories in its policy, except that it, for whatever reason, appears to be waiting to enforce them. It’s unclear why the publishing of rules about “hateful imagery” would need to be delayed or, perhaps, strategically scheduled.

The company’s “Safety Calendar” also indicates that on Dec. 18, a new rule will expand its policies to include content that glorifies or condones acts of violence that result in death or serious harm.

The site’s CEO, Jack Dorsey, felt compelled to chime in on Friday afternoon, reiterating why Trump’s retweeted posts weren’t taken down. His tweet was met with a cacophony of furious replies, many coming from verified accounts of people who work in media.

Joshua Topolsky, CEO of the Outline, came out firing, asking Dorsey if his beleaguered company relies on Trump.

Dorsey replied with a curt, “No, I don’t.”

Twitter has recently taken a more active approach against accounts that promote violence and hateful speech, unverifying and even banning some members. It also changed its policy on verified accounts and can now remove the coveted blue check mark if a user abuses it—a misguided policy that does nothing to address the site’s rampant harassment.

But for all its (mostly) agreeable changes, Twitter still hasn’t explained how it plans to enforce them. Both the Twitter Safety account and Dorsey said they would welcome feedback. If that’s true, surely something must change.
