Read before you re-post (taken from Ars Technica)
". . .On Wednesday, members of Twitter's product-design team confirmed that a new automatic prompt will begin rolling out for all Twitter users, regardless of platform and device, that activates when a post's language crosses Twitter's threshold of "potentially harmful or offensive language."
This follows a number of limited-user tests of the notices beginning in May of last year. Soon, any robo-moderated tweets will be interrupted with a notice asking, "Want to review this before tweeting?"
> . . .To sell this nag-notice news to users, Twitter pats itself on the back in the form of data, but it's not entirely convincing.
> During the kindness-notice testing phase, Twitter says one-third of users elected to either rephrase their flagged posts or delete them, while anyone who was flagged began posting 11 percent fewer "offensive" posts and replies, as averaged out. (Meaning, some users may have become kinder, while others could have become more resolute in their weaponized speech.) That all sounds like a massive majority of users remaining steadfast in their personal quest to tell it like it is.
> Twitter's weirdest data point is that anyone who received a flag was "less likely to receive offensive and harmful replies back." It's unclear what point Twitter is trying to make with that data: why should any onus of politeness land on those who receive nasty tweets?
SPOILER ALERT:
> Yet this change seems like an undersized bandage on a bigger Twitter problem: how the service incentivizes rampant, timely posting in a search for likes and interactions, honesty and civility be damned.
READ MORE: Twitter’s latest robo-nag will flag “harmful” language before you post
Follows Twitter's effort to make you read the news before you share it.
> Want to know exactly what Twitter's fleet of text-combing, dictionary-parsing bots defines as "mean"? Starting any day now, you'll have instant access to that data—at least, whenever a stern auto-moderator says you're not tweeting politely.