Twitter today announced that it is testing a new feature called Safety Mode, which is designed to cut down on harassment and unwelcome interactions on the social network.
Users who often get unwanted, spammy, or abusive replies to their tweets can turn on Safety Mode, which will autoblock accounts that use harmful language like insults, or send repetitive, uninvited replies and mentions.
Twitter says that Safety Mode assesses the likelihood of a negative engagement by considering the Tweet's content and the relationship between the Tweet author and the replier. Accounts blocked by Safety Mode will be unable to follow the account owner, see their Tweets, or send them Direct Messages.
With Safety Mode, the autoblock stays on for a minimum of seven days, and autoblocked accounts are listed in the Safety Mode interface. Accounts that a person follows will not be autoblocked.
To develop Safety Mode, Twitter consulted trusted partners with expertise in online safety, mental health, and human rights.
"We want you to enjoy healthy conversations, so this test is one way we're limiting overwhelming and unwelcome interactions that can interrupt those conversations," the company said in its announcement. "Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks."
Twitter is currently testing Safety Mode with a small group of users and will expand the test group as it gathers feedback. During the beta period, the company says it will make improvements and adjustments before rolling the feature out to all users.