Twitter tests a warning message that alerts users to offensive replies
Twitter is experimenting with a new moderation tool that warns users before they post replies containing what the company deems “harmful” language. Twitter describes it as a limited experiment, and it will only show up for iOS users. The prompt, which is supposed to appear in certain situations, will give “you the option to revise your reply before it’s published if it uses language that could be harmful,” according to a message from the official Twitter Support account.

The approach isn’t a novel one. It has been used by quite a few other social platforms, most prominently Instagram. The Facebook-owned app now warns users before they post a caption with a message saying the caption “looks similar to others that have been reported.” Prior to that change, Instagram rolled out a similar warning system for comments last summer.

It’s not exactly clear how Twitter is defining harmful language, but the company does have hate speech policies and a broader set of Twitter Rules.