When determining whether media has been deceptively altered, Twitter will consider factors such as whether a real person has been fabricated or simulated. It may flag content if visual or audio information (such as dubbing) has been added or removed. It will also weigh the context and whether the deepfake is likely to impact public safety or cause serious harm.
We know that some Tweets include manipulated photos or videos that can cause people harm. Today we’re introducing a new rule and a label that will address this and give people more context around these Tweets pic.twitter.com/P1ThCsirZ4
— Twitter Safety (@TwitterSafety) February 4, 2020
Starting March 5th, Twitter may label Tweets containing “deceptively altered or fabricated media.” It may also show a warning to people before they retweet or like the manipulated media, reduce the tweet’s visibility, prevent it from being recommended, or provide additional explanations through a landing page.
These changes are the result of an effort to combat deepfakes. Twitter promised these rules late last year, and it drafted the guidelines based on user feedback. The platform already bans deepfake porn, and as the 2020 election nears, it’s likely Twitter wants to avoid political deepfake scandals and misinformation campaigns.