Twitter drafts a deepfake policy that would label and warn, but not always remove, manipulated media

Twitter last month said it was introducing a new policy to help fight deepfakes and other "manipulated media" — photos, videos or audio that have been significantly altered to change their original meaning or purpose, or that make it seem like something happened that actually did not. Today, Twitter is sharing a draft of its new policy and opening it up for public input before it goes live.

The policy is meant to address the growing problem of deepfakes on today's internet. Deepfakes have proliferated thanks to advances in artificial intelligence that have made it easier to produce convincing fake videos, audio and other digital content. Anyone with a computer and an internet connection can now create this sort of fake media. The technology can be dangerous when used as propaganda, or to make someone believe something is real when it is not.

In politics, deepfakes can be used to undermine a candidate's reputation by making them appear to say and do things they never said or did. A deepfake of Facebook CEO Mark Zuckerberg went viral earlier this year, after Facebook refused to pull down a doctored video, tweeted by President Trump, that showed House Speaker Nancy Pelosi appearing to stumble over her words.

In early October, two members of the Senate Intelligence Committee, Mark Warner (D-VA) and Marco Rubio (R-FL), called on major tech companies to develop a plan to combat deepfakes on their platforms. The senators asked 11 tech companies — including Facebook, Twitter, YouTube, Reddit and LinkedIn — to develop industry standards for "sharing, removing, archiving, and confronting the sharing of synthetic content as soon as possible." Twitter later in the month announced its plans to seek public feedback on the policy.

Meanwhile, Amazon joined Facebook and Microsoft in supporting the Deepfake Detection Challenge (DFDC), which aims to develop new approaches to detecting manipulated media.