Microsoft Corp is urging Congress to pass comprehensive legislation to address the issue of deepfakes — AI-generated images and audio designed to interfere in elections or harm individuals.
Microsoft President Brad Smith emphasized that laws need to evolve to combat deepfake fraud, calling for a "deepfake fraud statute" to stop cybercriminals from using the technology to deceive Americans.
The company is advocating for legislation requiring that AI-generated content be labeled as synthetic, as well as federal and state laws penalizing the creation and distribution of sexually exploitative deepfakes.
Smith's stated goals are to protect elections, prevent scams, and safeguard women and children from online abuse. Congress is currently considering several bills that would regulate the distribution of deepfakes.
“Civil society plays an important role in ensuring that both government regulation and voluntary industry action uphold fundamental human rights, including freedom of expression and privacy,” Smith stated. “By fostering transparency and accountability, we can build public trust and confidence in AI technologies.”
Manipulated audio and video have already caused controversy in this year's US presidential campaign. Elon Musk, for instance, shared an altered video in which Vice President Kamala Harris appears to criticize President Joe Biden, without disclosing that the video had been digitally manipulated.
Key Takeaways:
- Microsoft urges Congress to pass laws addressing deepfake fraud.
- The company calls for labeling AI-generated content as synthetic.
- Proposed legislation aims to protect elections, prevent scams, and safeguard against online abuses.
- Civil society's role is crucial in upholding human rights and building trust in AI technologies.
- Recent controversies highlight the impact of manipulated audio and video in political campaigns.