Microsoft wants Congress to pass “a comprehensive deepfake fraud statute” to prevent AI-generated scams and abuse

What you need to know

  • As AI grows more sophisticated, deepfakes continue to flood the internet, spreading misinformation.
  • Microsoft is calling on the US Congress to pass a comprehensive deepfake fraud statute to prevent cybercriminals from using AI to cause harm.
  • Such a legal framework would give law enforcement a clear basis to prosecute AI-generated scams and fraud.

As someone who has navigated the internet for decades and witnessed the evolution of technology firsthand, I find myself deeply concerned about the rising tide of deepfakes and AI-generated content, which are becoming increasingly sophisticated and difficult to distinguish from reality. We've come a long way since the days of rudimentary chatbots and basic image manipulation tools, but with great power comes great responsibility, and the rapid advancement of AI is outpacing our ability to regulate its use effectively.

With AI tools such as Microsoft Copilot and OpenAI's ChatGPT becoming ever more capable, deepfake content is being widely shared across the internet (as we saw with Elon Musk this week). Beyond the security and privacy concerns that come with the technology's advancement, the proliferation of deepfakes undermines the credibility of online content, making it hard for users to separate truth from fiction.

Malicious actors are using AI to create realistic deepfakes for fraud, abuse, and manipulation, and the absence of comprehensive regulations and safeguards has allowed them to proliferate widely. In response, Microsoft Vice Chair and President Brad Smith has laid out the new protective measures against deepfakes that the company wants to see.

Smith points out that Microsoft, along with other major players in the field, is working diligently to prevent AI-created deepfakes from spreading false information about the upcoming U.S. presidential election.

Although the company appears to have a good handle on this issue, Smith acknowledges that more action is necessary to deter the use of deepfakes in criminal activity. "Passing a comprehensive deepfake fraud law is one crucial step the US can take to obstruct cybercriminals from defrauding ordinary citizens using this technology," he said.

Smith also recommends legislation requiring providers of AI systems to use state-of-the-art provenance tooling to label synthetic content. This, he argues, is crucial for building trust in the information ecosystem, as it would help users distinguish AI-generated and manipulated content from the genuine article.
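If such a mandate passed, providers would need a way to attach verifiable "this is synthetic" labels to generated media. As a rough illustration of the concept only, the Python sketch below embeds a signed manifest in a PNG's metadata and verifies it on read. This is not the actual C2PA Content Credentials standard that industry provenance efforts build on (that uses certificate-based signatures and standardized manifests); the key, chunk name, and helper functions here are invented for the example.

```python
import hashlib
import hmac
import json

from PIL import Image, PngImagePlugin

# Illustrative only: a real provenance system (e.g., C2PA Content Credentials)
# uses certificate-based signatures, not a shared secret like this one.
SIGNING_KEY = b"demo-signing-key"


def tag_synthetic(in_path: str, out_path: str, generator: str) -> None:
    """Embed a signed 'this image is synthetic' manifest in a PNG text chunk."""
    manifest = {"generator": generator, "synthetic": True}
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

    info = PngImagePlugin.PngInfo()
    info.add_text("provenance", json.dumps({"manifest": manifest, "sig": signature}))
    Image.open(in_path).save(out_path, "PNG", pnginfo=info)


def read_provenance(path: str) -> dict | None:
    """Return the manifest if a validly signed tag is present, else None."""
    tag = Image.open(path).text.get("provenance")
    if tag is None:
        return None  # no tag: proves nothing, since metadata can be stripped
    data = json.loads(tag)
    payload = json.dumps(data["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return data["manifest"] if hmac.compare_digest(expected, data["sig"]) else None
```

The asymmetry in `read_provenance` is the policy-relevant point: a valid tag shows a file was labeled as synthetic, but an absent tag proves nothing, which is why Smith pairs provenance tooling with legal deterrents rather than relying on labeling alone.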

Smith is further advocating for a critical update to the legislative framework, urging policymakers to amend the federal and state laws that protect children from sexual exploitation and abuse and that prohibit the non-consensual sharing of intimate imagery, so that they explicitly cover AI-generated content. This proactive step would help keep those protections relevant and effective in today's digital age.

It's all a work in progress

Ahead of the US presidential election, Microsoft CEO Satya Nadella stated that sufficient technology exists to prevent AI tools like Copilot from producing deepfakes and misinformation. Shortly afterward, however, concerns were raised when Copilot was caught generating false information about the upcoming election.

After AI-generated explicit images of singer Taylor Swift appeared online, the Senate passed a bill to tackle the issue, giving individuals depicted in such AI-generated explicit content the right to sue for compensation.

Meanwhile, OpenAI has introduced measures of its own to help users distinguish AI-created content. The company has added watermarks to ChatGPT text and DALL-E 3 images, though it acknowledges these are not a definitive solution to authenticity concerns. It is also developing a tool to identify AI-generated images, which it claims will be 99.9% accurate.
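OpenAI has not disclosed how its text watermarking works, so as an illustration only, the sketch below implements the detection side of a "green list" watermark of the kind described in the research literature (Kirchenbauer et al., 2023): the generator secretly biases sampling toward a pseudo-random subset of tokens, and a detector checks whether that subset is statistically over-represented. The hashing scheme, green fraction, and threshold here are illustrative assumptions, not OpenAI's method.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # share of the vocabulary treated as "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the green-token count over a token sequence.

    A watermarking generator biases sampling toward green tokens, so
    watermarked text contains many more of them than the ~GREEN_FRACTION
    expected by chance. Human-written text should score near zero.
    """
    n = len(tokens) - 1  # number of (previous token, token) pairs
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std_dev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std_dev


# A z-score above ~4 over a few hundred tokens would be strong evidence of a
# watermark; very short snippets are inherently hard to classify either way.
```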
