A new bill introduced by a bipartisan group of Senators seeks to combat the misuse of artificial intelligence (AI) deepfakes by mandating the watermarking of such content.
The bill, introduced by Senator Maria Cantwell (D-WA), Senator Marsha Blackburn (R-TN), and Senator Martin Heinrich (D-NM), proposes a standardized method for watermarking AI-generated content.
Dubbed the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED), it would bolster protections for creators and establish controls on the types of content on which AI can be trained.
According to Cantwell, the bill will provide "much-needed transparency" into AI-generated content while putting "creators, including local journalists, artists, and musicians, back in control of their content."
If passed, the bill would also require AI service providers such as OpenAI to have users embed information about the origin of the content they generate. Further, this would have to be implemented in a way that is "machine-readable" and cannot be bypassed or removed using AI-based tools.
The Federal Trade Commission (FTC) would oversee enforcement of the COPIED Act, treating violations as unfair or deceptive acts, similar to other breaches under the FTC Act.
Since the introduction of AI, there has been much debate around its ethical implications, given the technology's ability to scrape vast volumes of data from across the web.
These concerns were evident when tech giant Microsoft stepped back from holding a seat on OpenAI's board.
"Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those in the creative community, to imitate their likeness without their consent and profit off of counterfeit content," said Senator Blackburn.
The proposed bill coincides with a 245% surge in frauds and scams using deepfake content. A report from Bitget estimates that losses from these schemes could reach $10 billion by 2025.
Within the crypto space, scammers have been leveraging AI to impersonate prominent personalities such as Elon Musk and Vitalik Buterin to dupe users.
In June 2024, a customer of crypto exchange OKX lost over $2 million after attackers managed to bypass the platform's security using deepfake videos of the victim. The month before, Hong Kong authorities cracked down on a scam platform that used Elon Musk's likeness to mislead investors.
Meanwhile, tech behemoth Google was recently criticized by National Cybersecurity Center (NCC) founder Michael Marcotte for insufficient preventive measures against crypto-targeted deepfakes.