From deepfakes of Taylor Swift to fabricated images of the Hollywood sign in flames during the Los Angeles fires, artificial intelligence has reached new levels of capability on social media platforms. Many such incidents have prompted prominent celebrities and companies to release statements disavowing false posts associated with them, raising concerns over security breaches.
Amid discussions about how to limit the rising influence of AI while still allowing space for innovation, California Governor Gavin Newsom signed Assembly Bill 2355, which makes it illegal to create lifelike images of real people that cause them emotional distress. The bill is one step toward limiting the power of AI, which in turn can protect privacy, discourage deepfakes and promote public trust.
One prominent reason AI should be restricted is the rise of deepfakes: false images, videos and audio recordings of real people circulated online. Although these may not always directly harm viewers, deepfakes tend to spread misinformation and rumors, sparking unnecessary complications.
There have been instances where scammers have used celebrities to falsely advertise products through AI-generated images. Food Network chef Michael Smith's likeness was used in an AI-generated image promoting a fake offer of $500 worth of kitchenware in exchange for financial information. Because the offer appeared to come from a trusted figure, many found the post credible until Smith publicly dissociated himself from it.
A similar incident occurred in Hong Kong in 2024, when a company lost $25 million after an employee was tricked by a deepfake of his colleagues, according to CNN. On a larger scale, this unethical spread of information has even affected prominent events like elections, with fake endorsements and recordings circulating on social media.
Shortly before New Hampshire's primary election, a forged recording of President Joe Biden urging people not to vote in it misled and confused many local voters, Al Jazeera reported. With half-truths and misleading posts circulating on social media, many voters risked making decisions that did not truly represent their stance.
Limiting the influence of AI can also help protect privacy. Through online ads and enticing offers that seem too good to be true, many people fall victim to scams that expose their passwords and personal information to organizations that may exploit them.
A Pew Research Center poll shows that 53% of Americans believe AI is misusing their personal data. Not only does this spread distrust among the public and media users, but it also raises security concerns. AI has given hackers an array of new tools for stealing information.
Home Security Heroes conducted a study of an AI password-cracking tool called PassGAN and found that it cracked 51% of tested passwords in under a minute. If AI is allowed to further develop its capacity to access private information, many internet users are at risk of being exploited.
Introducing new legislation on the ethical use of AI can help reduce malicious uses of the technology. In a recent survey of more than 4,500 technology decision-makers across different sectors, 45% of large companies and 29% of small and medium-sized enterprises said they have adopted AI. While AI has improved efficiency, there is wide room for companies to misuse it through unethical practices, such as biased and discriminatory decision-making.
In 2014, Amazon found that an AI-run recruitment tool it had built was not gender neutral and was biased toward male candidates. Similar experiences have occurred within prominent corporations like Google and Tesla. Establishing regulations could promote transparency within companies and, in turn, build public trust and decrease cybersecurity risks.
Although limiting artificial intelligence might hinder modern innovation and technological progress, it is important to consider the growing hold AI has on the public by shaping the kind of information people receive. Moving forward, lawmakers must work to establish regulations, such as AB 2355, that keep AI under human control and protect individual rights while still allowing space for improvement.