Government Regulations on AI – What to Expect
2023 has been the “Year of AI.” This year, we saw countless new AI companies popping up around the world. With their rising influence, however, the year has also brought heightened discussion and debate over regulation. As these discussions intensify, the future of these technologies hangs in the balance – are we witnessing the beginning of the end for AI?
Presidential Call for Responsible AI Development
On July 21, 2023, President Joe Biden invited seven of the most prominent AI companies (OpenAI, Google, Meta, Microsoft, Amazon, Anthropic, and Inflection) to the White House to discuss safety and transparency standards for present and future AI software. During this meeting, the companies pledged to follow a set of voluntary commitments put forward by the White House, aimed at fostering responsible AI development and deployment.
Primarily, these commitments focus on safety concerns related to cybersecurity, foreign interference, and societal risks. They also address AI-generated content directly: a watermarking system would be implemented to ensure transparency around content produced by AI.
President Biden’s commitment to nationwide AI regulation is evident. He is also actively engaged in discussions with numerous nations – across Asia, North America, Europe, and Australia – to establish global AI regulations.
The Uncertain Path of AI Compliance
It is important to note that the commitments drafted and agreed upon by the seven AI companies are preliminary and voluntary. They mark a first step towards comprehensive AI regulation, with an AI Bill of Rights currently under development. Since more than 20 countries have already approved AI regulations in collaboration with the United States, it is reasonable to assume those countries will adopt rules of their own. That does not necessarily mean AI content will be regulated the same way everywhere, so we must stay consciously aware of the different laws and their implementations around the globe, and how they affect content intended for a universal audience.
I wish I could predict what those regulations will look like, but if they are anything like the ones proposed above, we as communications professionals need to ask ourselves how they will impact our practices and our work.
Navigating the Waters
As discussed in one of our previous posts, AI Writing: The Good, the Bad, and the Controversy, AI is undoubtedly a valuable tool for content writers and communications professionals when used correctly. The key word, however, is tool. It can save time by assisting with research and editing, but it should not become a “conveyor belt” for content creation. The new commitments put forth by President Biden do not affect the average citizen, but they offer valuable insight into the potential direction of the AI Bill of Rights and future regulations on AI-generated content.
We can assume from the proposed regulations that transparency will be required of communications professionals and content writers who rely heavily on AI: content created by AI will be watermarked.
It is crucial to consider the potential stigma attached to watermarked content. An excessive number of watermarks would raise questions about authenticity, expertise, and credibility. Audiences may question the writer’s true authority on a subject, casting doubt on the reliability of the content. The risk is that content overloaded with watermarks might be perceived as generic or even obsolete, since anyone could produce similar material with AI.
Content and content consumption rely heavily on the author’s ability to engage and hook the reader. Personalization of content – the author’s voice, tone, and flow – is crucial to its success. We need to remember that the audience is human, and our human experience contributes greatly to our ability to convey our message, whatever that message may be.
Social Media Also Under Scrutiny
During the meeting, President Biden took a brief jab at social media, stating, “Social media has shown us the harm that powerful technology can do without the right safeguards in place.” Since the meeting revolved around government regulation of AI, I can only assume this remark was hinting at social media regulations to come.
In our post, Will TikTok Security Concerns Cause it to Follow Snapchat’s Trajectory?, we discussed the security concerns in the United States surrounding TikTok. On March 23, 2023, a congressional hearing with TikTok CEO Shou Zi Chew illuminated something much greater – possible further regulation of social media in general. The five-hour hearing focused heavily on security issues surrounding TikTok, but it also touched on a few other topics: addictiveness, children’s safety, and mental health.
In my opinion, these three topics are particularly pertinent in relation to certain areas of content marketing, specifically:
Short-form content
Content directed at or created by children
Instagram influencers and sponsored content
(e.g., the Kardashian detox tea scandal)
While security is undoubtedly the major concern, these content-related topics deserve the attention of writers and creators – any regulation addressing them would directly affect our work.
Short-form content, which is only growing in popularity, is a great vehicle for conveying our key messages and raising awareness of the brands we work with. On TikTok specifically, however, between 50% and 53% of children ages 3 to 17 consume short-form content. This can severely impact children’s already short attention spans by fostering an addictive atmosphere; a 2021 study compares the consumption of short-form content to the rush of gambling – scrolling for the next dopamine kick. If addictive content consumption is becoming a greater federal concern, we have to assume that government regulations may eventually seek to suppress this kind of content.
Embracing our Evolving Responsibilities
As communications professionals, we understand that we have a responsibility to both the brand and its audience. Regulations are not new to us – whether complying with visual and audio regulations and plagiarism laws, or accurately representing brand and market standards. But it is safe to say that our responsibility is only growing with the emergence of new platforms and technologies.
The AI regulations being set by governments all over the world, and the pending updates to social media regulation in the United States, do not scare us, since they align with our values and ethics as professionals. It is our job to think outside the box and find new ways to reach and engage our audience.
William Pollard said, “Without change there is no innovation, creativity, or incentive for improvement” – all things that we continue to strive for.