Many marketers have spent the past year and a half gaining firsthand experience with generative AI tools and establishing their own best practices to manage the risks.
Now, regulation is coming for AI.
Last week, the European Parliament approved the Artificial Intelligence (AI) Act. Five years in the making, the AI Act – the world’s first sweeping AI regulation – may serve as a template for other laws around the world.
Although other countries, including Brazil, China, Israel and Japan, have also drafted legislation, none go as far as the EU’s AI Act. Meanwhile, in the US, President Biden issued an executive order on AI in October, and individual states have proposed laws to govern AI.
Once Europe’s AI Act takes effect this spring, companies that develop generative AI models will have a year to comply with EU copyright law and to publish detailed summaries of the data used to train their models. They must also label AI-generated content.
Additional rules and restrictions apply for the companies behind the most advanced generative AI models, such as OpenAI’s GPT-4 and Google’s Gemini. These companies must conduct a risk assessment for their models and implement a risk mitigation system. If their model causes a serious health or safety harm, such as a death or injury, they’re required to report it. They also need to have strong cybersecurity measures and divulge how much energy their models use.
But it’s important to note that although this new law bans certain practices – such as using AI systems for cognitive manipulation, social scoring, biometric identification and real-time facial recognition – it’s not a ban on generative AI, said Shailley Singh, EVP of product and COO of IAB Tech Lab.
The AI Act “primarily targets high-risk applications,” he said, and generative AI doesn’t meet that classification.
That said, the law does address consumer concerns about the use of AI-generated content with its requirement for companies to label manipulated images, videos and audio. Nearly 75% of respondents to a recent Gartner survey believe it’s important that brands explicitly disclose AI-generated content.
Nevertheless, brands that wish to use generative AI to support their advertising might balk at the additional precautions.
We asked industry experts: Will the EU’s newly passed AI Act put a damper on the enthusiasm surrounding generative AI in advertising, or is it good to have these guardrails, and why?
- Nicole Greene, VP & analyst, Gartner
- Will Hanschell, co-founder & CEO, Pencil
- Cristian van Nispen, data director, DEPT
- Ian Liddicoat, CTO, Adludio
Nicole Greene, VP & analyst, Gartner
Trust and transparency are fundamental principles for the responsible use of generative AI. These guardrails will be essential to helping businesses and consumers navigate a summer where we will see a tsunami of content [in the lead-up to the US presidential election and the Olympics] – some real and some that looks so real it’s hard to tell if it’s fake.
It’s important for both regulators and businesses to be proactive in their efforts to maintain trust and transparency.
We’ve seen the consequences of trying to regulate digital media too late across social media platforms. Regulation always trails technology innovation. Based on how quickly generative AI capabilities in advertising are evolving – from content creation to contextual placements and emotion tracking – the time to act is now.
Will Hanschell, co-founder & CEO, Pencil
The EU’s AI Act is a huge milestone, but overall I don’t think it will be a damper on AI enthusiasm in advertising at all.
For example, the act talks about “disclosing content generated by AI.” This is easy to do, dissuades grey-area AI use cases and shouldn’t deter consumers by seeming “fake” – akin to seeing illustrations in ads versus photographs.
This and the other guardrails provided by the act add clarity and accountability for those of us working to make AI a positive force for creativity, productivity and growth of businesses through advertising.
Cristian van Nispen, data director, DEPT
AI development benefits from clear guidelines on how to apply it – and from active enforcement.
Before GDPR came into effect, one could argue that consumers were unaware that their data was being misused. Although we all focus on the opportunity, AI brings similar risks. We see the potential, but we must strive to work with our people and clients to apply the technology responsibly and ethically.
Ian Liddicoat, CTO, Adludio
The AI Act is unlikely to significantly dampen the enthusiasm surrounding generative AI in advertising with one crucial exception: bias management.
The act emphasizes risk management and bias mitigation, particularly in sensitive areas such as race and religion. Recent examples have shown that AI-generated text, images and video can include various unintentional forms of bias, which can have very real consequences for both agencies and brands.
As the act becomes law and regulatory bodies are established, advertisers using AI, including generative techniques, will be responsible for demonstrating that their models, training data sets and outputs are free from bias.
Answers have been lightly edited and condensed.