
Is genAI Advertising Becoming a Concern for Brands? WFA Survey

A recent survey from the World Federation of Advertisers (WFA) dropped a bombshell: most major brands are nervous about the legal, compliance, and ethical minefields that could blow up their reputation if their agencies lean too hard on genAI advertising strategies. But does that mean brands are hitting the brakes on AI altogether? Not really.  

In today’s world, where speed, accuracy, and affordable assets have become marketing priorities, 80 percent of brands express concern about marketing agencies’ dependence on generative AI ads. Forty-eight multinational brands with a cumulative annual marketing spend of $102bn took part in the survey, and their 54 responses offer a window into how brands view generative AI.

Are brands taking any precautions to strike a balance between human creativity and generative AI ads? And how can brands and agencies build a clearer understanding of the technology?

There’s always a swirl of predictions and hot takes regarding generative AI and next-gen tech. But today, we’re peeling back every layer of the WFA survey to see what’s happening behind the scenes. 

Gen AI ad usage is growing every year, even though brands have concerns 

The jump is significant. Last year, the WFA reported that 45 percent of brands were using generative AI; this year, the figure has risen to 63 percent. In other words, nearly two-thirds of global brand owners now incorporate generative AI into their marketing strategies.

Still, several barriers have given brands cold feet about adopting gen AI technology, chief among them legal, ethical, and reputational risks.

Sixty-six percent of respondents said that legal disputes over gen AI ads could harm them. Most of the legal issues revolve around who holds the copyright to an AI-generated ad, or who has IP ownership of the content.

In addition, 49 percent of brands felt that the technology could generate false or misleading information, causing customers to lose trust and confidence in them.

Meanwhile, 51 percent of the WFA’s respondents indicated that using gen AI technology for marketing could raise ethical issues such as cybersecurity threats, algorithmic bias, and privacy violations.

Content creation, ideation, and automation claim the biggest share of gen AI use; governance is now a priority

Generative AI owns the advertising space, especially regarding content creation, ideation, and task automation. Here’s how it breaks down: a massive 79 percent of brands are using Gen AI to crank out content, 67 percent for sparking fresh ideas, and 54 percent to automate repetitive tasks. It’s clear that this tech is here to save brands time and money, and for over 70 percent of them, that beats ROI concerns any day. 

The big question is: Is gen AI undermining that personal, human touch? Well, brands seem to be leaning more toward efficiency. They’re using AI to create affordable assets on the fly and streamline production, especially when there’s a constant demand to “produce more, and faster.” 

However, the Wild West of AI isn’t without its sheriffs. As more brands dive deeper into generative AI, they also think about governance. A solid 63 percent of brands have put responsible AI principles in place, focusing on privacy, transparency, responsibility, and intellectual property rights. Meanwhile, another 21 percent are still ironing out their policies, trying to balance innovation with ethics. 

So, while gen AI is taking center stage for creativity and efficiency, the need for governance is creeping up, ensuring the tech boom doesn’t become a legal or ethical challenge.  

Marketers are still trying their best to get a grip on AI technology, but a long road lies ahead of them

Marketers may have embraced generative AI with open arms, but they’re still fumbling around in the dark, trying to figure out what exactly they’ve adopted. While most brands—even the big players—are all in on gen AI (or at least don’t mind their marketing agencies diving into it), there’s a hidden snag. 

Over half of marketers admit they don’t fully understand the legal and ethical minefield that comes with using Gen AI. And it gets juicier—49 percent have confessed that they lack “AI maturity.” Translation: They’ve got the tech, but they’re still figuring out how to work it without causing chaos. 

So, what does “AI maturity” actually mean? It goes beyond merely possessing advanced tools; it involves mastering the entire process of integrating AI. We’re talking about AI literacy (actually knowing what the tech does), tool integration (getting everything to play nice together), data analysis and integration (making sense of the sea of information AI throws at them), and, of course, performance assessment (is it even working?). 

It’s a long road ahead for marketers, who need more than just AI—they need the skills to manage it responsibly. The first step to AI greatness is admitting there’s a learning curve, right? And with all this fast-moving tech, it looks like brands will spend a little more time sharpening their AI game before they’re fully ready to dominate the space. 

Cut to the chase  

A recent World Federation of Advertisers survey revealed that 80% of brands are concerned about marketing agencies’ use of generative AI ads and tools. While the growth of gen AI tech is undeniable, the lack of governance and AI maturity looms large. Brands are excited by the possibilities but wary of the risks. 
