Meta’s New Tools Address Marketers’ AI Concerns

A new image generator relies on assets the advertiser already owns, part of Meta’s efforts to ensure its ad products respect advertisers’ internal brand guidelines.

Meta unveiled a set of new generative artificial intelligence (AI) features for advertisers Tuesday (May 7), including image and text generators designed to speed up creative production. The offerings address marketers’ demand for greater control over their AI experiments and for assurance that the results will meet often-specific internal brand guidelines, such as using the correct brand colors in ads.

Striking the right balance with generative AI is important for Meta as it builds confidence in a technology the Instagram and WhatsApp owner believes is key to future growth, but one that has become entangled in complex ethical and legal questions.

“Part of the answer for how to grow [generative AI] adoption is making sure it’s not just a black box, but you also have the right levels of control,” said John Hegeman, vice president and head of monetization at Meta, during the Q&A portion of a presentation to reporters. “Brand guidelines are an area where brands are going to continue to have a lot of specific preferences and not want those to be violated.”

At launch, Meta’s image generator produces variations only from assets the advertiser submits, rather than working from text prompts alone. A business with an existing picture of a cup of coffee against sunny farmland, for example, could ask Meta to show the cup “surrounded by coffee beans and lush leaves,” and the software would produce several candidate images that could serve as replacements.

“Most marketers and advertisers prefer utilizing the assets they’ve provided,” said Alvin Bowles, vice president of Meta’s global business group, during the presentation. “The guesswork, you take some of that out of it because it’s coming from the agency and client.”

Meta will eventually let the image generator work from text prompts alone, according to Hegeman, though there is no set timeline for when that will happen. Ads can be further spruced up with text overlays in a dozen of Meta’s most popular typefaces, while an automated image expansion tool, already used to adjust ads to fit different Meta surfaces like the TikTok lookalike Reels, will keep the copy properly aligned.

Meta, at the same time, is expanding its AI-powered text generation from primary ad text to headlines. The company said it factored in feedback from advertisers who wanted more diverse suggestions and ones that better reflected their brand values and product selling points. Outputs could improve further as Meta transitions to its more advanced Llama 3 large language models.

The social media giant’s new AI bells and whistles are expected to be available globally by the end of the year. As with many pitches around AI, the goal is to eliminate grunt work and free up more time for performance-driving activities, including creative versioning, that boost ad activity and, ultimately, Meta’s bottom line. Meta’s ad revenue was up 27% in Q1, while the average cost per ad increased 6%, a sign of growing demand.

“We want to free up time for people who are focusing on strategic relationships rather than spending an inordinate amount of time transacting on our platform,” said Bowles.

Striking a balance

Most Meta advertisers have used AI in some form for years, and the firm has released an AI Sandbox that uses generative AI to let brands test different backgrounds and text in their campaigns. Companies like Meta are now trying to gauge where more hands-off automation makes sense in advertising and which areas are best left under human supervision.

Generative AI has been known to produce bizarre, off-putting images — Facebook is rife with uncanny valley material — and is increasingly subject to questions around copyright and the need for disclosure, such as watermarking that signals a picture was AI-generated. Meta executives were asked about both topics several times during the presentation Tuesday. The Meta announcements also dropped on the same day that OpenAI, a competitor in the tech arms race, announced a new Media Manager that allows artists to opt out of having their works used as training material for generative AI.

Per Hegeman, advertisers in sensitive verticals, such as politics and social issues, will not be allowed to use Meta’s generative AI tools at first. Meta outlined its plans to label AI-generated images in February, though its rules for advertisers are still being ironed out.

“We’re working through some of the specifics in terms of exactly how that works in the context of ads,” said Hegeman.

Meta will not prefer AI-generated ads over those made by humans, nor will it boost ads generated with its own AI over those created with rival software like Midjourney, Hegeman explained. The aim is to prioritize whatever performs best, because that drives up the competitiveness of ad auctions. Asked about the cost of accessing these products, Hegeman said Meta would try to keep as many of its ad offerings free as possible.

That could change as generative AI becomes more sophisticated and demands increased computing power. Meta leadership has previously indicated that charging people to use bigger AI models is one way to monetize the incredibly cost-intensive technology. At least for now, Meta is welcoming advertisers of all sizes into the fold.

“We want to make sure that businesses of all sizes can option into this,” said Bowles. “This is meant to democratize storytelling from an advertising lens.”