Google suspended 39.2 million malicious advertisers in 2024 thanks to AI
Google is adding LLMs to everything, including ad policy enforcement.
Google may have finally found an application of large language models (LLMs) that even AI skeptics can get behind. The company just released its 2024 Ads Safety report, confirming that it used a collection of newly upgraded AI models to scan for bad ads. The result is a huge increase in suspended spammer and scammer accounts, with fewer malicious ads in front of your eyeballs.
While stressing that it was not asleep at the switch in past years, Google reports that it deployed more than 50 enhanced LLMs to help enforce its ad policies in 2024. Some 97 percent of Google's advertising enforcement involved these AI models, which reportedly require less data than before to make a determination, making it feasible to keep pace with rapidly evolving scam tactics.
Google says that its efforts in 2024 resulted in 39.2 million US ad accounts being suspended for fraudulent activities. That's more than three times the 12.7 million accounts suspended in 2023. Suspensions are usually triggered by ad network abuse, improper use of personalization data, false medical claims, trademark infringement, or a mix of violations.
Despite these efforts, some bad ads still make it through. Google says it identified and removed 1.8 billion bad ads in the US and 5.1 billion globally. That's a small drop from the 5.5 billion ads removed in 2023, but the implication is that Google had to remove fewer ads because it stopped fraudulent accounts before they could spread. The company claims most of the 39.2 million suspended accounts were caught before they ran a single ad.
Google is also combating the ways AI can be used to make ads worse. Last year, it assembled a team of 100 experts to help update its misrepresentation policy. The new rules helped Google identify and block 700,000 advertiser accounts, which led to a 90 percent drop in deepfake scams in ads. Google also blocked 1.3 billion pages from showing ads in 2024, with sexual content by far the most common reason for enforcement. That was followed by dangerous or derogatory content and malware.
As we are all keenly aware, LLMs are not infallible. They make mistakes at random, and an incorrect flag that leads to an account suspension can be a major pain for an advertiser trying to promote its business or cause. Google says humans are still involved in the process, but it sells so many ads that it would be impossible for people to check everything manually. With triple the account suspensions, there was certainly an opportunity for false positives to tick upward, but the effect of Google's AI upgrade seems to be a net positive so far.