The behavior by Google here is "Doing everything they can figure out how to do to get the malvertisers off their network without breaking the network itself." It appears that, in the short run, the malvertisers are winning the arms race.
... but if you have any ideas they haven't tried, I suspect they'd love to hear about it in a job interview for any of the openings for ad quality SWE.
Someone else already said this: it's not that hard. They could simply do a manual review of ads, but that would obviously eat into their profits, so they won't do it.
I think this really requires governments to step in. I mean one could easily argue that Google is facilitating fraud here, so maybe they should be liable?
Why would I care if this is easy for Google? I'm saying that we need to provide a government-led "incentive". If Google becomes financially (criminally?) liable for the damage they cause with fraudulent ads, they would quickly implement a way to solve the problem. If you mean it's difficult for the government to regulate, why? They don't need to find the fraudulent ads; someone who has been affected just needs to provide evidence and get a ruling against Google for "hosting" it.
You could make the same argument about disposing of toxic waste (it's definitely cheaper and easier to just dump it in the river than to deal with the "reality" of processing millions of litres of sludge).
I actually think the problems of dealing with toxic waste are far more tractable than the problems of vetting every ad in a network serving 30 billion impressions a day.
Toxic waste doesn't try to hide from the litmus paper or the Geiger counter.
Yet there is still a cost to dealing with toxic waste, which encourages companies to not make any more of it than necessary. There's currently no cost (it's all profit, in fact) to promoting malicious ads in search results, so why wouldn't Google do it?
There is no reason they have to serve 30 billion impressions a day. If vetting takes that down to 1 billion, that's fine. Lower the volumes (and raise the prices to fund manual vetting) until the problem is resolved.
Toxic waste disposal is a solved problem thanks to (enforced!) regulations that force companies to do so under threat of heavy penalties, not altruism or the fact that the waste doesn't hide from a Geiger counter. We need the same for online advertising.
There's no downside to losing trust as long as you have a monopoly and none of the alternatives are any better (and they are not: Bing, Yahoo, etc. — all ad-funded search engines will have the exact same problem).
But even if we accept that there is a downside, it's clearly not enough because this problem keeps happening again and again. Whatever downside there is needs to be increased by a few orders of magnitude for them to take the problem seriously.
The problem with gov't regulation is that the pages have a good chance of not being within the jurisdiction of whatever gov't is trying to do the regulating. So unless someone like Uncle Sam is going to say that ISPs must not peer with known bad actors, there's no way that access to these out-of-jurisdiction pages can be blocked.
This is not some impossibly intractable algorithmic problem to solve. Simply actioning malware reports on the ad would be sufficient. If an ad receives hundreds of reports over the course of months, there's probably something wrong with it, and that should trigger a human review, at which point the malicious intent of the ad is obvious. Why include a report button on ads at all if it's effectively a placebo button?
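As a minimal sketch of the kind of rule being proposed here (the threshold value and function names are hypothetical, not anything Google actually uses):

```python
from collections import Counter

# Hypothetical threshold: how many user reports an ad can
# accumulate before it is escalated to a human reviewer.
REPORT_THRESHOLD = 100

def ads_needing_review(reported_ad_ids):
    """Given an iterable of ad IDs taken from user malware reports,
    return the set of ads that have crossed the report threshold."""
    counts = Counter(reported_ad_ids)
    return {ad for ad, n in counts.items() if n >= REPORT_THRESHOLD}

# Example: ad "a" has 150 reports, ad "b" only 3.
queue = ads_needing_review(["a"] * 150 + ["b"] * 3)
print(queue)  # {'a'}
```

The point of the sketch is that the triage step itself is trivial; the contested question in this thread is whether the human-review step it feeds is affordable at Google's scale.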
What is your evidence that the malware reports aren't being actioned?
The malware continuing to appear isn't sufficient evidence. Malware moves hosts and ad accounts all the time.
ETA: from the article itself, in 2021 Google "Removed over 3.4 billion ads, restricted over 5.7 billion ads and suspended over 5.6 million advertiser accounts." That's a ton of action, but AdWords alone also serves 29 billion ad impressions a day. It doesn't take more than a few bad actors slipping through the cracks to get seen, and at these orders of magnitude "a few" is still "millions" — completely impractical for human hand-review.
It's clear this will never be prioritized without regulation, as scammers' money is as good as anyone else's, and open source projects cannot afford to sue Google to force action.