Fundamentally it's a fuzzy signal and people shouldn't rely on it. The general public does not understand Boolean logic ("oh, the SynthID watermark isn't there, so this image must be real"). The sooner AI watermarking faces its deserved farcical demise the better.
Also something about how AI is not special and we haven't added or needed invisible watermarks for other ways media can be manipulated deceptively since time immemorial, but that's less of a practical argument and more of a philosophical one.
People think that just because they have a way to prove that an image is AI, their worries of misinformation are solved. Better to acknowledge that wherever you look people will be trying to deceive you even if their content won't have as obvious an indicator as SynthID.
Because it’s meaningless for what it’s being marketed for. It’s conceptually inverted. It’s a detector that will detect 100% of the stuff that doesn’t mind being detected, and only the dumbest fraction of stuff that doesn’t want to be detected.
No fault of the extremely smart and capable people who built it. It’s the underlying notion that an imperceptible watermark could survive contact with mass distribution… it gives the futile cat-and-mouse vibes of the DRM era.
Good guys register their guns or whatever, bad guys file off the serial numbers or make their own. Sometimes poorly, but still.
All of which would be fine as one imperfect layer of trust among many (good on Google for doing what they can today). The frustrating/dangerous part is that it seems to be holding itself out as reliable to laypeople (including regulators). Which is how we end up responding to real problems with stupid policy.
People really want to trust “detectors,” even when they know they’re flawed. Already credulous journalists report stuff like “according to LLMDetector.biz, 80% of the student essays were AI-generated.” Jerry Springer built an empire on lie detector tests. British defense contractor ATSC sold literal dowsing rods as “bomb detectors,” and got away with it for a while [2].
It’s backward to “assume it’s not AI-origin unless the detector detects a serial number, since we made the serial number hard to remove.” Instead, if we’re going to “detector” anything, normalize detecting provenance/attestation [e.g. 0]: “maybe it’s an original @alwa work, but she always signs her work, and I don’t see her signature on this one.”
Something without a provable source should be taken with a grain of salt. Make it easy for anyone to sign their work, and get audiences used to looking for that signature as their signal. Then they can decide how much they trust the author.
Do it through an open standards process that preserves room for anyone to play, and you don’t depend on Big Goog’s secret sauce as the arbiter of authenticity.
I hear that sort of thinking is pretty far along, with buy-in from pretty major names in media/photography/etc. The C2PA and CAI are places to look if you’re interested [1].
It would be a better analogy if tobacco companies sold ad space on their packs and chose not to do business with a private for-profit anti-smoking solicitation group.
No it would not. Meta is an advertising company that sells ad space. More specifically, Meta is the dominant firm in the social advertising market, which is an oligopoly.
It is "the business", not an imagined side revenue stream.
Any good payload analysis been published yet? Really curious whether this was just a one-and-done info stealer or if it could potentially have clawed its way deeper into affected systems.
This article[0] investigated the payload. It's a RAT, so it's capable of executing whatever shell commands it receives, instead of just stealing credentials.
Ironically Sony wanted those artists online for streaming, and in those days the only way labels had to transport the music to distribution services was sending the CDs. So the CDs landed on my desk because they'd been rejected by the data ingestion teams. I had some more[0] stern words with a very apologetic man from Sony that day.
[0] they were constantly sending CDs that were fucked-up in totally new ways every time
I still haven't bought a Sony labelled product since... though I may or may not have consumed Sony content. They've definitely lost more than they gained.
That's a pretty good-sized ego you got yourself there. The number of people in the general populace who cared about the rootkit was insignificant to Sony. Only tech nerds like us even knew about the rootkit or how insane it was to use. Unless you were a huge flagship purchaser of Sony's latest/greatest each year, they don't even notice you when you buy a TV or any other item.
People barely remember the studio getting hacked over releasing a film
They faced multiple lawsuits and had to do product recalls, so clearly they lost something. What exactly did they gain? IIRC you could avoid it by just turning off autoplay in Windows (which any sane person had already done; you could also hold Shift, I think), and they were otherwise valid audio CDs (otherwise they wouldn't work in players), so it did exactly nothing to stop the CDs from being ripped and shared. And back then everyone knew about p2p, so it really only took one person ripping it for it to spread. So even ignoring the lawsuits, even one person boycotting them probably made it a net loss. Actually, the development costs alone probably made it a loss.
Not sure how you interpreted what I said as anything other than the implied "you". However much money you do or no longer spend with Sony isn't anything they'd notice. The caveat being if you were a flagship purchaser from them, which I doubt was the case.
You assumed it was a point of ego, even said as much.
I don't have to buy shit from Sony if I don't want to, and you can't make me.
They definitely lost more on potential hardware sales the past few decades than I would have spent on content... even if it's not enough for them to notice.
And honestly this is more than they really should even have to do; it goes above their obligation. They're doing Ofcom a favor here: Ofcom doesn't even have to figure out how to block it themselves.
> there's a sense that blocking these imports is an affront to base philosophical freedom in a way that prohibiting physical imports isn't.
It would serve UK legislators well to explore that tingling sense some more before they consider any further efforts in this direction, but that's just my two pence.
Code is speech. Open source projects are an exercise in speaking publicly. This law mandates particular speech in your otherwise Free as in freedom code.
How are you not outraged? People are missing the above forest for the "oh but it's a tiny little easy API and I don't see any downsides" trees.
I think those boomer firms are asleep at the wheel and this kind of market engineering will completely blindside them. Vanguard can't even figure out how to show me my cost basis on the same screen as the one where I sell a security. What could they possibly be doing to prepare for this?