If a human is ultimately made up of nothing more than particles obeying the laws of physics, it would in principle be possible to simulate one on paper. Completely impractical, but the same is true of simulating Claude by hand (presuming Anthropic doesn't have some kind of insane secret efficiency breakthrough that lets Claude run on many orders of magnitude fewer FLOPs than other models, which they're cleverly disguising by buying billions of dollars of compute they don't need).
And what special sauce does the web preview use? At some point, someone has to actually parse and process the data. I feel like on a tech site like Hacker News, speculating that Google has somehow done a perfect job of preventing malicious PDFs raises the question: how do you actually do that and prove that it's safe? And is that even possible in perpetuity?
> how do you actually do that and prove that it's safe?
Obviously you can't. You assume it's best in class based on various factors, including the fact that this is the same juggernaut that runs Project Zero. They also somehow manage to secure their cloud offering against malicious clients, so presumably they can manage to parse a PDF to an image without getting pwned.
It would certainly be interesting to know what their internal countermeasures are but I don't know if that's publicized or not.
But you can't even download the allegedly infringing material from the .org site. You can just read about it? So they're abusing the All Writs Act to take down a site that they think is related to some undetermined future nebulously bad thing for their business. If I wasn't on Anna's side before, I sure am now.
Anna's Archive announced they intended to infringe on the label's copyrights by distributing their music without a license. The law allows the court "to prevent or restrain infringement of a copyright" (emphasis mine).
Apparently you can win anything you want in a default judgement, no matter how ridiculous. When you know the other side won't show up because they'd be handcuffed, this is a useful way to achieve your goals.
> think is related to some undetermined future nebulously bad thing for their business
The thing in question being "we copied all your data and are now gonna release it for free". I like what Anna's is doing, but come on! This is dishonest communication if I've ever seen it!
> that they think is related to some undetermined future nebulously bad thing
I mean, Anna's Archive was pretty clear about the future bad thing.
Spotify didn't "think", it wasn't just "related", nothing was "undetermined" or "nebulous".
Anna's Archive explicitly announced they were going to start distributing Spotify's music files. It's not even a case of hosting links to torrents but not seeding -- no, they were going to be doing the seeding too. You can't get more clear-cut than that.
I'm not taking anybody's side here, as to what copyright law ought to be, but Spotify isn't abusing the legal process here.
> Anna's Archive explicitly announced they were going to start distributing Spotify's music files. It's not even a case of hosting links to torrents but not seeding -- no, they were going to be doing the seeding too. You can't get more clear-cut than that.
You can get more "clear cut" than that. You could rule when there were actual damages or the law was actually broken. Committing a crime is not the same as saying you will commit a crime, e.g. "I will rob the Chase Bank Kraemer branch in Orange County." Now try and prosecute me. Yes, I understand this would fall under criminal vs. civil law. The issue is the law being applied in the way that benefits the ones with the most money, more often than not, violating equal protection and further eroding public confidence in the US legal system.
> Committing a crime is not the same as saying you will commit a crime.
No, but it can have a lot of legal repercussions: you can be arrested for making a threat, search warrants may be issued... and in the case of corporations, restraining orders and injunctions. Like here. This is all very standard stuff. There's absolutely nothing exceptional about the court process in this particular case.
>On January 2, the music companies asked for a temporary restraining order, and the court granted it the same day.
That pretty much tells me all I need to know about what courts care about. Can't get a TRO when the government is attacking its people, but when there's a sniff of music sharing? Instant hammer.
EDIT: to answer a response I got about "courts aren't supposed to 'care'", that's the point of a TRO:
>To obtain a TRO, a party must convince the judge that they will suffer immediate irreparable injury unless the order is issued.
TROs are rare, and losing one just means you need to wait for the actual court case. That's why I'm making such a big deal of this. Getting a TRO the same day because maybe one day some website will have archives of music files just shows how out of touch the justice system is with tech.
> Getting a TRO the same day because maybe one day some website will have archives of music files just shows how out of touch the justice system is with tech.
Huh? It's not a "maybe one day", it was a public announcement by AA that they were absolutely going to do this soon.
And TROs are exactly for this, when irreparable harm might occur. Nothing out of touch at all.
Now, granted the site still operates under other domains. But it's certainly expected that they would block the domain controlled by a US TLD, i.e. do the little they can. Really, what else would you possibly expect?
>it was a public announcement by AA that they were absolutely going to do this soon.
"soon" isn't good enough for your typical TRO. To emphasize, "immediate, irreparable damage". And even if it was tomorrow, you really need to be unaware of the internet to argue that dumping a few more torrents into the wild is causing "irreparable damage". Do any of us really buy that?
>Really, what else would you possibly expect?
For the TRO to be denied, as usual, because a few more torrents are not going to bankrupt a billion-dollar music industry, and for the case to proceed at a later time like anything else in the legal system?
If someone being illegally deported to a foreign prison doesn't clear the bar for a TRO, you're not convincing me that some torrents do.
I guess this is a naive question, but where are the lobbies that care about the people? Or even common decency at this point? It really feels like people are treating the US less as an investment and more like a sinking ship to abandon. And they were the ones that shot the holes to begin with.
There aren't any; everyone's out for themselves, further diluting what soft power the masses had. Since like 99% of the population doesn't have a stock value attached to them, might as well join the Mobile Infantry at this point.
"I'm not taking anybody's side here, as to what copyright law ought to be, but Spotify isn't abusing the legal process here".
Normally, only those who own one or more of the exclusive rights in a copyright can actually enforce it. Spotify does not own the copyright in the music involved in the archive, unless they created some of it (which would be an interesting story, actually: Spotify competing with its own artists).
So normally, they would not be able to sue for copyright related violations against anyone.
The other plaintiffs (record companies) are not abusing legal process, but it is unclear what Spotify is doing in this lawsuit.
They almost certainly do not own meaningful copyright in the metadata, either, and that would be a bad precedent to see set.
This is not unusual. Spotify is included because it is a relevant source of evidence as the custodian of the data. It improves the narrative that the data wasn't just indexed but obtained illegally.
Perfect, I can't wait for the deluge of spam texts with real clickable buttons to trick me, instead of just a 320x320 picture of one.
Let's see, RCS:
* Needlessly complicated protocol, such that only a behemoth like Google could administer it
* Intensely leans on device attestation to even let you onto the network
* Ten times the multimedia touchpoints of MMS, so correspondingly 10x the zero-days
* Certainly wiretapped at law enforcement's whim
* Took 15 years to roll out
* And you still get green bubbles on iPhone
I wish we'd all switch to Signal and the telcos would get back to being dumb pipes. But no, we need to support read indicators and ad carousels in our baseband.
When you use WhatsApp, you're trusting a single entity to handle your messages (plus your ISP, I guess). When you use RCS, your messages go through your mobile operator, Google, and perhaps someone else, in an overly complicated way that is also built on top of SMS (a non-internet thing) for extra confusion.
They are, there's a video on YouTube you can find where they interview someone with that job, and they test 10,000 a day. Then they mention that they go home and vape some more.
Foisting the responsibility of the extremely risky transport industry onto the road developers would certainly prevent all undesirable uses of those carriageways. Once they are at last responsible for the risky uses of their technology, like bank robberies and car crashes, the incentive to build these dangerous freeways evaporates.
I think this is meant to show that moving the responsibility this way would be absurd because we don't do it for cars but... yeah, we probably should've done that for cars? Maybe then we'd have safe roads that don't encourage reckless driving.
But I think you're missing their "like bank robberies" point. Punishing the avenue of transport for illegal activity that's unrelated to the transport itself is problematic. I.e. people that are driving safely, but using the roads to carry out bad non-driving-related activities.
It's a stretched metaphor at this point, but I hope that makes sense (:
It is definitely getting stretchy at this point, but there is the point to be made that a lot of roads are built in a way which not only enables but encourages driving much faster than may be desired in the area where they're located. This, among other things, makes these roads more interesting as getaway routes for bank robbers.
If these roads had been designed differently, to naturally enforce the desired speeds, it would be a safer road in general and as a side effect be a less desirable getaway route.
Again I agree we're really stretching here, but there is a real common problem where badly designed roads don't just enable but encourage illegal and potentially unsafe driving. Wide, straight, flat roads are fast roads, no matter what the posted speed limit is. If you want low traffic speeds you need roads to be designed to be hostile to high speeds.
I think you are imagining a high-speed chase, and I agree with you in that case.
But what I was trying to describe is a "mild mannered" getaway driver. Not fleeing from cops, not speeding. Just calmly driving to and from crimes. Should we punish the road makers for enabling such nefarious activity?
(it's a rhetorical question; I'm just trying to clarify the point)
Which, in the case of digital replicas that can feign real people, may be worth considering. Not blanket legislation as proposed here, but something that signals the downstream risks to the developer, to prevent undesired uses.
Then only foreign developers will be able to work with these kinds of technologies... the tools will still be made, they'll just be made by those outside jurisdiction.
Unless they released a model named "Tom Cruise-inator 3000," I don't see any way to legislate that intent that would provide any assurances to a developer that their misused model couldn't result in them facing significant legal peril. So anything in this ballpark has a huge chilling effect in my view. I think it's far too early in the AI game to even be putting pen to paper on new laws (the first AI bubble hasn't even popped, after all) but I understand that view is not universal.
I would say a text-based model carries a different risk profile compared to video-based ones. At some point (now?) we'd probably need to have the difficult conversation of what level of media-impersonation we are comfortable with.
It's messy because media impersonation has been a problem since the advent of communication. In the extreme, we're sort of asking "should we make lying illegal?"
The model (pardon) in my mind is like this:
* The forger of the banknote is punished, not the maker of the quill
* The author of the libelous pamphlet is punished, not the maker of the press
* The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop
In this world view, how do we handle users of the magic bag of math? We've scarcely thought before that a tool should police its own use. Maybe, we can say, because it's too easy to do bad things with, it's crossed some nebulous line. But it's hard to argue for that on principle, as it doesn't sit consistently with the more tangible and well-trodden examples.
With respect to the above, all the harms are clearly articulated in the law as specific crimes (forgery, libel, defamation). The circle I can't square with proposals like the one under discussion is that they open the door for authors of tools to be held responsible for whatever arbitrary and undiscovered harms await from some unknown future use of their work. That seems like a regressive way of crafting law.
> The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop
In this case the guy making the images isn't doing anything wrong either.
Why would we punish him for pasting heads onto images, but not punish the artist who supplied the mannequin of Taylor Swift for the music video to Famous?†
Why would we punish someone for drawing us a picture of Jerry Falwell having sex with his mother when it's fine to describe him doing it?
(Note that this video, like the recent SNL "Home Alone" sketch, has been censored by YouTube and cannot be viewed anonymously. Do we know why YouTube has recently kicked censorship up to these levels?)
> then we'd have safe roads that don't encourage reckless driving.
You mean like speed limits, drivers licenses, seat belts, vehicle fitness and specific police for the roads?
I still can't see a legitimate use for anyone cloning anyone else's voice. Yes, satire and fun, but also a bunch of malicious uses as well. The same goes for non-fingerprinted video gen. It's already having a corrosive effect on public trust. Great memes, don't get me wrong, but I'm not sure that's worth it.
Creative work has obvious applications, e.g. AISIS - The Lost Tapes[0] was a sort of Oasis AI tribute album (the songs are all human-written and performed, and the band used a model of Liam Gallagher's mid-90s voice. Liam approved of the album after hearing it, saying he sounded "mega"). Some people have really unique voices and energy, and even the same artist might lose it over time (e.g. 90s vs 00s Oasis), so you could imagine voice cloning becoming just a standard part of media production.
As a former VFX person, I know that a couple of shows are testing out how/where it can be used. (Currently it's still more expensive than trad VFX, unless you are using it to make base models.)
Productivity gains in the VFX industry over the last 20 years have been immense (i.e. a mid-budget TV show has more, and more complex, VFX work than most movies from 10 years ago, and it looks better).
But does that mean we should allow any bad actor to flood the floor with fake clips of whatever agenda they want to push? No. If I, as a VFX enthusiast, get fooled by GenAI videos (still images are a done deal, already super hard to spot reliably), then we are super fucked.
You said you can't see a legitimate use, but clearly there are legitimate uses (the "no legitimate use" idea is used to justify bad drug policy, for example, so we should be skeptical of it). As to whether we should allow it, I don't see how we have a choice. The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones, and eventually today's training supercomputers will be tomorrow's commodity. The whole idea of AI "fingerprinting" is backwards anyway; you don't fingerprint that something is inauthentic. You sign that it is authentic.
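To make the sign-don't-fingerprint idea concrete, here's a minimal sketch using Node's built-in Ed25519 support. The payload and key handling are made up for illustration; real provenance schemes (C2PA and the like) embed the signed manifest in the file's metadata rather than shipping it separately:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// A publisher generates a keypair once; the public key is published
// (DNS, a cert, a registry) so anyone can check authenticity later.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Sign the media bytes at publication time (hypothetical payload).
const media = Buffer.from("bytes of the original recording");
const signature = sign(null, media, privateKey); // Ed25519 takes no digest name

// Verification proves this exact content came from the key holder.
// Note the asymmetry: a missing signature proves nothing, which is why
// trying to "fingerprint" fakes is the wrong direction entirely.
console.log(verify(null, media, publicKey, signature));                   // true
console.log(verify(null, Buffer.from("tampered"), publicKey, signature)); // false
```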
> The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones,
Yes, let's just give up as bad actors undermine society, scam everyone, and generally profit from us.
> You sign that it is authentic.
Signing means you denote ownership. A signed message means you can prove where it comes from. A service should own the shit it generates.
Which is the point: if I cannot reliably tell what is generated, how is a normal person able to? Providing a mechanism for the normal person to verify is a reasonable ask.
You put the bad actors in prison, or, if they're outside your jurisdiction and they're harming your citizens and you're America, you go murder them. This has to be the solution anyway, because the technology is already widely available. You can't make everyone in the world delete the models.
Yes, signing, so that's the way you show something is authentic. Like when the Hunter Biden email thing happened, I didn't understand (well, I did) why the news was pretending we have no way to check whether the emails were real or whether the laptop was tampered with. It was a Gmail account; those messages are signed by Google. Check the signatures! If that's his email address (presumably easy enough to corroborate), done. Missed opportunity to educate the public about the fact that there's all sorts of infrastructure to prove you made/sent something on a computer.
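If anyone wants to try this at home, DKIM checks are scriptable. A rough sketch assuming the `mailauth` npm package and a hypothetical saved .eml file:

```typescript
import { readFile } from "node:fs/promises";
// Deep import per mailauth's docs; the package ships no TS types here.
// @ts-ignore
import { dkimVerify } from "mailauth/lib/dkim/verify";

// "laptop-email.eml" is a hypothetical raw message saved from the mailbox.
const raw = await readFile("laptop-email.eml");
const { results } = await dkimVerify(raw);

for (const result of results) {
  // Each entry reports pass/fail plus the signing domain, e.g. gmail.com.
  console.log(result.info); // e.g. 'dkim=pass header.i=@gmail.com ...'
}
```

A passing DKIM signature only proves the signed headers and body are exactly what Google's servers relayed, not who typed the message, so it corroborates provenance rather than settling it.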
> open their password manager which also might need you to authenticate, type in their master password, search for the name of the said website, copy the password, paste it in
This is one way to guarantee you'll eventually fall for a phishing attack. Are we really running URL-unaware password managers in the year 2026?
>Are we really running URL-unaware password managers in the year 2026?
URL-aware browser plugins for autofilling passwords can also make people _more_ susceptible to phishing.
Password manager plugins sometimes not working correctly changes the Bayesian probabilities in the mind, such that username/password fields that remain unfilled become normal and expected on legitimate websites. If that happens enough, it inadvertently trains sophisticated, computer-literate users to lower their guard when encountering true phishing websites in the future. I wrote more on how this happens to really smart technical people: https://news.ycombinator.com/item?id=45179643
Password browser plugins being imperfect can simultaneously increase AND decrease security because of interactions with human psychology.
Even if autofill breaks, the moment it does, the security-aware move is to actually read the URL you're at, not to start copy-pasting like it's the wild west.
> autofilling passwords can also make people _more_ susceptible to phishing
No, it doesn't. What it does is generally make people _less_ susceptible to phishing; but the moment you stop paying attention when autofill breaks is the moment you can STILL get phished. In 90% of cases, though, the autofill will HELP you avoid getting phished.
What an absolutely bananas thing to say, that autofilling passwords makes people more susceptible to phishing. Completely wrong, and borderline harmful to spread things like this.
It can also not "break", autofill your credentials, and in submission the data ends up going to the attacker (see my other comment on DOM-based clickjacking)
> The new technique detailed by Tóth essentially involves using a malicious script to manipulate UI elements in a web page that browser extensions inject into the DOM -- for example, auto-fill prompts, by making them invisible by setting their opacity to zero
The website is compromised, all bets are off at that point. Of course a password manager, regardless of how good it is, won't defeat the website itself being hacked before you enter your credentials.
That's not a "hijack of autofill", it's a "attacker can put whatever they want in the frontend", and nothing will protect users against that.
And even if that is a potential issue, using it as an argument for why someone shouldn't use a password manager feels like completely missing the larger picture here.
I never said someone should not use a password manager.
I'm pointing out that password manager autofill can be used in an attack without the person's knowledge.
The site itself does not have to be compromised, btw; this could come through the device itself being compromised, or a poisoned popup on a website without referrer checks. There are probably quite a few ways I haven't considered that could get this to work.
I don't think your other comment supports your assertion. I've experienced Bitwarden failing to auto-fill due to quirks on websites, but I've never seen it fail to identify the domain correctly.
You link to Bitwarden's issues mentioning autofill and while it's true that autofill might break, if you click on the extension icon it's going to present you with a list of credentials for the current domain and give you options to quickly copy the username and password to your clipboard.
If that list is empty then I'm immediately put on high alert for phishing, but so far it's always been due to the website changing its URL/domain. I retrace my steps, make sure I'm on the right domain, then I have to explicitly search for the old entry and update it with the new URL.
That said, I've seen people do: Empty account list -> The darn password manager is misbehaving again -> Search and copy the password. I wouldn't consider those people to be sophisticated users since they're misunderstanding and defying the safety mechanisms.
Wrong. If my password manager doesn't auto-fill, I am immediately far more wary. If I didn't have any URL matching in the password manager, I would very quickly stop paying close enough attention to the URL, because I'd have to do it too frequently.
It's also an issue that extensions like 1Password are _too_ URL-aware: until recently they used heuristics and ignored subdomains when matching credentials. This meant that we used to get a list of almost a hundred options when logging into our AWS infrastructure, no matter which actual domain we used. Someone could have used this vulnerability as part of a phishing campaign.
> extensions like 1Password are _too_ URL-aware: until recently they used heuristics and ignored subdomains when matching credentials
I've used 1Password for years (Linux+Firefox though, FWIW), and this never happened to me or our family. I did discover though that the autofill basically went by hierarchy in the URI to figure out what to show, so if you specify "example.com" and you're on "login.example.com", you'll see everything matching "*example.com" which actually is to be expected. If you only want to see it on one subdomain, you need to specify it in the record/item.
That it ignored the subdomains fully sounds like it was a bug on your particular platform, because 1Password never did that for me, but I remember being slightly confused by the behavior initially, until I fixed my items.
> 1Password currently only suggests items based on the root domain. I can see the value of having 1Password suggest only exact matches based on their subdomain, especially for the use case you have described.
> As it currently stands, 1Password only matches on the second level domain (i.e. sample.com in your example). While I can't promise anything, this is something we've heard frequently, so I'll share your thoughts with the team.
Now it is:
> You’ll see the item as a suggestion on any page that’s part of the website, including subdomains. The item may also be suggested on related websites known to belong to the same organization.
It's that second sentence which is the problem: by being "smart", it "suggested" items from one AWS domain that ought never to have been suggested on another, unrelated AWS domain.
I work at a company where I have two Okta accounts (because hey, why not) on two .okta.com subdomains.
Bitwarden _randomly_ messes up the two subdomains, and most of the time (but not always, which actually seems strange) it fills the form with the wrong password. I don't know why. I know there is an option to make domain matching stricter, but you can't configure it on a per-item basis, only for the whole vault.
Every browser-based Bitwarden client I have used has the option to set the autofill match type on single items as well as a global default. Find the login item, click edit, and scroll to the autofill options, where each URI is listed with a gear icon next to it. Click the gear and select the appropriate match type.
For the absolute majority of use cases, "host" should be the default, but I have found uses for both "base domain" and "regular expression" in some special cases.
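For illustration, here's roughly what those match types do for the two-Okta case above. A simplified sketch: the helper is mine, and real matchers use the Public Suffix List instead of naively taking the last two DNS labels, and also handle ports and schemes:

```typescript
// Rough illustration of why "host" beats "base domain" for two tenants
// on subdomains of the same service.
const savedUri = "https://companya.okta.com/login";

type MatchType = "base domain" | "host" | "exact";

function matches(pageUrl: string, mode: MatchType): boolean {
  const page = new URL(pageUrl);
  const saved = new URL(savedUri);
  switch (mode) {
    case "base domain": {
      // Compares only the registrable domain, so every *.okta.com
      // subdomain matches every other one -- the wrong-fill behavior above.
      const base = (host: string) => host.split(".").slice(-2).join(".");
      return base(page.hostname) === base(saved.hostname);
    }
    case "host":
      // Full hostname must match exactly; other tenants' subdomains don't.
      return page.hostname === saved.hostname;
    case "exact":
      // Entire URL, path included, must match.
      return pageUrl === savedUri;
  }
}

console.log(matches("https://companyb.okta.com/login", "base domain")); // true (bad)
console.log(matches("https://companyb.okta.com/login", "host"));        // false (safe)
```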
Normal browser extension Bitwarden Ctrl-Shift-L autofill defaults to the most recently used entry when there are multiple matches, afaik.
You can indeed configure it on a per-item basis. The vault-wide setting you found is just the default for items that don't have an override set. Click on the domain/URL matching setting in the individual credential and you can change it to an exact host match.
Which is a legitimate concern, since they are a gaping hole in security and isolation. Visiting a website should be treated like a phone call from the bank: if you get called/emailed, you don't act on the information there; you call back / visit the site yourself, e.g. from bookmarks, or copy the URL from the password manager.
I am now wondering if Safari's integration with the system-wide password manager is similar to having a 1Password browser extension installed in a Chromium browser.
Look up DOM-based clickjacking. It will "autofill" the field, but on submission it sends the data to an attacker.
"The new technique detailed by Tóth essentially involves using a malicious script to manipulate UI elements in a web page that browser extensions inject into the DOM -- for example, auto-fill prompts, by making them invisible by setting their opacity to zero.
The research specifically focused on 11 popular password manager browser add-ons, ranging from 1Password to iCloud Passwords, all of which have been found to be susceptible to DOM-based extension clickjacking. Collectively, these extensions have millions of users."
""All password managers filled credentials not only to the 'main' domain, but also to all subdomains," Tóth explained. "An attacker could easily find XSS or other vulnerabilities and steal the user's stored credentials with a single click (10 out of 11), including TOTP (9 out of 11). In some scenarios, passkey authentication could also be exploited (8 out of 11).""
Yes, we should run URL-unaware managers, but nearly no one understands security, especially in the browser. Look at the permissions requested by the #1 manager on Firefox (Authenticator):
Input data to the clipboard
Access your data for sites in the dropboxapi.com domain
Access your data for www.google.com
Access your data for www.googleapis.com
Access your data for accounts.google.com
Access your data for graph.microsoft.com
Access your data for login.microsoftonline.com
Yep! And #2 (2FAS Auth):
Display notifications to you
Access browser tabs
Access browser activity during navigation
Access your data for all websites
Even better, maybe at some point web browsers can get their sh* together and build a better permission system (and not just disable functionality, like manifest v3 does). For now, the majority of people trust opaque organizations shoving unknown code at them that runs with way too many permissions on their computers.
Talking about unknown code, there is a lot of work to be done on reproducible builds; anything touching the web has nearly nothing in that regard.
That's a very smug take, especially when you encounter websites every day that don't autofill for whatever reason (as another poster already showed with some examples), or, in my case, the 1Password extension in Safari failing to connect to the main 1Password daemon, or a number of other issues that make this still commonplace in 2026.
And that's for me, a technical user using a password manager.
I also find the 1Password browser (Safari) extension to be pitifully poor. But there's a neat workaround: set up a hotkey for 'Show Quick Access'. I use Ctrl+Opt+\.
This pops up 1Password's overlay, but it is still URL-aware. I find it works almost universally. It'll show you what it's going to fill: just hit Return and it'll be done.
It doesn't even care what browser you're in. Works across the lot. Of course it isn't fully integrated so Passkeys won't work.
I'm using Apple's Password Manager (native app on iOS & macOS), but didn't install its browser extension that can do autofill because for me it wasn't as convenient (it has a bad UX, unreliable autofill, etc.)
So, when I'm prompted to log in somewhere, I open the password manager and repeat the steps you just mentioned. It does add extra steps to the process, but I don't think it makes it less safe than having an autofill extension, which requires a ton of permissions and is more prone to compromises. And yes, my manual method also means I have to rely on me being aware of the URLs I'm on, but I usually bookmark my main services, so it's working fine for me this way. I also treat all emails as spam and/or an attack unless I verify them by the domain, and whether I had just recently requested to log in or requested a password change, etc.
At the end of the day, it boils down to us paying attention to every action we take, regardless of the measures we take, as new and different methods are being deployed to own us every day.
Behavioral (invisible) analytics alone is the secret trillion dollar industry that online advertisers want to distract you from by focusing on the morality of ad blocking.
A good blocker should block many of those scripts too, but there's no stopping server-side analytics at scale.
What is? That you can run us on paper? That seems demonstrably false