
Foisting the responsibility of the extremely risky transport industry onto the road developers would certainly prevent all undesirable uses of those carriageways. Once they are at last responsible for the risky uses of their technology, like bank robberies and car crashes, the incentive to build these dangerous freeways evaporates.


I think this is meant to show that moving the responsibility this way would be absurd because we don't do it for cars but... yeah, we probably should've done that for cars? Maybe then we'd have safe roads that don't encourage reckless driving.


But I think you're missing their "like bank robberies" point. Punishing the avenue of transport for illegal activity that's unrelated to the transport itself is problematic. I.e., people who are driving safely but using the roads to carry out bad, non-driving-related activities.

It's a stretched metaphor at this point, but I hope that makes sense (:


It is definitely getting stretchy at this point, but there is a point to be made: a lot of roads are built in a way that not only enables but encourages driving much faster than is desirable for the area where they're located. This, among other things, makes these roads more attractive as getaway routes for bank robbers.

If these roads had been designed differently, to naturally enforce the desired speeds, they would be safer in general and, as a side effect, less desirable as getaway routes.

Again, I agree we're really stretching here, but there is a real and common problem where badly designed roads don't just enable but encourage illegal and potentially unsafe driving. Wide, straight, flat roads are fast roads, no matter what the posted speed limit is. If you want low traffic speeds, you need roads designed to be hostile to high speeds.


I think you are imagining a high-speed chase, and I agree with you in that case.

But what I was trying to describe is a "mild mannered" getaway driver. Not fleeing from cops, not speeding. Just calmly driving to and from crimes. Should we punish the road makers for enabling such nefarious activity?

(it's a rhetorical question; I'm just trying to clarify the point)


We wouldn't have roads at all is my point, because no contractor in their right mind would take on unbounded risk for limited gain.


Which, in the case of digital replicas that can feign real people, may be worth considering. Not blanket legislation as proposed here, but something that signals the downstream risks to the developer, to discourage undesired uses.


Then only foreign developers will be able to work with these kinds of technologies... the tools will still be made, they'll just be made by those outside the jurisdiction.


Unless they released a model named "Tom Cruise-inator 3000," I don't see any way to legislate that intent that would provide any assurances to a developer that their misused model couldn't result in them facing significant legal peril. So anything in this ballpark has a huge chilling effect in my view. I think it's far too early in the AI game to even be putting pen to paper on new laws (the first AI bubble hasn't even popped, after all) but I understand that view is not universal.


I would say a text-based model carries a different risk profile compared to video-based ones. At some point (now?) we'll probably need to have the difficult conversation about what level of media impersonation we're comfortable with.


It's messy because media impersonation has been a problem since the advent of communication. In the extreme, we're sort of asking "should we make lying illegal?"

The model (pardon) in my mind is like this:

* The forger of the banknote is punished, not the maker of the quill

* The author of the libelous pamphlet is punished, not the maker of the press

* The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop

In this worldview, how do we handle users of the magic bag of math? We've scarcely thought before that a tool should police its own use. Maybe we can say that because it's too easy to do bad things with, it's crossed some nebulous line. But it's hard to argue for that on principle, as it doesn't sit consistently with the more tangible and well-trodden examples.

With respect to the above, all the harms are clearly articulated in the law as specific crimes (forgery, libel, defamation). The circle I can't square with proposals like the one under discussion is that they open the door for authors of tools to be held responsible for whatever arbitrary and undiscovered harms await from some unknown future use of their work. That seems like a regressive way of crafting law.


> The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop

In this case the guy making the images isn't doing anything wrong either.

Why would we punish him for pasting heads onto images, but not punish the artist who supplied the mannequin of Taylor Swift for the music video to Famous?†

https://www.youtube.com/watch?v=p7FCgw_GlWc

Why would we punish someone for drawing us a picture of Jerry Falwell having sex with his mother when it's fine to describe him doing it?

(Note that this video, like the recent SNL "Home Alone" sketch, has been censored by YouTube and cannot be viewed anonymously. Do we know why YouTube has recently kicked censorship up to these levels?)


Selling anything means taking on unbounded risk for limited gain. That's why the limited liability company exists.

Risk becomes bounded by the total value of the company, and you can start acting rationally.


Historically it's the other way around - limited liability for corporations let juries feel free to award absurdly high judgments against them.


And I am talking about user-facing app development specifically, which has a different risk profile compared to automotive or civil engineering.


> then we'd have safe roads that don't encourage reckless driving.

You mean like speed limits, driver's licenses, seat belts, vehicle fitness checks, and dedicated road police?

I still can't see a legitimate use for anyone cloning anyone else's voice. Yes, satire and fun, but also a bunch of malicious uses as well. The same goes for non-fingerprinted video gen. It's already having a corrosive effect on public trust. Great memes, don't get me wrong, but I'm not sure that's worth it.


Creative work has obvious applications. E.g., AISIS - The Lost Tapes[0] was a sort of Oasis AI tribute album (the songs are all human-written and performed, and then the band used a model of Liam Gallagher's mid-90s voice; Liam approved of the album after hearing it, saying he sounded "mega"). Some people have really unique voices and energy, and even the same artist might lose it over time (e.g. 90s vs 00s Oasis), so you could imagine voice cloning becoming just a standard part of media production.

[0] https://www.youtube.com/watch?v=whB21dr2Hlc


So can image gen systems.

As a former VFX person, I know that a couple of shows are testing out how/where it can be used. (Currently it's still more expensive than traditional VFX, unless you're using it to make base models.)

Productivity gains in the VFX industry over the last 20 years have been immense. (I.e., a mid-budget TV show now has more, and more complex, VFX work than most movies from 10 years ago, and it looks better.)

But does that mean we should allow any bad actor to flood the zone with fake clips of whatever agenda they want to push? No. If I, as a VFX enthusiast, get fooled by GenAI videos (pictures are already a done deal; they're super hard to stop reliably), then we are super fucked.


You said you can't see a legitimate use, but clearly there are legitimate uses (the "no legitimate use" idea is used to justify bad drug policy for example, so we should be skeptical of it). As to whether we should allow it, I don't see how we have a choice. The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones, and eventually today's training supercomputers will be tomorrow's commodity. The whole idea of AI "fingerprinting" is bad anyway; you don't fingerprint that something is inauthentic. You sign that it is authentic.
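
To make that distinction concrete, here's a minimal sketch of the sign-then-verify flow, using Ed25519 via Python's "cryptography" package. The key handling and filenames are illustrative assumptions, not any real product's scheme:

    # Sketch: authenticity comes from signing at the source, not from
    # scanning output for "AI fingerprints". Keys and filenames are
    # hypothetical; real systems need key distribution and rotation.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    signing_key = Ed25519PrivateKey.generate()  # held by the publisher
    public_key = signing_key.public_key()       # distributed to verifiers

    media = open("clip.mp4", "rb").read()       # hypothetical media file
    signature = signing_key.sign(media)         # ships alongside the media

    try:
        public_key.verify(signature, media)
        print("authentic: signed by the holder of this key")
    except InvalidSignature:
        print("no claim: unsigned or altered (note: not proof of fakery)")

The asymmetry is the point: a valid signature proves provenance, while a missing or broken one proves nothing, which is why detection-style fingerprinting is the weaker guarantee.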


> The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones,

Yes, let's just give up as bad actors undermine society, scam everyone, and generally profit off us.

> You sign that it is authentic.

Signing means you denote ownership. A signed message means you can prove where it comes from. A service should own the shit it generates.

Which is the point, because if I cannot reliably see what is generated, how is a normal person able to tell? Being able to provide a mechanism for the normal person to verify is a reasonable ask.


You put the bad actors in prison, or, if they're outside your jurisdiction and they're harming your citizens and you're America, you go murder them. This has to be the solution anyway because the technology is already widely available. You can't make everyone in the world delete the models.

Yes, signing is the way you show something is authentic. Like when the Hunter Biden email thing happened, I didn't understand (well, I did) why the news was pretending we have no way to check whether they're real or whether the laptop was tampered with. It was a Gmail account; Gmail messages are DKIM-signed by Google. Check the signatures! If that's his email address (presumably easy enough to corroborate), done. Missed opportunity to educate the public about the fact that there's all sorts of infrastructure to prove you made/sent something on a computer.
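
For what it's worth, that check is mechanical. A minimal sketch with the Python "dkimpy" package; the filename is hypothetical, it assumes you have the raw message with headers intact, and verification can fail legitimately if Google has since rotated the selector key:

    # Sketch: verify a Gmail message's DKIM signature with dkimpy.
    # Requires the raw exported message, headers included.
    import dkim

    with open("message.eml", "rb") as f:  # hypothetical exported email
        raw = f.read()

    if dkim.verify(raw):
        print("DKIM valid: signed headers/body unaltered since Google signed it")
    else:
        print("verification failed: altered, or signing key no longer published")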


> You put the bad actors in prison,

how do you detect it?


People who get scammed make police reports, same as without voice models.


Well it would also apply to bike lanes.



