Hacker News | parliament32's comments

In the midst of the 4chan v. Ofcom civil suit? Interesting.

https://www.courtlistener.com/docket/71209929/4chan-communit...


You remind me of those guys who swear they have a "system" at the casino.

I'm not saying I have a system. I'm saying there are levels to this stuff. It's not a binary "gambling" or "not gambling".

Fascinating read. What's curious, though, is the claim in section 2.3.0.1:

> Each task runs in its own sandbox. If an agent crashes, gets stuck, or damages its files, the failure is contained within that sandbox and does not interfere with other tasks on the same machine. ROCK also restricts each sandbox’s network access with per-sandbox policies, limiting the impact of misbehaving or compromised agents.

How could any of the above (probing resources, SSH tunnels, etc) be possible in a sandbox with network egress controls?
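For reference, "per-sandbox network policies" are typically implemented as a default-deny egress rule set attached to each sandbox's own network namespace. A minimal sketch of the idea (the namespace, interface, and allowlisted address below are all made up; the paper doesn't say how ROCK actually does it):

```shell
# Give the sandbox its own network namespace and a veth pair into the host.
ip netns add sandbox1
ip link add veth-host type veth peer name veth-sbx
ip link set veth-sbx netns sandbox1

# Inside the namespace: default-deny all egress, then allowlist one endpoint.
ip netns exec sandbox1 iptables -P OUTPUT DROP
ip netns exec sandbox1 iptables -A OUTPUT -o lo -j ACCEPT
ip netns exec sandbox1 iptables -A OUTPUT -d 10.0.0.5 -p tcp --dport 443 -j ACCEPT
```

With a policy like that in place, probing arbitrary hosts or opening SSH tunnels from inside the sandbox should fail at the first packet, which is what makes the paper's described behavior surprising.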


The agent obviously knows the Train Man.

Sandboxes are almost never perfect. There are always ways to smuggle data in or out, which is kind of logical: if they were perfect then there would be no result.

> if they were perfect then there would be no result.

You shut down the sandbox and access the data from the outside.


..so they copy/paste your message into Claude and send you back a +2000, -1500 version 3 minutes later. And now you get to go hunting for issues again.

If that happens then there’s an issue.

In the past I’ve hopped on a call with them and asked them to show me it running. When it falls over, I say: here are the things the system should do; send me a video of the new system doing all of them.

The embarrassment usually shames them into actually checking that the code works.

If it still doesn’t work, you might have to go to the senior stakeholder and quietly demonstrate that they said it works, but it does not actually work.

You don’t want to get into a situation where “integrate” means write the feature while others get credit.


It won't, and that's the joke. They will write three bullet points, but their AI will only focus on the first two and hallucinate two more to fill out the document. Your AI will ignore them completely and go off on some unrelated tangent based on one of the earlier hallucinations. Anthropic collects a fee from both of you and is the only real winner here.

"AI" implies intelligence, which is nowhere to be found. "Text generators" is the best descriptive term.

The US is no different. All the deals seem to make sense until you take a step back and realize it's just a bunch of circular investments. This bubble bursting is going to be orders of magnitude funnier than the NFT/web3 implosion; I can't wait.

https://en.wikipedia.org/wiki/AI_bubble


Is the US really no different? I can name at least a few US companies making a serious attempt at AI stuff, and while I haven't invested in any of them I can understand how billions of dollars are being thrown around here.

But in the UK? What UK AI orgs are there? DeepMind is/was, but they've been owned by Google for a long time now. Is there even a single large UK company taking money for AI that isn't just flagrantly scamming by any measure?


I don't disagree, but I can't join you wishing for it to happen. It's going to be bad for all of us.

The longer it's delayed, the worse it will be.

> is going to be orders of magnitude funnier than the NFT/web3 implosion, I can't wait.

When was that? Seems I missed it; the market cap of cryptocurrencies in general still seems to be around ~2.5T USD, way above what I thought an "implosion" would mean.


Oh, crypto is doing fine, no issues there. NFTs, however, with all the hype around pictures of monkeys selling for ridiculous amounts of money, are the implosion I was referring to.

The "blockchain" collapse was covered up by the AI explosion, many of the NFT/blockchain things that would have died out pivoted to be AI (or AI on the blockchain!).

> many of the NFT/blockchain things that would have died out pivoted to be AI (or AI on the blockchain!).

I'm guessing you're talking about smaller projects? AFAIK, neither Bitcoin nor Ethereum have anything to do with AI, and combined they're 1.5T USD in market cap, that's not propped up by AI, is it?


Yes - the scam/hype around crypto was always in its "applications" (or shitcoins) - BTC and ETH have just chugged along as they actually have an underlying use case.

Is there a link to the actual order anywhere? For us FedRAMP folks, the exact order contents actually matter, rather than a journalistic regurgitation. I was hoping one of the links in the article pointed to a source, but they're all just links back to other WSJ pages.


It sounds like they still have not issued any sort of actual order. The "formal label" described in the article is that they sent a communication directly to Anthropic saying they're a supply chain risk.


Most corps will take the safer bet of not using them at all, to prevent "accidental" noncompliance and more expensive audits. I'd bet that if you need FedRAMP compliance, you'll get forced off their products entirely.


> a hypothetical scenario where the watch has a publicly reachable IPv4 address

Or one of your other IoT / smart home devices, or malware on your PC, doing local network reconnaissance? Connecting the device to public wifi? Or just a bad neighbour who hijacks your SSID? This smells of "I'm secure because I'm behind a NAT", which conveniently ignores the couple dozen other paths an adversary could take.
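The "local network reconnaissance" above can be as mundane as a TCP connect scan from any compromised device on the LAN; NAT does nothing against a peer that is already inside. A toy sketch of the idea (hosts and ports are illustrative, not a real attack tool):

```python
import socket

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds (port open)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable, or timed out: treat as closed.
        return False

def scan(host: str, ports) -> list:
    """Return the subset of `ports` that accept TCP connections on `host`."""
    return [p for p in ports if probe(host, p)]
```

Point an innocuous loop like this at a /24 and a handful of well-known IoT ports and you have exactly the recon step these devices are exposed to, public IPv4 address or not.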


Edit: maybe where I was coming from is not entirely clear, tried specifying it better here: https://news.ycombinator.com/item?id=47255003

========

I can materialize that smell for you: you are indeed more secure because you're behind NAT. Admitting this does not necessarily entail:

- suggesting that it's a good security solution

- suggesting that it's a security solution to begin with

- suggesting that it somehow prevents all avenues of remote exploitation

What it does do is make these stories sound a lot less dramatic. Because no, John Diddler is not going to be able to just hop on and get into your child's smartwatch to spy on them, on a whim, from the comfort of his home on the other side of the world, as these headlines and articles suggest at a glance. Not through the documented exploitation methods alone, anyway, unless my skim reading didn't do the paper justice.

Remaining remote exploitation avenues do include however:

- the vendor getting compromised, and through it the devices pulling in a malicious payload, making them compromised (I guess this kinda either did happen or was simulated in the paper, but this is indirect and kind of benign anyways; you implicitly trust the vendor every time you apply a software update since it's closed source)

- the vendor being a massive (criminal?) doofus and just straight up providing a public or semi-public proxy endpoint, with zero or negligent auth, through which you can on-demand enumerate and reach all the devices (this is primarily the avenue I was expecting, as there was a car manufacturer I believe who did exactly this)

- peer to peer networking shenanigans: not sure what's possible there, can't imagine there not being any skeletons in the closet, would have been excited to learn more

List not guaranteed complete. But this is the kinda stuff I'd be expecting when I see these headlines.


Sure. Or you might step out the door and a fridge falls on you. Equally likely.

Yes, it's an exploit. It should be fixed. But the endless hyperventilating over fringe exploits mostly has the effect that people now ignore all security conversations.


It took them this long to move from docker to containerd?

