I object to the framing of the title: the user behind the bot is the one who should be held accountable, not the "AI Agent". Calling them "agents" is correct: they act on behalf of their principals. And it is the principals who should be held to account for the actions of their agents.
If we are to consider them truly intelligent then they have to have responsibility for what they do. If they're just probability machines then they're the responsibility of their owners.
If they're children then their parents, i.e. creators, are responsible.
They aren't truly intelligent, so we shouldn't consider them to be. They're a system that, for a given stream of input tokens, predicts the most likely next output token. The fact that their training dataset is so big makes them very good at predicting the next token in all sorts of contexts (the ones they have training data for, anyway), but that's not the same as "thinking". And that's why they go so bizarrely off the rails if your input context is some wild prompt that has them play-acting.
We aren't, and intelligence isn't the question; actual agency (in the psychological sense) is. If you install some fancy model but don't give it anything to do, it won't do anything. If you put a human in an empty house somewhere, they will start exploring their options. And mind you, we're not purely driven by survival either; neither art nor culture would exist if that were the case.
I agree -- I'm trying to point out to the over-enthusiasts that if these systems have really reached intelligence, that has lots of consequences they probably don't want. Hence they shouldn't be too eager to declare that the future has arrived.
I'm not sure that a minimal kind of agency is super complicated, BTW. Perhaps it's just a matter of connecting the LLM into a loop that processes its sensory input into output continuously? But you're right that it lacks desires, needs, etc., so its thinking is undirected without a human.
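To make that concrete, here's a toy sketch of the kind of loop I mean -- purely hypothetical, with `model_step`, `read_sensors`, and `act` standing in for whatever model and I/O you'd actually wire up:

```python
# Toy sketch only: model_step, read_sensors and act are hypothetical
# placeholders, not any particular API.
def model_step(context: str) -> str:
    """Stand-in for a call to whatever model you've installed."""
    return "<model output> "

def run_agent(read_sensors, act, max_steps=10):
    context = ""
    for _ in range(max_steps):          # continuous loop (bounded here for demo)
        context += read_sensors()       # fold sensory input into the context
        output = model_step(context)    # predict the next output
        act(output)                     # act on it...
        context += output               # ...and feed it back in
    # Note what's missing: nothing in this loop supplies desires or needs,
    # so the "thinking" stays undirected unless a human injects a goal.

# Example wiring with trivial stand-ins:
run_agent(read_sensors=lambda: "[sensor reading] ", act=print)
```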
They are different, and the biggest reason is (I suspect) that a Zulip workspace is self-contained while a Matrix server is able to federate with other Matrix servers.
Other European institutions are also adopting Matrix, so federation may turn out to be an important feature.
Just because the hooks have the label "pre-commit" doesn't mean you have to run them before committing :).
I, too, want checks per change in jj -- but (in part because I need to work with people who are still using git) I still need to be able to use the same checks even if I'm not running them at the same point in the commit cycle.
So I have an alias, `jj pre-commit`, that I run when I want to validate my commits. And another, `jj pre-commit-branch`, that runs on a well-defined set of commits relative to @. They do use `pre-commit` internally, so I'm staying compatible with git users' use of the `pre-commit` tool.
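For anyone curious, the aliases are just entries in jj's TOML config that shell out to the pre-commit CLI. Something along these lines (a simplified sketch, not my actual config -- mine selects the per-commit file sets described above rather than running on everything):

```toml
# Sketch of ~/.config/jj/config.toml -- simplified, not the real aliases.
[aliases]
# "jj pre-commit": run the pre-commit hooks via jj's external-command escape hatch.
pre-commit = ["util", "exec", "--", "pre-commit", "run", "--all-files"]
```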
What I can't yet do is run the checks in the background or store the check status in jj's data store. I do store the tree-ish of passing checks though, so it's really quick to re-run.
There are two layers, both relating to concentration.
Driving a car takes effort. ADAS features (or even just plain regular "driving systems") can reduce the cognitive load, which makes for safer driving. As much as I enjoy driving with a manual transmission, an automatic is less tiring for long journeys. Not having to occupy my mind with gear changes frees me up to pay more attention to my surroundings. Adaptive cruise control further reduces cognitive load.
The danger comes when assistance starts to replace attention. Tesla's "full self-driving" falls into this category, where the car doesn't need continuous inputs but the driver is still de jure in charge of the vehicle. Humans just aren't capable of concentrating on monitoring for an extended period.
IMNSHO yes. But not necessarily so drastically -- a VW (to pick an example I've seen evidence of: https://www.thedrive.com/article/10131/the-volkswagen-arteon...) will ping at you if you stop touching the steering wheel for ten seconds or so, and will actively monitor to make sure your attention is on the road. A Tesla won't, or at least wouldn't in 2018, to the point where someone was convicted of dangerous driving having climbed into the passenger seat while driving along the M1: https://www.bbc.co.uk/news/uk-england-beds-bucks-herts-43934...
You should probably be running your renewal pipeline more frequently than that: if you had let your ACME client set itself up on a single server, it would probably run every 12h for a 90-day certificate. The client won't actually fetch a new certificate until the old one is old enough to be worth renewing, and running that often gives you many more opportunities to notice that the pipeline isn't doing what you expect than if you only run it when you expect to receive a new certificate.
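To put rough numbers on it (assuming the common Let's Encrypt-style defaults of a 90-day certificate renewed once fewer than 30 days remain -- not anything specific to your setup):

```python
from datetime import datetime, timedelta, timezone

RENEW_WINDOW = timedelta(days=30)   # typical "worth renewing" threshold

def should_renew(not_after: datetime, now: datetime | None = None) -> bool:
    """Mirrors the usual client behaviour: only renew near expiry."""
    now = now or datetime.now(timezone.utc)
    return not_after - now < RENEW_WINDOW

# A 90-day cert on a 12-hour schedule gives ~120 no-op runs before the
# first real renewal -- each one a chance to spot a broken pipeline early.
issued = datetime.now(timezone.utc)
not_after = issued + timedelta(days=90)
print(should_renew(not_after))      # False until ~60 days after issuance
```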
There's a linear buffer of pages, most of which come from the pool. It's not clear to me under what conditions these are returned to the pool -- is it when the specific session terminates?
When a non-standard page reaches the point of being recycled, it'll instead be re-added to the list but with the standard size. That effectively leaks the extra space above the standard size. But when the buffer is released (because the session ends?), the pool is also released, which releases all the standard-sized pages but leaks the custom-sized ones?
Which suggests that the issue may be even rarer than it initially looked to me: I tend to open a small number of sessions and then use them continuously, rather than starting new sessions during the lifetime of the process. If I never terminated a session, I would never fully leak the memory?
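To make my reading concrete, here's a toy model of the behaviour I think is being described -- purely my interpretation, not the project's actual code, and the sizes are made up:

```python
STANDARD = 4096  # made-up "standard" page size

class PagePool:
    """Toy model: a pool of standard-sized pages backing a linear buffer."""
    def __init__(self):
        self.free_pages = []      # standard-sized pages available for reuse
        self.leaked_bytes = 0     # space that is never handed back

    def recycle(self, page_size: int):
        # A page coming back from the buffer is re-added at the standard
        # size, so anything above STANDARD is effectively lost.
        if page_size > STANDARD:
            self.leaked_bytes += page_size - STANDARD
        self.free_pages.append(STANDARD)

    def release(self, pages_still_in_buffer: list[int]):
        # End of session(?): the pool's standard pages are freed, but any
        # custom-sized pages still in the buffer are simply dropped.
        self.leaked_bytes += sum(p for p in pages_still_in_buffer if p > STANDARD)
        self.free_pages.clear()

pool = PagePool()
pool.recycle(16 * 4096)           # one oversized page gets recycled
pool.release([4096, 8 * 4096])    # session ends with one custom page left
print(pool.leaked_bytes)          # 15*4096 + 8*4096 under this reading
```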