Hacker News | aydyn's comments

I'm sure there are better examples, but your fridge idea doesn't work. Fridges already operate on the edge of freezing, so if you make it a little cooler you will ruin all your food. Also 3-5pm is peak hangry time.

A modern fridge also uses approximately five watts, on average. There are far better targets.

good thing the grid cares about long-term average real power and not instantaneous reactive power, then.

If you choose not to use software written with LLM assistance, you'll be using, to a first approximation, 0% of software in the coming years.

Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.


Studies have shown that AI is significantly better than humans at manipulating opinions. Mechanically, LLMs choose the most likely next token, trained over all human writing, so it shouldn't be a surprise that the words and prose AI uses are, on average, more persuasive.


critical thinking is alive and well


Unlike the TPB founders, who were convicted in 2009 because copyright infringement also violates Swedish law, the 4chan lawyers are correct that they are breaking no U.S. law. The First Amendment provides broad protections.


They compare to Claude and Gemini in their tweet


I am quite hopeful. One benchmark that was passed only very recently was Levelized Full System Cost parity in Texas. That is, the total cost of generating electricity via renewables, importantly including storage and infrastructure costs, became equivalent to the alternatives.

I don't think this gets talked about enough, because it's truly a milestone.

It's still more expensive in colder places, but the math is changing very fast.


You're missing the important part about needing to model a tiny paw mashing on the keyboard. /dev/random is insufficient.


If you want a picture of the future of SWE, imagine a tiny paw mashing on a keyboard — for ever


Claude says it was safe too. At a bare minimum, the flagship models of these companies should understand their own ToS. Sheesh.


It's not just waiting for input; it has a heartbeat.md prompt that runs every X minutes. That gives it a feeling that it's always on and thinking.
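A minimal sketch of that setup (the `send_to_agent` call and the interval value are hypothetical stand-ins, not from any real agent framework):

```python
import time

HEARTBEAT_INTERVAL_S = 300  # "every X minutes"; the value here is illustrative

def send_to_agent(prompt: str) -> None:
    """Hypothetical stand-in for the real LLM API call."""
    print(f"[heartbeat] prompting agent with {len(prompt)} chars")

def heartbeat_loop(prompt: str, ticks: int) -> int:
    """Re-prompt the agent on a fixed cadence. The agent is 'always on'
    only in the sense that this loop keeps waking it; nothing runs
    between ticks."""
    fired = 0
    for _ in range(ticks):
        send_to_agent(prompt)
        fired += 1
        # time.sleep(HEARTBEAT_INTERVAL_S)  # the real loop would wait here
    return fired
```

In the real setup the prompt would be read from heartbeat.md and the loop would sleep between ticks.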


That gives _you_ a feeling that it's always on. It still can't model time.


Or feeling things for that matter.


Of course it can "model time". It has access to the system clock and knows its heartbeat rate. Can you "model time" while you're asleep? Whatever "model time" means, to be frank it sounds like projection.

> Or feeling things for that matter.

The philosophical zombie thought experiment: the conclusion is that qualia don't matter, only I/O. If two systems have the same behavior, there is no meaningful difference.


Modeling time means there is some way of, e.g., evolving state over time, projecting change over a period of time, or counting time passing (without being told by external sources).

The LLM has no model of time. It is being called at regular intervals. If the cronjob misses two days, or even a whole year, of calls, the LLM will not respond any differently than if the cronjob had been on time.


You keep saying "the LLM has no model of time", but that doesn't inherently mean anything.

If you give it a clock as input, it will observe the passage of time and can model it. The entire explosion of the AI industry due to LLMs rests on the observation that their abilities generalize, so there's no reason to believe that LLMs can't "model time". They can, if you tell them to and give them the proper input.


Claws are also potentially dangerous, so it is a pretty apt analogy.


That’s also very apt yes.


