I'm sure there are better examples, but your fridge idea doesn't work. Fridges already operate on the edge of freezing, so if you make it a little cooler you will ruin all your food. Also 3-5pm is peak hangry time.
If you choose not to use software written with LLM assistance, you'll be using, to a first approximation, 0% of software in the coming years.
Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.
Studies have shown that AI is significantly better than humans at manipulating opinions. Mechanically, LLMs choose the best next token, trained over all human writing, so it shouldn't be a surprise that the words and prose AIs use are more persuasive on average.
Unlike the TPB founders, who were convicted in 2009 because copyright infringement also violates Swedish law, the 4chan lawyers are correct that they are breaking no U.S. law. The 1A provides broad protections.
I am quite hopeful. One benchmark that was passed only very recently is Levelized Full System Cost parity in Texas. That is, the total cost of generating electricity from renewables, importantly including storage and infrastructure costs, became equivalent to other options.
I don't think this gets talked about enough, because it's truly a milestone.
It's still more expensive in colder places, but the math is changing very fast.
Of course it can "model time". It has access to the system clock and knows its heartbeat rate. Can you "model time" when you are asleep? Whatever "model time" means, frankly it sounds like projection.
> Or feeling things for that matter.
Per the philosophical zombie thought experiment, the conclusion is that qualia don't matter, only I/O. If two systems have the same behavior, there is no meaningful difference.
Modeling time means there is some way of e.g. evolving state over time, or projecting change over a period of time. Or being able to count time passing (without being told by external sources).
The LLM has no model of time. It is being called at regular intervals. If the cronjob misses two days, or even a whole year of calling the LLM, the LLM will not respond any differently than if the cronjob had been on time.
You keep saying "LLM has no model of time" but that doesn't inherently mean anything.
If you give it a clock as input, it will observe the passage of time and can model it. The entire explosion of the AI industry due to LLMs is the observation that their abilities generalize, so there's no reason to believe that LLMs can't "model time". They can, if you ask them to and give them the proper input.
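To make the disagreement concrete, here's a minimal sketch of the "give it a clock" idea, assuming timestamps are injected into each prompt (the `build_prompt` helper and heartbeat framing are my own invention, not any real LLM API): if every invocation carries the current and previous-call timestamps, a missed cronjob run shows up to the model as a longer elapsed interval, directly answering the "it can't tell the cronjob skipped a year" objection.

```python
from datetime import datetime, timezone

def build_prompt(prev_call: datetime, now: datetime, task: str) -> str:
    """Hypothetical prompt builder: embeds wall-clock context so the
    model can reason about elapsed time between invocations."""
    elapsed = now - prev_call
    return (
        f"Current time: {now.isoformat()}\n"
        f"Previous heartbeat: {prev_call.isoformat()}\n"
        f"Elapsed since last heartbeat: {elapsed}\n"
        f"Task: {task}"
    )

# Simulate a cronjob that skipped a run: two days elapsed, not one.
prev = datetime(2024, 1, 1, tzinfo=timezone.utc)
now = datetime(2024, 1, 3, tzinfo=timezone.utc)
print(build_prompt(prev, now, "Note anything unusual about the schedule."))
```

Whether the model then reasons well about that interval is a separate empirical question, but the information is at least present in its input.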