Sure. It's the market that's irrational, not the people here. The people here are the truly enlightened rational ones and know what the true value of things is.
Don't want to remember how much money I spent on a Copenhagen Wheel for my wife when she was in school. At least some kind souls published a way to unbrick it.
>Will increase your Linux skills because diversity always helps the human brain
Is this still true, given how much runs through systemd now? I thought about trying out FreeBSD last time I got a new computer, but decided to stick with Debian to keep building skills that transfer to other Linux systems.
Diversity of programming languages, operating systems, cultures, human languages, countries, music etc. always gives a fresh perspective, I've found. You may go back to what you prefer in the end, but it gives you lessons at a "higher level" :-)
> Is this still true, given how much runs through systemd now?
Yes, still true. On FreeBSD you will realize what complexity systemd might be hiding from you and what additional features it provides. BTW, I don't actually like rc init on FreeBSD that much! I feel that rc.d could learn a lot from more modern init systems like systemd or dinit. I don't like reading highly complex rc scripts!
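For contrast, here's roughly what a minimal rc.d script looks like for a hypothetical daemon (the service name and binary path are made up; the heavy lifting is done by sourcing `/etc/rc.subr`):

```shell
#!/bin/sh
# Minimal FreeBSD rc.d script for a hypothetical "myserviced" daemon.

# PROVIDE: myservice
# REQUIRE: NETWORKING

. /etc/rc.subr

name="myservice"
rcvar="myservice_enable"            # enabled via myservice_enable="YES" in /etc/rc.conf
command="/usr/local/bin/myserviced" # hypothetical daemon binary

load_rc_config $name
run_rc_command "$1"
```

The simple cases stay about this short; the complexity I'm complaining about shows up once a script needs custom start/stop/status functions, PID-file juggling, or ordering logic beyond the PROVIDE/REQUIRE comments.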
But the programming language has explicitly laid out rules. It was not trained on those sets of rules, but it was trained on many trillions of lines of code. It has a map of how programs work, and an explanation of this new language. It's using training data and data it's fed to generate that result.
I don't know how you'd prompt this, but if there were a clean example of an A.I. coming up with an idea that's completely novel in more than the details, it would be compelling evidence that these next-token predictors have some weird emergent properties that don't necessarily follow from intricate, sophisticated webs of token prediction.
E.g. "What might be a room-temperature superconductor?" -> "some plausible iteration on existing high-temperature superconductors based on our current understanding of the underlying physics" would not be outside how we currently understand these models.
"What might be a room-temperature superconductor?" -> "some completely outlandish material that nobody has studied before and that, when examined, seems to superconduct at higher temperatures than we would predict" would provoke some serious questions.
A fun experiment I've heard suggested is training a model on all scientific understanding just up to some counterintuitive quantum leap in scientific understanding, say, Einstein's theory of relativity, and then seeing if you can prompt it to "discover" or "invent" said leap, without explicitly telling it what to look for. This would of course be pretty hard to prove, but if you could get it to work on a local model, publish the training set and parameters so that anyone can replicate it on their own machine, that could be pretty darn compelling.
Why would it matter whether or not the robot looks something up if it makes a novel discovery?
Why would it matter that the discovery wasn't just novel but felt like an unconventional one to me, someone who is probably a total outsider to that field?
Both of those feel subjective or at least hard to sustain.
Look. What I'm trying to tell people is that the easy explanations for how these models worked circa GPT-2 are just not cutting it anymore. Neither is setting some subjective and needlessly high bar for...what exactly? What? Do we decide to pay attention to AI only after it does all the above? That seems a bit late to the party for cheering it on or resisting it.
Some new shit is afoot. Folk need to pay attention, not think they got it figured out already.
Programs are fundamentally lists of instructions. LLMs are very good at building these lists. That it performs well when you say "Build a list you've seen before, but do it in a slightly different way this time. Here's the exact way I want you to do it." is not surprising. I would honestly be surprised if it couldn't do it.
As the other commenter suggested, a genuinely novel scientific idea would be surprising. A new style of art (think Picasso or Pollock coming along), not just an iteration on Ghibli, would be surprising. That's actual creativity.
A missile will always be cheaper than a missile interceptor, and the interceptor will never achieve a 1:1 kill ratio. Building a missile interceptor system is a good way to get your strategic opponent to build a bigger stockpile.
Disagree on always being cheaper. Military planners are obsessed with the best weapons, so such interceptors are pricey. But look at Israel's Iron Dome: ~$50k/shot. They deliberately built a dumb SAM because it was designed to go against dumb opponents--objects falling freely on a ballistic trajectory. While they are usually facing light stuff that isn't even worth that, they have successfully engaged longer-range stuff that costs many times what the interceptor costs.
Overall, though, the offense always wins this one because interceptors can only protect a limited area whereas missiles can go anywhere.
Iron Dome is a great example of my point. It is a $50k interceptor designed to take out a propane tank with a rocket strapped to it, not a real ballistic missile like a Scud.