Yeah, I was on the outside looking in when it came to sketch comedy until I developed a new character that can make people laugh just with hand gestures. It's really funny how you go nowhere telling other people's jokes and you really need to write your own material.
Among other reasons, if you turn the temperature down to 0, LLMs stop working. Like they don't give natural language answers to natural language questions any more, they just halt immediately. Temperature gives the model wiggle room to emit something plausible-sounding rather than clam up when presented with an input that wasn't verbatim in the training data (such as the system prompt).
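For reference, mechanically the temperature knob just rescales the logits before the softmax: as T approaches 0 sampling collapses to greedy argmax decoding, and higher T flattens the distribution. A minimal sketch (the logit values are made up for illustration):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits at the given temperature."""
    if temperature == 0:
        # Greedy decoding: always pick the single most likely token.
        return int(np.argmax(logits))
    scaled = np.array(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]          # hypothetical next-token logits
greedy = sample_with_temperature(logits, 0, rng)    # always index 0
warm = sample_with_temperature(logits, 1.5, rng)    # any index is possible
```

Higher temperature gives lower-probability tokens a real chance of being emitted, which is the "wiggle room" in question.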
Yes, but that doesn't explain why we aren't given a choice. Program code is boringly deterministic, but in many cases that's exactly what you need, while non-determinism becomes your dangerous enemy (like some Airbus jets being susceptible to bit flips from cosmic rays).
The current way to address this is through RAG (Retrieval-Augmented Generation) applications. This means using the LLM for the non-deterministic natural-language portion, and traditional code, databases, and files for the deterministic part.
A good example is bank software where you can ask what your balance is and get back the real number. A RAG app won't "make up" your balance or even consult its training data to find it. Instead, the traditional (deterministic) code operations are done separately from the LLM calls.
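A minimal sketch of that separation, with hypothetical function names and a stubbed-out database: the balance comes from ordinary deterministic code, and the LLM is only handed the retrieved fact to phrase the answer.

```python
def get_balance(account_id):
    # Deterministic step: in a real app this would query the bank's
    # database. Stubbed here with a hypothetical in-memory table.
    balances = {"acct-123": 1042.57}
    return balances[account_id]

def answer_balance_question(question, account_id, llm=None):
    # Retrieve the fact with ordinary code, *then* involve the model.
    balance = get_balance(account_id)
    prompt = f"The customer's balance is ${balance:.2f}. Answer: {question}"
    if llm is None:
        # Fallback template so the sketch runs without a model.
        return f"Your current balance is ${balance:.2f}."
    return llm(prompt)

reply = answer_balance_question("What is my balance?", "acct-123")
```

The model never gets a chance to invent the number; at worst it phrases a correct figure awkwardly.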
I don't think so. Primarily because if you can ask that question instead of just being dead, then it's not the fast takeoff.
On a less drastic note, if AI were autonomously cannibalizing the economy, you'd run into more things and go "huh, I guess that's run by AI now" instead of "ah goddammit, why did they shove an LLM in this workflow".
I don't think adding 20 bytes to every response, which are never read in practice, will improve the energy efficiency of the internet. Not to be mean, there's just a lot of stuff in most responses that effectively nothing acts on.
We're not its competition, in the same way chimpanzees aren't our competition. Some fraction of us are interested in their well-being for aesthetic reasons, but a lot of the time this fraction loses to a not particularly powerful faction in direct competition for territory. And if there is any serious conflict of interest, there is no contest and the chimps lose. If we get lucky, some fraction of superintelligence will look on us the way we look on ground apes, but that's far from a given.
I don't think it's right that Airbnb solved short-term rentals - outside of a few dozen prestige markets that they monitor with humans, it's really a race to the bottom. Reputation at scale remains unsolved, and so it's still a market for lemons.
Akerlof's paper uses the market for used cars as an example of the problem of quality uncertainty. It concludes that owners of high-quality used cars will not place their cars on the used car market. A car buyer should only be able to buy low-quality used cars, and will pay accordingly as the market for good used cars does not exist.
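The adverse-selection logic can be shown with made-up numbers: if buyers can't distinguish good cars from lemons, they offer the expected value across both, which is below what a good car is worth, so owners of good cars withdraw and only lemons remain.

```python
# Toy illustration of Akerlof's argument; all figures are invented.
good_value, bad_value = 10_000, 4_000   # what each car type is actually worth
p_good = 0.5                            # buyer's prior that a random car is good

# A buyer who can't inspect quality offers the expected value:
buyer_offer = p_good * good_value + (1 - p_good) * bad_value  # 7000.0

# Owners of good cars won't sell below what their car is worth:
good_sellers_participate = buyer_offer >= good_value  # False

# Buyers anticipate this, so the market that remains is lemons-only
# and the price falls toward bad_value.
```

The same spiral applies to any market where sellers know quality and buyers don't, which is the poster's point about reputation at scale.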
At the end of the day, much like housing cannot be both affordable and a good investment, healthcare cannot be both affordable and a lucrative career. Avoid going to an MD if you can get the same care from someone else.
You could get healthcare from someone who doesn't have to go $100Ks into debt first and then expect to make more than you do. If no one with that profile exists, then yeah, you're stuck paying the rate of the system that trained and employs them.
Kanban operates on a known product with a known manufacturing process. Many software products are undefined even at time of public release, and evolve continuously. "Deciding what to build", while explicitly highlighted in the agile software manifesto, is the weak link.
Put another way, lean manufacturing improves metrics for the marginal unit of goods. No one, customer or dev, is interested in the marginal unit of software.
I would guess not. An important feature of compilers is that they are guaranteed to emit code with certain properties in response to specific inputs (memory safety guarantees, asymptotic performance, calling convention, etc.). If they don't do that, you can file a bug report.
You cannot file a bug report against an LLM that it produced an unexpected output, because there is no expected output; the core feature of an LLM is that neither you nor the LLM developer knows what it will output for a wide range of inputs. I think there is a wide range of applications for which LLMs' core value proposition of "no one knows a priori what this tool will emit" is disqualifying.