Are we talking about structural things or about individual perspective things?
At the individual level, AI is useful as a helper for your generative tasks. I'd argue against using it for analytic tasks, but YMMV.
At the societal level, you as an individual cannot trust anything society has produced, because it's likely some AI-generated bullshit.
Some time ago, if you didn't trust a source, you could build your understanding by evaluating a plurality of sources and perspectives and get to the answer in a statistical manner. Now every possible argument can be stretched in any possible dimension, and your ability to build a conclusion has been ripped away.
You and I remember pre-AI famous works. "Hey, I'm pretty sure Odysseus took a long time to get home." If somebody goes and prints 50 different AI-generated versions of the _Odyssey_, how are future generations supposed to know which is real and which is fake?
That's definitely true. History has been thoroughly manufactured by humans. Naively, I thought computer storage might preserve first-hand accounts forever; it might, but the genuine accounts might no longer be discernible.
No, but it was a bad example because I was thinking only of the authorship point of view.
A better example would have been the complaint tablet to Ea-nāṣir. We're pretty sure it's real; there might still be people alive who remember it being discovered. But in a hundred years, once people with gen AI have created museums full of plausible fake artifacts, can future people be sure? A good fraction of the US population today believes wildly untrue things about events happening in real time!
It did look that way, and it's a fun way to interpret it, but pattern-matching on a pretty obvious pattern in the text (several failed fixes in a row) seems more likely. LLMs will repeat patterns in other circumstances too.
I mean, humans do this too... If I tell an interviewee that they've done something wrong a few times, they'll have less confidence going forward (unless they're a sociopath), and typically start checking their work more closely to preempt problems. This particular instance of in-context pattern matching doesn't seem obviously unintelligent to me.
This was code that finished successfully (no stack trace) and rendered an image, but the output didn't match what I asked it to do, so I told it what it actually looked like. Code Interpreter couldn't check its work in that case, because it couldn't see it. It had to rely on me to tell it.
So it was definitely writing "here's the answer... that failed, let's try again" without checking its work, because it never prompted me. You could call that "hallucinating" a failure.
I also found that it "hallucinated" other test results. I'd ask it to write some code that printed a number to the console, tell it what the number was supposed to be, and then it would say the code "worked," reporting the expected value instead of the actual number.
I also asked it to write a test and run it; it would say the test passed, but when I looked at the actual output, it had failed.
So, asking it to write tests didn't work as well as I'd hoped; it often "sees" things based on what would complete the pattern instead of the actual result.
I think the first sentence of the author counters your comment.
What you described works best in a familiar codebase where the organizing principles have been well maintained and are familiar to the reader, and the tools are just an extension of those organizing principles. Even then, a deviation from those rules might produce gaps in understanding of what the codebase does.
And grep cuts right through that in a pretty universal way. What the post describes are just ways to not work against grep to optimize for something ephemeral.
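As a hypothetical sketch (the file names and the function name are made up), consistent naming is exactly what lets a plain recursive grep find every usage without any knowledge of the directory layout:

```shell
# Hypothetical sketch: two files in different roles that both mention
# one consistently named function. A single recursive grep surfaces
# the definition and every call site in one shot.
mkdir -p /tmp/grep_demo/src
printf 'def load_user_profile():\n    return {}\n' > /tmp/grep_demo/src/users.py
printf 'from users import load_user_profile\nprofile = load_user_profile()\n' > /tmp/grep_demo/src/app.py
grep -rn "load_user_profile" /tmp/grep_demo/src
```

The moment the name is abbreviated differently in different places (`loadUsrProf`, `get_profile`), that one-shot search stops working, which is the "working against grep" the post warns about.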
Can they force a blood test on you? I know in my country (Poland) they can arrest you and take you to a lab for a blood test if they have justified suspicion.
It's not actually saying that. It's saying that the N is not technologically relevant enough to be worth evaluating every damn time. It may be relevant organizationally, to fit your org chart.
Also, putting up a barrier to creating a new service is a feature of a monolith. Other types of partitioning are still available; they just might require some insight into the existing architecture.
How does a shared car not need those items? They're just not under your management and are accounted for somewhere else. But they are still needed, and you are paying for them. The only advantage could be that you're not paying with your own time, and then it depends on what is cheaper: your work time or that maintenance time.
I call BS on your claim about everyone being in. For anything to work, only a particular type of person must be in, especially at the start. To be honest, most of PG's writings are dedicated to defining that type, including your quote.
At least that's what I take from it. Cultism is just something that follows success.
I might be cynical, but that change would work only under the assumption that what is done on the page remains constant. What I'd guess would happen is that it would do 5% more useless operations. The limiting factor here is the user's tolerance for bullshit, not some predefined functionality.