It's a really interesting case study, but the summary seems to lean into the AI hype to an extent that borders on lying.
> His fabrication shop uses it daily, and he built the entire thing in 8 weeks. During those 8 weeks he also had to learn everything about Claude Code, the terminal, VS Code, everything.
I don't see how he can give this summary with a straight face after posting the interview that CLEARLY contradicts it.
In the interview the engineer says "When Claude Code came out almost a year ago, I started dabbling with web based tools ..." and "When it first came out I had so many ideas and tried all these different things", so he had clearly already used it extensively for a year. I would also guess the engineer was somewhat technically minded from the get-go, since he claims he was "really good with excel" before starting with Claude Code, but that is beside the point.
The interviewer later asks "How much of those 8 weeks was learning Claude Code versus actually building the thing?", and the interviewee answers "Well, I started Claude Code when it first came out so the learning curve has really gone down for me now..." and then trails off to a different subject. Which further confirms that the summary in the post is false.
It really seems like the engineer spent the year prior learning Claude Code and then spent 8 weeks solely on building this specific application.
The interviewer also claims "This would normally have taken a developer a year to build", which seems really unsubstantiated. It's of course hard to judge without all the details, but looking at the short demo in the video, 8 weeks of regular development time from a somewhat experienced developer doesn't seem too far-fetched if the objective is "don't make it pretty, just make it work".
As I said, it's a really interesting case study about a paradigm shift in how software is developed, and it's clear this app would never have existed without Claude Code. So I don't really see the need for the blatant lying.
I've noticed even experienced engineers have started overestimating how long things would take to build without AI. Believe it or not, we coded before AI, and not everything took years all the time.
We’ve all worked on projects where it took months to get requirements from the business, sometimes only to see the project cancelled after months of sitting around waiting for them to decide on things.
Coding has never been the roadblock in software. Indeed, don’t we experience this now with AI? Vibe-code a basic idea, then discover the things we didn’t consider. Try to vibe that and the codebase quickly gets out of hand. Then we all discover “spec-driven development” (SDD) and in turn discover that specifying everything ourselves is an even bigger PITA?
The standard for obscurity is different for LLMs; something can be very widespread and public without the average person knowing about it. DICOM is used at practically every hospital in the world, there are whole websites dedicated to browsing the documentation, companies employ people solely for DICOM work, there are popular, maintained libraries for several different languages, etc., so the LLM has an enormous amount of it in its training data.
The question relevant for LLMs would be "how many high-quality results would I get if I googled something related to this", and for DICOM the answer is "many". As long as that is the case, LLMs will not have trouble answering questions about it either.
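For what it's worth, the format itself is so well documented that even its binary layout is easy to reconstruct from public material. Here's a toy, stdlib-only sketch of parsing a single data element header in the Explicit VR Little Endian transfer syntax (the tag and value below are invented for illustration; real code would use something like pydicom, and a real parser must also handle the file preamble and the long-form VRs like OB/OW/SQ whose length field is 4 bytes):

```python
import struct

def parse_element_header(buf: bytes):
    """Parse one DICOM data element header (Explicit VR Little Endian).

    Returns (tag, vr, value_length, header_size). Toy sketch only:
    short-form VRs have a 2-byte length; long-form VRs (OB, OW, SQ, ...)
    use 2 reserved bytes plus a 4-byte length and are not handled here.
    """
    group, element = struct.unpack_from("<HH", buf, 0)  # tag: (group, element)
    vr = buf[4:6].decode("ascii")                       # 2-char value representation
    length = struct.unpack_from("<H", buf, 6)[0]        # short-form value length
    return ((group, element), vr, length, 8)

# (0010,0010) PatientName, VR 'PN', 8-byte value "DOE^JOHN" (made up)
raw = struct.pack("<HH", 0x0010, 0x0010) + b"PN" + struct.pack("<H", 8) + b"DOE^JOHN"
tag, vr, length, hdr = parse_element_header(raw)
print(tag, vr, raw[hdr:hdr + length])  # (16, 16) PN b'DOE^JOHN'
```

The point isn't that this is production code; it's that every detail here (tag layout, VR codes, length encoding) is spelled out in the public PS3.5 standard and countless tutorials, which is exactly the kind of corpus LLMs train on.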
They mention false positives as well on GitHub: the rate of false positives is harder to measure, but based on limited manual reviews it's well within the 20% range, and the majority of it is a gray zone.
That 20% figure is actually better than it sounds. Coverity on kernel-scale C codebases typically lands in the 40-60% false positive range... "not wrong but not the bug you'd prioritize" is different from a true false positive.
You make it seem like it's not predominantly skewed right wing, just a "healthy" mix of right wingers and left wingers due to not banning anyone. Which might be an unpopular take, but in this scenario I think it's unpopular simply because it is demonstrably wrong.
> A study published by science journal Nature has examined the impact of Elon Musk’s changes to X/Twitter, and outlines how X’s algorithm shapes political attitudes, and leans towards conservative perspectives. They found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm.
https://www.socialmediatoday.com/news/x-formerly-twitter-amp...
> A Sky News team ran a study where they created nine new Twitter/X accounts. Right-wing accounts got almost exclusively right-wing material, and all accounts got more of it than left-wing or neutral stuff. (Notably, the three “politically neutral” accounts got about twice as much right-wing content as left-wing content.)
https://news.sky.com/story/the-x-effect-how-elon-musk-is-boo...
> New X users with interests in topics such as crafts, sports and cooking are being blanketed with political content and fed a steady diet of posts that lean toward Donald Trump and that sow doubt about the integrity of the Nov. 5 election, a Wall Street Journal analysis found.
https://www.wsj.com/politics/elections/x-twitter-political-c...
It becomes a problem for everyone when spaces meant for meaningful work become overrun with an awful stream of endless mediocre slop that someone quickly generated without giving it a second thought. The problem here is not that it is fast and easy. The cardinal sin is that it is fast, easy AND bad.
I understood it just fine. You object to creations and creativity that do not pass your subjective quality bar and/or aren't produced in a way that is satisfactory to the people already behind the gate.
It's the literal definition of gatekeeping.
The problem you describe (quantity over so-called quality) is a discovery and curation problem.
Yet you blame the tools of creation and lament the lack of restriction or controls on production instead.
Yes these tools make it easier to produce, and yes that means that you have more low-quality work out there. I'm not pretending like that doesn't introduce new challenges.
But the answer isn't to gate-keep the tools or the process of creation or to stop or shame people from being creative with these new tools by universally calling their work "slop" or "bad".
So you completely agree with the factual description of the problem I supplied when asked to describe it; your only real complaint is that I used the phrase "more awful slop" instead of your preferred euphemism "more low-quality work". Having a frank discussion about the problems caused by new technology is not gatekeeping, and I don't think we should sugarcoat it out of fear of hurting people's feelings.
> It becomes a problem for everyone when spaces meant for meaningful work become overrun with an awful stream of endless mediocre slop that someone quickly generated without giving it a second thought. The problem here is not that it is fast and easy. The cardinal sin is that it is fast, easy AND bad.
So..
"a problem for everyone" <- the fallacy of assuming your personal feelings and opinions are universal and apply to all of us (they're not and they don't).
"spaces meant for meaningful work" <- tells me that you don't seem to believe anything made with these new tools can be meaningful, implying they don't belong etc..
And again the hubris of believing that your personal opinion reflects the ideal state or voice of a broad and diverse community (a fucking textbook definition of gatekeeping btw)
And lastly, do you truly believe that AI tooling is the dividing line?
That all non-AI games made today are meaningful?
There's tons of quick and dirty stuff out there like asset flips and weekend projects that people throw up on Steam or Itch for sale, and there have been for years and years.
If your fear is that bad games are going to get out into the world you haven't been paying attention for the last (checks watch) 50+ years...
> "a problem for everyone" <- the fallacy of assuming your personal feelings and opinions are universal and apply to all of us (they're not and they don't).
The phrase "a problem for everyone" doesn't mean everyone agrees, it just means the described situation would affect everyone broadly...
And even you literally admitted you agree it will introduce problems just in the previous post: "I'm not pretending like that doesn't introduce new challenges"; it's a little too late to try to walk that back now.
> "spaces meant for meaningful work" <- tells me that you don't seem to believe anything made with these new tools can be meaningful, implying they don't belong etc..
No, just that the non-meaningful work they create risks overwhelming any meaningful work created with or without the tools, which is a real problem AI is already creating in online communities today. Knitting patterns on Etsy is a prime example. It is an accurate description of a problem that already exists today, and trying to avoid discussing it helps no-one.
Again, even you admit the problem is real, and your only substantive complaint is with my phrasing. It seems you would have been happy if I'd just used the more polite terms you introduced instead, like "new challenges" instead of "problems", "low-quality work" instead of "awful slop", and "not low-quality" instead of "meaningful"? Which is fine, but not really an interesting discussion.
To avoid admitting you are simply annoyed with my phrasing you instead try to pin extreme opinions on me that are nothing close to anything I have ever said, like "you believe your personal opinion reflects the ideal voice of the community", "you believe your personal feelings and opinions are universal", "you believe nothing made with these new tools can be meaningful" and that I think "all non-AI games made today are meaningful", which is just silly.
Since you agree that you see the same problem I see, and just want to discuss other opinions you invent for me that I don't actually share, I don't think we will reach any conclusion here and I probably won't engage further. Thank you for your time anyway.
Your characterization of the event as a simple reminder to follow established best practices is directly contradicted by the briefing note of the meeting, which specifically mentions a lack of best practices related to AI. Which makes me skeptical of your assessment of the situation in general.
> Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established”.
Reviewing the correctness of code is a lot harder than writing correct code, in my experience. Especially when the code given looks correct on an initial glance, and leads you into faulty assumptions you would not have made otherwise.
I'm not claiming AI-written and human-reviewed code is necessarily bad, just that the claim that reviewing code is equivalent to writing it yourself does not match my experience at all.
Plus, if you look at the commit cadence, there are a lot of commits 5-10 minutes apart in places that add new functionality (which I realize doesn't mean they were "written" in that time).
I find people argue a lot that "if it is reviewed it is the same", which might hold when you start out, but the allure of just glancing at the code, going "it makes sense", and hammering on is super high and hard to resist.
We are still early into the use of these tools so perhaps best practices will need to be adjusted with these tools in mind. At the moment it seems to be a bit of a crap shoot to me.
Elon Musk promotes progress only when he has something to gain from it (economically or in terms of his image), but has no qualms about wrecking progress, butchering indiscriminately, and hurting people when it comes to his personal grievances. This is further aggravated by his mercurial and egomaniacal personality, and the false reality built on conspiracies that he surrounds himself with.
Haphazardly and chaotically dismantling the US public sector on some ideological crusade was not advancing human progress. Neither was turning Twitter into some farcical shell of its former self, owned by Saudi Arabia. Neither was sabotaging projects such as high-speed rail systems purely out of spite.
> Musk told me that the idea originated out of his hatred for California’s proposed high-speed rail system. … At the time, it seemed that Musk had dished out the Hyperloop proposal just to make the public and legislators rethink the high-speed train. He didn’t actually intend to build the thing. … With any luck, the high-speed rail would be canceled. Musk said as much to me during a series of e-mails and phone calls leading up to the announcement.
Any good he has produced along the way (that mitigates the damage he is causing) is only a means to an end for him, and he would not hesitate to burn it all to the ground the moment it suits him. If everyone acted like him, humanity would be doomed, not quickly progressing toward some technological utopia.
Or, as his acquaintance Sam Altman put it: "Elon desperately wants the world to be saved. But only if he can be the one to save it."
What are your credentials on this topic? You speak with a lot of certainty, but fail to acknowledge any nuance that would complicate your world view, such as the fact that many water shortages also happen in developed and peaceful regions (as mentioned in the article). The people without much water are not only in very poor places and warzones, unless you are specifically referring to the people dying due to lack of water.
How would your proposed solution of "the oceans are full of water, just desalinate" affect affordability in agriculture and industry? I assume it would require vast investments in infrastructure that has not been built and is not even planned. What would be required for such infrastructure to be put in place, and what challenges would need to be overcome? Are there ecological concerns with the required scale of the operation (such as massive brine runoff at the coast)?
In short, you say "I can assure you there is plenty of water", but is that assurance coming from actual knowledge in the area at hand, or is it misplaced confidence due to dodging any inherent complexity before reaching your conclusion?
OP has 38K karma; some people take it as a signal of valuable contributions to HN, other people understand it's a signal of throwing everything at the wall hoping something sticks.
Hilariously, you got downvoted. Of course, HN is a popularity contest just like any other social media or social group for that matter.
Truth doesn't matter that much for most people; they just want to belong to a tribe, even if they need to make up or go along with a lot of bullshit along the way.
Thankfully there are people who don't care too much and they push the envelope.