Hacker News | DavidPiper's comments

I've just started a new role as a senior SWE after 5 months off. I've been using Claude a bit in my time off; it works really well. But now that I've started using it professionally, I keep running into a specific problem: I have nothing to hold onto in my own mind.

How this plays out:

I use Claude to write some moderately complex code and raise a PR. Someone asks me to change something. I look at the review and think, yeah, that makes sense, I missed that and Claude missed that. The code works, but it's not quite right. I'll make some changes.

Except I can't.

For me, it turns out having decisions made for you and fed to you is not the same as making the decisions and moving the code from your brain to your hands yourself. Certainly every decision made was fine: I reviewed Claude's output, got it to ask questions, answered them, and it got everything right. I reviewed its code before I raised the PR. Everything looked fine within the bounds of my knowledge, and this review was simply something I didn't know about.

But I didn't make any of those decisions. And when I have to come back to the code to make updates - perhaps tomorrow - I have nothing to grab onto in my mind. Nothing is in my own mental cache. I know what decisions were made, but I merely checked them, I didn't decide them. I know where the code was written, but I merely verified it, I didn't write it.

And so I suffer an immediate and extreme slow-down, basically re-doing all of Claude's work in my mind to reach a point where I can make manual changes correctly.

But wait, I could just use Claude for this! For now I don't, because I've seen this before - just a few moments ago. Using Claude is exactly what made me significantly slower when I needed to use my own knowledge and skills.

I'm still figuring out whether this problem is transient (because this is a brand new system that I don't have years of experience with), or whether it will actually be a hard blocker to me using Claude long-term. Assuming I want to be at my new workplace for many years and be successful, it will cost me a lot in time and knowledge to NOT build the castle in the sky myself.


Then you're leaning more towards vibe coding than AI-assisted coding. I use AI to write the stuff the way I want it to be written: I give it information about how to structure files, coding style and the logic flow.

Then I spend time reading each file change and giving feedback on things I'd do differently. It vastly saves me time, and the result is very close to, or even better than, what I would have written.

If the result is something you can't explain, then slow down and follow the steps it takes as they are taken.


AI-assisted coding makes you dumber, full stop. It's obvious as soon as you try it for the first time. Need a regex? No need to engage your brain. AI will do that for you. Is what it produced correct? Well, who knows? I didn't actually think about it. As current-gen seniors' brains atrophy over the next few years, the scarier thing is that juniors won't even be learning the fundamentals, because it is too easy to let AI handle it.

Strongly disagree. If the complexity of your work is the software development itself, then it means that your work is not very complex to begin with.

It has always been extremely annoying to fight with people who mistake the ability to build or engage with complicated systems (like your regex) for competency.

I work in building AI for a very complex application, and I used to be in the top 0.1% of Python programmers (by one metric) at my previous FAANG job, and Claude has completely removed any barriers I have between thinking and achieving. I have achieved internal SOTA for my company, alone, in 1 week, doing something that previously would have taken me months of work. Did I have to check that the AI did everything correctly? Sure. But I did that after saving months of implementation time so it was very worth it.

We're now in the age of being ideas-bound instead of implementation-bound.


What was the metric?

Trivia was always the hallmark of an insufferable programmer. Remembering the syntax to regex always struck me as a detail of programming, not a fundamental. I'm glad I no longer have to waste my life debugging it.

>> AI assisted coding makes you dumber full stop. It's obvious as soon as you try it for the first time. Need a regex? No need to engage your brain. AI will do that for you.

Regex is the worst possible example you could have given. Seriously, how many people do you know who painstakingly hand-craft their own regexes as opposed to using one of the million tools out there that can work backwards from example inputs and outputs to generate a regex that satisfies the conditions?
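Whether the regex comes from a tool or an LLM, the verification step being described can be made mechanical: check the candidate against example strings it should and shouldn't match. A minimal sketch in Python (the function name and the example patterns are illustrative, not from any particular tool):

```python
import re

def satisfies_examples(pattern, should_match, should_not_match):
    """Return True if `pattern` fully matches every positive example
    and none of the negative ones."""
    compiled = re.compile(pattern)
    return (all(compiled.fullmatch(s) for s in should_match)
            and not any(compiled.fullmatch(s) for s in should_not_match))

# A candidate regex for simple ISO dates (YYYY-MM-DD):
candidate = r"\d{4}-\d{2}-\d{2}"
print(satisfies_examples(candidate,
                         should_match=["2024-01-31", "1999-12-01"],
                         should_not_match=["31/01/2024", "2024-1-31"]))
# → True
```

The point is that you don't need to hold the regex syntax in your head to know the pattern is wrong: a failing example tells you immediately, regardless of who or what wrote it.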


I agree. In the beginning, when I was starting out, I let the AI do all of the work and merely verified that it did what I wanted, but then I started running into token limits. In the first two weeks I was honestly just looking forward to the limit refreshing. The low effort made it feel like I would be wasting my time writing code without the agent.

Starting with week three the overall structure of the code base is done, but the actual implementation is lacking. Whenever I run out of tokens I just started programming by hand again. As you keep doing this, the code base becomes ever more familiar to you until you're at a point where you tear down the AI scaffolding in the places where it is lacking and keep it where it makes no difference.


I agree that being further along the Vibe end of the spectrum is the issue. Some of the other ways I use Claude don't have the same problems.

> If the result is something you can't explain than slow down and follow the steps it takes as they are taken.

The problem is I can explain it. But it's rote and not malleable. I didn't do the work to prove it to myself. Its primary form is on the page, not in my head, as it were.


I'm on the same path as you are it seems. I used to be able to explain every single variable name in a PR. I took a lot of pride in the structure of the code and the tests I wrote had strategy and tactics.

I still wrote bugs. I'd bet that my bugs/LoC has remained static if not decreased with AI usage.

What I do see is more bugs, because the LoC denominator has increased.

What I align myself towards is that becoming senior was never about knowing the entire standard library, it was about knowing when to use the standard library. I spent a decade building Taste by butting my head into walls. This new AI thing just requires more Taste. When to point Claude towards a bug report and tell it to auto-merge a PR and when to walk through code-gen function by function.


> I can explain it. But it's rote and not malleable.

The AI can help with that too. Ask it "How would one think about this issue, to prove that what was done here is correct?" and it will come up with something to help you ground that understanding intuitively.


This is the approach I’m taking, along with being much more verbose than my normal style with comments in the code and commit messages (including snippets of the prompts/insights that inspired the change).

It's a spectrum and we don't have clear notches on the ruler letting us know when we're confidently steering the model and when we've wandered into vibe coding. For me, this position is easy to take when I am feeling well and am not feeling pressured to produce in a fixed (and likely short) time frame.

It also doesn't help that Claude ends every recommendation with "Would you like me to go ahead and do that for you?" Eventually people get tired and it's all too easy to just nod and say "yes".


That is indeed a very annoying part of many AI models. I wish I could turn it off.

For me it seems more or less similar to reviewing others' changes to a codebase. In any large organization codebase, most of the changes won't be our own.

This is my primary personal concern. I think it could be a silent psychological landmine going off way too late (sic).

In a living codebase you spend long stretches learning how it works. It's like reading a book that doesn't match your taste, but you eventually need to understand and edit it, so you push through. That process is extremely valuable: you get familiar with the codebase, you map it out in your head, you imagine big red alerts on the problematic stuff. Over time you become more and more efficient at editing and refactoring the code.

The short term state of AI is pretty much outlined by you. You get a high level bug or task, you rephrase it into proper technical instructions and let a coding agent fill in the code. Yell a few times. Fix problems by hand.

But you are already "detached" from the codebase, and you have to learn it the hard way each time your agent proves too stupid. You are less efficient, at least in this phase. But your overall understanding of the codebase will degrade over time. Once serious data corruption hits the company, it will take weeks to figure it out.

I think this psychological detachment can potentially play out really badly for the whole industry. If we get stuck for too long in this weird phase, the whole tech talent pool might implode. (Is anyone working on plumbing LLMs?)


I believe it's ultimately a tug of war between what the business wants (more features, faster, etc.) and what the engineers want (maintainability, documentation, scalable patterns, etc.). Engineers rarely win this tug of war. At times it feels like watching a car crash in slow motion. I don't think this trend will meaningfully slow or change until business interests are hit. It may take a while for this cruft to start causing pain. Even then, you may just throw money at the problem, or just live with it. Will companies go bankrupt because of vibe coding? I don't think so, and that's why AI coding is here to stay. My 2 cents.

I believe the detachment gets exacerbated by the fact that others are simultaneously modifying the codebase at speeds that don't allow you to keep up. Depending on how the codebase boundaries and ownership are defined, this directly impacts your ability to reason about the whole system and therefore influence direction.

And you can't ask anyone about it, because they are also detached.

Ask Claude to explain the code in depth for you. It's a language model, it's great at taking in obscure code and writing up explanations of how it works in plain English.

You can do this during the previous change phase of course. Just ask "How would one plan this change to the codebase? Could you explain in depth why?" If you're expected to be thoroughly familiar with that code, it makes no sense to skip that step.


This is like asking Claude to explain some aspect of physics to you. It'll 'feel' like you understand, but in order to really understand you have to work those annoying problems.

Same with anything. You can read about how to meditate, cook, sew, whatever. But if you only read about something, your mental model is hollow and purely conceptual, having never had to interact with actual reality. Your brain has to work through the problems.


> ...in order to really understand you have to work those annoying problems.

GP says that they have to come back tomorrow and edit the code to fix something. That's a verification step: if you can do that (even with some effort) you understand why the AI did what it did. This is not some completely new domain where what you wrote would apply very clearly, it's just a codebase that GP is supposed to be familiar with already!


By working in this way you're proactively de-skilling yourself. Do it long enough and you're now replaceable by anyone that can type a prompt.

It sounds pedantic, but I think it's important: maths and physics are often used to describe sounds, their relationships and emergent properties through combination. Maths and physics aren't ever really used to describe music.

It's like telling someone they can paint a masterpiece because they understand Fe4[Fe(CN)6]3 makes an aesthetically pleasant blue pigment.


That's a very nice analogy, thank you.

> People don't want change? Nah, people like change when it is obvious to them that the change is good.

I agree with some of what you said, but just want to point out that you're doing the very thing you criticise here.

I think lots of people genuinely don't want change. Hopefully you have great answers to my objection.

In general, I've found the question of "who needs to provide evidence first?" is one of the most casually ignored and maliciously manipulated questions in so much professional discourse. The answer is often implicitly "the person with less role power" which by itself is a terrible answer.


I don't know if anyone is holding the author up as a hero, least of all herself. The book reads as a masterclass in grooming, manipulation and abuse.

If anything, the title "Careless People" does a disservice to its message: the people above and around her clearly knew exactly what they were doing, and took great care to evade any and all responsibility for anything.


Sounds just like John Cleese's "Open Mode" and "Closed Mode" - https://www.youtube.com/watch?v=Pb5oIIPO62g

No idea why this has been downvoted. There is a lot of demand for this, and at least one company actively working on orchestrating home and EV batteries with the grid: https://www.amber.com.au/amber-for-evs

> don't kid yourself that your actions will make any difference whatsoever to the overall trajectory of AI adoption in IT or society

How large does a group have to be (absolute number or percentage of population) for you to change your mind on this?

Serious question - genuinely curious. My answer is about 10% provided they are organised in some way. 5% if you're particularly good at collective action.


> A much more "obvious" solution IMO is to invest in efficient, grid-scale renewable generation combined with robust storage tiers, as well us long overdue updates to the grid.

Individual rooftop solar + home batteries _is_ how we're doing this in Australia. You can connect your home setup directly to the wholesale grid and import/export electricity at appropriate times.


Apple didn't have this issue 1 year ago :-)

The idea that it might cost "someone" $2 every time a user opens an app AND it sends a bunch of private data to a 3rd party is completely dystopian, let alone everything else.

And a serious question: with deepest respect to the author for their extraordinarily impressive time and effort in this investigation... Why was this not already flagged by political reporters or investigative journalists? I'm not American so maybe I don't understand the media structure over there but it feels like SOMEONE should have been all over this way before it's gotten to the point described in this post.


When a megacorp funds a network of non-profits to lobby a bunch of politicians, draft legislation, and tell them to take it to committee, that can happen without much visibility, especially when it's been orchestrated at the state level, as this has. Where does any of this show up until there's a vote called on it? There's no open debate. No working "across the aisle" to address concerns. There's nothing left of the legislative process that started this country, or, indeed, any Western representative democracy. So someone has to be watching, see something on an agenda that raises the hairs on their necks, and figure out what it is and whether there's a story there, and they're not going to get any help from anyone, because everyone involved knows how the public is going to feel about it. And then, as the article indicates, even a place like Reddit is going to astroturf the effort to get the story out. (Which I've been trying to point out for YEARS, but which -- surprise, surprise! -- gets suppressed.)


Mainstream media is largely captured by the same monied interests as discussed in the reddit post. Although the poster does mention an article from Bloomberg as evidence, most of their sources are local outlets or tech-focused. https://github.com/upper-up/meta-lobbying-and-other-findings...

