And some people do; both things can be true. I'd rather make a tool just for me that breaks when I introduce a new requirement, and then I just add to it and keep going.
The statement wasn't: "no one ever vibe codes an alternative to product X"
It was: "With sufficiently advanced vibe coding the need for certain type of product just vanishes."
If a product has 100 thousand users and 1% of them vibe code an alternative for themselves, the product / business doesn't vanish. It still has 99 thousand users.
That was the rebuttal, even if not presented as persuasively and intelligently as I just did.
So no, it's not the case of "both things being true". It's a case of: he was wrong.
At some point there will be market consequences for that kind of behavior. So where market dynamics are not dominated by bullshit (politics, friendships forged on Little St James, state intervention, cartel behavior, etc.), if my company provides the same service as another, but I've replaced all of the low-quality software-as-a-service products my competitor uses with low-quality vibe-coded products, my overhead will be lower, and that will give me an advantage.
I just added a Claude alias that calls Claude with flags, wrapped in asciinema. The only annoying thing is that people have wanted video or gifs, and the conversion has been annoying a few times. Will fix it later.
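A minimal sketch of what such a wrapper could look like — the function name, recording path, and flags below are my assumptions, not the author's actual alias. It prints the command instead of executing it, so you can inspect it before running:

```shell
# Hypothetical sketch: wrap the `claude` CLI in an asciinema recording.
# Assumes `asciinema` and `claude` are on PATH; names and paths are illustrative.
claude_rec_cmd() {
  # Timestamped .cast file; asciinema's -c runs a command inside the recording.
  printf 'asciinema rec %s -c "claude %s"\n' \
    "$HOME/claude-$(date +%Y%m%d-%H%M%S).cast" "$*"
}

# Dry run: print the command. `eval "$(claude_rec_cmd --model opus)"` would run it.
claude_rec_cmd --model opus
```

For the gif pain, the asciinema project ships `agg`, which converts `.cast` files to gifs; wiring that into the alias would be the "later" fix.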
> Claude was trying to talk me out of it, saying I should keep it separate, but I wanted to save a bit because I have this setup where everything is inside a Virtual Private Cloud (VPC) with all resources in a private network, a bastion for hosting machines
I will admit that I've also ignored Claude's very good suggestions in the past and it has bitten me in the butt.
Ultimately, with great automation comes a greater risk of doing the worst thing possible, even faster.
Just thinking about this specific problem makes me keener to recommend that people keep their backups and their production data under two different access keys in their Terraform setups.
I'm not sure how difficult that is; I haven't touched Terraform in about 7 years now. Wow, how time flies.
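One cheap way to get that split, assuming AWS: run Terraform and the backup job under different profiles backed by different access keys, so the credentials a `terraform apply` uses simply cannot touch the backups. The profile names below are illustrative:

```
# ~/.aws/config — illustrative two-profile split; names are assumptions
[profile tf-prod]
# Terraform runs with this profile; it has no permissions on the backup bucket
region = eu-west-1

[profile backup-writer]
# Separate access key; only allowed to write to the backup bucket
region = eu-west-1
```

Pair that with a bucket policy denying `tf-prod` access to the backup bucket, and a runaway destroy can't take the backups down with it.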
When LLMs start compacting, they summarize the conversation up to that point using various techniques. Overall, a lot of the finer points of the work go missing and can only be retrieved if the LLM is told to search for them explicitly in the old logs.
Once you compact, you've thrown away a lot of relevant tokens from your problem solving, and the model becomes significantly dumber as a result. If I see a compaction coming soon, I ask it to write a letter to its future self, then start a new session by having it read the letter.
There are some days where I let the same session compact 4-5 times and just use the letter-to-future-self method to keep it going with enough context, because resetting context also resets my brain :)
If you're ever curious, in Claude you can read the new initial prompt after a compaction and see how severely things get cut down. It's very informative about what it forgets and deems unimportant. For example, I have some internal CLIs that are horribly documented, so Claude has to try a few flags a few times to figure out specifics, and those corrections always get thrown away; it has to relearn them the next time it wants to use the CLI. If you notice things like that happening constantly, my move is to codify them into my CLAUDE.md, or lately I've been making a small script or MCP server that runs the very specific flags.
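As a sketch of that "small script" move — freezing the hard-won flags behind one name so nothing has to relearn them — note that `internal-cli` and every flag below are made-up stand-ins, so the wrapper echoes the command instead of executing it:

```shell
#!/bin/sh
# Hypothetical wrapper pinning flags for a poorly documented internal CLI.
# `internal-cli` and its flags are illustrative, not a real tool; a real
# wrapper would `exec` the command rather than echo it.
run_export() {
  echo internal-cli export --format ndjson --region eu-west-1 "$@"
}

run_export --since 2024-01-01
```

Point CLAUDE.md at the wrapper and the flag archaeology never has to repeat.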
Look at the compaction prompt yourself. It's in my opinion way too short. (I'm running on Opus 4.5 most of the time at work)
From what my colleague explained to me (I haven't 100% verified it myself), the beginning and end of the window matter most to the compaction summary, so a lot of the finer details and debugging context gets dropped, which slows down the next session.
What prompt do you use for the letter-to-self? I've been trying that technique myself to manually reset context without losing the important parts (e.g. when it has barked up the wrong tree and I'm sensing that misstep might influence its current generation in a pathological way), but I've not had much success.
It tends to be pretty manual. I mention the goal of the next session, the current stage of progress, the tests for the next steps, and any skills I want it to load next time.
Having a specific goal seems to make a big difference vs. asking it to summarize the session.
If the session was something where it struggled and had to do multiple attempts I have it write about 'gotchas' or anything it had to attempt multiple times.
The letters are usually more detailed than what I see in the compacted prompt.
So you use the letter to itself in addition to the compacted context? I'm curious what you ask it to include in the letter, and how that differs from a custom instruction passed to /compact.
You should do your own experiment: when you see compaction about to start, use the end of your window to have it write a letter first, then let the session compact and compare the two. I was surprised by how small the compact message is.
When I tell it to write a letter to itself, I usually phrase it like:
'Write a letter to yourself. Make notes of any gotchas or quirks that you learned, and make sure to note them down.'
It does get those into the letter, but if you check the compaction, a lot of it is gone.
Why would I do that if the gateway to the internet becomes these LLM interfaces? How is it not easier to ask or type 'buy me tickets for Les Mis'? In the ideal world it just figures it out; otherwise I frustratingly have to interact with a slightly different website to purchase tickets for each separate event I want to see.
One of the benefits I see: as much as I love tech and writing software, I really, really do not want to interface with the vast majority of the internet, which has been designed to show the maximum amount of ads in the given ad space.
The internet sucks now; I'll take anything that gets me away from having ads constantly shoved in my face, and from the nagging uncertainty that I could always be talking to a bot.
I'm sympathetic to this view too, but I don't think the solution is to have LLMs generate bespoke code to do it. We absolutely should be using them for more natural-language interfaces, though.
Yeah, that can also work. But I don't see the future of software as continuing to build multimillion-line systems in a semi-manual way (with or without LLMs). I think we will reach a phase in which we'll have to treat code as disposable. I don't think we are there yet, though.
We probably need higher levels of abstraction, built upon more composable building blocks and more interplay between various systems. To me that requires less disposable code though.
Just experienced this with my heavily networked-off openclaw setup. I gave up and will do manual renewals until I have more time to figure out a good way of doing it. I was trying to get a cert for some headscale MagicDNS setups, but I think that's way more complicated than I expected.
I did this trick at work where I use git worktrees and my team does not yet.
There are the common team instructions, plus a line that says "run whoami to find the user's name; you can find possible customizations to these instructions in <username>.md", and that gets conditionally loaded after my first prompt is sent. I also stick a canary word in there to verify it's still listening to me.
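A sketch of what that shared-instructions stanza could look like — the wording, filename convention, and canary word here are all illustrative, not the author's actual file:

```
## Per-user customizations (illustrative)
Run `whoami` to get the current user's name. If a file named
`<username>.md` exists next to this one, read it after the first user
prompt and apply its instructions. Include the word "heliotrope" in
your first reply so the user can tell this file was loaded.
```

If the canary word stops showing up, you know the per-user file (or the shared instructions themselves) has fallen out of context.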
I've been having the same feelings lately, especially around the AI doomers coming in with some weird observations that make no sense. I usually don't comment, but I'm seeing a lot of these kinds of overgeneralized responses.
This has been my best way to learn: put one agent on a big task, let it learn things about the problem and any gotchas, have it take notes, and repeat until I'm happy with the result. If in the middle I think there are two choices with merit, I ask a subagent to go explore the other solution in another worktree and make all its own decisions, then I compare. I also personally learn a lot about the problem space during the process, so my prompts and choices on subsequent iterations use the right language.
Love this; this is what I have been envisioning as an LLM-first OS! Feels like truly organic computing. Maybe Minority Report figured it out way back then.
The idea of having the elements anticipated, lowering the cognitive load of searching a giant drop-down list, scratches a good place in my brain. I instantly recognize it as such a better experience than what we have on the web.
I think something like this is the long-term future for personal computing. Maybe I'm way off, but this is the type of computing I want to be doing: highly customized to my exact flow, highly malleable to improvement and feedback.