I will probably get heavily crucified for this, but to the people who are ideologically opposed to AI-generated code: executives, directors, and managerial staff think the opposite. Being staunchly anti-LLM code, instead of trying to understand how it can improve your speed, might be detrimental to your career.
Personally, I’m on the fence. But conversations with others, and some requests from execs to implement various AI utilities into our processes, are pushing me to err on the side of job security rather than dismiss it and be adamant against it.
> executives, directors and managerial staff think the opposite
Executives, directors, and managerial staff have had their heads up their own asses since the dawn of civilization. Riding the waves of terrible executive decisions is unfortunately part of professional life. Executives like the idea of LLMs because it means they can lay you off; they're not going to care about your opinion on it one way or another.
> Being very anti-LLM code instead of trying to understand how it can improve the speed might be detrimental for your career.
You're making the assumption that LLMs can improve your speed. That's the very assumption being questioned by GP. Heaps of low-quality code do not improve development speed.
I'm willing to stake my reputation on the idea that yes, LLMs can improve your speed. You have to learn how to use them effectively and responsibly but the productivity boosts they can give you once you figure that out are very real.
I'm with you on this one. If one's experience is just using it as a general chat bot, then yeah, I can see why people are reluctant and think it's useless. I have a feeling that a good chunk of people haven't tried using the latest models on a small-to-medium project from scratch, where you have to play around with their intricacies to build intuition.
It becomes some sort of muscle memory, where I can predict whether using LLM would be faster or slower. Or where it's more likely to give bad suggestions or not. Basically treating it as googling skills.
"It becomes some sort of muscle memory, where I can predict whether using LLM would be faster or slower"
Yeah, that intuition is so important. You have to use the models a whole bunch to develop it, but eventually you get a sort of sixth sense where you can predict if an LLM is going to be useful or harmful on a problem and be right about it 9/10 times.
My frustration is that intuition isn't something I can teach! I'd love to be able to explain why I can tell that problem X is a good fit and problem Y isn't, but often the answer is pretty much just "vibes" based on past experience.
1) Being up to date with the latest capabilities: this one has slowed down a bit. My biggest self-learning push was in August/September, and most of that intuition still holds. But even though I had the time to do it, it's hard to ask my team to give up 5-6 free weekends of their lives to get up to speed.
2) The transition period where not everyone is on the same page about LLMs: this one I think is much harder, because the expectations of the executives are very different from those of the on-the-ground developers actually using LLMs.
A lot of people could benefit from aligning expectations, but once again, it's hard to explain what is and isn't possible when your statements may be nullified a month later by a new AI model/product/feature.
You're confusing intuition of what works or not with being too close to your problem to make a judgement on the general applicability of your techniques.
They're saying it might work for you, but isn't generally applicable (because most people aren't going to develop this intuition, presumably).
Not sure I agree with that. I would say there are classes of problems where LLMs will generally help and a brief training course (1 week, say) would vastly improve the average (non-LLM-trained) engineer's ability to use it productively.
No, it's more like thinking that my prescribed way of doing things must be the way things work in general because it works for me. You give instructions covering everything you did, but the person you give them to isn't your exact height, or can't read your language, so you can easily conclude that they just don't get it. With these LLMs that bias is also hidden from you as you inch closer to the solution at every turn. The result seems "obvious", but the outcome was never guaranteed and will most likely be different for someone else if even one thing differs at any point.
My whole thing about LLMs is that using them isn't "obvious". I've been banging that drum for over a year now - the single biggest misconception about LLMs is that they are easy to use and you don't need to put a serious amount of effort into learning how to best apply them.
To me it's more that the effort you put in is not a net gain. You can't build a lasting way of working with them, for myriad reasons: ownership of the models, the fundamentals of the resulting probabilistic space of the interaction, even simple randomness at low temperatures. "Learning how to best apply them" isn't well defined, because who is learning to apply what to what? The most succinct way I can describe these issues is that, like startup success stories, many of the assumptions you make about the projects you present amount to "these are the lotto numbers that worked for me".
Even in real, traditional, deterministic systems where you explicitly design a feature, it's hard to stay coherent over time as usage grows. Think of tab stops on a typewriter evolving from an improvised template, to metal tabs installed above the keyboard, to someone cutting and pasting incorrectly and accidentally reflowing a 200-page document to 212 pages because of tab characters...
If you create a system with these models that writes the code to process a bunch of documents in some way, or does some kind of herculean automation, you haven't improved the situation when it comes to clarity or simplicity, even if the task at hand finishes sooner for you in this moment.
Every token generated has an equal potential to spiral out into new complexities and whack a mole issues that tie you to assumptions about the system design while providing this veneer that you have control over the intersections of these issues, but as this situation grows you create an ever bigger problem space.
And I can definitely hear you say: this is the point where you use some sort of full-stack, interoceptive, holistic intuition to persuade the system toward a higher-order concept of itself, expand your ideas about how the problem could be solved, and let the model guide you... And that is precisely the mysticism I object to, because it isn't actually a kind of productiveness but a struggle, a constant guessing, and any insight from it can be taken away, changed accidentally, censored, or packaged as a front-run against your control.
Additionally, the lack of separate in-band and out-of-band streams of data means that even with agents and reasoning and all the other avenues for improving performance, you still don't escape the fundamental question: what is the total information contained in the entire probabilistic space? If you try to do out-of-band control in some way, like the latest thing I just read where they have a separate censoring layer, you either wind up using another LLM layer there, which still contains all of these issues, or you use some non-transformer method like Bayesian filtering and you get all of the issues outlined in the seminal spam.txt document...
So, given all of this, I think the kinds of feats you demonstrate are really neat, but I object to boiling these issues down to "putting a serious amount of effort into learning how to best apply them", because I just don't think that's a coherent view of the total problem, nor something achievable the way learning other subjects like math is. I know it isn't answerable, but for me a guiding question remains: why do I have to work at all? Is the model too small to know what I want or mean without any effort? The pushback against prompt engineering and the rise of agentic stuff and reasoning all seem to essentially be saying that, but they too have hit diminishing returns.
> Executives like the idea of LLMs because it means they can lay you off; they're not going to care about your opinion on it one way or another.
Probably, but when the time comes for layoffs, the first to go will be those hiding under a rock, claiming there is no value in those LLMs even as they're being replaced.
First, what LLMs/GenAI do is automated code generation, plain and simple. We've had code generation for a very long time; heck, even compiling is automated generation of code.
What is new with LLM code generation is that it is non-deterministic, unlike traditional code generation tools; like a box of chocolates, you never know what you're going to get.
So, as long as you have tools and mechanisms that make that non-determinism irrelevant, using LLMs to write code is not a problem at all. In fact, guess what? Hand-coding is also non-deterministic, so we already have plenty of those in place: automated tests, code reviews, etc.
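That gating idea can be sketched in a few lines. This is a hypothetical illustration (the `slugify` variants and `passes_gate` are made-up names): the same deterministic checks accept or reject a candidate implementation regardless of whether it came from a human or an LLM.

```python
# Sketch: deterministic acceptance checks applied to any candidate
# implementation, whatever its origin (hand-written or LLM-generated).

def slugify_v1(title: str) -> str:
    # One candidate implementation (say, hand-written).
    return "-".join(title.lower().split())

def slugify_v2(title: str) -> str:
    # Another candidate (say, LLM-generated) with a subtle bug:
    # it forgets to lowercase the input.
    return "-".join(title.split())

def passes_gate(impl) -> bool:
    """Run the same fixed test cases against any candidate."""
    cases = [("Hello World", "hello-world"), ("  A  B ", "a-b")]
    return all(impl(raw) == want for raw, want in cases)

print(passes_gate(slugify_v1))  # True
print(passes_gate(slugify_v2))  # False: the gate rejects the buggy candidate
```

The point is only that the gate is deterministic even when the code's provenance isn't; in practice this is your existing CI suite and review process.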
I think I’m having the same experience as you. I’ve heard multiple times from execs in my company that “software” will have less value and that, in a few years, there won’t be as many developer jobs.
Don’t get me wrong—I’ve seen productivity gains both in LLMs explaining code/ideation and in actual implementation, and I use them regularly in my workflow now. I quite like it. But these people are itching to eliminate the cost of maintaining a dev team, and it shows in the level of wishful thinking they display. They write a snake game one day using ChatGPT, and the next, they’re telling you that you might be too slow—despite a string of record-breaking quarters driven by successful product iterations.
I really don’t want to be a naysayer here, but it’s pretty demoralizing when these are the same people who decide your compensation and overall employment status.
> If CEOs invest heavily in this, they won't back down because no one wants to be wrong.
They might not have to. If the results are bad enough then their companies might straight-up fail. I'd be willing to bet that at least one company has already failed due to betting too heavily on LLMs.
That isn't to say that LLMs have no uses. But just that CEOs willing something to work isn't sufficient to make it work.
Yes, but don't forget that higher-ups also control, to a large extent, the narrative. Whether laying off developers to replace them with LLMs was good for the company is largely uncorrelated to whether the person in charge of the operation will get promoted for having successfully saved all this money for the company.
Pre-LLM, that's how Boeing destroyed itself. By creating value for the shareholders.
It makes sense—we all know how capitalism works. But the thing is, how can you not apply the law of diminishing returns here? The models are getting marginally better with a significant increase in investment, except for DeepSeek’s latest developments, which are impressive but mostly for cost reasons—not because we’ve achieved anything remotely close to AGI.
If your experienced employees, who are giving an honest try to all these tools, are telling you it’s not a silver bullet, maybe you should hold your horses a little and try to take advantage of reality—which is actually better—rather than forcing some pipe dream down your bottom line’s throat while negating any productivity gains by demotivating them with your bullshit or misdirecting their efforts into finding a problem for a given solution.
I think with LLMs we will actually see demand skyrocket for software developers who understand code and know how to use the tools. There will ultimately be far more money in total going to software developers, but average pay will be well above the median pay.
> I’ve heard multiple times from execs in my company that “software” will have less value and that, in a few years, there won’t be as many developer jobs.
If LLMs make average devs 10x more productive, Jevons paradox[1] suggests we'll just make 10x more software rather than employ 10x fewer devs. You can now implement that feature only one customer cares about, or test 10x more prototypes before building your product. And if you instead decide to decimate your engineering team, watch out, because your competitors might not.
Just another way for the people on top to siphon money from everyone else. No individual contributor is going to be rewarded for any productivity increase beyond what is absolutely required to get them to fulfill the company's goals, and the goalposts will be moving so fast that keeping up will be a full-time job.

As we see from the current job market, the supply of problems the commercial software market needs more coders to solve maybe isn't quite as bountiful as we thought it was, and maybe we won't need to perpetually ramp up the number of developers humanity has... maybe we even have too many already? If a company's top developer can do the work of the 10 developers below them, their boss is going to happily fire the extra developers rather than think of all the incredible other things those developers could do.

A lot of developers assume that the one uber-productive developer left standing will be more valuable to the company than before, but now that developer is competing with 10 people who also know the code base and are willing to work for a lot less. We get paid based on what the market will bear, not the amount of value we deliver, so that newfound profit goes to the top, and the rest goes to reducing the price of the product to stay competitive with every other company doing the exact same thing.
Maybe I’m being overly cynical, but assuming this isn’t a race to the bottom and people will get rich being super productive ai-enhanced code monsters, to me, looks like a conceited white collar version of the hustle porn guys that think if they simultaneously work the right combo of gig apps at the right time of day in the right spots then they can work their way up to being wealthy entrepreneurs. Good luck.
Writing code isn’t the bottleneck, though. What LLMs do is knock out the floor: you used to need a pretty significant baseline knowledge of a programming language to do anything, and now you don’t, because you can just spray and pray prompts with no programming knowledge. This actually works to a point, since most business code is repetitive CRUD. The problem is the implicit expectation that the higher-level system run with a certain uptime and level of quality, and conform to any number of common-sense assumptions that no one but a good programmer was thinking about, until someone uses the system and asks “why does it do this completely wrong thing?” There are huge classes of these problems that an LLM will not be capable of resolving, and if you’ve been blasting ahead with mountains of LLM slop code, even the best human programmers might not be able to save you. In other words, I think LLMs make it easy to paint yourself into a corner if you gut the technical core of your team.
But there’s no hard ceiling above the people on the bottom. It’s not a stratification; it’s a spectrum. The lower-end developers easily replaced by LLMs aren’t going to just give up and become task rabbits: they’re going to update their skills to qualify for the (probably temporarily) less vulnerable jobs above them. They might never be good enough to solve the really hard problems, but they’ll put pressure on those just above them, which will echo up the entire industry. When everyone, regardless of the applicability of LLMs to their workflow, is suddenly facing competition from the developers just below them because of this upward pressure, the market gets a whole lot shittier. Damn near everybody I’ve spoken to thinks they’re the special one who surely can’t be heavily affected by LLMs because their job is uniquely difficult/quality-focused/etc. Even for the smallish percentage of people for whom that’s true, the value of their skill set as a whole is still going to take a huge hit.
What seems far more likely to me is that computer scientists will be doing math research and wrangling LLMs, a vanishingly small number of dedicated software engineers will work on most practical textual coding tasks with engineering methodologies, and low- or no-code tooling with the aid of LLMs will get good enough to make custom software something made mostly by less-technical people with domain knowledge, like spreadsheet scripting.
A lot of people in the LLM booster crowd think LLMs will replace specialists with generalists. I think that’s utterly ridiculous. LLMs easily have the shallow/broad knowledge generalists require, but struggle with the accuracy and trustworthiness for specialized work. They are much more likely to replace the generalists currently supporting people with domain-specific expertise too deep to trust to LLMs. The problem here is that most developers aren’t really specialists. They work across the spectrum of disciplines and domains but know how to use a very complex toolkit. The more accessible those tools are to other people, the more the skill dissolves into the expected professional skill set.
Yeah it seems pretty obvious where this is all going and yet a sizable proportion of the programming population cheers on every recent advancement that makes their skills more and more of a commodity.
This simply doesn't work as an excuse much of the time: virtually all corporate AI tool subscriptions provide per-user stats on how much each staff member is using the AI assist. This shouldn't come as a surprise; software tool purveyors need to demonstrate ROI to their customers' management teams, and as always this comes in the form of reporting tools.
I've already seen several rounds of Slack messages asking "why aren't you using <insert LLM coding assistant name>?" off the back of this reporting.
These assistants essentially spy on you working in many cases, if the subscription is coming from your employer and is not a personal account. For one service, I was able to see full logging of all the chats every employee ever had.
It's not necessarily just monitoring, though. I actively ask that question when I see certain keys not being used, to inquire about their relevance. Basically taking feedback from some engineers and generalizing it. Obviously in my case we're doing it in good faith, assuming people will try to get their work done with whatever tools we give them access to. For example, I see Anthropic keys heavily used in the eng department, but I constantly get requests from business people for OpenAI keys for Zapier connections and the like.
This has been true for every heavily marketed development aid (beneficial or not) for as long as the industry has existed. Managing the politics and the expectations of non-technical management is part of career development.
Yeah, I totally agree, and you're 100% right. But the number of integrations I've personally done, and have instructed my team to do, implies this one will be around for a while. At some point spending too much time on code that could be easily generated will be a negative point on your performance.
I've heard exactly the same stories from friends at larger tech companies as well. At every all-hands there's a push for more AI integration, getting staff to use AI tools, and so on, with the big expectation that development will get faster.
> At some point spending too much time on code that could be easily generated will be a negative point on your performance.
If we take the premise at face value, then this is a time management question, and that’s part of pretty much every performance evaluation everywhere. You’re not rewarded for writing some throwaway internal tooling that’s needed ASAP in assembly or with a handcrafted native UI, even if it’s strictly better once done. Instead you bash it out in a day’s worth of Electron shitfuckery and keep the wheels moving, even if it makes you sick.
Hyperbole aside, hopefully the point is clear: better is a business decision as much as a technical one, and if an LLM can (one day) do the 80% of the Pareto distribution, then you’d better be working on the other 20% when management come knocking. If I run a cafe, I need my baristas making coffee when the orders are stacking up, not polishing the machine.
Caveats for critical code, maintenance, technical debt, etc. of course. Good engineers know when to push back, but also, crucially, when it doesn’t serve a purpose to do so.
I don't think AI is an exception. In organizations where there were top-down mandates for Agile, or OOP, or Test-Driven Development, or you-name-it, those who didn't take up the mandate with zeal were likely to find themselves out of favor.
It's not necessarily top down. I genuinely don't know a single person in my organization who doesn't use LLMs in one way or another. Obviously to different degrees and for different applications, but literally everyone does. And we haven't had a real "EVERYONE MUST USE AI!" mandate, just people suggesting and asking for access to specific models, apps like Cursor, and so on.
(I know because I'm in charge of maintaining all the processes around LLM keys, their usage, Cursor, etc.)
No, right now the only thing the higher-ups ask from me is general percentage usage across the different types of models/software (Anthropic/OpenAI/Cursor, etc.), so we can reassess subscriptions to cut costs wherever needed. But to be fair, they have access to the same dashboards I do, so if they want to, they can look it up.
> executives, directors and managerial staff think the opposite
The entire reason they hire us is to let them know if what they think makes sense. No one is ideologically opposed to AI generated code. It comes with lots of negatives and caveats that make relying on it costly in ways we can easily show to any executives, directors, etc. who care about the technical feasibility of their feelings.
As a former "staff engineer": these executives can go have their careers, and leave the people who want code they can understand and reason about, and who want to focus on quality software, well alone.
When IntelliJ was young, its autocomplete and automated refactoring were massive game changers. It felt like the dawn of a new age. But then, release after release, no new refactorings materialized. I don't know if they hit the Pareto limit or the people responsible moved on to new projects.
I think that's the sort of spot where better tools might be appropriate: I know what I want to do, but it's a mess to do it. I suspect that would be better at facilitating growth than stunting it.
Hmm… I wonder if there will be a category of LLM-assisted refactoring tools that combine mechanistic transformations with the more flexible capabilities of generative AI. E.g.: update the English text in comments automatically to reflect code structure changes.
Little tools for things like pluralizing nouns or converting adjectives to verbs (a function that takes data and arranges it into a response the adjective applies to) would help a lot with rename refactors.
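As a sketch of what such a little tool might look like, here is a deliberately naive pluralizer of the kind a rename refactor could use to turn `get_user` into `get_users` when a return type changes from one item to a list. The names and rules are illustrative only; a real tool would need irregular-noun tables and language awareness.

```python
# Hypothetical sketch: naive English pluralization for rename refactors.

def pluralize(noun: str) -> str:
    """Very naive pluralizer; real tools need exception lists (mouse, datum...)."""
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"           # box -> boxes, match -> matches
    if noun.endswith("y") and noun[-2:-1] not in "aeiou":
        return noun[:-1] + "ies"     # company -> companies
    return noun + "s"                # user -> users

def pluralize_identifier(name: str) -> str:
    # Pluralize only the final snake_case word: get_user -> get_users.
    head, _, last = name.rpartition("_")
    plural = pluralize(last)
    return f"{head}_{plural}" if head else plural

print(pluralize_identifier("get_user"))       # get_users
print(pluralize_identifier("fetch_company"))  # fetch_companies
print(pluralize_identifier("box"))            # boxes
```

Even something this small shows why the feature never ships: the edge cases (irregular nouns, camelCase, abbreviations) dominate the work, which is exactly the gap where an LLM-assisted refactoring layer could plausibly help.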
I've seen the exact opposite. Management at my company has been trying to shove AI into everything. They even said that this year we would be dropping all vendors that didn't have some form of AI in their workflow.