If a binary were as expensive to generate as some AI projects, I suspect we would commit it. We don't commit binaries because a binary is cheap to produce locally.
Maybe we get to the point where a conversation can reasonably be used to reproduce software, but that is way too costly at the moment.
Even if that were the case, it shouldn't be in Git. It's a build artifact.
You don't have to use CI to generate it on every push, of course. You could just build it manually and push the artifact to S3 or something.
I feel like the semantics of Git and GitHub are about demonstrating changes from human effort, and pushing millions of lines of AI-generated code breaks that.
> Students’ reliance on AI “has paradoxically raised the floor of class discussion to a generally better level in courses with difficult concepts, but has also tended to preclude stranger, more eccentric and original thoughts.”
The challenge is that this is overwhelmingly the reason people go to college in the first place. They want to raise the economic floor of their lives and have greater certainty in the outcome. Precluding strange and eccentric outcomes is certainly acceptable or even desirable with respect to that goal.
> I have no reason to expect this technology can succeed at the same level in law, medicine, or any other highly human, highly subjective occupation.
I mean, if anything, I would expect it to help bring structure to medicine, which is an often sloppy profession killing somewhere between tens of thousands and hundreds of thousands of people a year through mistakes and out of date practices.
Medicine is currently very subjective. As a scientific field grounded in the physical sciences, it shouldn't be.
I was just talking to some friends in medicine the other day. They are getting more and more AI stuff and they love it.
Just basic stuff like smart dictation that listens to the conversation the practitioner is having and auto-creates the medical notes, letters, prescriptions and so on, saving them the time and effort of typing it all up themselves. They were saying that they obviously have to check everything, but it was (and I quote) "scarily perfectly accurate". It frees up a chunk of their time to actually be with the patient rather than spend it typing.
It's way beyond dictation. Medics I know (fresh postgraduates who used LLMs to help write their R code for statistical analysis for their research) are starting to treat it as one of their peers for domain reasoning, e.g. for discussing whether the conditions for a heart transplant are met. They're indeed in the "wow, this thing is human-like" stage, just not in the "let's delegate to the super brain, and then rubber-stamp the result at the end if it looks good" one we seem to be in... perhaps yet.
This is the crazy part with LLMs. It knows much more than you, as a single user, will ever realize, since it only shows you the part that matches what you put in.
I was building a tool to do exploratory data analysis. The data is manufacturing stuff (data from tens of factories, ranging from low-level sensor data and human enrichments all the way up to pre-aggregated OEE performance KPIs). I didn't even have to give it any documentation on how the factories work: it just knew from the data what it was dealing with, and it is very accurate to the extent I can evaluate. People who actually know the domain are raving about it.
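For readers unfamiliar with the KPI mentioned above: OEE (Overall Equipment Effectiveness) is conventionally the product of three factors, availability, performance, and quality. A minimal sketch (the function name and example numbers are mine, not from the tool described):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the standard product of
    availability, performance, and quality, each as a fraction in [0, 1]."""
    return availability * performance * quality

# Example: 90% availability, 95% performance, 98% quality.
print(round(oee(0.90, 0.95, 0.98), 4))  # → 0.8379
```

Pre-aggregating this per machine or per shift is what turns raw sensor data into the kind of KPI table an analysis tool can reason about.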
Even if you don't want to do yolo mode, there are things like Copilot Autopilot, or you can make the permissions for Claude so wide that it can work for an hour, and then come back to the artifact after lunch.
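For the Claude case, a sketch of what "wide permissions" might look like in a project's `.claude/settings.json`, assuming Claude Code's permissions format (the specific allow rules here are illustrative, not a recommendation):

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Edit",
      "Bash(npm run build)",
      "Bash(npm test:*)"
    ]
  }
}
```

The broader the allow list, the longer the agent can run without stopping to ask, which is exactly the trade-off being described: less interruption, more to review afterward.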
I think the bigger issue is that the percentage of internet writing that can be classified as "business writing" is growing significantly, now that the effort required to produce it is effectively zero.
Overall, it feels like no matter where you go on the internet, it's impossible to dodge content that exists primarily for the purpose of extracting money from the reader in some way. SEO spam blogs, AI startups shilling their latest product, AI generated stories posted to reddit that casually slip in a mention of how the supposed author has recently won money on a gambling website. It's all the same thing, really.
I've always wondered how many people know about this. As someone who had to get by on Chromebooks for a while (before Linux support), it was a godsend for quick fixes.