I tried that too, I called it "agents". (This was long before AI-mania.) An agent was an object that handled some aspect of behavior (like gravity and collision physics) "on behalf of" some entity, hence the name. The word I was actually searching for was probably "delegate", but I was a stupid 20-something.
ECS is to me still conceptually cleaner and easier to work with, if more tedious and boilerplate-y.
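(For concreteness, a minimal sketch of that "agent"/delegate idea as I remember it; all the names here (Agent, GravityAgent, Entity) are made up for illustration, not anyone's actual engine code:

    #include <memory>
    #include <vector>

    struct Entity;

    // Each agent handles one aspect of behavior "on behalf of" an entity.
    struct Agent {
        virtual ~Agent() = default;
        virtual void update(Entity& e, float dt) = 0;
    };

    struct Entity {
        float y = 100.0f, vy = 0.0f;
        std::vector<std::unique_ptr<Agent>> agents;
        void update(float dt) { for (auto& a : agents) a->update(*this, dt); }
    };

    // The gravity-and-physics aspect, delegated to its own agent.
    struct GravityAgent : Agent {
        void update(Entity& e, float dt) override {
            e.vy -= 9.81f * dt;
            e.y  += e.vy * dt;
        }
    };

    int main() {
        Entity player;
        player.agents.push_back(std::make_unique<GravityAgent>());
        for (int i = 0; i < 60; ++i) player.update(1.0f / 60.0f);
        return 0;
    }

ECS flips this inside out: the data lives in flat component arrays and the systems iterate over those, instead of each entity owning a bag of behavior objects.)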
The other day I was working on some GLSL signed distance field functions for shaders. I asked Claude to review the code and it immediately offered to replace some functions with "known solutions". Turns out those functions were basically a verbatim copy of Inigo Quilez's work.
His work is available under a permissive license on the Internet, but somehow it doesn't seem right that a tool will just regurgitate someone else's work without any mention of copyright or license or original authorship.
In the pre-LLM world one would at least have had to search for this information, find the site, understand the license and acknowledge who the author is. Post-LLM, the tool will just blatantly plagiarize someone else's work, which you can then sign off on as your own. Disgusting.
> Turns out those functions were basically a verbatim copy of Inigo Quilez's work.
Are they? A lot of these were used by people >20 years before Inigo wrote his blog posts. I wrote RenderMan shaders for VFX in the 90's professionally; you think about the problem, you "discover" (?) the math.
So they were known because they were known (a lot of them are also trivial).
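(For a sense of how trivial, here are the textbook 2D circle and box distance functions, written in C++ rather than GLSL so the snippet stands alone; these are the standard formulas, not quoted from any one source:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec2 { float x, y; };

    // Signed distance from p to a circle of radius r at the origin:
    // negative inside, zero on the boundary, positive outside.
    float sdCircle(Vec2 p, float r) {
        return std::sqrt(p.x * p.x + p.y * p.y) - r;
    }

    // Signed distance from p to an axis-aligned box with half-extents b.
    float sdBox(Vec2 p, Vec2 b) {
        float dx = std::fabs(p.x) - b.x;
        float dy = std::fabs(p.y) - b.y;
        float ox = std::max(dx, 0.0f), oy = std::max(dy, 0.0f);
        return std::sqrt(ox * ox + oy * oy) + std::min(std::max(dx, dy), 0.0f);
    }

    int main() {
        std::printf("%f\n", sdCircle({3.0f, 4.0f}, 5.0f));      // 0.0: on the circle
        std::printf("%f\n", sdBox({2.0f, 0.0f}, {1.0f, 1.0f})); // 1.0: one unit outside
        return 0;
    }

The circle one is literally "distance to center minus radius"; anyone thinking about the problem lands on it.)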
Inigo's main credit is for cataloging them, especially the 3D ones, and making this knowledge available in one place, excellently presented.
And of course there's Shadertoy and its community, giving this knowledge a stage to play out on. I would say no one deserves more credit for getting people hooked on shader writing and proceduralism in rendering than this man.
But I would not feel bad about the math being regurgitated by an LLM.
There were very few people writing shaders (mostly for VFX, in RenderMan SL) in the 90's and after.
So apart from the "Texturing and Modeling: A Procedural Approach" book, "The RenderMan Companion" and "Advanced RenderMan", there was no literature. The GPU Gems series closed some gaps in later years.
The RenderMan Repository website was what had shader source, and all pattern stuff was implicit (what we call 2D SDFs today) because of the REYES architecture of the renderers.
But knowledge about using SDFs in shaders mostly lived in people's heads. Whoever would write about it online would thus get quoted by an LLM.
Yeah, I find this super rude - in this example, the author distributed the code under a very permissive license, basically just wanting you to cite him as the author.
BAM, the LLM just strips all that out, basically pretending it conjured an elegant solution out of thin air.
No wonder some people started calling the current generation of "AI" plagiarism machines - it really seems more fitting by the day.
The LLM has already told you these are "known solutions", which implicitly means they are established, non-original approaches. So the key point is really on the user's side: if you simply ask one more question, like where these "known solutions" come from, the LLM will likely tell you that these formulas are attributed to Inigo Quilez.
So in my view, if you treat an LLM as a tool for retrieving knowledge or solutions, there isn't really a problem here. And honestly, the line between "knowledge" and "creation" can be quite blurry. For example, when you use Newton's Second Law (F = ma), you don't explicitly state that it comes from Isaac Newton every time—but that doesn't mean you're not respecting his contribution.
> In the pre-LLM world one would at least have had to search for this information, find the site, understand the license and acknowledge who the author is. Post-LLM, the tool will just blatantly plagiarize someone else's work, which you can then sign off on as your own
These don't contradict each other though; you could "blatantly plagiarize someone else's work" before as well. LLMs just add another layer in between.
Copyright violation would happen before LLMs yes, but it would have to be done by a person who either didn’t understand copyright (which is not a valid defence in court), or intentionally chose to ignore it.
With LLMs, future generations are growing up being handed code that may or may not be a verbatim copy of something that someone else originally wrote under specific licensing terms, but with no mention of any license or origin being provided by the LLM.
It remains to be seen if there will be any lawsuits in the future specifically about source code that is substantially copied from someone else indirectly via LLM use. In any case I doubt that even if such lawsuits happen they will help small developers writing open source. It would probably be one of the big tech companies suing other companies or persons and any money resulting from such a lawsuit would go to the big tech company suing.
An assertion can be arbitrarily expensive to evaluate. This may be worth the cost in a debug build but not in a release build. If all of your assertions are cheap, they likely are not checking nearly as much as they could or should.
Possibly, but I've never seen a case in practice where evaluating some assert would be the first thing to optimize. Anyway, should that happen, consider removing just that assert.
That being said, being slow or fast is kind of a moot point if the program is not correct. So my advice is to always leave all asserts in. Offensive programming.
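(A minimal sketch of what that looks like in C++: a hand-rolled assert that is not compiled out by NDEBUG, so it fires in release builds too. ALWAYS_ASSERT is a made-up name for illustration:

    #include <cstdio>
    #include <cstdlib>

    // Unlike the standard assert(), this is NOT disabled by NDEBUG,
    // so it stays active in release builds.
    #define ALWAYS_ASSERT(cond)                                        \
        do {                                                           \
            if (!(cond)) {                                             \
                std::fprintf(stderr, "assertion failed: %s (%s:%d)\n", \
                             #cond, __FILE__, __LINE__);               \
                std::abort(); /* fail loudly instead of limping on */  \
            }                                                          \
        } while (0)

    int divide(int a, int b) {
        ALWAYS_ASSERT(b != 0); // cheap check, kept in every build
        return a / b;
    }

    int main() {
        std::printf("%d\n", divide(10, 2));
        return 0;
    }

If one particular check ever shows up in a profile, demote just that one, not the whole mechanism.)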
Good luck to you. Having worked in this space for around 10 years I can say it's nearly impossible to arouse anyone's interest since the market is so totally saturated.
For a new engine to take off, it needs to do something nobody else is doing, so that it's got that elusive USP.
Getting visibility, SEO hacking etc. is more important than the product itself.
To me this kind of "no need to change anything" implies stability, but there's a younger cohort of developers who are used to everything changing every week and who think that something older than a week is "unmaintained" and thus buggy and broken.
One of the earliest security issues that I remember hitting Windows was that if you had a server running IIS, anyone could run an arbitrary command by putting a properly encoded string in the browser's address bar, causing IIS to shell out to cmd.
I mentioned in another reply the 12 different ways that you had to define a string depending on which API you had to call.
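(For the flavor of it, a Windows-only sketch; this is just a sample of the string types in play, depending on which API era you were calling into, not the full list of 12:

    #include <string>
    #include <windows.h>
    #pragma comment(lib, "OleAut32.lib") // for SysAllocString (MSVC)

    void string_zoo() {
        const char*    ansi = "hello";        // classic C / "A" APIs
        const wchar_t* wide = L"hello";       // "W" APIs, UTF-16
        LPCTSTR        tch  = TEXT("hello");  // TCHAR: flips with the UNICODE macro
        std::string    cpp  = "hello";        // plain C++ in the same program
        BSTR           bstr = SysAllocString(L"hello"); // COM / OLE Automation
        SysFreeString(bstr);
        (void)ansi; (void)wide; (void)tch; (void)cpp;
    }

    int main() { string_zoo(); return 0; }

Every boundary between those conventions is a place for a length or encoding bug to hide.)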
Can you imagine all of the vulnerabilities in Windows caused by the layers and layers of sediment built up over 30 years?
It would be as if modern ARM Macs had emulators for 68K, PPC, 32-bit x86 and 64-bit x86 apps (the last of which they actually do) and had 64-bit Carbon libraries (just to keep Adobe happy).
I think it's at least as much a working-environment preference.
Once I became experienced enough to have opinions about things like my editor and terminal emulator... suddenly the Visual Studio environment wasn't nearly as appealing. The Unix philosophy of things being just text that you can edit in the editor you're already using made much more sense to me than digging through nested submenus to change configuration.
I certainly respect the unmatched Win32 backwards/forwards compatibility story. But as a developer in my younger years, particularly pre-WSL, jumping into Ruby/Rails development got me more modern tools that were less coupled to my OS or language choice, more money, and a company culture more relevant to me in my 20s than the Windows development ecosystem could, despite the things it does really well.
Or to put it differently: it wasn't the stability of the API that made Windows development seem boring. It was the kind of companies that did it, the surrounding ecosystem of tools they did it with, and the way they paid for doing it. (But even when I was actually writing code full time, some corners of the JS ecosystem seemed to lean too hard into the wild-west mentality. Still do, I suspect, just now it's TypeScript in support of AI.)
Seems to me that really the simplest solution to the author's problem is to write C++ safely. I mean... this is a trivial utility app. If you can't get that right in modern C++ you should probably not even pretend to be a C++ programmer.
C++ is hard to get safe in complex systems with hard performance requirements.
If the system is simple and you don't give a shit about performance, it's very very easy to make C++ safe. Just use shared_ptr everywhere. Or, throw everything in a vector and don't destroy it until the end of the program. Whatever, who cares.
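(A minimal sketch of that approach, with a made-up Node type; lifetimes take care of themselves because everything is reference-counted:

    #include <memory>
    #include <string>
    #include <vector>

    struct Node {
        std::string name;
        std::vector<std::shared_ptr<Node>> children; // shared ownership everywhere
    };

    int main() {
        auto root  = std::make_shared<Node>();
        auto child = std::make_shared<Node>();
        child->name = "child";
        root->children.push_back(child); // both stay alive while anyone refers to them
        // No manual delete anywhere: memory is reclaimed when the last
        // shared_ptr goes out of scope at the end of main().
        return 0;
    }

Usual caveat: reference cycles still leak, which is what weak_ptr is for.)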
No seriously, why would you need a graphics engine for procedurally generating content? In this particular case, for example, his "content" is the world map expressed in some units (a tile grid) across two axes. Then your generation algorithm produces that 2D data and that's that.
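(A minimal sketch of that, with a placeholder height function standing in for real noise; the generator's entire output is a flat array of tiles, and rendering it is a separate concern:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    enum Tile { Water, Grass, Mountain };

    int main() {
        const int w = 16, h = 8;
        std::vector<Tile> map(w * h);

        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                // Placeholder "noise"; any height function works here.
                double n = std::sin(x * 0.7) * std::cos(y * 0.5);
                map[y * w + x] = n < -0.3 ? Water : (n < 0.5 ? Grass : Mountain);
            }

        // Dump the 2D data as ASCII just to show it exists without any engine.
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x)
                std::putchar("~.^"[map[y * w + x]]);
            std::putchar('\n');
        }
        return 0;
    }

Swap the sin/cos for Perlin noise or whatever and you have a generator, no engine in sight.)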
Now that everyone's running faster than ever, trying to outrun the competition by slapping on more code than ever, you can only brace for the results.
I expect these tools will quickly let people add several orders of magnitude more complexity and lines of code to any software project.
Then your 100 kloc JS Electron app will become a 10 Mloc JS Electron app running on a 500 Mloc browser runtime.
Repeat this across the stack for every software component and application and library. If you think things are bloated now, just wait a few years and your notepad will be a 1M-line behemoth with the runtime performance of a glacier.
But how do you make the case for thoughtful less bloated software to people who just value writing less code themselves, even if the output produces more lines of code? Seems to me like people don’t care about LOC, they care about how much effort they have to spend writing the lines.
Until at some point, in a language like Python, all the things that allowed you to write software faster start to slow you down: the lack of static typing, type errors, spending time figuring out whether the foo method works with ducks or quacks or foovars, or whether the latest refactoring silently broke it because now you need bazzes instead of ducks. Yeah.
* We're going to make sure we double down on our dark patterns slamming our obnoxious account requirements in your face every chance we get. We'll also make sure to "accidentally forget" any "unfavourable" setting you might have turned to your liking just to make sure you get the best experience we want.
* We're going to keep shoving AI and copilot in your face in every corner of the system whether you want it or not. It's what we want after all. Please subscribe to copilot now or 3 days later.
* We're going to continue vibe coding core system components and interface elements in JavaScript to minimize our developer costs. Just get over it already.
Traditionally called something like entity and attachments.