APIs need to stay stable even as the systems behind them keep changing. Each version becomes a contract that cannot move, while data models and requirements evolve around it.
Keeping everything working without slowing development is harder than it seems, and common approaches often break down once incremental changes start to accumulate.
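The pattern described above can be sketched as per-version serializers that keep an old contract stable while the internal model evolves. This is a minimal illustration, not any particular framework's API; all names (`User`, `serialize_v1`, `render`) are hypothetical.

```python
# Sketch: the internal model changed after v1 shipped ("name" was split into
# two fields), but v1 clients must keep seeing the original shape.
from dataclasses import dataclass


@dataclass
class User:
    # Current internal model; hypothetical fields for illustration.
    first_name: str
    last_name: str
    email: str


def serialize_v1(user: User) -> dict:
    # v1's contract promised a single "name" field; the adapter preserves it
    # even though the model no longer stores the data that way.
    return {"name": f"{user.first_name} {user.last_name}", "email": user.email}


def serialize_v2(user: User) -> dict:
    # v2 exposes the evolved model directly.
    return {
        "first_name": user.first_name,
        "last_name": user.last_name,
        "email": user.email,
    }


SERIALIZERS = {"v1": serialize_v1, "v2": serialize_v2}


def render(version: str, user: User) -> dict:
    # Each version is a frozen contract; adding v3 later means adding a new
    # serializer, never editing v1 or v2.
    return SERIALIZERS[version](user)
```

The cost of this approach is the accumulation the intro warns about: every incremental model change must be reconciled against every frozen serializer, which is where "keeping everything working" gets harder than it seems.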
I'm at that stage in my career where I don't code much anymore, so I don't really share the same anxieties younger developers have about Gen AI. That might partly explain my more optimistic view.
I would also argue that it's quite hard to determine from a Wikipedia article whether a person was a good programmer at the time they created something about 40 years ago.
If my memory serves me correctly, Leslie Lamport [1] created TeX because he wanted to write a book on math but there were no good systems for writing math, so he made TeX. So to me, it sounds like he was a math teacher at that time; I have no idea whether he actually knew programming when he started working on TeX.
Your memory doesn't serve you correctly! Most parts of your post are factually incorrect...
Lamport didn't make TeX. It was Don Knuth. Lamport wrote the LaTeX macros for TeX to simplify typesetting books and articles.
> I would also argue that it's quite hard to determine from a Wikipedia article if a person were a good programmer at the time of creating a certain thing about 40 years ago.
Ironically, the informative and relevant article you linked does answer this:
Leslie Lamport was a computer scientist from 1970 to 1985. He released LaTeX in 1984, so he had been a full-time computer scientist for some 14 years before LaTeX. This (plus, of course, his subsequent career, including winning the Turing Award) suggests he was a "good programmer" for most common usages of "good" and "programmer".
Lamport was only a math teacher (at Marlboro College) from 1965 to 1969, 15 years before LaTeX. He was a computer scientist for his entire post-PhD career.
That's not a useful metric. The questions that today show up as locked for historical significance were simply closed before the historical significance lock was introduced.
Conversely, the lock was introduced to stop those highly interesting (but ultimately useless) questions from getting deleted. If anything, it's a step in being less strict.
Lastly, I love how you say that the graphs don't include deletions, yet one of them (supposedly) shows deletions and undeletions. You kinda screwed up the queries there; the graphs don't make sense ;)
The deletions you see there are items that were deleted and then undeleted. Anything that was still deleted as of the data dump those graphs were built on is not displayed. I didn't figure that out until after I posted the graph, and I never bothered to go back and remove the delete/reopen data.
The important point I'm trying to make is this: take a look at the asked and closed lines on the graph. When a community has to close around 40% of its questions, I think something is wrong.
From June 3, 2010 to September 29, 2010, Programmers was a Q&A site for anything and everything that didn't fit Stack Overflow.
From September 29, 2010 onwards, Programmers has been a Q&A site for professional programmers who are interested in getting expert answers on conceptual questions about software development.
Why people still complain about the change in scope more than two years after it happened is beyond me. In retrospect, it might have been a better idea to close the original site and create a new one from scratch, but that's all in the past now.
NPR, the original version of Programmers, was an experiment, and, well, it failed. Pierre and a few other equally exemplary contributors from the good old days decided to move on, and that's that.
Unfortunate, but we can't do anything about it now, especially since the site has seen steady growth and an overall increase in quality since the new scope was solidified. The majority of the community seems quite happy with the current direction of the site; no one has called me an evil nazi mod on our Meta site in months ;P
The goal was simple: help humans navigate complexity. Make domain experts and developers speak the same language.
We had no idea those same boundaries would matter for something else entirely.