Hacker News | materielle's comments

I think AI bans are more common in projects where the maintainers are nice people that thoughtfully want to consider each PR and provide a reasoned response if rejected.

That’s only feasible when the people who open PRs are acting in good faith, and control both the quality and volume of PRs to something that the maintainers can realistically (and ought to) review in their 2-3 hours of weekly free time.

Linux is a bit different. Your code can be rejected, or not even looked at in the first place, if it’s not a high quality and desired contribution.

Also, it’s not just about PR quality, but also volume. It’s possible for contributions to be a net benefit in isolation. But most open source maintainers only have an hour or so a week to review PRs and need to prioritize aggressively. People who code with AI agents would do well to ask: “does this PR align with the priorities and time availability of the maintainer?”

For instance, I’m sure we could point AI at many open source projects and tell it to optimize performance. And the agent would produce a bunch of high quality PRs that are a good idea in isolation. But what if performance optimization isn’t a good use of time for a given maintainer’s weekly code review quota?

Sure, maintainers can simply close the PR without a reason if they don’t have time.

But I fear we are taking advantage of nice people, who want to give a reasoned response to every contribution, but simply can’t keep up with the volume that agents can produce.


No he didn’t. He built a proof of concept demo in 7 days then handed it off to other maintainers to code for real. I’m not sure why this myth keeps getting repeated. Linus himself clarifies this in every interview about git.

His main contributions were his ideas.

1) The distributed model, which doesn’t require a constant connection to a central server.

2) The core data structures. For instance, how git stores full snapshots of file contents in each commit. Other tools used diff-based approaches, which made rewinding, branch switching, and diffing super slow.

Those two ideas are important and influenced git deeply, but he didn’t code the thing, and definitely not in 7 days!
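For readers unfamiliar with the snapshot model mentioned above, here is a minimal sketch in Go of how git content-addresses a file version. Git stores each version as a whole “blob” object keyed by a hash of its content (SHA-1 in classic git), rather than as a diff against the previous version, so any historical version is a direct lookup:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// gitBlobHash mimics how git derives a blob's object ID: the SHA-1 of
// a "blob <size>\0" header followed by the full file content. Because
// objects are keyed by content, unchanged files are shared between
// snapshots for free.
func gitBlobHash(content []byte) string {
	header := fmt.Sprintf("blob %d\x00", len(content))
	sum := sha1.Sum(append([]byte(header), content...))
	return fmt.Sprintf("%x", sum)
}

func main() {
	// Matches `printf 'hello\n' | git hash-object --stdin`.
	fmt.Println(gitBlobHash([]byte("hello\n")))
}
```

A commit is then just a tree of such hashes plus parent pointers, which is why switching branches or rewinding history never has to replay diffs.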


Those were not his ideas. Before Git, the Linux kernel team was using BitKeeper for DVCS (and other DVCS implementations such as Monotone existed as well). Git was created as a BitKeeper replacement after a fight erupted between Andrew Tridgell (who was accused of trying to reverse engineer BitKeeper in violation of its license) and Larry McVoy (the author of BitKeeper).

https://graphite.com/blog/bitkeeper-linux-story-of-git-creat...

You may find this 10-year-old thread on HN enlightening, too: https://news.ycombinator.com/item?id=11667494


I agree and that’s the point I was trying to make.

Linus’s contribution is a great one. He learned from prior tools and contributions, made a lot of smart technical decisions, got stuff moving with a prototype, then displayed good technical leadership by handing it off to a dedicated development team.

That’s such a good lesson for all of us devs.

So why the urge to lie and pretend he coded it in a week with no help? I know you’re not saying this, but this is the common myth.


He did what needed to be done. Linux similarly has thousands of contributors and Linus's personal "code contribution" is almost negligible these days. But code doesn't matter. Literally anyone can generate thousands of lines of code that will flip bits all day long. What matters is some combination of the following: a vision, respect from peers earned with technical brilliance, audaciousness, tenacity, energy, dedication etc. This is what makes Linus special. Not his ability to bash on a keyboard all day long.

I’m specifically pointing out the false history that Linus god-coded git and handed it to us on the 7th day.

In reality, it was a collaborative effort between multiple smart people who poured months and years of sweat into the thing.

I seem to agree with you. The real story is a good thing and Linus made important contributions!

But he didn’t create git by himself in a week like the parent comments argue.


The point was only that Linus didn't build git in 8 days and alone.

That's just being pedantic for the sake of it.

Git is decades old. Of course, there are tons of contributions after the first 10 days. Everyone knows that.

He started it and built the first working version.


It’s not being pedantic.

The parent comments are arguing that 17 million for git 2.0 is insane because Linus wrote the original in a week.

Except that’s not true. He sketched out a proof of concept in a week. Then handed it off to a team of maintainers who worked on it for the next two decades.

It’s also not pedantic because Linus himself makes this distinction. He doesn’t say he coded Git, and he specifically corrects people in interviews when they say this.


There are all sorts of contracts that are deemed non-enforceable. Our government should pass a law that bans non-disparagement clauses.

One of the most pressing problems of our time is that these large corporations, on balance, have too much power compared to the electorate.


Needless to say even in the USA, dick-move clauses in contracts are not a magic wand that allows anything to be enforceable. Contracts can be challenged through litigation (e.g. as unconscionable) and there are laws (e.g. California state law prohibits non-compete clauses).

How does this show that corporations have too much power? We are literally discussing that this act could easily be stopped by legislation. Doesn’t that imply they have less power than the electorate?

A corporation having the ability to bribe people who need money to pay their rent and healthcare in order to save their own image is indeed "too much power".

> We are literally discussing that this act could easily be stopped by legislation. Doesn’t that imply they have less power than the electorate?

Not when they have full-time people dedicated to lobbying against such legislation. That's why things move so slowly, or stall entirely, when it comes to actually voting on such policy.


The problem is funding.

There seems to be a pervasive belief that the Python tooling and interpreter suck and are slow because the maintainers don’t care, or aren’t capable.

The actual problem is that there isn’t enough money to develop all of these systems properly.

Google says that Astral had 15 team members. Of course, it’s so hard to make these projections. But it wouldn’t shock me if uv and ruff are each individually multi-million dollar pieces of software.

If you’d like to invest a million dollars to improve pip, or work for free for 3 years to do it yourself, I’m not sure if anyone would object.


As a Go developer, this is right on the mark. I’ve been hearing good things about Java lately, so I decided to check out the language for the first time since 2012 or something. And I was impressed!

The language maintainers have added so many great features while maintaining backwards compatibility. And slowly but surely improved the JVM and garbage collection. So after toying around for a bit, I decided to write some personal projects in Java.

After a week, I gave up and returned to Go. The build tooling is still an over-engineered mess. Third-party library APIs are still horrible. I will never invest even 5 minutes in learning that horrible Spring framework when stuff like Django, Rails, or the Go ecosystem exist.

The community, and thus the online forums and open source libraries, still approach engineering and aesthetics in a way that is completely foreign and off putting to me.


> I will never invest even 5 minutes in learning that horrible Spring framework

well there's your problem - why are you using spring for a personal project, when there's so many other simpler, lighter weight frameworks around?


> Spring framework when stuff like Django, Rails, or the Go ecosystem exist.

Django and Rails have a very similar model to Spring, so frankly this is just an "I was too lazy to learn a new tool, so it must suck" kind of take. Is there a learning curve? Sure. Is it worth it? I would say for typical CRUD stuff, yeah. No matter how trivial your app is, you will likely end up needing something that is a single annotation away. But you may want to try Quarkus over Spring.


Yeah, don't use Spring. If I'm doing DI, I want compile time DI, so Micronaut or Quarkus.


What’s actually going to happen is the second they start to lose market share or struggle at all, they will cancel everything Chromebook related and give up.

With that said, I think Chromebooks still hold a competitive advantage for public school contracts. It doesn’t matter that the Neo is pretty cheap and the best value. Contracts are signed based on what’s cheapest, period.

Also, a big blind spot for a lot of HN: this is going to be big in developing markets. This is within budget for middle class Latin Americans in a way that even the Air isn’t.


You are probably right. Just saw the news about Google Fiber being sold. What a shame!


I would really urge everyone to actually engage in the arguments people are making.

Go’s core design philosophy is stability. This means backwards compatibility forever. But really, even more than that. The community is largely against “v2” libraries. After the first version is introduced, Go devs trend towards stability, live with its flaws, and are super hesitant to fix things with a “v2”.

There have been exceptions. After 20 years of the frankly horrible json library, a v2 one is in the works.

Most of the uuid concerns stem from exactly this caution: after the API is added to the standard library, it will be the canonical API forever.

There are surely pros and cons to this design philosophy. I just don’t understand why people who disagree with Go’s core goals don’t just use a different language? Sorry to take a jab here, but are we really short on programming languages that introduce the wrong v1 api, so then the language ends up with codebases that depend on v1, v2, and v3? (Looking at you Java, Python, and C#)
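To make the “canonical API forever” constraint concrete: for third-party Go modules, a breaking change requires a new major version with a distinct import path (semantic import versioning), so a break is loud and opt-in. A hypothetical sketch (the module path `example.com/uuidlib` is made up for illustration):

```go
// go.mod of the library's breaking release:
//     module example.com/uuidlib/v2
//
// A consumer can hold both majors side by side during a migration,
// because the major version is part of the import path:
import (
	uuidv1 "example.com/uuidlib"    // old API, keeps compiling forever
	uuidv2 "example.com/uuidlib/v2" // new API, adopted import by import
)
```

The standard library has historically avoided such version suffixes, which is precisely why getting a stdlib API right the first time matters so much.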


That’s just not true though. Sure, English doesn’t have tones, but there are other tricky parts of the language. Additionally, Russian is another “difficult” language, yet all the satellite nations had no problem picking it up.

The real reason people learn English isn’t because it’s easy. It’s because they need to. As someone who is married to an immigrant, it’s not easy for them. They’ve just worked really hard over decades.

Americans will do fine learning Chinese if it ever becomes an economic necessity.


It's not easy to become highly proficient in english but it's quite easy to speak just barely well enough to communicate effectively in a professional context. Importantly, the written form follows naturally from the spoken. You won't get all the edge cases right (that's incredibly difficult even for native speakers) but getting in the ballpark can be done purely phonetically with a fairly small set of rules. Combine with modern spellcheck and I expect it's pretty difficult to beat for ease of practical use.

I think at least a few of the latin based languages are in the same ballpark but for inane historical reasons it's english that won out.

Compare with chinese where even if you sweep tones under the rug you've got a bunch of idioms (difficult) followed by one of the most difficult writing systems in existence. Don't get me wrong, I think the writing system is quite elegant and has a truly impressive history, but neither of those things has anything to do with ease of mastery.

A tangential thought is that if you intentionally set out to come up with a rule following yet maximally difficult language I think a reasonable approach would be to fuse the equivalent of latin grammar with chinese tones and then fuse a chinese style writing system with arabic style contextually sensitive ligatures.


Pinyin converts reading into a vocabulary exercise. China might decide to Pinyin all the things.


> Russian is another “difficult” language, but all the satellite nations had no problem picking it up.

Russian is not more difficult than English and a lot of the satellite states were speaking other Slavic languages. If you already speak Spanish, it's less difficult to pick up Italian too.


There’s also the fact that a huge portion of foreign immigrants to the US don’t and won’t learn English, but can still operate just fine (or even have the system cater to them - press 1 for Spanish).

Look at the uproar over requiring commercial drivers to be able to read road signs in English.


The US also did annex large parts of what used to be Mexico in the 19th century, so you don't even technically have so be an immigrant to speak Spanish


Unless you're 126 years old, that excuse doesn't really hold up. Plenty of immigrants came from Italy, Poland, and Russia more recently than your mentioned time, but you don't hear Press 3 for Italian too often.


Well... they weren't immigrants, they were annexed. Why should they speak English?


They didn't have to. But they also shouldn't expect the annexing government or populace to accommodate them.

Their country lost the war, lost the territory, and those that stayed and chose to take American citizenship should've learned English, the (de facto) language of the country they chose to join.


People still speak German in South Tyrol even though it's part of Italy since 1919.


Along Interstate 5 in 1980s-90s Southern California, there were large signs, black-on-white, which showed a pictogram of a family running.

The English text above read "WATCH FOR PEOPLE CROSSING ROAD"

The Spanish text below read "PROHIBIDO"


It’s sort of surprising how naive developers still are given the countless rug pulls over the past decade or two.

You’re right on the money: the important thing to look at are the incentive structures.

Basically all tech companies from the post-great financial crisis expansion (Google, post Balmer Microsoft, Twitter, Instagram, Airbnb, Uber, etc) started off user-friendly but all eventually converged towards their investment incentive structure.

One big exception is Wikipedia. Not surprising since it has a completely different funding model!

I’m sure Anthropic is super user friendly now, while they are focused on expansion and founding devs still have concentrated political sway. It will eventually converge on its incentive structures to extract profit for shareholders like all other companies.


I really think corporations are overplaying their hand if they think they can transform society once again in the next 10 years.

Rapid deindustrialization followed by the internet and social media almost broke our society.

Also, I don’t think people necessarily realize how close we were to the cliff in 2007.

I think another transformation now would rip society apart rather than take us to the great beyond.


I worry that if the reality lives up to investors’ dreams, it will be massively disruptive for society, which will lead us down dark paths. On the other hand, if it _doesn’t_ live up to their dreams, then so much has been invested in that dream financially that it will lead to massive societal disruption when the public is left holding the bag, which will also lead us down dark paths.


It's already made it impossible to trust half of the content I read online.

Whenever I use search terms to ask a specific question these days, there's usually a page of slop dedicated to the answer which appears top for relevancy.

Once I realize it is slop, I realize the relevant information could be hallucinated, so I can't trust it.

At the same time, I'm seeing a huge upswing in probable human-created content being accused of being slop.

We're seeing a tragedy of the information commons play out on an enormous scale at hyperspeed.


You trust nearly half??!!??


I think corporations can definitely transform society in the near future. I don't think it will be a positive transformation, but it will be a transformation.

Most of all, AI will exacerbate the lack of trust in people and institutions that was kicked into high gear by the internet. It will be easy and cheap to convince large numbers of people about almost anything.


As a young adult in 2007, what cliff were we close to?

The GFC was a big recession, but I never thought society was near collapse.


We were pretty close to a collapse of the existing financial system. Maybe we’d be better off now if it happened, but the interim devastation would have been costly.


It felt like the entire global financial system had a chance of collapsing.


We weren't that far away from ATMs refusing to hand out cash, banks limiting withdrawals from accounts (if your bank hadn't already gone under), and a subsequent complete collapse of the financial system. The only thing that saved us from that was an extraordinary intervention by governments, something I am not sure they would be capable of doing today.


I'm still not buying that AI will change society anywhere near as much as the internet, or smartphones for that matter.

The internet made it so that you can share and access information in a few minutes, if not seconds.

Smartphones built on the internet by making this sharing and access of information possible from anywhere, by anyone.

AI seems to occupy the same space as Google in the broader internet ecosystem. I don't know what AI provides me that a few hours of Google searches wouldn't. It makes information retrieval faster, but that was never the hard part. The hard part was understanding the information, so that you're able to apply it to your particular situation.

Being able to write to-do apps X1000 faster is not innovation!


You are assuming that the change can only happen in the west.

The rest of the world has mostly been experiencing industrialisation, and was only indirectly affected by the great crash.

If there is a transformation in the rest of the world the west cannot escape it.

A lot of people in the west seem to have their heads in the sand, very much like when Japan and China tried to ignore the west.

China is the world's second biggest economy by nominal GDP, India the fourth. We have a globalised economy where everything is interlinked.


When I look at my own country it has proven to be open to change. There are people alive today who remember when Christianity set the social norms; now we swear in a gay prime minister.

In that sense Western countries have proven that they are intellectually very nimble.


Three of the best known Christians I have known in my life are gay. Two are priests (one Anglican, one Catholic). Obviously the Catholic priest had taken a vow of celibacy anyway, so it's entirely immaterial. I did read an interview with a celeb friend of his (also now a priest!) which said that he (the priest I knew) thought people did not know he was gay. We all knew; we just did not make a fuss about it.

Even if you accept the idea that gay sex is a sin, the entire basis of Christianity is that we are all sinners. Possessing wealth is a failure to follow Jesus's commands for instance. You should be complaining a lot more if the prime minister is rich. Adultery is clearly a more serious sin than having the wrong sort of sex, and I bet your country has had adulterous prime ministers (the UK certainly has had many!).

I think Christians who are obsessed with homosexuality as somehow making people worse than the rest of us, are both failing to understand Christ's message, and saying more about themselves than gays.

If you look at when sodomy laws were abolished, countries with a Christian heritage led the way. There are reasons for this in the Christian ethos of choice and redemption.


> people alive today who remember Christianity now we swear in a gay prime minister

Why would that be a contradiction? Gay people can't be Christian?

