Hacker News | sharkjacobs's comments

My phone is littered with apps like these, which seem well designed to address a very specific problem which I don't have very often. The problem is remembering the app's there 3 or 6 or 9 or 18 months later when it would actually be useful to me.

I always think that the goal of apps like these is to build a userbase and then get acquired, like Dark Sky or Waze: the big providers realise they have missed a trick and then it becomes the default.

"Needed tens of dollars of rental compute" isn't much of a moat to get acquired instead of copied.

I loathe Waze. Its ideas of shortcuts are terrible, its search routinely suggests things hundreds of miles away ahead of the match that's nearby, and sometimes I go through an area of intermittent service where it just decides to stop routing without giving a heads-up.

But Waze is so much better at accurately alerting me to police than my Valentine 1 was that I never even bothered mounting it in my latest car. Google has supposedly integrated that data for years now, but every time I try it, it comes up short. Google and Apple Maps are better in every other way, but for me at least, that one feature of Waze is a massive moat.


Why is the police alerting feature so important to you? I find Google Maps is almost as good anyway.

His car burns cannabis while blasting "Fuck Tha Police" at 10,000W.

Nonono, despite my outward white appearance, I follow the sage wisdom of Chris Rock[0]. It's so I have enough advance notice to blast "Fuck the Fire Department" so we can share a nice laugh at the historic rivalry between LE / FD.

[0] How Not To Get Your Ass Kicked by the Police: https://www.youtube.com/watch?v=uj0mtxXEGE8


Would you be assuaged if it was titled "Ad-tech is police state tech"?

I think that would definitely make it a more precise polemic, but the incorrect use of the word seems more of a symptom of the author's sloppiness than anything else.

You hate polemic, I dislike milquetoast.

I think that I, like most people, enjoy the ones I agree with. That said, I’m generally skeptical of all polemics, especially the ones I agree with.

Word use is important. We have allowed thumos (and epithumia) to rule over nous.

It has become acceptable to misuse words, like "fascist" or "communist" in political contexts, to the detriment of rational and fruitful discourse. Often a false equivalence is drawn between denying something is "fascist" or "communist" and denying something is bad. This is false. Something can be bad without being fascist or communist.

There is plenty to be critical about in American politics and in tech, but calling everything you don't like "fascist" or "communist" isn't helpful. These seem to be go-to words used by those "defending" what is now a crumbling postwar liberal democratic order, i.e., anything that seems at odds with this order is reflexively called one of these two terms, depending on which faction of the American uniparty you align with.


Word use is important.

Please explain how the Trumpist movement significantly differs from most of the points of Umberto Eco's Ur-Fascism. Because in my estimation, the word is entirely appropriate for what we're facing, and people are shouting it down because they don't like the uncomfortable truth.

I'm open to changing my mind, especially if there is a better term that more accurately describes what we're facing. Because the dynamic isn't merely "crumbling postwar liberal democratic order", but rather a particular overly-simplistic reaction to that crumbling.


The burden of proof is on those making the claim that it is fascist (or communist, which is the Right's analogous lazy epithet).

Citing Eco on this matter as an authoritative source is inappropriate, because that essay is a personal reflection, not a product of scholarly research. I, too, can reach for my personal family experience living under both fascist and communist regimes during the 20th century, and frankly, this ain't it, at least not yet. You're free to cite relevant passages here, of course. I would consider a better source.

And I agree that the MAGA/Trump style is on the whole a bad one (just as I think its Leftist counterpart is equally flawed in its own way; there's more overlap between them than intellectually superficial partisan-minded people think). There are tyrannical elements and impulses woven into these movements, yes. But it is important to realize that these are, in fact, the result of the procession of liberalism, not some repudiation or aberration of it. They are the way in which we can witness the self-immolation of liberalism as its internal contradictions, tensions, and weaknesses unfold in history. In other words, while liberalism flatters itself as the way to ever-greater freedom, its logic leads elsewhere.

Of course, scapegoating the other (party) is comforting, because it allows us to convince ourselves of our own purity, and that all we need to preserve that purity is the elimination of this pesky other. But the uncomfortable truth that is becoming increasingly difficult to ignore is that the defect is deep, in the very cloth of liberalism itself. We may think that all we need is to want a liberal order for there to be one, but that's not how things work. Societies aren't static. Ideas stand behind our wants, and if the ideas are misguided, then the force of their errors will play out, eventually.


> just as I think its Leftist counterpart is equally flawed in its own way; there's more overlap between them than intellectually superficial partisan-minded people think

I'll give you that, and hopefully you can see that there is some common ground to be had.

What I see as the chief difference is that the populism hasn't taken over the entire Democratic party, but rather is more confined to small vocal contingents (identity politics, eat the rich, etc.). Or maybe it's more accurate to say that said populism is being prevented from taking over the Democratic party by the business interests that run it, whereas in the Republican party the populism is compatible with the desired policies of the party's sponsors and is therefore embraced as a motivating force.

> it is important to realize that these are, in fact, the result of the procession of liberalism, not some repudiation or aberration of it. They are the way in which we can witness the self-immolation of liberalism as its internal contradictions, tensions, and weaknesses unfold in history. In other words, while liberalism flatters itself as the way to ever-greater freedom, its logic leads elsewhere.

I'm willing to entertain this, but you're going to have to lay out arguments for the precise mechanics rather than merely asserting it. I can certainly fill in my own meanings for what you've said (e.g., the many terms used to criticize "liberals", like "identity politics" and "cancel culture", apply equally or even harder to the current Republican party). But I'm not going to make your argument for you!

Furthermore I'd point out that even if we're running aground because liberalism is running out of steam, this does not mean that the direction some people choose to respond in is attributable to liberalism.

> I, too, can reach for my personal family experience living under both fascist and communist regimes during the 20th century, and frankly, this ain't it, at least not yet

Would you please elaborate on what you mean specifically? Even though subjective through the eyes of your family, this would seem to lend itself to being somewhat objective criteria we could at least discuss.

I do have to ask though, about your hedging of "not yet". Do you not consider Hitler pre-Reichstag-fire as a fascist? If it walks like a baby duck, and quacks like a baby duck, does it not make sense to call it a duck?

As far as the burden of making the case, many people have spent more time than a message board comment making the case. For example here is an essay linked in a different reply: https://acoup.blog/2024/10/25/new-acquisitions-1933-and-the-... . The second part is based on Eco (failing your standards), but the first part is not. I'd say the case has been made enough to put some burden of proof on people summarily rejecting the use of the word.


[flagged]


I'm open to another definition that attempts to faithfully capture the general dynamics of fascism, and avoids the trap of pigeonholing the term into a few specific movements that are now safely in the past.

Basing the definition on actual examples of fascist movements is not pigeonholing, it's being accurate.

So your definition is based on it being incorrect to call anything else besides Mussolini's Italy or Hitler's Germany fascist? That's not particularly germane to discussion or analysis, which is why I was asking for other general definitions.

@ratrace [dead]:

"My side" is my side because I perceive this movement as fascist, or at least close enough to oppose it on that grounds.

I was much happier both sidesing when digital authoritarianism was centered around theoretically-voluntary digital services, and both the red and blue political teams were pushing bureaucratic authoritarianism.

In my estimation, the red team has switched to autocratic authoritarianism and taken control of the digital authoritarian systems of surveillance and control, while engaging in populist rallying with most of the standard tropes of fascists.

As I said, I am open to examining whether it makes sense to apply the term fascist or not. But to do that we need a definition that lays out the general characteristics of fascism, at least as you see it. So far neither of you have supplied one.


"Exalts the Nation and Often Race Above the Individual: Donald Trump claims immigrants are “poisoning the blood of our nation,” a turn of phrase used by Adolf Hitler in Mein Kampf...

Associated with a Centralized Autocratic Government Headed by a Dictatorial Leader: This one is almost too easy: Trump says, “‘You’re not going to be a dictator are you?’ I said ‘No, no, no, other than day one.’”...

Severe Economic and Social Regimentation: Did we mention the “largest deportation operation in American history“? And promises to invoke the Alien Enemies Act of 1798...

Forcible Suppression of Opposition: This is by far the most important component of the definition and the one that is the easiest to document in Trump’s own words... "

Bret Devereaux, American historian, October 25, 2024: https://acoup.blog/2024/10/25/new-acquisitions-1933-and-the-...

It's a long blog post, and the definitions are more detailed (hence the ellipses) and compared to Umberto Eco's Ur-Fascism. Worth a read, particularly if you are in search of definitions and examples.


I read this when it was published (or perhaps read partway before it got lost in the tab forest). Thank you, because it was worth reading again for me, and it lays out reasonably straightforward arguments for anyone just stumbling upon it for the first time.

But to be clear, what I was asking these interlocutors for was their definitions of fascism that would back up the argument that Trumpism is not fascism. So far the only answer I have gotten is "There is no definition of fascism that has meaning anymore", which is obviously nonsensical in the context of both Devereaux's and Doctorow's posts using the term productively.


I apologise. Reading-comprehension.exe has failed 100% on my side. Thank you for pointing that out.

It probably didn't help that I replied to myself to respond to a dead comment! "We're all in this together, kid"

> Blanchard's account is that he never looked at the existing source code directly. He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch

This feels sort of like saying "I just blindly threw paint at that canvas on the wall and it came out in the shape of Mickey Mouse, and so it can't be copyright infringement because it was created without the use of my knowledge of Mickey Mouse"

Blanchard is, of course, familiar with the source code, he's been its maintainer for years. The premise is that he prompted Claude to reimplement it, without using his own knowledge of it to direct or steer.


> Blanchard is, of course, familiar with the source code, he's been its maintainer for years.

I would argue it's irrelevant whether they looked or didn't look at the code, as well as whether he was or wasn't familiar with it.

What matters is that they fed the original code into a tool which they set up to make a copy of it. How that tool works doesn't really matter. Neither does it make a difference if you obfuscate that it's a copy.

If I blindfold myself when making copies of books with a book scanner + printer I'm still engaging in copyright infringement.

If AI is a tool, that should hold.

If it isn't "just" a tool, then it did engage in copyright infringement (as it created the new output side by side with the original), in the same way an employee might do so on command of their boss. That still makes the boss/company liable for copyright infringement, and in general, just because you weren't the one who created an infringing product doesn't mean you aren't more or less as liable for distributing it as if you had done so.


>that they fed the original code into a tool which they set up to make a copy of it

Well, no. They fed the spec (test cases, etc) into a tool which made a new program matching the spec. This is not a copy of the original code.

But also this feels like arguing over the color of the iceberg while the Titanic sinks. If you have a tool that can make code to spec, what is the value of source code anymore? Even if your app is closed-source, you can just tell Claude to write new code that does the same thing.


Everyone writes as if he just fed the spec and tests to Claude Code. Ignoring for now that the tests are under LGPL as well, the commit history shows that this has been done with two weeks of steering Claude Code towards the desired output. At every one of these interactions, the maintainer used his deep knowledge of the chardet codebase to steer Claude.

Is this perspective implying that the maintainer might be legally culpable because he, the *human*, was trained on the codebase?

Well I'm implying that someone who's been reading a codebase for 10+ years is the worst person to claim an "independent reimplementation".

Blanchard fed the spec to the tool, and Anthropic fed the code to the tool, so Blanchard didn't do anything wrong, and Anthropic didn't do anything wrong. Nothing to see here.

> Blanchard fed the spec to the tool,

Yes...

> and Anthropic fed the code to the tool,

Presumably, as part of the massive amount of open-source code that must have been fed in to train their model.

> so Blanchard didn't do anything wrong, and Anthropic didn't do anything wrong. Nothing to see here.

This is meant as irony, right?


Yes. Specifically: The use of words to express something different from and often opposite to their literal meaning, and not some knifey spoony confusion.

if the actual text of the code isn't the same or obviously derivative, copyright doesn't apply at all.

What does derivative mean here? Because IMO it means that the existing work was used as input. So if you used an LLM and it was trained on the existing work, that's a derivative work. If you rot13-encode something as input, so you can't personally read it, and then a device applies rot13 to it again and outputs the result, that's a derivative work.
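A minimal sketch of the rot13 point above (illustrative only; whether this counts as derivative is the commenter's claim, not settled law):

```python
import codecs

original = "some copyrighted text"

# "Blindfold" step: rot13 makes the text unreadable to the human in the loop.
obfuscated = codecs.encode(original, "rot13")
assert obfuscated != original

# The device applies rot13 again; rot13 is its own inverse, so the
# "new" output is byte-for-byte the original work.
output = codecs.encode(obfuscated, "rot13")
assert output == original
```

The obfuscation step changes every byte, yet the pipeline as a whole reproduces the original exactly, which is the analogy being drawn.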

In order for it to be creatively derivative you would need to copy the structure, logic, organization, and sequence of operations not just reimplement the functionality. It is pretty clear in this case that wasn't done.

It's not clear at all.

As a cynical person I assume all the frontier LLMs were trained on datasets that include every open source project, but as a thought experiment: if an LLM was trained on a dataset that included every open source project _except_ chardet, do you think said LLM would still be able to easily implement something very similar?

There is no doubt in my mind that it could still do it.

Of course, the problem with this interpretation is that all modern LLMs are derivatives from huge amounts of text under completely different licenses, including "All rights reserved", and therefore can not be used for any purpose.

I'm not sure how you square the circle of "it's alright to use the LLM to write code, unless the code is a rewrite of an open source project to change its license".


> Of course, the problem with this interpretation is that all modern LLMs are derivatives from huge amounts of text under completely different licenses, including "All rights reserved", and therefore can not be used for any purpose.

> I'm not sure how you square the circle of "it's alright to use the LLM to write code

You seem like you're on the cusp of stating the obvious correct conclusion: it isn't.


> Because IMO it means that the existing work was used as input

That's your opinion (since you said "IMO"), not the actual legal definition.


LLMs do not encode nor encrypt their training data. The fact that they can recite some training data is a defect, not the default. You can see this more simply by comparing the model's size against its training set run through a fantasy compression algorithm that is 50% better than SOTA: you'd find you were still missing 80-90% of the training data, even if the model were as much of a stochastic parrot as you may be implying. The outputs of an LLM are not derivative just because the training data included the original library.

Then onto prompting: 'He fed only the API and (his) test suite to Claude'

This is Google v Oracle all over again - are APIs copyrightable?


I find the "compression" argument not very strong, both because copyright still applies to (very) lossy codecs (e.g. your 16 kbps Opus file of Thriller infringes, even if the original 192 kHz/32-bit WAV file was 12,000 kbps), and because copyright still applies to transformed derivative works (a tiny MIDI file of Thriller might still be enough for Jackson's label to get you).

> This is Google v Oracle all over again - are APIs copyrightable?

Yes this is the best way to ask the question. If I take a public facing API and reimplement everything, whether it's by human or machine, it should be sufficient. After all, that's what Google did, and it's not like their engineers never read a single line of the Java source code. Even in "clean room" implementations, a human might still have remembered or recalled a previous implementation of some function they had encountered before.


> LLMs do not encode nor encrypt their training data. The fact that they can recite some training data is a defect, not the default.

About this specific point, it is unclear how much of a defect memorization actually is - there are also reasons to see it as necessary for effective learning. This link explains it well:

https://infinitefaculty.substack.com/p/memorization-vs-gener...


> This is Google v Oracle all over again - are APIs copyrightable?

No, it is completely different.

Claude was trained on chardet, anything built by Claude would fail the clean-room reimplementation test.


"The clean-room reimplementation test" isn't a legal standard, it's a particular strategy used by would-be defendants to clearly meet the standard of "is the new work free of copyrightable expression from the original work".

See also: https://monolith.sourceforge.net/, which seeks to ask the question:

> But how far away from direct and explicit representations do we have to go before copyright no longer applies?


Copyright protects even very abstract aspects of human creative expression, not just the specific form in which it is originally expressed. If you translate a book into another language, or turn it into a silent movie, none of the actual text may survive, but the story itself remains covered by the original copyright.

So when you clone the behavior of a program like chardet without referencing the original source code except by executing it to make sure your clone produces exactly the same output, you may still be infringing its copyright if that output reflects creative choices made in the design of chardet that aren't fully determined by the functional purpose of the program.


If you pirate a movie and reencode it, does that apply as well? You can still watch the movie and it is “obviously” the same movie, even though the bytes are completely different. Here you can use the program and it is, to the user, also the same.

> If it isn't "just" a tool, then it did engage in copyright infringement

Copyright infringement is a thing humans do. It's not a human.

Just like how the photos taken by a monkey with a camera have no copyright. Human law binds humans.


Correct. The human who shares the copy is the one who engages in copyright infringement.

So, let's say that rather than actually touching any copyrighted material, a human merely tells an AI about how to go onto the internet and find copyrighted material, download it, and ingest it for training. The AI, fully autonomously, does so, and after training itself on the material deletes it so no human ever downloads, consumes, or shares it.

If we are saying AI is "more than a tool", which seems to be the case courts are leaning since they've ruled AI output without direct human involvement is not copyrightable[0], then the above seems like it would be entirely legal.

[0] https://www.copyright.gov/newsnet/2025/1060.html


Someone would likely get prosecuted if they instructed an AI agent to run, say, a pump-and-dump scheme...

Even if the final output doesn't have copyright protection, it might still be a copyright violation. I think it is reasonable for a work to infringe copyright when distributed even if it does not enjoy copyright protection itself.


I just don't see how it's relevant whether he looked or didn't. In my opinion, it's not just legally valid to make a re-implementation of something whose code you've seen, as long as it doesn't copy expressive elements; I think it's also ethically fine to use source code as a reference for re-implementing something, as long as it doesn't turn into an exact translation.

It's actually not legally fine, or at least it's extremely dangerous. Projects that re-implement APIs presented by extremely litigious companies specifically do not allow people who, for instance, have seen the proprietary source code to then work on the project.

I don't think fear of legal action makes it illegal.

If I know it is legal to make a turn at a red light, and I know a court will uphold that I was in the right, but a police officer will fine me regardless and I would need to actually pursue some legal remedy, then I'm unlikely to make the turn regardless of whether it is legal, because it is expensive, if not in money then in time.

In the case of copyright lawsuits they are notoriously expensive and long so even if a court would eventually deem it fine, why take the chance.


That's my point. It's dangerous and there are sharks in the water. You're not going to have a good time if you take the described approach with someone who might assert you're infringing.

My understanding is that that is a maximalist position for the avoidance of risk, and is sufficient but probably not necessary.

Right. The alternative is that we reward Dan for his 14 years of volunteer maintenance of a project... by banning him from working on anything similar under a different license for the rest of his life.

Ignoring the legal or ethical concerns. Let’s say we live in a world where the cost of copying code is so close to zero that it’s indistinguishable from a world without copyright.

Anything you put out can and will be used by whatever giant company wants to use it with no attribution whatsoever.

Doesn’t that massively reduce the incentive to release the source of anything ever?


No, because (most) people don't work on OSS for vanity; they do it to help other people, whether individuals or groups of individuals, i.e., corporations.

It's the same question as, if an AI can generate "art", or photographers can capture a scene better than any (realistic) painter, then will people still create art? Obviously yes, and we see it of course after Stable Diffusion was released three years ago, people are still creating.


I don’t know what a world without copyright does to corporate sponsored open source. It certainly reduces it because there are many corporate sponsored projects that monetize through dual licensing. My guess is in a world where you can’t even guarantee attribution, it’s much harder to convince your boss to let you open source a project in the first place.

So ignoring people who are being paid by corporations directly to work on open source, in my experience the vast majority of contributors expect to be able to monetize their work eventually in a way that requires attribution. And out of the small number who don’t expect a monetary return of any kind, a still smaller number don’t expect recognition.

If this weren’t the case you’d see a much larger amount of anonymous contributions. There are people who anonymously donate to charity. The vast majority want some kind of recognition.

Obviously we still see art, but if you greatly reduce the monetary benefit of producing art, you'll see a lot less of it. This is especially true of non-trivial open source software, which, unlike static artwork, requires continual maintenance.


If the cost to copying code based on specifications, tests, etc is so close to zero as to be functionally zero cost, then any user can simply turn their AI on any library for which there is documentation and any ability to generate tests, have it reverse engineer it, and release their reverse engineered copy on GitHub for others to use as they like.

So I'm not sure it matters whether a giant company uses it because random users can get the same thing for ~ free anyway.


You can mostly stop that with enough lawyers, and by requiring an agreement not to reverse engineer in order to access documentation or use the software.

Most commercial software that I've used has the model of a legal moat around a pretty crappy database schema.

The non-IP protection has largely been the effort involved in replicating an application's behavior, and that effort is dropping precipitously.


You must not have used much commercial software outside of crappy business SaaS.

Truth

Yes, and it reduces the incentive to release binaries too. Such a world will be populated almost entirely by SaaS, which can still compete on freedom.

Oracle had its day in court with Google over the Java APIs. Reimplementing APIs can be done without copyright infringement, but Oracle must have tried to find real infringement during discovery.

In this case, we could theoretically prove that the new chardet is a clean reimplementation. Blanchard can provide all of the prompts necessary to re-implement again, and for the cost of the tokens anyone can reproduce the results.


Can anyone find the actual quote where Blanchard said this?

My understanding was that his claim was that Claude was not looking at the existing source code while writing it.


That is what he claimed. However, his design document instructs the AI to download the codebase, references specific files in the codebase, and to create a rewrite of the same project by name. It seems very unlikely it didn't look at the code while working, even forgetting that it had already likely been trained on it.

He would have had a better argument if he created a matching spec from scratch using randomized names.


Conveniently ignoring the likelihood that Claude had been trained on the freely accessible source code.

Does he have access to Claude's training data? How can he claim Claude wasn't trained on the original code?

Isn't this a red herring? An API definition is fair use under Google v. Oracle, but the test suite is definitely copyrightable code!

>This feels sort of like saying "I just blindly threw paint at that canvas on the wall and it came out in the shape of Mickey Mouse, and so it can't be copyright infringement because it was created without the use of my knowledge of Mickey Mouse"

IANAL, but that analogy wouldn't work because Mickey Mouse is a trademark, so it doesn't matter how it is created.


If you only stick to the API and ignore the implementation, it is not Mickey Mouse any more but a rodent. If it was just a clone it wouldn't be 50x as fast. Nevertheless, APIs apparently can be copyrightable. I generally disagree with this; it's how PC compatibles took off, giving consumers better options.

Wait what, didn't oracle lose the case against Google? Have I been living in an alternate reality where API compatibility is fair use?

> This feels sort of like saying "I just blindly threw paint at that canvas on the wall and

> He fed only the API and the test suite to Claude and asked it

Difference being Claude looked; so not blind. The equivalent is more like I blindly took a photo of it and then used that to...

Technically did look.


The article is poorly written. Blanchard was a chardet maintainer for years. Of course he had looked at its code!

What he claimed, and what was interesting, was that Claude didn't look at the code, only at the API and the test suite. The new implementation is all Claude's, and it is different enough to be considered original: completely different structure and design, and hey, a 48x improvement in performance! It's just API-compatible with the original, which as per the 2021 Google v. Oracle decision is to be considered fair use.


> What he claimed, and what was interesting, was that Claude didn't look at the code

Who opened the PR? Who co-authored the commits? It's clearly on Github.

> Blanchard was a chardet maintainer for years. Of course he had looked at its code!

So there you have it. If he looked and he co-authored, then there's that.


If I put my signature on a Picasso painting, it doesn't make me a co-author of said painting.

Blanchard is very clear that he didn't write a single line of code. He isn't an author, he isn't a co-author.

Signing GitHub commit doesn't change that.


> Blanchard is very clear that he didn't write a single line of code

He used Claude to write it. What's the difference? Does the fact that I wrote it in a notepad vs. printed it out mean I didn't do it?

> Signing GitHub commit doesn't change that.

That's the equivalent of me saying I didn't kill anyone. The fingerprints on the knife don't change that.


I'll take a commit authored by someone else and then git amend the author to myself, did I write that commit then? By your logic I did apparently.

> I'll take a commit authored by someone else and then git amend the author to myself, did I write that commit then

I did say co-author, didn't I? Even if you added 0.000000001% to something, you did, so technically, yes.

> By your logic I did apparently

If you take someone's email and forward it, did you write that email? Instead of debating that, imagine you took a trojan email and forwarded it to someone and they opened it - do you think you'd be held liable in any way?


Did he claim that Claude wasn't trained on the original, or just that he didn't personally provide Claude with a copy?

I reckon the latter; how would he know what was in Claude's training data?

What if we said that generative AI output is simply not copyrightable. Anything an AI spits out would automatically be public domain, except in cases where the output directly infringes the rights of an existing work.

This would make it so relicensing with AI rewrites is essentially impossible unless your goal is to transition the work to be truly public domain.

I think this also helps somewhat with the ethical quandary of these models being trained on public data while contributing nothing of value back to the public, and disincentivize the production of slop for profit.


We did in fact say so.

https://www.carltonfields.com/insights/publications/2025/no-...

> No Copyright Protection for AI-Assisted Creations: Thaler v. Perlmutter

> A recent key judicial development on this topic occurred when the U.S. Supreme Court declined to review the case of Thaler v. Perlmutter on March 2, 2026, effectively upholding lower court rulings that AI-generated works lacking human authorship are not eligible for copyright protection under U.S. law


> > A recent key judicial development on this topic occurred when the U.S. Supreme Court declined to review the case of Thaler v. Perlmutter on March 2, 2026, effectively upholding lower court rulings that AI-generated works lacking human authorship are not eligible for copyright protection under U.S. law

Was this an AI summary? Those words were not in the article.

The courts said Thaler could not have copyright because he refused to list himself as an author.


> This would make it so relicensing with AI rewrites is essentially impossible unless your goal is to transition the work to be truly public domain.

That's not true at all. Anyone could follow these steps:

1. Have the LLM rewrite GPL code.

2. Do not publish that public domain code. You have no obligation to.

3. Make a few tweaks to that code.

4. Publish a compiled binary/use your code to host a service under a proprietary license of your choice.


Are you just talking hypothetically about an abstract harm that might occur in an imaginary world or do you think that's what DEI is?

Being in academia, I face it almost every single day.

You're not able to publish cutting-edge research in an era where you have LLMs and arXiv?

Academia seems more open and competitive today than ever before, with more weight and influence given to more universities around the world.




I think that there were and are a lot of different DEI programs with lots of different targets and goals, and that the people who were not "uplifted", either by any single specific program or by all of them in aggregate, do not make up a coherent, identifiable group.

There's this weird race: I have in my head some level of LLM performance that is "good enough", and the open models keep improving toward it, but by the time they get there my "good enough" has recalibrated to what I'm used to doing with the latest frontier models, and what the open models offer isn't good enough anymore.

The "good enough" points so far have been

- "as good as ChatGPT"

- "as good as GPT4"

- "as good as Sonnet 3.5"

- "as good as Opus 4.5 or Codex 5.2"

Anyway, we'll see where the Chinese models are in a year, and we'll see where my expectations are. Hopefully they overlap at some point.


I'll genuinely miss it getting dark at 4PM. Winter won't be the same.

Urban trees in Montreal (and presumably other cities) only survive through the summer because of the water they get from leaky pipes.

> Maple trees drink about 50 litres of water every day, and it seems some of their hydration is coming from Montreal’s crumbling infrastructure.

https://www.ctvnews.ca/montreal/article/montreals-leaky-pipe...


I just realised I've never actually thought about how urban trees get water. I never see them get watered and I assume that would be an incredibly inefficient way to do it.


In Austin we saw water trucks roll up and water them with hoses out the back. It was weird to see after having lived in a wet climate my whole life.


NNW is like a river stone: tumbled smooth, with enough weight that it feels good in your hand.


It was a victory that teenage pregnancy rates plummeted during the '90s at my small-town high school, but when I was there, there was still a real drive to discourage kids from having kids, and I internalized the idea that "having children will ruin your life" and carried it with me through my twenties.


> on July 26, CIA was officially born. Just a few months later, on October 1, CIA assumed all responsibility for the JANIS basic intelligence program. Shortly thereafter, JANIS was renamed the National Intelligence Survey (NIS), but continued along the same tradition, providing policymakers and military leaders with up-to-date data, maps, and other reference materials.

> In 1971, the Factbook was created as an annual summary of the NIS studies and in 1973 it supplanted the NIS encyclopedic studies as CIA’s publication of basic intelligence. It was first made available to the public in 1975 and in 1981 was renamed The World Factbook.

https://www.cia.gov/stories/story/history-of-the-world-factb...


Thank you!

