
Are you worried that you're going to become subject to attestation via systemd?

This is such an important point. As a father of two, I can say that children are turning out to be a very large investment...larger than anything else I will ever pour money into, probably by an order of magnitude (though not quite, since I have a house).

I talk to lots of people in SV, heads of design, engineers, as well as folks from around the world that I work with, from San Diego to Argentina and Chile. So many 20-30 year-olds have told me they are never having kids. Life is too fun, and they want to see the world. But training the next generation is hard work, and it's easy to do a terrible job. We want to incentivize people to have kids and be great parents. But that requires voluntary sacrifice, which is a hard sell.

If I hadn't had kids, I could retire now. As it is, I'll be lucky to be able to keep finding work so I can earn for the next couple of decades and have enough to retire.


I'm a Costco booster, and I have storage space. One of the greatest feelings for me is returning from a Costco and knowing I have enough in the house to last a month for a family of four.

But your second point is spot-on: this strategy has to be augmented by weekly (or more) runs to get fresh food. I like to make fried rice with vegetables, so having a local market is essential.


In a prior job, I had to scan a 2M+ line codebase for software license violations to support the sale of a business unit to another corporation. One class of violation was copied Stack Overflow snippets, because they're licensed under CC BY-SA, which wasn't compatible with the distribution model the acquiring company was planning. It took many weeks of work to track them all down.
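Most of that work comes down to mechanically flagging candidates and then reviewing each one by hand. A minimal sketch of the flagging step (the patterns and file extensions here are illustrative, not the tooling we actually used, and it only catches snippets that kept an attribution link):

    // Hypothetical sketch: recursively flag source files that reference
    // Stack Overflow, as a starting point for a manual license review.
    import { readdirSync, readFileSync } from "node:fs";
    import { join } from "node:path";

    // Links to Stack Overflow questions or answers left in comments.
    const SO_LINK = /stackoverflow\.com\/(questions|a)\/\d+/i;
    const SOURCE_FILE = /\.(c|cc|cpp|h|java|js|ts|py)$/;

    function scan(dir: string): void {
      for (const entry of readdirSync(dir, { withFileTypes: true })) {
        const path = join(dir, entry.name);
        if (entry.isDirectory()) {
          scan(path); // recurse into subdirectories
        } else if (SOURCE_FILE.test(entry.name) &&
                   SO_LINK.test(readFileSync(path, "utf8"))) {
          console.log(`possible SO-derived snippet: ${path}`);
        }
      }
    }

    scan(process.argv[2] ?? ".");

A pass like this only surfaces snippets whose authors kept the attribution link; anything pasted without one still has to be found by manual review.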

I don't. The general trend is that, in US rulings, courts have found that if the material was obtained legally, then training can be fair use. My understanding is that getting LLMs to regurgitate anything significant requires very specific prompting, in general.

I would very much like someone to give me the magic reproduction triple: a model trained on your code, a prompt you gave it to produce a program, and its output showing copyright infringement of the training material. Specific examples are useful; my hypothesis is that this won't be possible using a "normal" prompt that's in general use, but only with a prompt containing a lot of directly quoted content from the training material that then asks for more of the same. This was a problem for the NYT when they claimed OpenAI reproduced the content of their articles...they achieved this by prompting with large, unmodified sections of the article, and then the LLM would spit out a handful of sentences. In their briefing to the court, they neglected to include their prompts for this reason. I think this is significant because it relates to what is really happening, rather than what people imagine is happening.

But I guess we'll get to see in the NYT trial, since OpenAI is retaining all user prompts and outputs and providing them to the NYT to sift through. So the ground truth exists; I'm sure they'll be excited to cite all the cases where people were circumventing their paywall with OpenAI.


> My understanding is that getting LLMs to regurgitate anything significant requires very specific prompting, in general.

Then you have been misled:

https://arstechnica.com/features/2025/06/study-metas-llama-3...

> I would very much like someone to give me the magic reproduction triple

Here's how I saw it directly. I searched for "node http server example." Google's AI spit out an "answer." The first link was a DigitalOcean article with an example. Google's AI completely reproduced the DO example, down to the content of the comments themselves.
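For reference, the kind of example those tutorials publish is almost pure boilerplate. A generic sketch (not the DigitalOcean article's code) looks roughly like this, which is why reproducing the comments verbatim is the telling part:

    import { createServer } from "node:http";

    // Generic minimal Node HTTP server of the kind most tutorials show.
    const server = createServer((_req, res) => {
      // Answer every request with a plain-text greeting.
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("Hello, world!\n");
    });

    server.listen(8000, "localhost", () => {
      console.log("Server running at http://localhost:8000/");
    });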

So... I don't know what to tell you. How hard have you been looking yourself? Or are you just trying to maintain distance with the "show me" rubric? If you rely on these tools for commercial purposes, then the onus was always on you.

> So the ground-truth exists

And you expect a civil trial to be the most reliable oracle of it? I think you know what I know but would rather _not_ know it.


To your last statement: not at all. I think releasing all the chats publicly would show that basically no one is using ChatGPT to circumvent paywalls because the model was trained on that material.

As to your Ars article, I'm familiar because I read Ars.

> The chart shows how easy it is to get a model to generate 50-token excerpts from various parts of Harry Potter and the Sorcerer’s Stone. The darker a line is, the easier it is to reproduce that portion of the book.

50-token excerpts are not my concern; that's about 40 words. The argument the plaintiffs need to make is that people are not paying for the NYT because of ChatGPT (that's one of the four fair use factors; I could expand, but won't). That's gonna be tough. Let's revisit this after the ruling and/or settlement.


This is exactly my position. Landscape-changing technology is impossible to get away from, because it follows you. It's like a local business owner in 1998 telling me they didn't care about the stupid "internet" thing, and then the internet blew away their business within 10 years. Similar story with the PC: folks didn't get the option to just "opt out" of a digital office because they liked typewriters and paper. Cell phones were this way also, and while many people post about how they hate their phones and need to quit using them so much, pretty much everyone admits you can't live in society without one because they have pervaded so many interactions.

So that's how I think AI will be seen in 20 years: like the PC, the internet, and mobile phones. Tech that shapes society, for better or worse.


100%. Even if models stopped advancing today, there's already enough utility; it just needs to be constrained by traditional software. It's not going away; it's going to change our interfaces completely, change how services interface with each other and how they're designed, and change the pace at which software evolves.

This is a tipping point, and most anti-AI advocates don't understand that the other software developers who keep telling them to reevaluate their position are often just trying to make sure no one is left behind.


Trying a new technology seems like what engineers do (since they have to leverage technology to solve real problems, having more tools to choose from can be good). I'm surprised it rings as tribalist.

The impression I get from this post is that anyone who doesn't like it needs to try it more. It doesn't really feel like it leaves space for "yeah, I tried it, and I still don't want to use it".

I know what its capabilities are. If I wanted to manage a set of enthusiastic junior engineers, I'd work with interns, which I love doing because they learn and get better. (And I still wouldn't want to be the manager.) AIs don't, not from your feedback anyway; they sporadically get better from a new billion-dollar training run, where "better" has no particular correlation with your feedback.


I think it's going to be important to track. It's going to change things.

I agree on your specific points about what you prefer, and that's fine. But as I said 15 years ago to some recent Berkeley grads I was working with: "You have no right to your current job. Roles change."

AI will get better and be useful for some things. I think it is today. What I'm saying is that you want to be in the group that knows how to use it, and you can't get there if you have no experience.


You of course don't have to use AI. Your core point is correct: the world around you is changing quickly, and in unpredictable ways. And that's why it's dangerous to ignore: if you've developed a way of working that fit the world of 10 years ago, there's a risk it won't play the same way in the world of 2030. So this is the time frame in which to prepare for whatever that change will be.

For some people, that means picking up the tool and trying to figure out what it's good for (if anything) and how it works.


Agree. I think folks are romanticizing the iPod. It synced only with a Mac via iTunes, had 5 GB of storage for $400, got 10 hours of battery life, and weighed 184 grams.

Today you can get a music player whose battery lasts five times as long, weighs one-sixth as much, costs one-tenth the price, and stores 25 times as much, while also offering full wireless connectivity, more audio formats, video playback, and the ability to read books.


I think the iPod being so common has helped it remain useful 20 years on. A lot of them are being sold cheaply as people clean out old drawers; spares, replacements, and upgrades are readily available; knowledge of how to work on them is widespread; tools for working on consumer electronics are common; and Rockbox is available for most of the range. MP3 players were a commodity good, with a huge range of models available from practically any capable company, but most don't have that market around them. My iPod has a 128 GB SD card instead of a HDD and a battery that lasts at least 5.5 days of playback instead of whatever the original 650 mAh allowed.

I will continue romanticising the later models though - the iPod mini and nano were, at the time, a lot of value for money.

There's no way to win in these threads. It's a very common pattern on HN that somebody will say, "X doesn't exist!" And then people will proceed to point out that it in fact does exist. And then you'll find out that the original poster has a bunch of non-functional requirements that were baked into their original request, which they didn't state and which I usually don't agree with (typically because they are either not practical or only of theoretical concern). They'll typically defend them using highly charged language, like claiming that having to carry a 200-gram device will pull their pants down because it's so heavy, or that managing a Bluetooth stack and USB doesn't require an OS, but rather just a couple of event loops that a non-professional could code directly in firmware.

I've simply stopped participating because in my efforts to try to help people, I find that I just get into silly arguments.

