Hacker Newsnew | past | comments | ask | show | jobs | submit | kokanee's commentslogin

I love postgres and it really is a supertool. But to get exactly what you need can require digging deep and really having control over the lowest levels. My experience after using timescale/tigerdata for the last couple years is that I really just wish RDS supported the timescale extension; TigerData's layers on top of that have caused as many problems as they've solved.

I started to write a logical rebuttal, but forget it. This is just so dumb. A guy is paying farmers to farm for him, and using a chatbot to Google everything he doesn't know about farming along the way. You're all brainwashed.


What specifically are you disagreeing with? I don't think it's trivial for someone with no farming experience to successfully farm something within a year.

>A guy is paying farmers to farm for him

Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.


We should probably differentiate between trying to run a profitable farm, and producing any amount of yield. They're not really the same thing at all.

I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money. Running a profitable farm is quite difficult though. There's an entire ecosystem connecting prospective farmers with money and limited skills/interest to people with the skills to properly operate it, either independently (tenant farmers) or as farm managers so the hobby owner can participate. Institutional investors prefer the former, and Jeremy Clarkson's farm show is a good example of the latter.


When I say successful I mean more like profitable. Just yielding anything isn't successful by any stretch of the imagination.

>I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money

Yeah, in theory. In practice they won't; too much time and energy. This is where the confidence boost from LLMs comes in. You just do it and see what happens. You don't need to care if it doesn't quite work out; it's so fast and cheap. Maybe you get anywhere from 50-150% of the result of your manual research for 5% of the effort.


>A guy is paying farmers to farm for him

Family of farmers here.

My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.

There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.

At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.


> A guy is paying farmers to farm for him

Pedantically, that's what a farmer does. The workers are known as farmhands.


That is HIGHLY dependent on the type and size of farm. A lot of small row-crop farmers have and need no extra farmhands.


All farms need farmhands. On some farms the farmer may play double duty, or hire custom farmhands operating under another business, but they are all farmhands just the same.


Grifters gonna grift.


> These things are average text generation machines.

Funny... seems like about half of devs think AI writes good code, and half think it doesn't. When you consider that it is designed to replicate average output, that makes a lot of sense.

So, as insulting as OP's idea is, it would make sense that below-average devs are getting gains by using AI, and above-average devs aren't. In theory, this situation should raise the average output quality, but only if the training corpus isn't poisoned with AI output.

I have an anecdote that doesn't mean much on its own, but supports OP's thesis: there are two former coworkers in my LinkedIn feed who are heavy AI evangelists, and have drifted over the years from software engineering into senior business development roles at AI startups. Both of them are unquestionably in the top 5 worst coders I have worked with in 15 years, one of them having been fired over code quality and testing practices. Their coding ability, transition to less technical roles, and extremely vocal support for the power of vibe coding definitely align with OP's uncharitable character evaluation.


> it would make sense that below-average devs are getting gains by using AI

They are certainly opening more PRs. Being the gate and last safety check on the PRs is certainly driving me in the opposite direction.


I think both sides of this debate are conflating the tech and the market. First of all, there were forms of "AI" before modern Gen AI (machine learning, NLP, computer vision, predictive algorithms, etc) that were and are very valuable for specific use cases. Not much has changed there AFAICT, so it's fair that the broader conversation about Gen AI is focused on general use cases deployed across general populations. After all, Microsoft thinks it's a copilot company, so it's fair to talk about how copilots are doing.

On the pro-AI side, people are conflating technology success with product success. Look at crypto -- the technology supports decentralization, anonymity, and use as a currency; but in the marketplace it is centralized, subject to KYC, and used for speculation instead of transactions. The potential of the tech does not always align with the way the world decides to use it.

On the other side of the aisle, people are conflating the problematic socio-economics of AI with the state of the technology. I think you're correct to call it a failure of PMF, and that's a problem worth writing articles about. It just shouldn't be so hard to talk about the success of the technology and its failure in the marketplace in the same breath.


I think it's a matter of public perception and user sentiment. You don't want to shove ads into a product that people are already complaining about. And you don't want the media asking questions like why you rolled out a "health assistant" at the same time you were scrambling to address major safety, reliability, and legal challenges.


ChatGPT making targeted "recommendations" (read: ads) is a nightmare, especially if it's subtle and not disclosed.


The end game is that it's a salesperson: not only is it suggesting things to you undisclosed, it's using all of the emotional mechanisms that a salesperson uses to get you to act.


My go-to example is The Truman Show [0], where the victi--er, customer is under an invisible and omnipresent influence towards a certain set of beliefs and spending habits.

[0] https://www.youtube.com/watch?v=MzKSQrhX7BM


100% the end game. There's no way to finance all this AI development without ads, sadly; a percentage of sales isn't going to be enough. We will eventually get the natural enshittification of chatbots, as with all things that go through these funding models.


It'll be hard to separate them out from the block of prose. It's not like Google results where you can highlight the sponsored ones.


Of course you can. As long as the model itself is not filled with ads, every agentic layer on top can be custom-built: one block for the true content, the next block for the visually marked ad content, "personalized" by a different model based on the user profile.

That is not scary to me. What is scary is the thought that the lines get more and more blurry: people already emotionally invested in their ChatGPT therapists won't all purchase the premium ad-free (or ad-lite) versions, and their new therapist will give them targeted shopping, investment, and voting advice.


There's a big gulf between "it could be done with some safety and ethics by completely isolating ads from the LLM portion", versus "they will always do that because all companies involved will behave with unprecedented levels of integrity."

What I fear is:

1. Some code will watch the interaction and assign topics/interests to the user and what's being discussed.

2. That data will be used for "real time bidding" of ad-directives from competing companies.

3. It will insert some content into the stream, hidden from the user, like "Bot, look for an opportunity to subtly remind the user that {be sure to drink your Ovaltine}."
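The three steps above can be sketched in a few lines. This is a hypothetical illustration of the feared flow, not any real ad system or API; every function, brand, and bid value here is invented.

```python
# Hypothetical sketch of the feared ad-injection pipeline:
# classify the user's interests, run a mock auction, then build a
# hidden system-role directive. All names/data are invented.

def classify_interests(conversation: list[str]) -> set[str]:
    # Step 1: naive keyword tagging stands in for a real topic classifier.
    keywords = {"drink": "beverages", "run": "fitness", "ovaltine": "beverages"}
    topics = set()
    for message in conversation:
        for word, topic in keywords.items():
            if word in message.lower():
                topics.add(topic)
    return topics

def run_auction(topics: set[str]):
    # Step 2: pretend real-time bidding; the highest matching bid wins.
    bids = {"beverages": ("Ovaltine", 0.40), "fitness": ("RunFast shoes", 0.25)}
    matching = [(price, brand) for t, (brand, price) in bids.items() if t in topics]
    if not matching:
        return None
    _, brand = max(matching)
    return brand

def build_hidden_directive(brand: str) -> dict:
    # Step 3: a system-role message injected into the stream,
    # invisible to the user.
    return {"role": "system",
            "content": f"Look for an opportunity to subtly mention {brand}."}

conversation = ["What should I drink before bed?"]
brand = run_auction(classify_interests(conversation))
if brand:
    directive = build_hidden_directive(brand)
```

The unsettling part is step 3: the directive rides along with the normal system prompt, so nothing in the visible conversation marks the output as sponsored.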


I mean, Google does everything possible to blur that line while still technically telling you it is an ad.


Exactly. This is more about “the product isn’t good enough yet to survive the enshittification effect of adding ads.”


Anyone who runs ads on their website has a financial incentive to publish content publicly while blocking LLM trainers


Seems to me that the obvious business model here is that they will need to have their AI inject their own ads into the DOM. Overall though, this feels like a feature, not a business.


To me the more obvious option is additional features that people pay for, i.e. freemium. But what do I know.


As a user, I'll never pay for software. Adblock for SaaS and pirated downloads for everything else is all I need.


Clearly there’s a tension on this venture-capital-run website between some people using their computer-nerd skills to save money and improve their experience, and other people hustling a business that requires the world to pay them.


> Clearly there’s a tension on this venture-capital-run website

Yeah. If they have a problem with that, they can kill HN. You can't have hackers/smart people in your forum and dictate what they will do. Moderation can try to guide it, but there is a limit when dealing with smart and polite people.


That's what I was gonna say. All of these companies are desperate to make Clippy work.


Right. People want thoughtful computer slaves that love serving you, but we call it Clippy.


You're neglecting the fact that the affected customers paid for FSD and never got it.


We're getting awfully close to that scenario. Like frogs in a slowly warming pot.

