Hacker News – kypro's comments

Depends what you mean by sane I guess...

I've been thinking (and worrying) about AI for decades, so these last few years have been extremely hard for me. A lot of the stuff I worried about when deep neural networks first began to work is now beginning to play out, and if anything timelines are significantly shorter than I imagined they would be.

I've believed since around 2012 that the most likely way I'd die is from AI, so this isn't new, but until around 2021 the error bars were always quite large. In the last few years that error has rapidly compressed, to the point where I'd now very confidently bet we're all doomed (were there any point in me betting on that). I have few uncertainties left about the core trends of how things will progress from here, and the uncertainties I do have are really more about how quickly doom will come and how bad the ending will be.

People who know me IRL kinda get why I'm freaking out right now, because stuff I said that sounded insanely stupid before is actually now starting to materialise. But online I keep having to decide whether I'm okay with sounding insane on this, given how important it is and how concerned I am. I think the right thing to do is to sound insane and at least try to convince people we need to change course ASAP, because this will not end well. Plus I'm having nightmares constantly at the moment, so I'm really feeling the need to just express myself.

The number of serious risks we now face is almost endless, and the probabilities that they occur are relatively high for the most part. Even most people with vested interests in the AI industry who are actively building these systems will give high single digit to low double digit odds that they're helping to build a technology that will kill us all lmao. And more concerningly, most of them are either lying or genuinely uneducated on why alignment is hard. The lunacy of the situation we're in is astounding.

So how am I staying sane? At this point I've more or less come to terms with what's coming. I'm just worried about those I love because I don't want them to suffer. It's frustrating so few seem to be thinking beyond risks like job losses right now and that's making me feel a bit insane at the moment tbh. I suspect people will catch up within the next 12-24 months.

As for the geopolitical uncertainty, that's always been a thing. 9/11 felt far more scary than anything that's happening today imo. The full scale invasion of Ukraine was significant, but hardly unexpected or even unprecedented.


Arguably robotics is already happening. Most factories today rely heavily on automation and robotics. We're beginning to see the first production-ready self-driving cars. We even have some fairly decent humanoid robots now.

The main issue is that the world is complex and it's hard to build a single robot flexible enough to perform a broad set of tasks.

If the question is specifically when can I get a robot that's going to do almost everything I can do around the home, then the answer is probably not for a decade or two. But over the next decade we'll increasingly see robots being deployed in the real-world to solve specific tasks. For example, I strongly suspect postal delivery will be fully robotised by the end of the decade.


> Maybe a micro SaaS or two? Maybe a newsletter or two?

No offence, but I'm not sure a pivot into micro SaaS software development or newsletter writing is much of a plan for AI job destruction.


Probably right, but I'd rather be 1-2 years into figuring this out than starting now. I have a lot of traction and I've built up an audience.

I remember saying ~2 years ago that people should probably assume the role they're in would be their last programming job. I feel like that was an unreasonably good prediction given almost everyone on HN at the time was arguing that LLMs were just stochastic parrots.

I suspect people will still be needed to create software for some time but increasingly it won't be software engineers who have spent decades learning syntax. It will be a relatively poorly paid role compared to today. If you're happy to be paid a fraction of what you're paid today, perhaps you can transition into a vibe coder role.

There will probably be some more senior people maintaining projects, but the number of people like this you'd realistically need at any org is probably in the single digits. The main hiring criteria will be that they're very personable, not the typical anti-social engineer type. The autistic 10x engineer's value is basically zero in this new AI economy.

Longer term (~10-20 years) we're all dead anyway so your priority probably shouldn't be to optimise for income or prestige.


Hard to know if this is itself slop or self-aware intentional slop to make a point.

The "GALLONS OF SLOP GENERATED" counter suggests some intentionality, and the "Everything seems to be working. Probably a glitch in our monitoring." caption on the status page maybe suggests that is also intentional.


> so i am curious how others approach this today. how do you meet new people outside of work or school in 2026? do you ever start conversations with strangers in public, and if so how? are there environments where this works better than others? for people living in more reserved cultures (like scandinavia), what strategies have worked for you? would love to hear what has worked for others.

I did pickup for many years in my late teens and early twenties. It started out as learning how to talk to and attract women, but I picked up general social skills too and learnt how to make friends pretty easily.

Something I've learnt is that if you're thinking about "approaching" people, you're probably not going to get very far. People are weirdly good at smelling your intent. It's kinda like how you know that someone who stops you in the street is trying to sell you something before they've even said a word. People know when you're not being sincere or when you want something from them; even if the thing you want is just to be their friend, it will come off as desperate and weird.

What you need to do is become good at finding situations in which it would be totally normal to talk to someone then put yourself in them.

My advice would be to find an event where you live. Ideally you want an event which will attract a decent crowd and where socialising is possible (not a loud club, not a movie, etc). Good events might be a street celebration or carnival, or maybe just a park on a sunny day.

Before you go, think about ways you might start a conversation. If possible they should be genuine. Some examples:

- Go to a beach and sit in an area where there's a group that might be good to talk to. Cook food on a portable BBQ and offer some to the group near you. "Hey, I've got some leftover burgers from my BBQ – do you want some?" Regardless of whether they say yes or no, immediately transition: "Nice day today. You local?", etc.

- Go to a carnival with some bottles but no bottle opener then ask people if they have a bottle opener. Again, regardless of whether they say yes or no, immediately transition into conversation.

Personally, where possible, I liked trying to get people to approach me first... For example, if you're in a place where lots of people are drinking, it's quite common for someone to approach you, especially if you're on your own or have something / are doing something that might attract attention (again, you should be creating these scenarios). Once someone has started talking to you, from there you should try to get to know their friends and bounce between people and groups until you find someone / some group you like. For me that felt much more natural when I went out on my own than walking up to people and trying to start conversations cold.

Please don't try talking to random people on the bus or when you're walking down the street. No one wants this. They'll either be weirded out by you or give you really bad vibes. These bad experiences will create negativity around interacting with strangers and make you less likely to talk to new people going forward. Find natural ways to meet and talk to new people. Over time, as you become more confident and used to talking to random people, you might find non-weird ways to talk to people on a bus or in the street, but trust me, it's not easy if you want to do more than say hello.


This is a great article. One of the few I've ever read which summarises a handful of extremely hard problems when it comes to building well-aligned super intelligent systems.

> an AI system cannot be simultaneously safe, trusted, and generally intelligent. You get to pick only two. You can’t have all three.

> Think about what each combination means in practice.

> If you want it to be safe and trusted, it never lies, and you can verify it never lies – it can’t be very capable. You’ve built a reliable idiot.

> If you want it to be capable and safe, it’s powerful and genuinely never lies; you can’t verify that. You just have to hope.

It amazes me this even needs to be said, much less studied. This is one of the main reasons I think continued AI development is almost guaranteed to work out badly. It's basically guaranteed to be unaligned or completely beyond our control and comprehension.

> Betley and colleagues published a paper in Nature in January 2026, showing something nobody expected. They fine-tuned a model on a narrow, specific task – writing insecure code. Nothing violent, nothing deceptive in the training data. Just bad code.

This is my personal number one reason for being an AI doomer. Even if we work out how to reliably and perfectly align models, you still need some way to prevent some random dude thinking it would be a laugh to fine-tune an AI to be maximally evil. Then there's the successor alignment problem: even if you perfectly align all your superintelligent AI models, and you somehow prevent people from altering or fine-tuning them, you still need to work out how to ensure that any successor AIs people create with those models are also perfectly aligned.

> The most dangerous AI isn’t one that breaks free from human control. It is the one that works perfectly, but for the wrong master.

Yep. This whole notion that you can align an AI to the values of everyone on the planet is ridiculous. While we might all agree we don't want AIs that kill us as a species, most nations disagree wildly on questions about how society should be organised.

Even on an individual level we disagree about things. For example, I've often argued that an aligned AI would be one which either didn't try to prevent human suicide or didn't care about preserving human life, because an AI which cared about both preventing suicide and preserving human life is at best a benevolent version of the AI "AM" from "I Have No Mouth, and I Must Scream". One that would try to keep us alive for as long as it's capable of (which could be a very long time if it's superintelligent) and would refuse to allow us to die.

But most people, including OpenAI, disagree with me on this and believe AIs should care about preserving human life and should try to prevent us from killing ourselves. Thankfully the AIs we have today are neither aligned enough nor capable enough to get their wish yet.

> AI is following the same script. Build first, understand later. Ship it, then figure out if it’s safe.

Even if the above wasn't cause enough for concern, our biggest concern should be that no one seems to be concerned.

We're all doomed unfortunately. The world is about to become a very bleak place very quickly.


Robert Miles' YouTube videos on AI safety go over these issues well, and are from before the LLM days.

Humans are only barely aligned ourselves. The moment any group or nation gets power, they tend to use it in some horrific manner against other humans. What do we think will happen the moment AI gets a leg up on humans?


I can't answer this for you because it's objectively very subjective, but if I were to give advice it would be to minimise regrets. Which option, if you didn't take it, would you regret more?

I'd also add that "menial" work can be really enjoyable... My favourite job ever was working in retail. On paper it was the most "menial" job I ever had, but it was also by far my favourite. If I could have a good quality of life working a job like that again, I'd do it in a heartbeat.

Finally, you don't know how things will play out. I had a "good job" early on in my career which I thought I'd never leave because I didn't think I'd find something better. Then I lost my job, and within a year my life trajectory had completely changed, and for the better. This change in trajectory was largely a direct result of me losing my job. You have no idea what opportunities might await you. If you're a smart person you'll find interesting things to do wherever you are.


The homeless person thing was a bit of a mean way to put it, but I don't think the parent commenter meant it critically or as an insult tbh, more an observation.

Dorsey is a certain type of character. For good or bad, it's worth understanding those who you associate with or who you allow to hold authority over you so you're not surprised when they act in entirely predictable ways.


I've defended the style of writing previously, but I agree. This felt a little disrespectful.

I wonder if he writes his legal letters and letters to clients/investors like this, or does he have more respect for them?

