
VC math follows a power law and expects almost all investments to flop and the one winner to pay for it all. The question here is not about the flops; it's: are the winners big enough?
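
For the curious, here is a toy simulation of that power-law math. The hit rates, check sizes, and return multiples below are made-up assumptions for illustration, not real fund data:

    # Toy power-law fund: almost everything flops, a few outliers carry the return.
    # All parameters are illustrative assumptions.
    import random

    def simulate_fund(n_investments=100, check_size=1.0, seed=0):
        rng = random.Random(seed)
        total_returned = 0.0
        for _ in range(n_investments):
            r = rng.random()
            if r < 0.70:            # ~70% flop outright
                multiple = 0.0
            elif r < 0.95:          # ~25% roughly return the money
                multiple = rng.uniform(0.5, 2.0)
            else:                   # ~5% outliers
                multiple = rng.uniform(20.0, 100.0)
            total_returned += check_size * multiple
        return total_returned / (n_investments * check_size)

    print(f"Fund multiple: {simulate_fund():.2f}x")

Whether the fund works depends almost entirely on how big that last bucket is, which is exactly the "are the winners big enough?" question.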


Isn't this the exact logic behind the stagnation in media right now? The stakes are so high, and the risk aversion with them, that real risks aren't taken and we get bland sequel/remake slop.

Ironically, media culture is at its healthiest when winners are diverse and common, and, more importantly, when smaller shows that try out new things can still break even, with periodic flops being generally tolerable. That low-risk culture for attempting new ideas is precisely what creates legendary franchises later, when a few of those attempts hit everything right.


And now that they're eliminating diversity in their investments, are you still certain they will pick the next generation's winners? All they're investing in are AI companies...

Once upon a time, someone like me, for whom engineering competence is a core aspect of my identity, would never have considered turning my back on YC. But now I'm just embarrassed by them. The things they now think are the only things worth investing in mostly make me want to vomit, like the vibe-coding casino-IDE startup. As someone who still espouses their old values rather than their new ones, I'd rather succeed on my own.


Both Marc Andreessen and Andrej Karpathy say AI is unique as a new technology in that small teams and individuals seem to be its earliest and most cutting-edge adopters (unlike computers and the internet, which were used first by government and then by large companies).

YC just so happens to invest super early in small teams.

So the overlap of YC and AI is inevitable. AI is not an investment genre per se, but it can be used to accelerate or improve any ecosystem if applied carefully and cleverly.

Since my Humble Bundle days, I've always been partial to small companies and small dev studios. Not all EGG companies use AI, but they are all keeping tabs on the technology. Mitch Lasky has said that AI may have opened a window in which small studios have their best shot in recent history at outsized success. Eventually the big dogs will catch up and adopt the new tech themselves, but right now David has a shot at Goliath.


It's funny, because in my industry it's the slavish attention to AI by goliath-scale companies like Microsoft that is leading them to set fire to quality, innovation, and consumer trust, which then gives me and my tiny startup the opportunity to jump in and eat their lunch.


> All they're investing in are AI companies...

Genuine question…

Do you not think that a large percentage of (to pick an arbitrary cutoff) $1b companies over the next 10 years will be AI?

And/or do you not think that the next $100b+ company will be AI-centered?


Over the next 10 years, no; I think the market will course-correct within that time frame. AI is the sauce being slathered on everything right now, and demand for it is driving record valuations, particularly for AI startups and their founders. That demand is all investor-driven though: investors are falling over themselves to make AI investments, while consumers are not actually especially eager to have all human contact progressively stripped from their lives.


> That demand is all investor-driven though: investors are falling over themselves to make AI investments

Largely true.

> while consumers are not actually especially eager to have all human contact progressively stripped from their lives

Hmm… I agree with this sentiment, but I think it’s mostly a straw man. There are many things that AI can do well that people will end up embracing directly or indirectly.

Reading medical scans is one big one, imho.

Mundane but important legal services is another.

Skillful mediation of scutwork is definitely embraced.

Good, fast, simple customer service via phone or text will end up being very welcome (at least in some contexts). I realize that most people would prefer superlative human customer service, but that's currently not a widely available reality, especially for simple tasks.

All sorts of learning (great and essentially free tutors).

All sorts of practice (e.g., language, speeches, debates, presentations, etc.).

All of the above (and more) are things that people are using AI for right now, and they seem to be loving it.

I realize that some folks use AI tools in regressive and sometimes dehumanizing ways, but that’s not the fault of the tool, imho.


I dunno, I see problems with every one of those things.

You could make a customer service AI that was an advocate for the consumer, but it would likely spend the company's money liberally. So instead you'll end up with AI agents incentivized to be stingy and standoffish about admitting the company could improve, just like the humans are.

You can tutor with AI, but there's no knowing what it will teach you. It will sound as convinced of itself when it teaches you why the earth is flat as it does teaching you why the earth is round. The one thing it will certainly do is reinforce your existing biases.

You can practice with AI, but you'd learn more by posing yourself the questions.

A doctor can have AI look at medical scans, but they can't defer to AI judgement and just tell the patient "AI says you have cancer, but I don't really know or care one way or the other". So again, the skill in reading results needs to be in the doctor.


> Reading medical scans is one big one, imho.

People have been trying this for a long time, as it's an obvious win, but have struggled so far. Perhaps newer models will help, though.


"Make something VCs wished people want"



