
They've begun injecting obnoxious ads into the downloadable mp3s on a lot of podcasts I've found. Hyperlocal ads for tire shops and bakeries.

I don't want to buy tires, I want to learn about ______. The ads don't even make sense because they're irrelevant.


I VPN to Sweden so the IP-geolocated ads retarget. The ads still exist, but they're less obnoxious, and they're often in Swedish, so you don't have to know what they're on about anyway.

And what are we Swedes supposed to do?


Careful, I enjoyed this bonus (being in Japan and not being able to keep up with the ads)... so much so, that I started ignoring the Japanese. Including my wife. You can imagine how well that went.

Welcome to radio 2.0.

Give it another 10-20 years and your 2 hour podcasts will be 30 minutes of morning zoo DJ banter, 10 minutes of guests, and 1.5 hours of ads.

We’ll have reached peak 90s all over again. With any luck we’ll avoid recreating the conditions for another Nickelback and can stay in the weird zone where Trip Hop and pop punk could chart at the same time.


The 00's podcasts I listened to were often in 2-3 hour episodes, rarely well scripted (or scripted at all?), but a lot of fun and very amateurish. I re-listened to several entire series recently and the episode lengths were the only thing I think was worse than in newer podcasts.

On the other hand, if ads etc gets too annoying, I already have run all my downloaded podcasts through whisper to get transcripts with timestamps. Running some LLM to find ranges to delete would probably be quite easy. As a bonus I would be happy to also cut out all the filler repetitions that seem popular these days ("yes, X, I absolutely agree, [repeats everything X just said]"). Could probably cut 1 hour episodes to 20 minutes without losing any content.
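The pipeline sketched above (whisper timestamps → LLM-flagged ad ranges → cut) is mostly plumbing once you have the ranges. A minimal sketch, assuming the LLM has already returned the ad ranges in seconds (the function names here are made up for illustration; the ffmpeg `aselect`/`asetpts` filter is real):

```python
def keep_ranges(total_s, ad_ranges):
    """Invert a list of (start, end) ad ranges into the ranges to keep."""
    keep, pos = [], 0.0
    for start, end in sorted(ad_ranges):
        if start > pos:
            keep.append((pos, start))
        pos = max(pos, end)
    if pos < total_s:
        keep.append((pos, total_s))
    return keep

def ffmpeg_cut_cmd(infile, outfile, ranges):
    """Build an ffmpeg command that keeps only the given ranges,
    concatenating them in a single pass via the aselect filter."""
    expr = "+".join(f"between(t,{a},{b})" for a, b in ranges)
    af = f"aselect='{expr}',asetpts=N/SR/TB"
    return ["ffmpeg", "-i", infile, "-af", af, outfile]
```

The same `keep_ranges` output would also work for cutting the filler repetitions, as long as the transcript timestamps are accurate enough.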


> 2 hour podcasts

You have high hopes. The next YT tool will split anything long into 30s reels, as brains will be completely incapable of focusing for longer.


And it will all be AI generated specifically for you live.

At least it is somewhat relevant. Hearing ads for an Irish telecom operator on the other side of Europe is pretty goofy. What's the actual point? Just worsening the podcast experience?

To each his own, but also:

All things in moderation; I drink a cup or two a day, but only before lunch, never after, and I eat beforehand.

If I don't eat, or I have too many coffees together, I get the anxiety you mention.

If I have coffee after lunch, it affects my sleep.

But, accounting for those things and mitigating them, I now not only get the benefit of coffee (if there is one), I get the social benefit of having coffee with people.


The British did not suddenly and instantaneously turn American in 1776, they had to already be culturally American for things to have wound up there.

What's more, the British didn't leave Britain so they could go be British overseas necessarily, but so they could go do un-British things, it could be argued.

On top of that, 250 years is both a very short time, but also a very long time. It's more than enough not to be hand-waved away, at least. In 250 years it went from a coastal breakaway to the sole hyperpower, slavery came and went, communism arrived and died out, the information age dawned, religion became more of a niche than a facet of everyday life... That's a lot of cultural upheaval.


Let's revisit where exactly it is that slavery went. It went into prisons, where it remains legal and used, with about a million people bound by it.

To make a long story short, in the US, you are and have always been one of two things: the exploited or the exploiter.


We spend substantially more on prisoners than they could ever hope to generate in labor.

After all, the average prisoner is not a diligent, sober, hardworking person trying to get ahead; how much economic value are you really likely to extract, even if you're evil?

Chattel slavery and prison labor may be distant relatives, but they're not siblings, and it's wrong (in a dozen different ways) to imply they are.


I want to chime in on SymbOS, which I think is the perfect reply to the GP's curiosity.

https://www.symbos.org/shots.htm

This is what slow computers with a few hundred kB of RAM can do.


The original Macintosh had similar specs as well – 128k of RAM with a 68k clocked at ~7.8 MHz. It helps that both platforms put a significant amount of OS code in ROM.

I very hesitantly used Discord occasionally because some projects I wanted to keep up with moved there.

I gave up a few times since I kept getting autobanned by a broken algorithm (i.e., based on my ip or phone number, not anything I'd said) until I contacted their devs and they manually fixed it.

Obviously, I am never going to consider using discord again after this shit-tsunami. Back to irc and signal groups.


You open your toolbox to get your pliers, but due to nanite software updates, your pliers are now a chisel.

We carefully considered this change and feel it brings the most value to our users, and we hope you'll love chisel as much as we do.

One day, you guys are gonna learn not to tie your livelihoods to the whim of a corporation, but today isn't that day.


You act like public opinion has no bearing on politics.

Historical precedent: prohibition.

Alternate future: the big websites start losing billions because people just use the internet less or not at all because it's a hassle with no return, and tax revenue drops. Then the politicians start to worry.

Even in the absence of democracy, public opinion affects politics.


The problem is that public opinion is now very much in favor in general. https://www.ipsos.com/en-uk/britons-back-online-safety-acts-...

It is fucking CRAZY how many cloud companies don't let you set a spending limit.

I had to hunt around for a host in a suitable geography with a spending limit, almost had to go on-prem (which will happen eventually, but not in the startup phase)

Waking up to bankruptcy because of bots out of your control visiting your website seems a little nuts. Adding some other bullshit on top (like cloudflare) seems even more nuts.

Yeah I can manage all that and have the machine stop responding when it hits a spending limit -- but why would I pay for the cloud if I have to build out that infrastructure?

grumble.


2 reasons basically.

1. Because people vote with their wallets and not their mouths, and most companies would rather have a cost accident (quickly refunded by AWS) than everything going down on a Saturday and not getting back up until finance can figure out their stuff.

2. Because realtime cost control is hard. It's just easier to fire off events, store them somewhere, and then aggregate at end-of-day (if that).

I strongly suspect that the way major clouds do billing is just not ready for answering the question of "how much did X spend over the last hour", and the people worried about this aren't the ones bringing the real revenue.
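The end-of-day aggregation problem in point 2 can be illustrated with a toy sketch (all event fields and names here are hypothetical, not any provider's actual schema): metering events carry both a usage timestamp and an arrival timestamp, and a query at time T can only see what has already arrived, so "spend in the last hour" is systematically understated:

```python
from datetime import datetime

# Hypothetical usage events, roughly as a cloud metering pipeline
# might emit them: `used_at` is when the usage happened, but the
# event can land in the billing store hours later.
events = [
    {"account": "acme", "cost": 0.42,
     "used_at": "2024-06-01T09:10:00", "arrived_at": "2024-06-01T09:11:00"},
    {"account": "acme", "cost": 1.10,
     "used_at": "2024-06-01T09:50:00", "arrived_at": "2024-06-01T14:02:00"},
]

def spend_known_by(events, account, cutoff):
    """What the billing pipeline could have reported at `cutoff`:
    only events that had already arrived are visible, so any
    real-time spending cap built on top would undercount."""
    cutoff = datetime.fromisoformat(cutoff)
    return sum(e["cost"] for e in events
               if e["account"] == account
               and datetime.fromisoformat(e["arrived_at"]) <= cutoff)
```

At 10:00 this reports $0.42 even though $1.52 had already been spent, which is exactly why best-effort throttling at billing-cycle granularity is so much easier than a hard real-time cap.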


> I strongly suspect that the way major clouds do billing is just not ready for answering the question of "how much did X spend over the last hour", and the people worried about this aren't the ones bringing the real revenue.

See: Google's AI Studio. It's built on Google Cloud infrastructure, so billing updates are slow, which peeves users used to instant billing data from Anthropic and OpenAI.


> and the people worried about this aren't the ones bringing the real revenue.

It's this one. If you're in a position to refund a "cost accident", then clearly you don't have to enforce cost controls in real time, and the problem becomes much easier to solve at billing-cycle granularity; a user setting a cost limit generally doesn't care if you're a bit late to best-effort throttle them.


I personally believe (and so far, my evidence suggests) that AI doesn't work anywhere near as well as claimed for almost any of its use cases.

However, let's suppose the alternate case:

If AI works as claimed, people in their tens of millions will be out of work.

New jobs won't be created quickly enough to keep them occupied (and fed).

Billionaires own the media and the social media and will use it to attempt to prevent change (i.e. apocalyptic taxation)

What, then, will those people do? Well, they say "the devil makes work for idle hands", and I'm curious what that's going to look like.


It's amazing that it "works", but viability is another issue.

It cost $20,000 and it worked, but it's also totally possible to spend $20,000 and have Claude shit out a pile of nonsense. You won't know until you've finished spending the money whether it will fail or not. Anthropic doesn't sell a contract that says "We'll only bill you if it works" like you can get from a bunch of humans.

Do catastrophic bugs exist in that code? Who knows, it's 100,000 lines, it'll take a while to review.

On top of that, Anthropic is losing money on it.

All of those things combined, viability remains a serious question.


> You won't know until you've finished spending the money whether it will fail or not.

How do you conclude that? You start off with a bunch of tests and build these things incrementally; why would you spend 20k before realizing there’s a problem?


Because literally no real-world non-research project starts with "we have an extremely comprehensive test suite and specification complete down to the most finite detail" and then searches for a way to turn it into code.

Precisely. Figuring out what the specification is supposed to look like is often the hardest part.

100% agreed. I use Claude often just to bounce ideas back and forth on specs I'd like to create, which I know will never gain traction because they're either way too ambitious or too niche.

And the number of times Claude proposes something that's completely contradictory in the same response, or does a complete 180 two responses later, is ridiculous.


I’ve spent nearly 20 years working as a consultant writing software, I know that. How do you think humans solve that problem?

Typically by putting cost caps on deliverables.

Which is pretty much what I said once you factor in that you evaluate (test) the deliverables before paying more.

> It cost $20,000

I'm curious - do you have ANY idea what it costs to have humans write 100,000 lines of code???

You should look it up. :)


> > It cost $20,000

> I'm curious - do you have ANY idea what it costs to have humans write 100,000 lines of code???

I'll bite - I can write you an unoptimised C compiler that emits assembly for $20k, and it won't be 100k lines of code (maybe 15k, the last time I did this?).

It won't take me a week, though.

I think this project is a good frame of reference and matches my experience - vibing with AI is sometimes more expensive than doing it myself, and always results in much more code than necessary.


Does it support x64, x86_64, arm64 and riscv? (sorry, just trolling - we don't know the quality of any backend other than x86_64, which is supposed to be able to build bootable Linux.)

It's not hard to build a compiler just for a bootable linux.

I see no test criteria that actually runs that built linux through various test plans, so, yeah emitting enough asm just to boot is doable.


> I can write you an unoptimised C compiler that emits assembly for $20k

You may be willing to sell your work at that price, but that’s not the market rate, to put it very mildly. Even 10 times that would be seriously lowballing in the realm of contract work, regardless of whether it’s “optimised” or not (most software isn’t).


> You may be willing to sell your work at that price, but that’s not the market rate, to put it very mildly.

It is now.

At any rate, this is my actual rate. I live in South Africa, and that's about 4 weeks of work for me, without an AI.


Deal. I'll pay you IF you can achieve the same level of performance. Heck, I'll double it.

You must provide the entire git history with small commits.

I won't be holding my breath.


> Deal. I'll pay you IF you can achieve the same level of performance. Heck, I'll double it.

> You must provide the entire git history with small commits.

> I won't be holding my breath.

Sure; I do this often (I operate as a company because I am a contractor) - money to be held in escrow, all the usual contracts, etc.

It's a big risk for you, though - the level of performance isn't stated in the linked article so a parser in Python is probably sufficient.

TCC, which has in the past compiled bootable Linux images, was only around 15k LoC in C!

For reference: for an engraved-in-stone spec producing a command-line program (i.e. no tech stack other than a programming language with its standard library), a coder could reasonably produce 5000+ LoC per week.

Adding the necessary extensions to support booting isn't much either, because the 16-bit stuff can be done just the same as CC did it - shell out to GCC (thereby not needing many of the extensions).

Are you *really* sure that a simple C compiler will cost more than 4 weeks f/time to do? It takes 4 weeks or so in C, are you really sure it will take longer if I switch to (for example) Python?


And that's with TCC, GCC, Clang and any other project lying around as a cheat sheet, which, in some way, the trained model had.

> the level of performance isn't stated in the linked article so a parser in Python is probably sufficient.

No, you'll have to match the performance of the actual code, regardless of what happens to be written in the article. It is a C compiler written in Rust.

Obviously. Your games reveal your malign intent.

EDIT: And good LORD. Who writes a C compiler in Python? Do you know any other languages?!?


> No, you'll have to match the performance of the actual code, regardless of what is in the article. It is a C compiler written in Rust.

Look, it's clear that you don't hire s/ware developers very much - your specs are vague and open to interpretation, and it's also clear that I do get hired often, because I pointed out that your spec isn't clear.

As far as "playing games" goes, I'm not allowing you to change your single-sentence spec which, very importantly, has "must match performance", which I shall interpret as "performance of emitted code" and not "performance of compiler".

> Your games reveal your intent.

It should be obvious to you by now that I've done this sort of thing before. The last C compiler I wrote was 95% compliant with the (at the time, new) C99 standard, and came to around 7000LoC - 8000LoC of C89.

> EDIT: And good LORD. Who writes a C compiler in python. Do you know any other languages?!?

Many. The last language I implemented (in C99) took about two weeks after hours (so, maybe 40 hours total?), was interpreted, and was a dialect of Lisp. It's probably somewhere on Github still, and that was (IIRC) only around 2000LoC.

What you appear to not know (maybe you're new to C) is that C was specifically designed for ease of implementation.

1. It was designed to be quick and easy to implement.

2. The extensions in GCC to allow building bootable Linux images are minimal, TBH.

3. The actual 16-bit emission necessary for booting was not done by CC, but by shelling out to GCC.

4. The 100kLoC does not include the tests; it used the GCC tests.

I mean, this isn't arcane and obscure knowledge, you know. You can search the net right now and find 100s of undergrad CS projects where they implement enough of C to compile many compliant existing programs.

I'm wondering; what languages did you write an implementation for? Any that you designed and then implemented?


[flagged]


> Too late friend, you've revealed your stripes.

So you are not willing to put $20k in escrow for, as per your offer:

>>>> Deal. I'll pay you IF you can achieve the same level of performance. Heck, I'll double it.

I just noticed now that you actually offered double. I will do it. This is my real name, my contact details are not hard to find.

I will do it, with emitted binaries performing as well as or better than the binaries emitted by CC.

Put your $40k into a recognised South African escrow service (I've used a few in the past, but I'd rather you choose one so you don't accuse me of being some sort of African scammer).

Because I am engaged in a 6+ hours/day gig right now, I cannot do it f/time until my current gig is completed (and they are paying me directly, not via escrow, so I am not going to jeopardise that).

I can however do a few hours each day, and collect my payment of $40k only once the kernel image boots in about the same time that the CC kernel image boots.

> Yes, we all took the compilers class in college. Those of us who went to college, that is.

If you knew that, why on earth would you assume that implementing a C compiler is at all a complex task?



lelanthran won, qarl lost. Well played lelanthran!

Dude - now I have to turn off my auto notify.

You are fucking nuts.


You seem to have doubled down on a bluff that was already called.

Naw. I got him to reveal himself, which was the whole point.

It's amazing what you can get people to do.


> Naw. I got him to reveal himself, which was the whole point.

Reveal myself as ... a contractor agreeing to your bid?

> It's amazing what you can get people to do.

There's a ton of money now floating around in pursuit of "proving" how cost-efficient LLM coding is.

I'm sure they can spare you the $40k to put into escrow?

After all, if I don't deliver, then the AI booster community gets a huge win - highly respected ex-FAANG staff engineer with 30 years of verified dev experience could not match the cost efficiency of Claude Code.

I am taking you up on your original offer: $40k for a C compiler that does exactly what the CCC program in the video does.


That’s a VERY nice rate for SA; approximately what I charge in the UK. I assume these are not local companies who hire you.

> That’s a VERY nice rate for SA; approximately what I charge in the UK. I assume these are not local companies who hire you.

A local Fintech needing PCI work pays that, but that's not long-term contracts.


No, you're overestimating how complex it is to write an unoptimized C compiler. C is (in the grand scheme of things) a very simple language to implement a compiler for.

The rate probably goes up if you ask for more and more standards (C11, C17, C23...) but it's still a lot easier than compilers for almost any other popular language.


This is very much a John Henry claim that will, in the end, kill the OP. I'd rather have the OP using LLM-powered code review tools to add their experience to that AI-generated compiler.

That feels like a Silicon-Valley-centric point of view. Plus, who would really spend $20k on building any C compiler in today's software landscape?

All this is saying is that license laundering of a code base is now $20k away through automated processes, at least if the original code base is fully available. Well, with the current state of the art you’ll actually end up with a code base that is not as good as the original, but that’s it.


That's irrelevant in this context, because it's not "get the humans to make a working product OR get the AI to make a working product"

The problem is you may pay $20K for gibberish, then try a second time, fail again, and then hire humans.

Coincidentally yes, I am aware, my last contract was building out a SCADA module the AI failed to develop at the company that contracted me.

I'm using that money to finance a new software company, and so far, AI hasn't been much help getting us off the ground.

Edit: oh yeah, and on top of paying Claude to fuck it up, you still have to also pay the salary of the guy arguing with Claude.


> The problem is you may pay $20K for gibberish, then try a second time, fail again, and then hire humans.

You can easily pay humans $20k a day and get gibberish as output. Heck, this happens all the time. It's happening right now in multiple companies.

Yes, sometimes humans produce nice code. This happens from time to time...


You wouldn’t pay a human to write 100k LOC. Or at least you shouldn’t. You’d pay a human to write a working useful compiler that isn’t riddled with copyright issues.

If you didn’t care about copying code, usefulness, or correctness you could probably get a human to whip you up a C compiler for a lot less than $20k.


Are you trolling me? Companies (made of humans) write 100,000 LOC all the time.

And it's really expensive, despite your suspicions.


No, companies don’t pay people to write 100k LOC. They pay people to write useful software.

We figured out that LOC was a useless productivity metric in the 80s.


[flagged]


I can't stress enough how much LOC is not a measure of anything.

Yep. I’ve seen people copy 100s of lines instead of adding an if statement.

In fact it is, and it can be useful. IF you have quality controls in place, so the code has reasonable quality, LOC will correlate with the amount of functionality and/or complexity. Is it a good metric? No. Can it be used just like that to compare arbitrary code bases? Absolutely not!

As a seasoned manager, I have an idea how long a feature should take, both in implementation effort and in length of code. I have to know it; it's my everyday work.


OK, well, the people in MY software industry use LOC as an informal measure of complexity.

LIKE THE WHOLE WORLD DOES.

But hey, maybe it's just the extremely high profile projects I've worked on.


As an informal measure of the complexity of the code sure 100k lines are inherently more complex than 10k because there’s just more there to look at. And if you are assuming that 2 projects were made by competent teams, saying that one application is 10k LOC and one is 1 million might be useful as a heuristic for number of man hours spent.

But I can write a 100k LOC compiler where 90k lines are for making error messages look pixel perfect on 10 different operating systems. Or where 90k lines are useless layers upon layers of indirection. That doesn’t mean that someone is willing to pay more for it.

AI frequently does exactly that kind of thing.

So saying my AI made a 100k LOC program that does X, and then comparing the cost to a 100k LOC program written by a human is a nonsense comparison. The only thing that matters is to compare it to how much a company would pay a human to produce a program capable of the same output.

In this case the program is commercially useless. Literally of zero monetary value, so no company would pay any money for it. Therefore there’s nothing to compare it to.

That’s not to say it’s not an interesting and useful experiment. Or that things can’t be different in the future.


Such as?

Without questioning the LOC metric itself, I'll propose a different problem: LOC for human and AI projects are not necessarily comparable for judging their complexity.

For a human, writing 100k LOC to do something that might only really need 15k would be a bit surprising and unexpected - a human would probably reconsider what they were doing well before they typed 100k LOC. Whereas an AI doesn't necessarily have that concern - it can just keep generating code and doesn't care how long it will take, so it doesn't have the same practical pressure to produce concise code.

The result is that while for large enough human-written programs there's probably an average "density" they reach in relation of LOC vs. complexity of the original problem, AI-generated programs probably average out at an entirely different "density" number.


Your first post specifically stated:

"I'm curious - do you have ANY idea what it costs to have humans write 100,000 lines of code???"

which any reasonable reading would take to mean "paid-by-line", which we all know doesn't happen. Otherwise, I could type out 30,000 lines of gibberish and take my fat paycheck.


> you could probably get a human to whip you up a C compiler for a lot less than $20k

I fork Clang or GCC and rename it. I'll take only $10k.


My question, which I still haven't found anybody asking: how many compilers, including but not limited to the two most famous, were in the training set?

Certainly tcc. Probably also rui314's chibicc as it's relatively popular. sdcc is likely in there as well. Among numerous others that are either proprietary or not as well known.

If my devs are writing that much code they're doing something wrong. Lines of code is an anti metric. That used to be commonly accepted knowledge.

It really depends on the human and the code they output.

I can get my 2y old child to output 100k LoC, but it won't be very good.


Your 2yr old can't build a C compiler in Rust that builds Linux.

Sorry mate, I think you're tripping.


I never said this. I think you're the one tripping mate.

Well, if these humans can cheat by taking whatever degree of liberty in copycat attitude is needed to fit the budget, I guess a simple `git clone https://gcc.gnu.org/git/gcc.git SomeLocalDir` is as close to $0 as one can hope to reach. And it would end up being far more functional and reliable. But I get that big-corp overlords and their wanna-match-KPI minions will prefer a "clean-roomed" code base.

100k lines of clean, bug free, optimized, and vulnerability free code or 100k lines of outsourced slop? Two very different price points.

A compiler that can build linux.

That level of quality should be sufficient.

Do you know any low quality programmers that write C compilers in rust THAT CAN BUILD LINUX?

No you don't. They do not exist.


Yep. Building a working C compiler that compiles Linux is an impossible task for all but the top 1% of developers. And the ones that could do it have better things to do, plus they’d want a lot more than 20K for the trouble.

What's so hard about it? Compiler construction is a well-researched topic and is taught in universities. I made a toy language compiler as a student. Maybe I'm underestimating this task, but I think I could build some simple C compiler which outputs trivial assembly. Given my salary of $2500, that would probably take me around a year, so that's pretty close LoL.
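For a sense of how small the core of such a student project is, here's a toy sketch of the parse-then-emit structure (purely illustrative, not from the article; it emits stack-machine ops where a real compiler would emit target assembly, and a full C compiler is of course vastly more work than this):

```python
import re

def tokenize(src):
    """Split an integer-arithmetic expression into tokens."""
    return re.findall(r"\d+|[-+*/()]", src)

def compile_expr(tokens):
    """Recursive-descent compiler for +, -, *, / and parentheses.
    Emits instructions for a tiny stack machine."""
    out, pos = [], [0]

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def eat():
        tok = tokens[pos[0]]; pos[0] += 1; return tok

    def factor():
        if peek() == "(":
            eat(); expr(); assert eat() == ")"
        else:
            out.append(("push", int(eat())))

    def term():
        factor()
        while peek() in ("*", "/"):
            op = eat(); factor()
            out.append(("mul" if op == "*" else "div",))

    def expr():
        term()
        while peek() in ("+", "-"):
            op = eat(); term()
            out.append(("add" if op == "+" else "sub",))

    expr()
    return out

def run(prog):
    """Minimal stack-machine evaluator, just to check the emitted code."""
    stack = []
    for ins in prog:
        if ins[0] == "push":
            stack.append(ins[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({"add": a + b, "sub": a - b,
                          "mul": a * b, "div": a // b}[ins[0]])
    return stack[0]
```

The gap between this and "compiles Linux" (the preprocessor, declarations, the type system, GNU extensions, codegen for a real ISA) is exactly what the rest of the thread is arguing about.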

You can one shot prompt a toy C compiler. Getting one that can compile Linux in a bootable way is significantly harder.

Everybody talks as if Linux were the most difficult thing in the world to compile. The reality is that Linux is well written and was designed from the beginning with portability to crappy compilers in mind.

Also, the booting part, as stated a few times, is debatable.


The reality is you can build Linux with gcc and clang. And that’s it. Years ago you could use Intel’s icc compiler, but that stopped being supported. Let’s stop pretending it’s an undergrad project.

Just writing a non-toy C preprocessor is non-trivial.

It's a bit more nuanced. You can build a simple compiler without too many issues. But once you want it to do optimisations, control-flow protection, good and fast register allocation, inlining, autovectorisation, etc., that's going to take multiples of the original time.

Some of the hardest parts of the compiler are optimization and clear error handling/reporting. If you forego those - because you're testing against a codebase that is already free of things that break compilation and have no particular performance requirements for the generated code - it's a substantially simpler task.

Making a basic C compiler, without much error/warning detection and/or optimisation, is as a matter of fact not so difficult. In many universities it is a semester project for 2 to 3 students.

> Building a working C compiler ... is an impossible task

I think you might be thinking of C++


I’m not. I’ve been working with C on and off for 30 years. Linux requires GNU extensions beyond standard C. Once you get the basics done, there’s still a lot more work to do. Compiling a trivial program might work. But you’ll hit an edge case or 50 in the millions of lines in Linux.

I also should’ve qualified my message with “in 2 weeks”, or even “in 2 months.” Given more time it’s obviously possible for more people.


Interesting, why impossible? We studied compiler construction at uni. I might have to dig out a few books, but I’m confident I could write one. I can’t imagine anyone on my course of 120 nerds being unable to do this.

You are underestimating the complexity of the task, as do other people in the thread. It's not trivial to implement a working C compiler, very much less so to implement one that proves its worth by successfully compiling one of the largest open-source code repositories ever, which btw is not even a plain ISO C dialect.

I didn’t say it was trivial. Just that I thought my course mates would be able to do it.

You thought your course mates would be able to write a C compiler that builds Linux?

Huh. Interesting. Like the other guy pointed out, compiler classes often get students to write toy C compilers. I think a lot of students don't understand the meaning of the word "toy". I think this thread is FULL of people like that.


I took a compilers course 30 years ago. I have near zero confidence anyone (including myself) could do it. The final project was some sort of toy language for programming robots with an API we were given. Lots of yacc, bison, etc.

Lots of segfaults, too.


If it helps, I did a PhD in computer science and went to plenty of seminars on languages, fuzz testing compilers, reviewed for conferences like PLDI. I’m not an expert but I think I know enough to say: this is conceptually within reach, if a PITA.

Hey! I built a Lego technic car once 20 years ago. I am fully confident that I can build an actual road worthy electric vehicle. It's just a couple of edge cases and a bit bigger right? /s

That's really helpful, actually, as you may be able to give me some other ideas for projects.

So, things you don't think I or my coursemates could do include writing a C compiler that builds a Linux kernel.

What else do you think we couldn't do? I ask because there are various projects I'll probably get to at some point.

Things on that list include (a) writing an OS microkernel and some of the other components of an OS. Don't know how far I'll take it, but certainly a working microkernel for one machine, if I have time I'll build most of the stack up to a window manager. (b) implementing an LLM training and inference stack. I don't know how close to the metal I'd go, I've done some low level CUDA a long time ago when it was very new and low-level, depends on time. I'll probably start the LLM stuff pretty soon as I'm keen to learn.

Are these also impossible? What other things would you add to the impossible list?


Building a microkernel based OS feels feasible because it’s actually quite open ended. An “OS” could be anything from single user DOS to a full blown Unix implementation, with plenty in between.

Amiga OS is basically a microkernel and that was built 40 years ago. There are also many other examples, like Minix. Do I think most people could build a full microkernel based mini Unix? No. But they could get “something” working that would qualify as an OS.

On the other hand, there are not many C compilers that build Linux. There are many implementations of C compilers, however. The goal of “build Linux” is much more specific.


Minix is a fair example, or Herd. Something like that.

So what other projects are impossible? That was my question.


They did not compile the whole of Linux, mind you, just an absolutely minimal kernel.

Doing a real compiler to be used by humans is difficult. Doing a compiler that “gets the thing done” is a different thing.


Nowhere did I imply it is production-ready. I said "working compiler" and by definition Claude built one since they booted up the kernel.


I’ll be shocked if they are able to do it in 4 months, never mind 4 weeks.

Have you ever seen the Tsoding YouTube channel? I’m sure Mr Zosin can very much do it in one week. And considering Russian salaries, it will be like an order of magnitude cheaper.

Do you think this was guided by a low-quality Anthropic developer?

You can give a developer the GCC test suite and have them build the compiler backwards, which is how this was done. They literally brute-forced it, and most developers can brute force. It also literally uses GCC in the background... Maybe try reading the article.


[flagged]


The trick to not be confused is to read more than the title of the article.

[flagged]


Speaking of obnoxious

> On top of that, Anthropic is losing money on it.

It seems they are *not* losing money on inference: https://bsky.app/profile/steveklabnik.com/post/3mdirf7tj5s2e


No, and that is widely known. The actual problem is that the margins at that scale are not sufficient to make up for the gargantuan cost of training their SOTA models.

They are large enough to cover their previous training costs, but not their next-gen training costs.

I.e., they made more money on 3.5 than 3.5 cost to train, but didn't make enough money on 3.5 to train 4.0.


Source on that?

Because inference revenue is outpacing training cost based on OpenAI’s report and intuition.


Net inference revenue would need to be outpacing training costs to counter his point about margins.

That's for the API, right? The subs are still a loss. I don't know which of the two is larger.

That's a good point! Here Claude Opus wrote a C compiler. Outrageously cool.

Earlier today, I couldn't get opus to replace useEffect-triggered-redux-dispatch nonsense with react-query calls. I already had a very nice react-query wrapper with tons of examples. But it just couldn't make sense of the useEffect rube goldberg machine.

To be fair, it was a pretty horrible mess of useEffects. But just another data point.

Also, I was hoping Opus would finally be able to handle complex TypeScript generics, but alas...


It's $20,000 in 2026; with the price of tokens halving every year (at a given performance level), this will be around $1,250 in 2030.
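The compounding behind that projection can be sanity-checked in a couple of lines (taking the commenter's $20,000 starting figure and the halving-per-year assumption at face value; both are claims, not data):

```python
# Project the cost of a fixed workload under a "token price halves every year"
# assumption. Starting cost and halving rate are the commenter's figures.
def projected_cost(start_cost: float, start_year: int, year: int,
                   factor: float = 0.5) -> float:
    """Cost of the same workload in `year` if price multiplies by `factor` annually."""
    return start_cost * factor ** (year - start_year)

for year in range(2026, 2031):
    print(year, projected_cost(20_000, 2026, year))
# 2030 comes out to 1250.0 dollars, i.e. the ballpark figure claimed above.
```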

Progress can be reviewed over time, and I'd think that'd take a lot of the risk out.

Also, heaven knows if the result is maintainable or easy to change.

> On top of that, Anthropic is losing money on it

This has got to be my favorite one of them all that keeps coming up in too many comments… You know who else was losing money in the beginning? Every successful company that ever existed! Some, like Uber, were losing billions for a decade. And when was the last time you rode in a taxi? (I still do; my kid never will.) Not sure how old you are, or if you remember “Facebook will never be able to monetize on mobile…” They all lose money, until they don't.


Anyone remember the dotcom bust?

Oh yeah, I do. That whole internet thing was a total HOAX. I can't believe people bought into that.

Can you imagine if Amazon, eBay, PayPal, or Salesforce existed today?


Well, how is your Solaris installation going?

I also remember going into research because there were no jobs available, and that even though I was employed at the time, our salaries weren't being paid.


What does this even mean?

Seems you don’t remember much of that time. Let me refresh: “we are the dot in dot com”

==> Anyone remember the dotcom bust?

Remember that thing that caused it? That "Internet" thing? After those companies went bust it pretty much disappeared didn't it.

Completely detached from reality, brainwashed SV VCs who have made dumping the norm in their bubble.

I can guarantee you that 90% of successful businesses in the world made a profit in their first year.


1 year seems aggressive. Successful restaurants average around a year to break even, with the vast majority landing between 6 and 18 months.

They are making a profit on each sale, but there are fixed costs to running a business.


1 year isn't aggressive because of the modifier "successful". Most businesses that aren't profitable 12 months in go out of business not long after, having remained unsuccessful throughout their lifespan.

Restaurants have comparatively high start-up costs and ramp-up time. Compare to, e.g., a store selling clothes. If for successful restaurants the average time is already a year, then for successful businesses in general it's going to be less.


I’ll bite. Share your data?

Companies that were not profitable in their first year: Microsoft, Google, SpaceX, airBnB, Uber, Apple, FedEx, Amazon.

If the vast majority of companies are immediately profitable, why do we have VC and investment at all? Shouldn’t the founders just start making money right away?


> Companies that were not profitable in their first year: Microsoft, Google, SpaceX, airBnB, Uber, Apple, FedEx, Amazon.

US Big Tech, US Big Tech, US Tech-adjacent, US Big Tech, US Big Tech, US Big Tech, FedEx, US Tech-adjacent.

In other words, exactly what I was getting at.

Also, a basic search shows Microsoft to have been profitable in its first year. I'd be very surprised if they weren't. Apple also seems to have taken less than 2 years. And unsurprisingly, these happen to be the only two among the tech companies you named that launched before 1995.

Check out the Forbes Global 5000. Then go think about the hypothetical Forbes Global 50,000. Is the 50,000th most successful company in the world not successful? Of course not, it's incredibly successful.

> why do we have VC and investment at all

Out of all companies started in 2024 I can guarantee you that <0.01% have received VC investment by now (Feb 2026) and <1% of tech companies did. I'll bet my house on it.


Are we forgetting that sometimes, they just go bankrupt?

Name one with a comparable number of users and revenue? Not saying you are wrong, but I would bet against that outcome.

I'll be able to do just that in 36mo or so after the IPOs and subsequent collapse, I think.

Enron

I should have guessed someone would answer this question in this thread with Enron :)

I did not ask for a random company that went under for any reason; I asked a specific question about users and revenue.


Well, there are lots and lots of examples that don't end in bankruptcy, just a very large loss of capital for investors. Just as one example, the majority of the stars of the dotcom bubble: Qualcomm, pets.com, Yahoo!, MicroStrategy, etc.

Uber, which you cite as a success, is only just starting to make any money, and any original investors are very unlikely to see a return given the huge amounts ploughed in.

MicroStrategy has transformed itself, same company, same founder, similar scam 20 years later, only this time they're peddling bitcoin as the bright new future. I'm surprised they didn't move on to GAI.

Qualcomm is now selling itself as an AI-first company. Is it one, or is it just trying to ride the next bubble?

Even if GAI becomes a roaring success, the prominent companies now are unlikely to be those with lasting success.


I love how your comment is getting downvoted.

Like it's a surprise that startups burn through money. I get the feeling that people really have no idea what they're talking about in here anymore.

It's a shame.


Then you are misunderstanding the downvoting. It's not the fact that they are burning money; it's the fact that this cost $20k today, but that is not the real cost once you factor in that they are losing money at this price.

So tomorrow, when this "startup" needs to come out of its money-burning phase, as every startup does sooner or later, that cost will increase, because there is no other monetization avenue, at least not for Anthropic, which "will never use ads".

At $20k this "might" be a reasonable cost for "the project"; at $200k it might not.


That would be insightful if the cost of inference weren’t declining at roughly 90% per year. Source: https://epoch.ai/data-insights/llm-inference-price-trends

According to that article, the data they analyzed was API prices from LLM providers, not their actual cost to perform the inference. From that perspective, it's entirely possible to make "the cost of inference" appear to decline by simply subsidizing it more. The authors even hint at the same possibility in the overview:

> Note that while the data insight provides some commentary on what factors drive these price drops, we did not explicitly model these factors. Reduced profit margins may explain some of the drops in price, but we didn’t find clear evidence for this.


What in the world would the profit motive be to “make it appear” that inference cost is declining? Any investors would have access to the real data. End users don’t care. Why would you do the work for an elaborate deception?

Source that they’re losing money on each token?
