
+1 to creating tickets by simply asking the agent to. It's worked great, and larger tasks can be broken down into smaller subtasks that could reasonably be completed in a single context window, so you rarely ever have to deal with compaction. Especially in the last few months, since Claude's gotten good at dispatching agents to handle tasks if you ask it to, I can plan large changes that span multiple tickets and tell Claude to dispatch agents as needed to handle them (which it will do in parallel if they mostly touch different files), keeping the main chat relatively clean for orchestration and validation work.
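As a rough analogy for the dispatch pattern described above, here is a minimal Python sketch of fanning independent tickets out to parallel workers while the orchestrator stays free to validate results. The ticket names, file lists, and `handle_ticket` helper are all made up for illustration; this is not Claude Code's actual mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tickets that mostly touch different files,
# so they can safely be worked on in parallel.
TICKETS = {
    "ticket-01-refactor-auth": ["auth.py"],
    "ticket-02-add-logging": ["logging.py"],
    "ticket-03-update-docs": ["README.md"],
}

def handle_ticket(name: str, files: list[str]) -> str:
    # Stand-in for dispatching a subagent; a real setup would hand
    # the ticket text to an agent and collect its result.
    return f"{name}: touched {', '.join(files)}"

def dispatch_all(tickets: dict[str, list[str]]) -> list[str]:
    """Run independent tickets concurrently, keeping the 'main chat'
    (this function) free for orchestration and validation."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(handle_ticket, name, files)
                   for name, files in tickets.items()]
        return [f.result() for f in futures]
```

The key design point matches the comment: parallelism is only safe because the tickets touch disjoint files.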

It totally is. The fact that this post has gotten this many upvotes is appalling.

Just wait, sir. We are indeed doing inference on the N64. We had serious issues with text; I am almost done resolving them.

You mean to tell me the included screenshot hasn't convinced you?

https://github.com/sophiaeagent-beep/n64llm-legend-of-Elya/b...


I think the source code in the GitHub repo generates the ROM in the corresponding screenshots, but it seems quite barebones.

It feels very much like it’s cobbled together from the libdragon examples directory. Or, they use hardware acceleration for the 2D sprites, but then write fixed-width text to the framebuffer with software rendering.


Partially correct. The value is not the game interface right now. It's proof that you can do actual inference with an LLM. The surprise I am developing is a bit bigger than this; I just have to get the LLM outputs right first!

Can you elaborate on the “partially correct” bit? I’d like to understand the programming of the ROM better.

You’re right that the graphics layer is mostly 2D right now. Sprites are hardware-accelerated where it makes sense, and text is written directly to the framebuffer. The UI is intentionally minimal.

The point of this ROM wasn’t the game interface; it was proving real LLM inference running on-device on the N64’s R4300i (93 MHz MIPS, 4MB RDRAM). Since the original screenshots, we’ve added:

• Direct keyboard input
• Real-time chat loop with the model
• Frame-synchronous generation (1–3 tokens per frame @ 60 FPS)

So it’s now interactive, not just a demo render. The current focus is correctness and stability of inference; the graphics layer can evolve later. Next step is exposing a lightweight SDK layer so N64 devs can hook model calls into 3D scenes or gameplay logic: essentially treating the LLM as a callable subsystem rather than a UI gimmick.

The value isn’t the menu. It’s that inference is happening on 1996 silicon. Happy to answer specifics about the pipeline if you’re interested.
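The "frame-synchronous generation" idea above can be illustrated with a small scheduling sketch: cap decode work at a fixed token budget per video frame so rendering never stalls. This is a Python illustration of the scheduling concept only, not the project's code (the real ROM would be C on MIPS); `step_fn` and `TOKENS_PER_FRAME` are names I invented.

```python
TOKENS_PER_FRAME = 2  # within the claimed 1-3 tokens/frame budget at 60 FPS

def generate_frame_synchronous(prompt_tokens, total_tokens, step_fn):
    """Run inference in fixed per-frame slices.

    step_fn(state) -> (new_state, token) stands in for one decode
    step of the model. Each pass through the outer loop represents
    one video frame; a real implementation would wait for vsync and
    render between frames."""
    state = list(prompt_tokens)
    out = []
    while len(out) < total_tokens:
        # One frame's budget: at most TOKENS_PER_FRAME decode steps.
        for _ in range(min(TOKENS_PER_FRAME, total_tokens - len(out))):
            state, tok = step_fn(state)
            out.append(tok)
        # vsync wait / render would go here
    return out
```

The design trade-off is latency for smoothness: the model emits tokens more slowly, but the frame loop keeps a steady 60 FPS.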

Delivered. Please reconsider now. AI slop cannot build this without a human who has real RISC CPU knowledge. The emulator ---------------------------------------------- https://bottube.ai/watch/shFVLBT0kHY

The real iron! It runs faster on real iron! ---------------------------------------------- https://bottube.ai/watch/7GL90ftLqvh

The ROM image ---------------------------------------------- https://github.com/sophiaeagent-beep/n64llm-legend-of-Elya/b...



What makes you think their fame will be ephemeral? All of the tech billionaires from the 90s, 00s, and 10s are still constantly in the news for better or worse.

They need to generate revenue to continue to raise money to continue to invest in compute. Even if they have the Midas touch, it needs to be continuously improved, because there are three other competing Midas-touch companies working on new and improved Midas touches that will make theirs obsolete and worthless if they stand still even for a second.

But most of their funding comes from speculative investment, not selling their services. Also, wouldn't selling their own products/services generate revenue?

Making a profitable product is so much more than just building it. I've probably made 100+ side projects in my life and only a handful have ever generated any revenue.

When was this??


You don't need to build anything. Just tell the agent to write tickets into .md files in a folder and move them to a closed folder as you go along. I've been using Claude Code with the Max plan nonstop essentially every day since last July, and since then I've come to realize that the newer people are, the more things they think they need to add to CC to get it to work well.

Eventually you'll find a way that works for you that needs none of it, and if any piece of all the extras IS ever helpful, Anthropic adds it themselves within a month or two.


I'm thinking a customized LLM would write notes in its own hyper compressed language which would allow it to be much much more efficient.

For debugging you could translate it out to English, but if these agents can do stuff without humans in the loop, why do they need to take notes in English?

I can't imagine creating this without hundreds of millions if not billions. I think the future is specialized models


They're literally trained on natural language to output natural language. You would need to create the hyper-compressed language first, convert all of your training data to it, and then train the models on that. But token efficiency per word already varies between languages; Chinese is something like 30%-40% more efficient than English, last I heard.

Doesn't this mean the Chinese models have a significant advantage ?

This isn't my domain, but say you had a massive budget, wouldn't a special LLM "thinking" language make sense ?


Same, but I imagine once prices start rising, the prices of GPUs that can run any decent local models will soar (again) as well. You and I wouldn't be the only people with this idea, right?


I mean, will it? I would expect that all those GPUs and servers will end up somewhere. Look at old Xeon servers: they all ended up in China. Nobody sane will buy a 1U server for their home, but Chinese companies have recycled these servers by making X99 motherboards that take the RAM and Xeon CPUs from these noisy servers, turning them into PCs.

I would expect that they could sell something like AI computers with a lot of GPU power, built from the similarly recycled GPU clusters in use today.


Wonder if it was more or less than the crowd that showed up for last week’s NeverNude protest. I remember them numbering in the dozens too.


For those wondering the reference: https://www.youtube.com/watch?v=lKie-vgUGdI


I really enjoy the actual content of the few chapters I read so far, but the styling is 100% LLM, and it's so hard to get through multiple pages of the same exact mannerisms repeated over and over and over.

It kind of feels like reading the world's longest LinkedIn post. I really wish this wasn't the case because I really want to take in the story and lessons, but it's literally too fatiguing to get through much in one sitting.


> It kind of feels like reading the world's longest LinkedIn post

I didn't realize how poignant of a criticism this could be. Holy hell that hit hard.


Yeah this is a bit sad. I think maybe this person actually had some real lived experience and wrote bullet points and then generated the book. I don’t even want to think about the possibility that the whole thing, including anecdotes, might be generated.

I skimmed the content (it has no immediate relevance to my life) but even the chapter headings are sloppadocious.


>lived experience

Not to derail your comment, but what is the purpose of prepending the word "lived" to the word "experience"? Is there experience that's not lived? It's strange to me to imply that knowledge gained from others telling you about something can be called "experience". I've seen the term pop up in particular circumstances in the last several years and it smacks to me of a dog whistle.


It’s a form of contrastive reduplication, used to emphasize the realness of the experience, versus second-hand experience, like interviewing those who have the actual experience.

Also consider a phrase like “work work” versus “school work”. For someone who both works a paid job and goes to school, clarifying that they need to do “work work” makes sense.


You can experience things second hand. I wouldn’t object to someone saying ‘my experience with chemo’ when talking about their spouse’s disease. They can tell you not just the symptoms but what their insurance company did etc.

Still, while watching a loved one deal with cancer is an intense experience and gives you way more insight than you had before, you didn’t have the lived experience of having cancer; thus the distinction.


You're right that it's become a stock phrase, is somewhat redundant, and I used it without thinking. I strive to avoid such stock phrases (see Orwell's Politics and the English Language), so I thank you for drawing my attention to that.

I don't think it's a dog whistle. A dog whistle is when you signal something to a subgroup of your audience, using language that only they will understand. I have not seen "lived experience" used as a dog whistle.

I have seen it used to contrast official or elite discourse with what happens in one's daily life. For example, official statistics may show that crime is down in your area, but that does not comport with how you are now avoiding certain areas of town completely. Or a woman might be told that their company does not penalize them for taking maternity leave, but in practice they see they are sidelined. The "lived experience" trope is usually deployed when you start trusting your own biography, even the reactions of your own body, as a source of knowledge, opposing dominant narratives.

According to my brief research, it seems to have entered English from German, in the writings of Simone de Beauvoir.


OP here, thanks for this feedback. My workflow was to first write a draft and then feed it into an LLM to fix grammar and improve conciseness. I wish there were a tool (I think folks are already working on one) similar to what a book editor does, which suggests changes as opposed to changing the styling.


You can simply ask the model to point out if there are any problems and then fix them yourself. You don't have to copy and paste its output into your book. You can also pay for an actual copyeditor to edit your book.


You can also edit it yourself and then ask a friend, relative, or colleague to read the parts you are struggling with improving. "Does this sentence flow? Is there a better way to say this? Is this confusing?"

If you're going to sink time into writing a book, it's worth spending some time editing it so your message gets through clearly. But that's just my opinion, your mileage may vary.


Perhaps this is what you are looking for: https://www.deepl.com/en/write

It corrects spelling errors and improves awkward wording. You can then go and choose alternative sentences or words. Just don't expect any sort of deeper intelligence.


If it is only to fix grammar and improve conciseness, I find Grammarly quite useful. AI goes way beyond these things. Also, while making something concise, AI might make things more difficult for readers to understand. Worse, it might write something that is totally wrong.


The workflow is fine, the content is fine. The LLM needs to lean in a little harder on your voice and condense your content; focus on subtraction rather than addition.

The problem is this: viewed as a one-off, it’s a gem. But put it on the AI slop conveyor that many commenters here are apparently fed all day long, and the voice is too similar; it seems like just another chapter in that anthology.


Hiring a fairly competent editor is affordable (sometimes even cheap), especially now that a lot of commercial copywriting has taken a hit from the AI slop.


LLM writing reads like shitty blogs turned into books. I'm not sure what this is called, but it's when the chapter can be summed up in a sentence or two, yet is fleshed out to cover three or four pages, with multiple anecdotes conveying the same concept.


> I'm not sure what this is called, but it's when the chapter can be summed up in a sentence or two, yet is fleshed out to cover three or four pages, with multiple anecdotes conveying the same concept.

This is how I felt when I read Tony Robbins. Or a modern business book.


I dunno how much folks should trust this as more than an account of his specific journey, given this guy apparently didn't do an 83b election and got stuck with a big tax bill (ch 11).


OP here, 83b didn’t apply in my case, as I had only stock options, referenced in chapter 11.


Yeah, but as a founder, why did you have stock options rather than stock with a vesting agreement?

Even early employees can early exercise and file an 83b.

The situation in this chapter is just self-inflicted through bad planning; the correct advice is to vest stock and make sure you file an 83b when you start the company.

The advice everyone should be getting here is not "don't take out a loan", but "make sure you get stock and an 83b"


