Hacker News | zygy's comments

Glad you're working on this. Duolingo is garbage and I've been hopeful that AI can help accelerate language learning in a way that is actually effective.


alternate title: "The Urgency of Interpretability"


and why LLMs are still black boxes that fundamentally cannot reason.


Naive question: what's the intuition for how this is different from increasing the number of learnable parameters on a regular MLP?


Orthogonality ensures that each weight carries its own independent importance; in a regular MLP, the weights are naturally correlated.
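A minimal sketch of the intuition (illustrative only, not any specific paper's construction): if the weight rows are orthonormal, perturbing the input along one row's direction changes only that row's output, so each weight direction contributes independently.

```python
import math

# A 2x2 orthonormal "weight matrix": the rows are perpendicular unit vectors.
t = 0.7
W = [[math.cos(t),  math.sin(t)],
     [-math.sin(t), math.cos(t)]]

def matvec(M, v):
    # Plain matrix-vector product.
    return [sum(m_i * v_i for m_i, v_i in zip(row, v)) for row in M]

x = [1.5, -2.0]
y = matvec(W, x)  # each y[i] is an independent projection of x onto row i

# Perturb x along row 0's direction: y[0] shifts by exactly eps,
# while y[1] is untouched, because row 1 is orthogonal to row 0.
eps = 0.1
x2 = [x[0] + eps * W[0][0], x[1] + eps * W[0][1]]
y2 = matvec(W, x2)
assert abs(y2[0] - (y[0] + eps)) < 1e-12
assert abs(y2[1] - y[1]) < 1e-12
```

With correlated (non-orthogonal) rows, that same perturbation would bleed into both outputs, which is the sense in which regular MLP weights share importance.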


Noob here. How would I figure out whether my machine would handle a particular model?


Check the model's RAM requirements (these can be hard to find, though) and compare them to the RAM you have available: VRAM if you're running on GPU, system RAM if running on CPU.

E.g., this extended-context Llama 3 70B requires 64GB at 256K context and over 100GB at 1M context.
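If you can't find published numbers, a common back-of-envelope rule is parameter count times bytes per weight, plus some runtime overhead. A hedged sketch (the helper name `model_ram_gb` and the 20% overhead factor are my assumptions; KV cache at long context is deliberately excluded, since it's what pushes the 1M-context figure over 100GB):

```python
def model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM estimate for holding model weights in memory.

    overhead (~20%) is a guess for runtime buffers; the KV cache at
    long context is NOT included and can dominate at large contexts.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 70B-parameter model:
print(round(model_ram_gb(70, 16)))  # fp16: roughly 168 GB
print(round(model_ram_gb(70, 4)))   # 4-bit quantized: roughly 42 GB
```

So a 70B model only fits in ~64GB after quantization, which squares with the numbers above once you add the context's KV cache.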


My Morning Jacket's "Touch Me I'm Going to Scream Pt. 2": https://www.youtube.com/watch?v=j3PciCWIGLE


Thanks for sharing. Do you have similar side projects today?


I never stopped :) Both at work and outside of it.


The current Rust experts probably spent more than 10 years learning the concepts that made them Rust experts.


Have a particular one to recommend?


Whatever's cheap and slightly conical – most will do. Worst case, you let it sit for ~1 hour so it melts a little.

From 10 seconds of Googling, something like this: https://browzefactory.com/products/tumbler-stainless-steel-2...


Any good publications to learn about this?


Congrats on the business, and thank you for sharing!


Thanks for your comment.

