Cannot wait for my eyes to suffer when ludicrous mode becomes the next popular CAPTCHA method. Also, this was an interesting response: https://files.catbox.moe/jiw75z.png.
This is awesome. Thanks for chatgpt-shell.el too. You could use authinfo for API keys so they don't have to live inside the config (stored in ~/.authinfo, same format as ~/.netrc).
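For reference, an entry in ~/.authinfo would look something like this (host and key are illustrative placeholders):

```
machine api.openai.com password sk-your-key-here
```

Emacs can then look the key up with the built-in auth-source library (e.g. `auth-source-pick-first-password`), so nothing secret ends up in the config itself.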
The Rust Cookbook states that it's a collection of examples for accomplishing various tasks, some with a brief explanation. But nothing there could be called an article/talk/repo.
Worth noting, for anyone curious, that learning idiomatic Rust from those examples isn't straightforward.
The explanations simply state what the code is doing, the classic sin of code commenting. No explanation is given for why `anyhow::Result`, `main() -> Result<()>`, the `?` operator, `{:?}`, etc. are used. If you wanted to learn, those are exactly what you'd care about. But if you're just looking for a snippet to use for a task, which is what a cookbook is for, then it's fine.
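To make that concrete, here's a minimal sketch of what those idioms do together (the file path is just a placeholder, and it assumes `anyhow` is declared in Cargo.toml):

```rust
use anyhow::Result; // anyhow::Result<T> is shorthand for Result<T, anyhow::Error>

// Returning Result<()> from main lets errors bubble up and get printed,
// instead of forcing unwrap()/expect() panics everywhere.
fn main() -> Result<()> {
    // `?` propagates the error on failure (converting io::Error into anyhow::Error).
    let contents = std::fs::read_to_string("Cargo.toml")?;
    // `{:?}` uses the Debug formatter, which works on types without a Display impl.
    println!("{:?}", contents.lines().next());
    Ok(())
}
```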
Seems like a simplified Rust with partial prefix notation (the rationale that this is better for LLMs is really based on vibes) that compiles to C. Similar languages were posted here not too long ago: Zen-C => more features, no prefix notation; Rue => no prefix notation, compiles directly to native code (no C target). Surprisingly, compared to other LLM-"optimized" languages, this one isn't much concerned with token efficiency.
I find Polish or Reverse Polish notation jarring after a lifetime of thinking in terms of operator precedence. Given that it's fairly rare to see, I wonder what about it would be more LLM-friendly. It does lend itself better to "tokenization" of a sort - if you want to construct operations from lots of smaller operations, for example if you're mutating genetic algorithms (a la Eureqa). But I've written code in the past to explicitly convert those kinds of operations back to infix for easier readability. I wonder if the LLMs in this case are expected to behave a bit like genetic algorithms as they construct things.
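For illustration, a minimal sketch of that kind of conversion, fully parenthesizing the output so no precedence reasoning is needed (the token format and operator set are simplified):

```rust
// Convert a whitespace-separated RPN expression like "3 4 + 5 *" to infix.
fn rpn_to_infix(expr: &str) -> Option<String> {
    let mut stack: Vec<String> = Vec::new();
    for tok in expr.split_whitespace() {
        match tok {
            "+" | "-" | "*" | "/" => {
                let rhs = stack.pop()?; // operands were pushed left-to-right
                let lhs = stack.pop()?;
                stack.push(format!("({lhs} {tok} {rhs})"));
            }
            _ => stack.push(tok.to_string()), // number/variable: push as-is
        }
    }
    // A well-formed expression leaves exactly one entry on the stack.
    if stack.len() == 1 { stack.pop() } else { None }
}

fn main() {
    // Prints: Some("((3 + 4) * 5)")
    println!("{:?}", rpn_to_infix("3 4 + 5 *"));
}
```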
>It does lend itself better to "tokenization" of a sort - if you want to construct operations from lots of smaller operations [...]
That's an educated assumption to make. But therein lies the issue with every LLM-"optimized" language, including the recent ones posted here that are oriented toward minimizing tokens: assumptions, unvalidated and unfalsifiable, about the kind of output LLMs synthesize/emit when that output is code (or any output, really).
There's a risk, but imo there's some overreaction. Plus, you should keep backups anyway. I've had enough disks fail on me that I fear keeping any important data as a single copy. If you want to be on the safe side though: create a container/VM, do the agentic work inside it, and git commit/push outside it often, ideally at every step.
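For example, the checkpointing half could be as simple as running this on the host after each agent step (the path and commit message are placeholders):

```
git -C ./project add -A
git -C ./project commit -m "checkpoint after agent step"
git -C ./project push
```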
LLM output isn't the unfiltered result of an unbiased model. Rather, some texts may be classified as high-quality (where the em-dash, curly quotes, and a more sophisticated, less everyday vocabulary are more likely to appear) and some as low-quality, and some choices are driven by human feedback (aka fine-tuning), either to improve quality (OpenAI employs Kenyan annotators, and Kenyan/Nigerian English is considered more "colonial") or to drive engagement through affirmative/reinforcing responses ("You're absolutely right. The universe is indeed a donut. Want me to write an abstract? Want me to write down the equations?"). Some nice relevant articles are [1], [2].