In my case I have almost all notifications disabled, so maybe there's an option somewhere. I generally find notification badges too compelling not to check, and then I get waylaid doom-scrolling/watching, so I've made it a habit to disable them everywhere.
Somewhat tempted to re-enable it as I only really comment on videos that are for very very niche communities and I'm usually answering or asking questions.
To create an ephemeral (Docker-like) macOS VM on a Mac with full performance and hardware access (e.g. GPU), you have to use the virtualization API provided by Apple.
For most CI use, you can choose between:
Anka, a closed-source, "contact us for pricing" product, where you have to pay for an expensive license (easily 3,000 USD/yr per machine)
or
tart, which is a lightweight wrapper around the official Apple API.
But you have to know that on macOS there is an artificial limit of 2 VMs per host Mac... but well:
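For reference, a typical tart workflow looks roughly like this (the image name is illustrative; check the Cirrus Labs registry for current base images):

```shell
# Install tart from the Homebrew tap maintained by Cirrus Labs
brew install cirruslabs/cli/tart

# Clone a base macOS image from the public registry into a local VM
tart clone ghcr.io/cirruslabs/macos-sonoma-base:latest ci-vm

# Boot it (use --no-graphics for headless CI runs)
tart run ci-vm

# Throw the VM away afterwards for Docker-like ephemerality
tart delete ci-vm
```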
This was pre-Anthropic, but the fact that Bun automatically loads .env files if they're present almost disqualifies it for most tasks: https://github.com/oven-sh/bun/issues/23967
It makes it hard to take them seriously with such a design choice; it's a footgun, really. It's so easy to accidentally load secrets via environment variables, and there's no way to disable this anti-feature.
1. Randomly peeking at process.argv and process.env all around. Other weird layering violations, too.
2. Tons of repeated code, e.g. multiple ad-hoc implementations of hash functions / PRNGs.
3. Almost no high-level comments about structure - I assume all that lives in some CLAUDE.md instead.
It's implicit state that's also untyped: just a String -> String map with no canonical single source of truth about which environment variables are consulted, when, why, and in what form.
Such state should be strongly typed, have a canonical source of truth (which can then also be reused to document the environment variables the code supports and, e.g., to allow reading the same options from config files, flags, etc.), and then be explicitly passed to the functions that need it, e.g. as function arguments or as members of an associated instance.
This makes it easier to reason about the code (the caller knows that a module changes its behaviour based on some state variable). It also makes it easier to test: mechanically, because setting environment variables in tests is gnarly, and conceptually, because knowing that the code changes its behaviour based on some state/option tells you both cases should probably be tested.
That's exactly why access to global mutable state should be limited to as small a surface area as possible, so that 99% of the code can be locally deterministic and side-effect free, using only values that are passed into it. That makes testing easier too.
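A minimal sketch of what that could look like in TypeScript (names like `AppConfig` and `loadConfig` are made up for illustration, not from any particular codebase):

```typescript
// The untyped String -> String env map is consulted in exactly one
// place, validated, and turned into a typed value that is then passed
// explicitly to whatever needs it.
type LogLevel = "debug" | "info" | "warn";

interface AppConfig {
  logLevel: LogLevel;
  cacheDir: string;
}

// Canonical single source of truth for which env vars exist and how
// they are interpreted; the same function could read flags or a
// config file instead.
function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const level = env.APP_LOG_LEVEL ?? "info";
  if (level !== "debug" && level !== "info" && level !== "warn") {
    throw new Error(`invalid APP_LOG_LEVEL: ${level}`);
  }
  return { logLevel: level, cacheDir: env.APP_CACHE_DIR ?? "/tmp/cache" };
}

// Downstream code receives the config as an argument - no hidden
// process.env reads, so the behaviour is visible at the call site.
function shouldLog(cfg: AppConfig, level: LogLevel): boolean {
  const rank: Record<LogLevel, number> = { debug: 0, info: 1, warn: 2 };
  return rank[level] >= rank[cfg.logLevel];
}
```

Tests can then just construct an `AppConfig` literal instead of mutating `process.env`.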
Environment variables can change while the process is running and are not memory safe (though I suspect Node tries to wrap access with a lock). Meaning that if you check a variable at point A, enter a branch, and check it again at point B, it's not guaranteed they will have the same value. This can cause you to enter "impossible" conditions.
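A sketch of the in-process case at least: `process.env` is an ordinary mutable map, so nothing guarantees two reads of the same variable agree if your own code (or a library you call) mutates it in between.

```typescript
import process from "node:process";

process.env.MODE = "fast";
const atPointA = process.env.MODE; // point A

// ...imagine a library call here that tweaks the environment...
process.env.MODE = "safe";

const atPointB = process.env.MODE; // point B
// atPointA !== atPointB: the branch you entered based on the read at
// A can now be an "impossible condition" relative to the value at B.
```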
Wait, is it expected for them to be able to change? According to this SO answer [0] it's only really possible through GDB or "nasty hacks" as there's no API for it.
I’m not strongly opinionated, especially with such a short function, but in general early return makes it so you don’t need to keep the whole function body in your head to understand the logic. Often it saves you having to read the whole function body too.
But you can achieve a similar effect by keeping your functions small, in which case I think both styles are roughly equivalent.
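A hypothetical example of the guard-clause style: the edge cases are disposed of up front, so the reader only carries the happy path in their head.

```typescript
interface User {
  isMember: boolean;
  years: number;
}

function discountPercent(user: User | undefined): number {
  if (!user) return 0;          // early return: no user, no discount
  if (!user.isMember) return 0; // early return: members only
  // The main logic reads linearly, with no nesting left to track.
  return Math.min(user.years * 5, 25);
}
```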
useCanUseTool.tsx looks special, maybe it's codegen'd or copy 'n' pasted? `_c` as an import name, no comments, use of raw promises instead of async functions. Or maybe it's just bad vibing...
Maybe, I do suspect _some_ parts are codegen or source map artifacts.
But if you take a look at another file, for example `useTypeahead`, you'd see that even with a few codegen / source-map artifacts, the core logic and behavior is still just a big bowl of soup.
This looks like a useful set of guidelines. I see the most value in reducing the bikeshedding which invariably happens when designing an API. I wonder if anyone is using AEP and can comment on downsides or problems they've encountered.
One thing I've noticed is that the section on batch endpoints is missing batch create/update. Batch get also seems a little strange: in the JSON variant it returns an object with a link for missing entities.
It also struck me as a bit of a sleight of hand - but maybe it's just rhetorical flourish. Or more charitably you could say it's inevitable - in a conference talk of finite length, you can't possibly back up every assertion with detailed evidence. "It turns out" or "it ends up" are then a shorthand way of referring to your own experience.
Literally every interview I've done recently has included the question: "What's your stance on AI coding tools?" And there's clearly a right and wrong answer.
In my case, the question was "how are you using AI tools?", trying to see whether you're still in the metaphorical stone age of copy-pasting code into chatgpt.com or making use of (at the time modern) agentic workflows. I'm not sure how good an idea this is, but at least it was a question that only came up after I'd passed the technical interviews. I want to believe its purpose was to gauge whether applicants were keeping up with dev tooling or potentially stagnating.
To be fair, this topic seems to be quite divisive, and it seems like something that definitely should be discussed during an interview. Who is right and wrong is one thing, but you likely don't want to be working for a company whose take on this topic is incompatible with yours.
Nice write-up, thanks for sharing. How does your hand-vibed Python program compare to frameworks like Pipecat or LiveKit Agents? Both are also written in Python.
I'm sure LiveKit or similar would be best to use in production. I'm sure these libraries handle a lot of edge cases, or at least let you configure things quite well out of the box. Though maybe that argument will become less and less potent over time. The results I got were genuinely impressive, and of course most of the credit goes to the LLM. I think it's worth building this stuff from scratch, just so that you can be sure you understand what you'll actually be running. I now know how every piece works and can configure/tune things more confidently.