Hacker News | gopalv's comments

> a clear explanation of a compression algorithm

The Huffman tree, LZ77, and LZMA explanation is truly excellent for how concise it is.

The earlier Veritasium video on Markov chains is itself linked, in case you don't know what a Markov chain is.

I expected Veritasium to tank when it got sold to private equity & Derek went to Australia, but I've been surprised to see the quality of the long-form stuff churned out by Casper, Petr, Henry & Greg.


I liked the presentation about paint mixing, though I think it is not impossible to find the missing key paint given the public paint and the message paint. Still, this is really close to what RSA is.

The paint mixing is actually way closer to the idea of the Diffie-Hellman key exchange than to RSA.
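The paint analogy maps directly onto the math: each side stirs a secret into a public base, swaps the mixtures, and stirs the secret in again. A toy sketch with deliberately tiny, insecure numbers (all values here are illustrative):

```python
# Toy Diffie-Hellman with tiny, insecure numbers (illustration only).
# p (modulus) and g (generator) are public, like the starting paint color.
p, g = 23, 5

# Each party's secret "key paint" - never sent over the wire.
alice_secret, bob_secret = 6, 15

# Each party mixes their secret into the public base and shares the result.
alice_public = pow(g, alice_secret, p)   # what Alice sends
bob_public = pow(g, bob_secret, p)       # what Bob sends

# Each mixes their own secret into the other's mixture; both arrive at
# the same shared color, but an eavesdropper only ever sees p, g and
# the two public mixtures.
shared_a = pow(bob_public, alice_secret, p)
shared_b = pow(alice_public, bob_secret, p)
assert shared_a == shared_b
```

Recovering a secret from a public mixture is the discrete logarithm problem, which is what makes "unmixing the paint" hard at real key sizes.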

I couldn't help but notice it was quite similar to a Numberphile video 8 years earlier...

https://youtu.be/NmM9HA2MQGI


> You're encouraged to use AI to solve the problem. Whatever tools you would want to use as an employee, use them during the interview. We'll give you a Claude, Codex, Cursor, or Gemini license if you need one. I want to see you balance LLM-generated code against your own judgment.

I love this approach, because this is a good litmus test of both criteria I am looking for in engineers.

Part one is "can you use these tools fluently" and the second section is more like "what are you without that suit, Tony Stark?" (or Peter Parker, depending on your movie preference).

The AI part has been moved to a take-home test section in my scenario, but the 40 minutes of the interview are "I want you to make a minor change to the tool you built" & seeing how a change in requirements bounces through the person's head.

Part 3 is the turn, where I pull the rug out from under the assumptions in the original business case.

My company has probably hit max engineering size already, but I've found there are people who build "technical savings" into their architectures rather than debt, which gives them velocity for their next 3 steps without compromising on PR quality.


> the future is of course BEV

That's probably the reason - we only need dressage horses and pure bloods now that the real draft horse is getting put to pasture.

These are no longer workhorses.

Six cylinders are the smoothest engines out there.

Honda used to have a 1L 6-cylinder engine for their bikes - the Gold Wing still has a 6-cylinder.

The perimeter of the piston goes down relative to its area (& multiplied by BMEP) as the radius goes up - looking at you, Africa Twin.

The perimeter is where the unburnt fuel lives and gets caught up in the emissions rules. So fewer, larger cylinders are better according to the EPA - 500cc each, maybe.
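A rough back-of-the-envelope for the perimeter-vs-area point: for a fixed total displacement and a square (bore = stroke) layout, fewer cylinders means a bigger bore per cylinder, and the circumference-to-area ratio of the piston shrinks. The square-engine assumption and 1000cc figure are just illustrative:

```python
import math

def bore_for(total_cc, cylinders):
    # Square engine assumption: stroke == bore, so V_cyl = (pi/4) * bore^3.
    v_cyl = total_cc / cylinders              # cc == cm^3
    return (4 * v_cyl / math.pi) ** (1 / 3)   # bore in cm

def perimeter_to_area(bore):
    # Circumference / piston area = (pi*b) / (pi*b^2 / 4) = 4 / b.
    # A rough proxy for how much of the charge sits near the cylinder wall.
    return 4 / bore

for n in (2, 4, 6):
    b = bore_for(1000, n)
    print(f"{n} cylinders: bore {b:.1f} cm, perimeter/area {perimeter_to_area(b):.2f} per cm")
```

Running it shows the 2-cylinder layout has the lowest perimeter-to-area ratio, i.e. the least wall-adjacent charge per unit of swept area.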

If we're only going to have hobby vehicles with internal combustion, then a six cylinder or doubling up to a v-12 makes sense.

They're toys for the weekend, not something to put 100k miles on.


>Six cylinders are the smoothest engines out there.

I dunno man, have you ever driven a rotary? The design has a lot of problems, but smoothness isn't one of them...


My older brother had a mighty CBX. Smooth as silk, and very fast. Carburetor synchronization was troubling!

> Like, what do you mean by "taste"?

Imagine the scene from Ratatouille, where Remy explains "taste" and the brother finds it impossible to understand what it is ("Food is food").

The dad goes from being annoyed that Remy is a picky eater to putting him to work as a taster, giving him the job of approving forage that comes into the family & protecting the others from being poisoned.

The reason we say "taste" is because that's the closest parallel.

When it is even more vague, I call it a "code smell".


Okay, but you can define what good food is, right? Like, if you're the best chef in the world, you can clearly define what the best "taste" for a particular food is. It might be subjective, but it wouldn't be vague; the chef can clearly pinpoint what makes the food taste better instead of just saying "it's what you feel" or other vague terms. My point is that the article doesn't delve into what good taste is in the context of coding. I understand the metaphysical meaning of what taste means, but you need to define what it means in your particular context. If you leave it to be subjective, then everyone has good taste, which means taste cannot be the difference between good and bad software - which is the premise of the post.

> that's imposing these draconian rules on 3D printed guns

This is a bill with no votes - the first committee hearing is in March.

The purpose of the bill seems to be to stir up some controversy & possibly raise the profile of its proposer.

The bill is written very similarly to how we enforce firmware rules for regular printers and EURion constellation detection.


In the intro to "The Crack-Up", there's a quote which I used as a mantra:

The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function - one should be able to see that things are hopeless and yet be determined to make them otherwise.

You don't need to lie to yourself that the world is not falling apart, but being truly optimistic rather than nihilistic in the face of that is a difficult test for any intelligent human being.

On the scale of the universe and of history, most of what you do is not important, but it is very important that you do it (rambles on about the Gita, Ecclesiastes and Plato ...).


Pessimism of the intellect, optimism of the will

If you have ever used something like yacc/bison, debugging it is relatively sane with gdb.

You can find all the possible tricks for making it debuggable by reading the y.tab.c,

Including all the corner cases for odd compilers.

re2c is a bit more modern if you don't need all the history of yacc.


Debugging Yacc with gdb is completely insane for other reasons, like that grammar rules aren't functions you can just put a breakpoint on and see their backtrace, etc., as you can with a recursive descent parser.

But yes, you can put a line-oriented breakpoint on your action code and step through it.


> Making models larger improves overall accuracy but doesn't reliably reduce incoherence on hard problems.

Coherence requires 2 opposing forces in one dimension and at least 3 of them in higher dimensions of quality.

My team wrote up a paper titled "If You Want Coherence, Orchestrate a Team of Rivals"[1] because we kept finding that upping the reasoning threshold resulted in less coherence - more experimentation before hitting a dead-end and turning around.

So we had a better result from using Haiku (failing over to Sonnet) over Opus, and from using a higher-reasoning model to decompose tasks rather than perform each one of them.

Once a plan is made, the cheaper models do better as they do not second-guess their approaches - they fail or they succeed; they are not as tenacious as the higher-cost models.

We can escalate to a higher authority and get out of that mess faster if we fail hard and early.

The knowledge of how exactly a failure happened seems to be less useful to the higher-reasoning model than to the action-biased models.

Splitting up the tactical and strategic sides of the problem seems to work, similar to how generals don't hold guns in a war.
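A minimal sketch of that tactical/strategic split - every name here is a hypothetical stand-in, not the paper's actual implementation: a high-reasoning model decomposes, cheap action-biased models execute, and failures escalate upward instead of being chewed on in place.

```python
# Hypothetical sketch: a strong model plans, cheap models execute,
# failures escalate instead of the executor retrying itself to death.

def run_plan(goal, plan_llm, do_llm, fallback_llm):
    # Strategic side: decompose once with the expensive model.
    steps = plan_llm(f"Decompose into independent steps: {goal}")
    results = []
    for step in steps:
        # Tactical side: a cheap, action-biased model per step.
        out = do_llm(step)
        if out is None:            # fail hard and early...
            out = fallback_llm(step)   # ...then escalate one level up
        if out is None:
            raise RuntimeError(f"escalate to human: {step}")
        results.append(out)
    return results
```

The point of the structure is that the planner's context never fills up with the executors' failed attempts; it only ever sees the decomposition and the final results.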

[1] - https://arxiv.org/abs/2601.14351


> Coherence requires 2 opposing forces

This seems very basic to any kind of information processing beyond straight shot predictable transforms.

Expansion and reduction of possibilities, branches, scope, etc.

Biological and artificial neural networks converge multiple signals, which are reduced by competition between them.

Scientific theorizing, followed by experimental testing.

Evolutionary genetic recombination and mutation, winnowed back by resource competition.

Generation, reduction, repeat.

In a continually coordinated sense too. Many of our systems work best by encouraging simultaneous cooperation and competition.

Control systems: a command signal proportional to demand, vs. continually reverse-acting error feedback.


> This seems very basic

Yes, this is not some sort of hard-fought wisdom.

It should be common sense, but I still see a lot of experiments which measure the sound of one hand clapping.

In some sense, it is a product of laziness to automate human supervision with more agents, but on the other hand I can't argue with the results.

If you don't really want the experiments and data from the academic paper, we have a white paper which will be completely obvious to anyone who's read High Output Management, The Mythical Man-Month and A Philosophy of Software Design recently.

Nothing in there is new, except the field it is applied to has no humans left.


> Yes, this is not some sort of hard-fought wisdom.

By basic I didn't mean uninteresting.

In fact, despite the pervasiveness and obviousness of the control and efficiency benefits of push-pull, generating-reducing, cooperation-competition, etc., I don't think I have ever seen any kind of general treatment or characterization that pulled all these similar dynamics together. Or a hierarchy of such.

> In some sense, it is a product of laziness to automate human supervision with more agents, but on the other hand I can't argue with the results.

I think it is the fact that the agents are operating coherently with the respective complementary goals. Whereas, asking one agent to both solve and judge creates conflicting constraints before a solution has begun.

Creative friction.

I am reminded of brainstorming sessions, where it is so important to note ideas, but not start judging them, since who knows what crazy ideas will fit or spark together. Later they can be selected down.

So we institutionalize this separation/staging with human teams too, even if it is just one of us (within our context limits, over two inference sessions :).


More or less, delegation and peer review.


> The paper sounds too shallow. The errors data doesn't seem to have a rationale or correlation against the architecture. Specifically, what makes the SAS architecture to have lowest error rates while the similar architecture with independent agents having highest error rates?

I can believe SAS works great until the context contains errors that were later corrected - there seems to be leakage between past mistakes and new ones if you leave them all in one context window.

My team wrote a similar paper[1] last month, but we found the core component is not the orchestrator but a specialized evaluator for each action, which matches the result, goal and methods at the end of execution and reports back to the orchestrator on goal adherence.

The effect is sort of like a perpetual evals loop, which lets us improve the product every week, agent by agent, without the Snowflake agent picking up the BigQuery tools, etc.
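The evaluator-per-action idea can be sketched roughly like this - all function names and the pass/fail protocol are hypothetical illustrations, not the paper's code:

```python
# Hypothetical sketch of a per-action evaluator loop: after each action
# runs, a small judge scores the result against the action's goal and
# method, and only the verdict flows back to the orchestrator's context,
# not the raw (possibly failed) transcripts.

def evaluate(action, result, judge_llm):
    verdict = judge_llm(
        f"goal: {action['goal']}\nmethod: {action['method']}\nresult: {result}"
    )
    return {"action": action["name"], "adheres": verdict == "pass"}

def run_actions(actions, do_llm, judge_llm):
    report = []
    for action in actions:
        result = do_llm(action)
        report.append(evaluate(action, result, judge_llm))
    return report   # the orchestrator sees adherence scores, not transcripts
```

Because the adherence report is all that crosses back, each agent's mistakes stay quarantined in its own context, which is the "perpetual evals loop" effect described above.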

We started building this Nov 2024, so the paper is more of a description of what worked for us (see Section 3).

Also specific models are great at some tasks, but not always good at others.

My general finding is that Google models do document extraction best, Claude does code well and OpenAI does task management in somewhat sycophantic fashion.

Multi-agent setups were originally supposed to let us put together a "best of all models" world, but they also work for error correction if I have Claude write code and GPT-5 check the results instead of everything going into one context.

[1] - https://arxiv.org/abs/2601.14351


> But for just the cost of doubling our space, we can use two Bloom filters!

We can optimize the hash function to make it more space efficient.

Instead of using remainders to locate filter positions, we can use a Mersenne prime mask (say, 31), but in this case I have a feeling the best hash function to use would be a mask of (2^1)-1.


This produced strange results on my ternary computer. I had to use a recursive popcnt instead.


this is my new favorite comment on this cursed website

