Hacker News | compumike's comments

> Schematics also help explain your circuit because the idioms of drawing them communicate intent.

I think this is a key point. One could imagine three layers:

(1) NETLIST: a list of text-only descriptions, like "R1 1 0 1k", or even a sentence description "connect a 1k resistor between node 1 and node 0"

(2) SCHEMATIC: a 2D drawing with canonical symbols, straight lines for wires, and labels/notes

(3) LAYOUT: a 3D (or multi-layer 2D) physical representation for PCB, or in this case breadboarding

All three layers are useful. (Obviously you need layout to make a PCB, and you need a netlist for simulation.)
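As a toy illustration of why the netlist layer gets hard to browse, here's the netlist level as plain data (element names and values are made up for illustration, not any particular tool's format):

```python
# A netlist is just a flat list of (name, node_a, node_b, value) records.
netlist = [
    ("V1", "1", "0", "5V"),  # 5 V source between node 1 and ground
    ("R1", "1", "2", "1k"),  # 1k resistor between node 1 and node 2
    ("R2", "2", "0", "2k"),  # 2k resistor between node 2 and ground
]

# To answer "what connects to node 2?" you have to scan the whole list --
# exactly the bookkeeping a schematic lets your eyes do for free.
def elements_at(node, nl):
    return [name for (name, a, b, _val) in nl if node in (a, b)]
```

Even at three elements you're already doing a mental join on node names; at thirty, you're lost without the drawing.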

But for most humans, where we have 2D visual representations baked in, if you're trying to understand or communicate what's going on:

- It's really really hard to keep track of a bunch of text sentences like a netlist and node numbers/names for all but the simplest circuits -- maybe 3-5 elements?

- It's really really hard to follow a 3D layout of PCB tracks that leads to pads, and then having to remember pin orders etc.

- It's easiest to follow a schematic diagram. It's browsable. It contains "idioms", as you say, about signal flow, logical block groupings, etc.: purpose and intent and functionality, in a way that netlists and physical layouts don't.

FYI, for medium-large digital circuits, I don't think this is true: probably just reading VHDL/Verilog, like reading source code, makes more sense. This is closer to the "netlist" level. I think that's because you'd name modules and inputs/outputs in a way similar to how you'd name functions and arguments in software, which doesn't really apply to "Resistor" or "Capacitor" as primitives.

But for a pretty big practical range of mixed-mode and analog things, I'd argue that schematics really are the easiest level for our brains.

(Disclosure: I'm one of the founders of CircuitLab https://www.circuitlab.com/ (YC W13) where we've been building an online circuit simulator & schematic editor for a long time. Although I'm mostly on the simulation engine / netlist side. My cofounder and other teammates have done most of the schematic GUI work.)

IMHO solderless breadboards still have their place for prototyping some slow circuits, ballpark maybe < 1 MHz signals, if you're aware of the extra capacitance and limitations. :)


You essentially can practice the classical art of memory on a 2D representation (think why you can remember who was at lunch by thinking about where they sat), and it's useful to have them at different levels. Almost any nontrivial schematic should have an associated block diagram (by this I do not mean it necessarily should be captured at the block diagram level in a "hierarchical design"; these can be more trouble than they're worth). There are other types of diagrams that can be useful, too.

I haven't done much HDL work so I don't have strong opinions there, but let's think about graphical representations of source code and software intent for a moment. I like to see some block-level description of firmware, even if it's just a few rectangles representing the main loop and each ISR or other process, with text indicating what they do. I like to see statecharts. If someone has written an implicit flags-and-conditionals state machine, I'll draw the statechart and start probing for bugs and debugging from there. Tools like SciTools Understand are drastically underrated for figuring out other people's code, incidentally.

What kills me about the aforementioned style is that it eschews the schematic as a tool for thinking and implicitly sees schematic capture as a layer of friction you have to get through to get to layout. I suspect a lot of designs drawn that way were built up point-to-point on a breadboard and never sketched or even otherwise mentally visualized.


The big thing that articles like this miss completely is that we are no longer in the brief HTTP/1.0 era (1996) where every request is a new TCP connection (and therefore possibly a new DNS query).

In the HTTP/1.1 (1997) or HTTP/2 era, the TCP connection is made once and then stays open (Connection: Keep-Alive) for multiple requests. This greatly reduces the number of DNS lookups per HTTP request.

If the web server is configured for a sufficiently long Keep-Alive idle period, then this period is far more relevant than a short DNS TTL.

If the server dies or disconnects in the middle of a Keep-Alive, the client/browser will open a new connection, and at this point, a short DNS TTL can make sense.

(I have not investigated how this works with QUIC HTTP/3 over UDP: how often does the client/browser do a DNS lookup? But my suspicion is that it also does a DNS query only on the initial connection and then sends UDP packets to the same resolved IP address for the life of that connection, and so it behaves exactly like the TCP Keep-Alive case.)
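To make the TTL-vs-connection-lifetime point concrete, here's a minimal sketch of a client-side resolver cache (hypothetical class and callable names; `resolver` is any function returning `(ip, ttl_seconds)`). With a persistent connection, `resolve()` is only hit at connect time, so the TTL mostly matters when a new connection is opened:

```python
import time

class DnsCache:
    """Toy resolver cache that honors per-answer TTLs."""
    def __init__(self, resolver):
        self.resolver = resolver
        self._cache = {}  # hostname -> (ip, expires_at)

    def resolve(self, host, now=None):
        now = time.monotonic() if now is None else now
        entry = self._cache.get(host)
        if entry and now < entry[1]:
            return entry[0]            # cached answer: no network query
        ip, ttl = self.resolver(host)  # fresh lookup, cache until expiry
        self._cache[host] = (ip, now + ttl)
        return ip
```

A client making 100 requests over one keep-alive connection calls `resolve()` once, so whether the TTL is 30 seconds or 30 minutes is invisible until the connection is re-established.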


  > patched an Encrypted DNS Server to store the original TTL of a response, defined as the minimum TTL of its records, for each incoming query

The article seems to be based on capturing live DNS data from a real network. While it may be true that persistent connections reduce the number of DNS lookups, it certainly seems like the article is accounting for that, unless their network is only using HTTP/1.0 for some reason.

I agree that a low TTL could help during an outage if you actually wanted to move your workload somewhere else, and I didn't see that mentioned in the article. But I've never actually seen this done in my experience; setting TTL extremely low for some sort of extreme DR scenario smells like an anti-pattern to me.

Consider the counterpoint: a high TTL can keep your service reachable if the DNS server crashes or loses connectivity.


It's very local here. I'm in the suburbs of Philadelphia, in one of the highest income counties in the state, two blocks from a major hospital, one block from a suburban downtown. Despite that, I've experienced one or two 4-6 hour long power outages per year the past few years. (Mostly correlated with weather.) One outage in June 2025 was 50 hours long!

Many larger homes in this area have whole-house generators (powered by utility natural gas) with automatic transfer switches. During the 50-hour outage, we "abandoned ship" and stayed with someone who also had an outage, but had a whole-house generator.

Other areas just 5-10 miles away are like what you describe: maybe one outage in the past 10 years.


> If something goes wrong, like the pipeline triggering certbot goes wrong, I won't have time to fix this. So I'd be at a two day renewal with a 4 day "debugging" window.

I think a pattern like that is reasonable for a 6-day cert:

- renew every 2 days, and have a "4 day debugging window"

- renew every 1 day, and have a "5 day debugging window"

Monitoring options: https://letsencrypt.org/docs/monitoring-options/

This makes me wonder if the scripts I published at https://heyoncall.com/blog/barebone-scripts-to-check-ssl-cer... should have the expiry thresholds defined in units of hours, instead of integer days?
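For what it's worth, moving to hourly granularity is mostly a matter of where the rounding happens. A sketch (function names are my own, not from the published scripts), splitting the pure date math out from the network fetch so the former is testable offline:

```python
import socket
import ssl
from datetime import datetime, timezone

def hours_remaining(not_after, now):
    """Hours until expiry; negative if already expired."""
    return (not_after - now).total_seconds() / 3600.0

def cert_hours_remaining(host, port=443):
    """Fetch the live certificate and return hours until its notAfter."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = datetime.strptime(
        cert["notAfter"], "%b %d %H:%M:%S %Y %Z"
    ).replace(tzinfo=timezone.utc)
    return hours_remaining(not_after, datetime.now(timezone.utc))
```

With a 6-day cert, an alert threshold of, say, 72 hours behaves very differently from "3 days" evaluated by a cron job that only runs once a day.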


100%, I've run into this too. I wrote some minimal scripts in Bash, Python, Ruby, Node.js (JavaScript), Go, and Powershell to send a request and alert if the expiration is less than 14 days from now: https://heyoncall.com/blog/barebone-scripts-to-check-ssl-cer... because anyone who's operating a TLS-secured website (which is... basically anyone with a website) should have at least that level of automated sanity check. We're talking about ~10 lines of Python!


Thought you might find this story interesting, concerning the Total Real Returns website which originally launched on HN a while ago. https://news.ycombinator.com/item?id=32081943

This is probably the same kind of "one new row per month" assumption that many data pipelines with any sort of primary date/time column make!


Lots of people are speculating that the price spike is AI related. But it might be more mundane:

I'd bet that a good chunk of the apparently sudden demand spike could be last month's Microsoft Windows 10 end-of-support finally happening, pushing companies and individuals to replace many years worth of older laptops and desktops all at once.


I worked in enterprise laptop repair two decades ago — I like your theory (and there's definitely meat there) but my experience was that if a system's OEM configuration wasn't enough to run modern software, we'd replace the entire system (to avoid bottlenecks elsewhere in the architecture).


Perhaps the memory manufacturers have seen how much Apple gets away with charging for the memory on their laptops and have decided to copy them ;-)


It’s not speculation, but it could also be both.


I have no idea how many people this has actually affected, but this is exactly my situation. I need a new workstation with a bunch of RAM to replace my Win10 machine, so I don't really have any viable option other than paying the going rate.


There's a tradeoff and the assumption here (which I think is solid) is that there's more benefit from avoiding a supply chain attack by blindly (by default) using a dependency cooldown vs. avoiding a zero-day by blindly (by default) staying on the bleeding edge of new releases.

It's comparing the likelihood of an update introducing a new vulnerability to the likelihood of it fixing a vulnerability.

While the article frames this problem in terms of deliberate, intentional supply chain attacks, I'm sure the majority of bugs and vulnerabilities were never supply chain attacks: they were just ordinary bugs introduced unintentionally in the normal course of software development.

On the unintentional bug/vulnerability side, I think there's a similar argument to be made. Maybe even SemVer can help as a heuristic: a patch version increment is likely safer (less likely to introduce new bugs/regressions/vulnerabilities) than a minor version increment, so a patch version increment could have a shorter cooldown.

If I'm currently running version 2.3.4, and there's a new release 2.4.0, then (unless there's a feature or bugfix I need ASAP), I'm probably better off waiting N days, or until 2.4.1 comes out and fixes the new bugs introduced by 2.4.0!
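That heuristic could be sketched roughly like this (the thresholds are arbitrary placeholders of mine, not anything Dependabot or Renovate actually implements):

```python
def bump_type(current, new):
    """Classify a SemVer bump: 'major', 'minor', 'patch', or 'none'."""
    c = [int(x) for x in current.split(".")]
    n = [int(x) for x in new.split(".")]
    for level, (a, b) in zip(("major", "minor", "patch"), zip(c, n)):
        if a != b:
            return level
    return "none"

# Placeholder policy: riskier bumps wait longer before auto-merge.
COOLDOWN_DAYS = {"major": 14, "minor": 7, "patch": 3, "none": 0}

def cooldown_days(current, new):
    return COOLDOWN_DAYS[bump_type(current, new)]
```

So 2.3.4 to 2.4.0 would sit in cooldown longer than 2.3.4 to 2.3.5, matching the intuition that a minor bump carries more regression risk than a patch.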


Yep, that's definitely the assumption. However, I think it's also worth noting that zero-days, once disclosed, do typically receive advisories. Those advisories then (at least in Dependabot) bypass any cooldown controls, since the thinking is that a known vulnerability is more important to remediate than the open-ended risk of a compromised update.

> I'm sure the majority of bugs and vulnerabilities were never supply chain attacks: they were just ordinary bugs introduced unintentionally in the normal course of software development.

Yes, absolutely! The overwhelming majority of vulnerabilities stem from normal accidental bug introduction -- what makes these kinds of dependency compromises uniquely interesting is how immediately dangerous they are versus, say, a DoS somewhere in my network stack (where I'm not even sure it affects me).


Could a supply chain attacker simulate an advisory-remediating release somehow, i.e., abuse this feature to bypass cooldowns?


Of course. They can simply wait to exploit their vulnerability. If it is well hidden, then it probably won't be noticed for a while, so they can wait until it is running on the majority of their target systems before exploiting it.

From their point of view it is a trade-off between volume of vulnerable targets, management impatience and even the time value of money. Time to market probably wins a lot of arguments that it shouldn't, but that is good news for real people.


You should also factor in that a zero-day often isn't even exploitable if you are using the onion model, with other layers that would need to be penetrated together. That's in contrast to a supply chain compromise, which is designed to actively make outbound connections through any means possible.


Thank you. I was scanning this thread for anyone pointing this out.

The cooldown security scheme looks like an inverse "security by obscurity": nobody has spotted a backdoor, therefore we assume security. The scheme stands and falls with the assumed timelines. Once that assumption tumbles, picking a cooldown period becomes guesswork. (Or another compliance box ticked.)

On the other hand, the assumption may well be sound; maybe ~90% of future backdoors can be mitigated by it. But who can tell? This looks like survivorship bias, because we are making decisions based only on the cases we found.


I’d estimate the vast majority of CVEs in third-party source are not directly or indirectly exploitable. The CVSS scoring system assumes the worst-case deployment scenario for the module. We still have no good way to automate adjusting the score, or even just flagging false positives.


Defaults are always assumptions. Changing them usually means that you have new information.


The big problem is the Red Queen's Race nature of development in rapidly-evolving software ecosystems, where everyone has to keep pushing versions forward to deal with their dependencies' changes, as well as any actual software developments of their own. Combine that with the poor design decisions found in rapidly-evolving ecosystems, where everyone assumes anything can be fixed in the next release, and you have a recipe for disaster.


Could always just use a status page that updates itself. For my side project Total Real Returns [1], if you scroll down and look at the page footer, I have a live status/uptime widget [2] (just an <img> tag, no JS) which links to an externally-hosted status page [3]. Obviously not critical for a side project, but kind of neat, and was fun to build. :)

[1] https://totalrealreturns.com/

[2] https://status.heyoncall.com/svg/uptime/zCFGfCmjJN6XBX0pACYY...

[3] https://status.heyoncall.com/o/zCFGfCmjJN6XBX0pACYY


This is unrelated to the cloudflare incident but thanks a lot for making that page. I keep checking it from time to time and it's basically the main data source for my long term investing.


I appreciate that, thank you! :)


You can simulate a bunch of these (and edit too) in your browser in CircuitLab:

Diode half-wave rectifier https://www.circuitlab.com/editor/4da864/

Diode full-wave (bridge) rectifier https://www.circuitlab.com/editor/f6ex5x/

Diode turn-off time https://www.circuitlab.com/editor/fwr26m/

LED with resistor biasing https://www.circuitlab.com/editor/z79rqm/

Zener diode voltage reference https://www.circuitlab.com/editor/7f3ndq/

Charge Pump Voltage Doubler https://www.circuitlab.com/editor/24t6h3ypc4e5/

Diode Cascade Voltage Multiplier https://www.circuitlab.com/editor/mh9d8k/

(note: I wrote the simulation engine)
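If you'd rather poke at one of these numerically, here's a crude ideal-diode half-wave rectifier with a smoothing capacitor in pure Python. This is an idealized back-of-the-envelope sketch (instant charging, exponential RC discharge), not what a real SPICE-style engine like CircuitLab's does:

```python
import math

def halfwave(amplitude=5.0, freq=60.0, r_load=1e3, c_filt=100e-6,
             t_stop=0.1, dt=1e-5):
    """Ideal-diode half-wave rectifier feeding a parallel RC load.

    Returns the output-voltage trace sampled every dt seconds.
    """
    v_out, trace = 0.0, []
    decay = math.exp(-dt / (r_load * c_filt))  # RC discharge per timestep
    t = 0.0
    while t < t_stop:
        v_in = amplitude * math.sin(2 * math.pi * freq * t)
        if v_in > v_out:
            v_out = v_in    # diode conducts: cap tracks the input peak
        else:
            v_out *= decay  # diode off: cap discharges into the load
        trace.append(v_out)
        t += dt
    return trace
```

With R = 1k and C = 100 uF the RC time constant (100 ms) is much longer than the 60 Hz period, so the output rides near the peak with modest ripple, the same behavior you can watch in the half-wave rectifier simulation linked above.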


Looks great! Would you have a recommendation for intro materials to help me learn the basics of electronics using CircuitLab? I have a working understanding of signal processing, but how to build an actual circuit without electrocuting myself or setting my Raspberry Pi on fire, or how to select the right set of components for even the simplest DIY project based on spec sheets, is a mystery to me.


Not sure if it’s a fit for what you’re looking for, but maybe https://ultimateelectronicsbook.com/ (maybe more theoretical than practical).

I’ve heard good things about “Practical Electronics for Inventors” but haven’t gone through it myself.


> Diode Cascade Voltage Multiplier

A favorite of mine and one of the most common ways to generate a pretty high voltage DC. The full wave version pairs well with a center tapped secondary of a resonant transformer.

