Hacker News | josh3736's comments

There's no reason you have to run ESPHome on your Home Assistant server.

It's offered as a HA a̵d̵d̵o̵n̵ app for ease of use (makes it a one-click install), but you can also just `pip install esphome` or use the provided Docker image and get the exact same UI, but with everything (including compilation) running on your much beefier laptop.

So your binaries get compiled quickly and you can still do the OTAs directly from your laptop. HA needn't be involved.
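If you go the Docker route, a minimal compose file looks something like the following. This is a sketch based on the commonly used official image name (`ghcr.io/esphome/esphome`); check the current docs for the exact tag you want.

```yaml
services:
  esphome:
    image: ghcr.io/esphome/esphome   # pin a version tag in practice
    volumes:
      - ./config:/config             # your device YAMLs live here
    network_mode: host               # host networking for mDNS discovery and OTA
    restart: unless-stopped
```

Then `docker compose up -d` and open the dashboard on port 6052.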


> You can still install the whole kit and caboodle using pip in a Python virtual environment, but why would you?

This is how I did it, instead of the container or HA OS in a VM.

If you want the simplicity of everything preconfigured, managed, and hands-off, go with HA OS, whether in a VM on a beefier machine, standalone, or the HA Green/Yellow dedicated hardware.

But if you already have a home server and want to add HA, I found just pip installing to be easier than dealing with the container.

Maybe I'm just the silly type that enjoys fiddling with Linux, but I'd argue that it actually makes more sense to install HA bare metal over a container. HA doesn't actually have any major dependencies outside of what pip installs, so setup wasn't any more annoying than via container. And then you never have to deal with container annoyances like passing hardware through to it or weird failures and misconfigurations.

Contrast this with https://frigate.video/, which has so many fragile native dependencies and a super complex stack that trying to install manually is an exercise in futility. I gave up and used the container.


Docker would be the primary other dependency for Apps support.

There's nothing wrong with running it on bare metal but this is easier with the VM image.


The much more likely culprit is your VPN server's port. If it's running on some no-name port (such as the default 51820), that's likely to get throttled.

I'd bet that switching your VPN server port to 443 would solve the problem, since HTTP/3 runs on 443/udp.
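The server-side change is one line, assuming a wg-quick style config (keys and addresses below are placeholders):

```ini
# /etc/wireguard/wg0.conf on the server -- move the listen port to 443/udp
[Interface]
PrivateKey = <server private key>
Address = 10.0.0.1/24
ListenPort = 443

# Clients then update their peer's endpoint to match:
# [Peer] ... Endpoint = vpn.example.com:443
```

One caveat: you can't share 443/udp with a real HTTP/3 server on the same IP, so this only works if nothing else needs that port.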


This is a clever reuse of WireGuard's cryptographic design, and may indeed make sense as a way to slap some low-overhead encryption on top of your app's existing UDP packets.

However, it's definitely not a replacement for TCP in the way the article implies. WireGuard-the-VPN works because the TCP inside of it handles retransmission and flow control. Going raw WireGuard means that's now entirely up to you.

So this might be a good choice if you're doing something realtime where a small number of dropped packets don't particularly matter (such as the sensor updates the article illustrates).

But if you still need all your packets, in order, this is probably a bad idea. Instead, I'd consider QUIC (HTTP/3's UDP-based transport), which brings many of the same benefits (including connection migration across source IP addresses and no head-of-line blocking between streams multiplexed inside a connection) without sacrificing TCP's reliability guarantees. And as the protocol powering 75% of web browsing¹, it's a pretty safe choice of transport.

¹ https://blog.apnic.net/2025/06/17/a-quic-progress-report/


> However, it's definitely not a replacement for TCP in the way the article implies.

UDP isn’t TCP and that’s kind of the point. For a large number of use cases the pain TLS imparts isn’t worth it.

QUIC is flexible and fabulous, but heavyweight and not fit for light hardware. It also raises the question: if the browser supported raw UDP, what percent of traffic would use it?


Sure, but this article spends paragraphs talking about the (real) problems with TCP, then suggests that the solution is a UDP-based transport with WireGuard-ish crypto.

…but there's a giant guaranteed-and-ordered-delivery-sized hole in that argument, which is my point. The article never addresses what you lose when going from TCP to UDP. You can't just swap out your app's TCP-based comms with this and call it a day; you're now entirely responsible for dealing with packet loss, order, and congestion if that's important to your application. Why DIY all that if you could just use QUIC?
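To make the "you're on your own" point concrete, here's a toy sketch of just the in-order-delivery half of that problem — no ACKs, retransmission timers, or congestion control, which you'd also need. All names are illustrative, not from the article:

```python
class ReorderBuffer:
    """Toy in-order delivery over an unreliable datagram transport.

    Holds out-of-order packets by sequence number and releases the
    longest contiguous run. This is only the easy part of what TCP
    and QUIC give you for free.
    """

    def __init__(self) -> None:
        self.next_seq = 0
        self.pending: dict[int, bytes] = {}  # seq -> payload, held until the gap fills

    def receive(self, seq: int, payload: bytes) -> list[bytes]:
        if seq < self.next_seq:
            return []  # duplicate of already-delivered data; drop it
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = ReorderBuffer()
print(buf.receive(1, b"world"))   # arrives early, held back -> []
print(buf.receive(0, b"hello"))   # gap filled -> [b'hello', b'world']
```

And that's before you decide when a held packet is "lost" and must be re-requested, which is where the real complexity lives.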

Granted I haven't personally tried to run QUIC on embedded hardware, so I can't speak to its weight, but I do see someone did it¹ on an ESP32 (ngtcp2 + wolfSSL), so it can be done with < 300 kB of RAM.

I wonder how much RAM this WireGuard-based approach requires. The implementation here is in .NET, so not exactly appropriate for light hardware either.

Regarding browser support for UDP, you'll never get raw UDP for obvious reasons, but the WebTransport API² gives you lowish-level access to UDP-style (unreliable and unordered) datagrams with server connections, and I believe WebRTC can give you those semantics with peers.

¹ https://www.emqx.com/en/blog/can-esp32-run-mqtt-over-quic

² https://developer.mozilla.org/en-US/docs/Web/API/WebTranspor...


This is an… interesting choice for archival purposes. What exactly do you think makes HFS+'s reliability better? The only thing I can think of is that HFS+ has journaling while FAT and derivatives do not, but that doesn't particularly matter after the data is on the disk and it's cleanly unmounted (which should be a safe assumption in most archival scenarios).

The Linux HFS+ driver is basically unmaintained and cannot write to journaled disks. On Windows, the only choice is a paid driver. I guess it's fine if you're strictly a Mac user, but it's a real problem if you need to access the disk on another machine. Even if you don't, I still wouldn't trust Apple for long-term support of anything.

Meanwhile exFAT has native support on Windows, Mac, and Linux, and there are drivers for BSDs and others.

So 20 years down the line, you'll certainly have something that can read an exFAT drive without much if any pain, regardless of which platform you're using at the time. HFS+? Who knows.

That said, I'd consider ZFS or btrfs for HDD archival. Granted broad (Mac/Windows) support is weaker than FAT, but at least the filesystems are completely open source. But what really makes them interesting is their automatic data checksumming to detect (and possibly repair) bitrot, which is particularly useful for archival.
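ZFS and btrfs do that checksumming transparently at the block level (and can self-heal given redundancy), but you can get a poor-man's detect-only version on any filesystem with a sidecar manifest. A sketch (function names are mine, not any particular tool's):

```python
import hashlib
from pathlib import Path

def build_manifest(root: str) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 hex digest."""
    rootp = Path(root)
    return {
        str(p.relative_to(rootp)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(rootp.rglob("*")) if p.is_file()
    }

def verify(root: str, manifest: dict[str, str]) -> list[str]:
    """Return relative paths whose current digest no longer matches."""
    return [
        rel for rel, digest in manifest.items()
        if hashlib.sha256((Path(root) / rel).read_bytes()).hexdigest() != digest
    ]
```

Run `build_manifest` when you write the archive, store the manifest alongside it, and re-run `verify` periodically. Unlike ZFS/btrfs you can only detect rot this way, not repair it.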


> This is an… interesting choice for archival purposes. What exactly do you think makes HFS+'s reliability better? The only thing I can think of is that HFS+ has journaling while FAT and derivatives do not, but that doesn't particularly matter after the data is on the disk and it's cleanly unmounted (which should be a safe assumption in most archival scenarios).

Yes, journaling. Power cuts or unclean unmounts are enough of a risk for me that I don't see any reason to use a file system without journaling.

> The Linux HFS+ driver is basically unmaintained, and cannot write to journaled disks. On Windows, the only choice a paid driver. I guess it's fine if you're strictly a Mac user, but it's a real problem if you need to access the disk on another machine. Even if you don't, I still wouldn't trust Apple for long-term support of anything.

I just don't expect Linux or Windows support to be relevant to me or my family's use, or the cost of the Windows driver to be a problem if it ever came up.

If in a decade Apple drops HFS+, it's not something they're going to do without notice, it's something where I'll have plenty of notice to take the relatively small required effort to migrate my archives to a different file system.

> That said, I'd consider ZFS or btrfs for HDD archival. Granted broad (Mac/Windows) support is weaker than FAT, but at least the filesystems are completely open source. But what really makes them interesting is their automatic data checksumming to detect (and possibly repair) bitrot, which is particularly useful for archival.

I use btrfs for non-archival storage, but don't really see it as useful for archival storage - it's effectively unusable for my wife if I get hit by a bus.

> So 20 years down the line, you'll certainly have something that can read an exFAT drive without much if any pain, regardless of which platform you're using at the time. HFS+? Who knows.

You're optimizing for a problem that isn't in my risk assessment - i.e. I don't care if I can shelve a drive and easily read from it in 20 years; I just want to maximize reliability over a 20-year span where I'm willing to take maintenance action if required. (And I think you're overly negative on Apple's support of old tech. E.g. Apple didn't drop software FireWire support for a decade after they stopped selling their last FireWire device - that's plenty of time for a migration if my archival drives were using a FireWire connection. HFS+ is Apple's currently-supported file system for non-SSD storage, and I don't see a medium-term path where they extend APFS support to HDDs or drop HDD support entirely.)


It does really depend on how much data you want to store, but if you've got a lot of it…

Tape.

Obviously extreme prosumer, but for long-term archival of lots of data, LTO tape wins in several ways:

- Discs just aren't actually that high capacity relative to modern HDD capacities. BD XL maxes out at 128 GB, while there are now 30 TB HDDs readily available. That's 240 discs per HDD. Modern LTO tapes store 12-18 TB, or 2-3 tapes per HDD.

- Anything flash-based is a bad choice for long-term storage. SSDs are very fast, but also (relatively) expensive at 15-20¢/GB. Reputable SD cards are in the same neighborhood. Despite the OP redditor's results here, flash is only expected to retain data for 5-10 years.

- Tape is the absolute lowest cost-per-GB you can find of any storage medium. At the moment, LTO 8/9 tape can be had on Amazon for ½¢/GB. Compare with BD-R at 2¢/GB, or BD-R XL M-disc at 15¢/GB. HDDs (spinning rust) are 2-3¢/GB.

- Consider also write speed. LTO can write 300+ MB/s. BD 16x maxes out around 68 MB/s.

- Manufacturers rate tapes for 30 years sitting on a shelf, and it wouldn't be surprising if they still read after 50 years¹. Plain BD-R lasts 5-20 years. M-disc is the interesting outlier, rated 100-1000 years.

Of course, the biggest problem with tape is the drives. While the media is dirt cheap, the drives are crazy expensive. It looks like you can pick up a used LTO-6 drive (2.5 TB tapes) on ebay for around $500. A brand new LTO-9 drive (18 TB tapes) will be $4000-5000.

In terms of breakeven points, a used LTO-6 drive + tapes beats plain BD after about 25 TB. Because of the cost of M-discs, they stop making sense after 1-2 TB. Purely on cost, a brand new LTO-9 drive + tapes doesn't beat used LTO-6 + tapes until about 800 TB (LTO-9 tape is ½¢/GB while LTO-6 tape is 1¢/GB), but there's definitely a point in there where the larger capacity of LTO-9 makes dealing with the physical media a whole lot easier.
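The breakeven math above is just a fixed drive cost amortized against a per-TB media saving. A quick sketch, using the rough prices from this comment (which fluctuate, so treat the inputs as assumptions):

```python
def breakeven_tb(fixed_cost: float, cheap_per_tb: float, pricey_per_tb: float) -> float:
    """TB of data at which paying a fixed cost for cheaper media pays off.

    Solves: fixed_cost + cheap_per_tb * x = pricey_per_tb * x
    """
    return fixed_cost / (pricey_per_tb - cheap_per_tb)

# New LTO-9 (~$4500 drive, ~$5/TB tape) vs. used LTO-6 (~$500 drive,
# ~$10/TB tape): the extra ~$4000 of drive buys a $5/TB saving.
print(breakeven_tb(4000, 5, 10))  # -> 800.0 TB
```

Swap in your own local disc and tape prices; the crossover points move around a lot with street pricing.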

So if you're looking for long-term storage for your photo album, an M-disc BD XL is probably a good choice. If you only have a few hundred GB of data, a couple discs + burner can be had for $300 or so, and you can be pretty sure your mom could manage to read the disc if necessary.

But if you're looking to back up your 100 TB homelab NAS, discs are not really feasible. You'll have to spend the next month swapping discs every 25 minutes², and then deal with your new thousand-disc collection. Here's where a used LTO-6 drive makes a lot of sense. This is a real sweet spot if you can find a decent drive; all-in you'd spend about $1500 to back up your 100 TB.

This is what I do to back up my NAS — found an old LTO-6 drive and got a bunch of tapes. The drive plugs into a SAS port (you might need an HBA PCIe card, $50), and that's pretty much it. Linux has the drivers built in; it will show up as /dev/st0 and you can just point tar³ at it.

Finally, just to compare with cloud options, storing that 100 TB in AWS Glacier Deep Archive would run you slightly over $100/mo, so you're ahead with your own tapes after a little over a year. Oh and don't forget to set aside an extra $8000 for data transfer fees should you ever actually want to retrieve your data lol.

---

¹ eg the Unix v4 tape that was recently found and successfully read after 52 years — https://news.ycombinator.com/item?id=45840321

² Or get a disc-swapping robot, but those run $4000-5000, at which point… you're better off with a brand new tape drive.

³ Thus using the Tape ARchiver program for its original purpose. Use -M to span tapes; tar will prompt you to swap.


Honestly just wanted to thank you for this write up on the consumer side of tape storage. It doesn't look like my archival needs are at this level just yet, but this is an amazing overview and starting point if/when I get to that point. Thanks again!


> Maybe the motherboard? Could that have a speaker built into it? That must be terrible for acoustics, but maybe useful for a little beep when something is wrong?

Yes, it was called the PC Speaker, and that's pretty much exactly what it was used for. https://en.wikipedia.org/wiki/PC_speaker

It was standard equipment through the (mid?) 90s, and completely independent of the (optional) PCM sound card.

Now PCM sound is built into motherboards and the PC Speaker long ago faded into irrelevance. Modern motherboards don't even have headers to connect a PC Speaker. Some motherboards will emulate the PC Speaker over the built-in sound output, but of course you need speakers plugged in and on to hear those beeps.



Let's Encrypt has nothing to do with this problem (of Certificate Transparency logs leaking domain names).

CA/B Forum policy requires every CA to publish every issued certificate in the CT logs.

So if you want a TLS certificate that's trusted by browsers, the domain name has to be published to the world, and it doesn't matter where you got your certificate, you are going to start getting requests from automated vulnerability scanners looking to exploit poorly configured or un-updated software.

Wildcards are used to work around this, since what gets published is *.example.com instead of nas.example.com, super-secret-docs.example.com, etc — but as this article shows, there are other ways that your domain name can leak.

So yes, you should use Let's Encrypt, since paying for a cert from some other CA does nothing useful.


Another big way you get scooped up (having worked in that industry among other things): anybody - internal staff, customers, that one sales guy who insists on using his personal iPhone to demo the product while everybody turns a blind eye because he made $14M in sales last year - queries some public DNS resolver, and the resolver operator sells those names, even though the name didn't "work" because it wasn't public.

They don't sell who asked because that's a regulatory nightmare they don't want, but they sell the list of names because it's valuable.

You might buy this because you're a bad guy (reputable sellers won't sell to you but that's easy to circumvent), because you're a more-or-less legit outfit looking for problems you can sell back to the person who has the problem, or even just for market research. Yes, some customers who own example.com and are using ZQF brand HR software won't name the server zqf.example.com but a lot of them will and so you can measure that.


Statistically, the amount of parasite scanning on LE-"secured" domains is much higher compared to purchased certificates. And yes, this is without any voluntary publishing on LE's side.

I'm not entirely sure what LE does differently, but we made very clear observations about it in the past.


You already can stream bodies as they're generated with chunked encoding; trailers aren't really needed for that.

Cookies might be useful, but I guess you could do

    <script>document.cookie = '…'</script>
right before the closing `</body>` if you really needed to set cookies late in the game.

I'd love to see something to send content hashes (that browsers would actually verify), replacing the obsolete `Content-MD5`. Maybe `Integrity`, matching the `integrity` HTML attribute used in SRI? It could be a header (for static content) or trailer (for dynamic content).
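As a concrete sketch of that idea — the `Integrity` trailer name is my hypothetical proposal, not an existing header — the digest can only be computed after the whole body is generated, which is exactly what HTTP/1.1's trailer section allows:

```python
import base64
import hashlib

def sri_digest(body: bytes) -> str:
    """SRI-style digest string: 'sha256-' + base64 of the raw hash bytes."""
    return "sha256-" + base64.b64encode(hashlib.sha256(body).digest()).decode()

def chunked_with_trailer(body: bytes, chunk_size: int = 4096) -> bytes:
    """Encode body with HTTP/1.1 chunked transfer coding, appending the
    hypothetical 'Integrity' field in the trailer section -- possible
    because trailers are sent after the last chunk."""
    out = bytearray()
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    out += b"0\r\n"                                        # last-chunk
    out += f"Integrity: {sri_digest(body)}\r\n".encode()   # trailer field
    out += b"\r\n"                                         # end of trailer section
    return bytes(out)
```

(In real HTTP the response would also need to announce the trailer via a `Trailer:` header, and clients only read trailers if they opted in with `TE: trailers` — part of why trailers see so little use today.)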


Eh, a cookie set with JavaScript can't be HttpOnly. It also requires JavaScript. None of this is ideal.

