Hacker News | rnhmjoj's comments

Well, for different reasons, but you have similar issues with IPv6 as well. If your client uses temporary addresses (most likely, since they're enabled by default on most OSes), OpenSSH will pick one of them over the stable address, and when they're rotated the connection breaks.

For some reason, OpenSSH devs refuse to fix this issue, so I have to patch it myself:

    --- a/sshconnect.c
    +++ b/sshconnect.c
    @@ -26,6 +26,7 @@
     #include <net/if.h>
     #include <netinet/in.h>
     #include <arpa/inet.h>
    +#include <linux/ipv6.h>
     
     #include <ctype.h>
     #include <errno.h>
    @@ -370,6 +371,11 @@ ssh_create_socket(struct addrinfo *ai)
      if (options.ip_qos_interactive != INT_MAX)
        set_sock_tos(sock, options.ip_qos_interactive);
     
    + if (ai->ai_family == AF_INET6 && options.bind_address == NULL) {
    +  int val = IPV6_PREFER_SRC_PUBLIC;
    +  setsockopt(sock, IPPROTO_IPV6, IPV6_ADDR_PREFERENCES, &val, sizeof(val));
    + }
    +
      /* Bind the socket to an alternative local IP address */
      if (options.bind_address == NULL && options.bind_interface == NULL)
        return sock;

The temporary address doesn't stay active while there's a connection on it? I think that would be the actual "fix".

I think it does, but that's not the issue: if the interface goes down, all the temporary addresses are gone for good, not just "expired".

If you're on a stable address, and the interface goes down, will it let your connection/socket continue to exist?

Because if the connection/socket gets lost either way, I don't really care if the IP changes too.


I'm not sure what happens to the socket, maybe it's closed and reopened, but with this patch I have SSH sessions lasting for days with no issues. Without it, even roaming between two access points can break the session.

Interesting! Is there anywhere a discussion around their refusal to include your fix?

See this, for example: https://groups.google.com/g/opensshunixdev/c/FVv_bK16ADM/m/R...

It boils down to using a Linux-specific API, though it's really BSD that is lacking support for a standard (RFC 5014).


It would also seem to break address privacy (usually not much of a concern if you authenticate yourself via SSH anyway, but still, it leaks your Ethernet or Wi-Fi interface's MAC address in many older setups).

This is a good argument for not making it the default, but it would be nice to have it as a command line switch.

Well, yes, but SSH is hardly ever anonymous and this could simply be a CLI option.

Not anonymous, but it's pretty unexpected for different servers with potentially different identities for each to learn your MAC address (if you're using the default EUI-64 method for SLAAC).
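To illustrate why EUI-64 leaks the MAC: the interface identifier is derived mechanically from the hardware address, so anyone who sees the IPv6 address can read the MAC back out. A minimal Python sketch of the derivation (ff:fe insertion plus the universal/local bit flip):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the SLAAC EUI-64 interface identifier from a MAC address."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                 # flip the universal/local bit
    b[3:3] = b"\xff\xfe"         # insert ff:fe between the OUI and NIC parts
    return ":".join(f"{(b[i] << 8) | b[i + 1]:04x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))  # -> 0211:22ff:fe33:4455
```

Running the steps in reverse (strip ff:fe, flip the bit back) recovers the original MAC, which is exactly the privacy leak being discussed.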

The magnetron itself has about 65% efficiency, but the paper conjectures that the longer duration of the pulses is due to defects in the cavity that result in some emission at a lower frequency (1.4 rather than the normal 2.4 GHz), so the energy radiated must be a tiny fraction of the nominal power.


What is the point of restricting a certificate to "server" or "client" use, anyway?


Trust chains. Some implementations would accept an LE server certificate for foo.com as a valid client login to foo.com, or something like that, because they treated all trusted certs the same, whether issued by the service being authenticated to or by some other CA.

It might be possible to relay communications between two servers and have one of them act as a client without knowing. Handshake verification prevents that in TLS, but there could be similar attacks.


I have been using GitHub since 2011 and it's undeniable that the performance of the website has been getting worse. The new features that are constantly being added are certainly a factor, but I think the main cause is the switch to client-side rendering, which obviously shifted the load from their servers to our browsers and also tends to produce ridiculously large and inefficient DOMs[1].

If you want a practical example, here you go. I'm a Nixpkgs committer, and every time I make a pull request that backports some change to the stable branch, GitHub, unprompted, starts comparing my PR against master. If I'm not fast enough to switch the target branch within a couple of seconds, it literally freezes the browser tab and I may have to force quit it. Yes, the diff is large, but this is not acceptable, and more importantly, it didn't happen a few years ago.

[1]: https://github.com/orgs/community/discussions/111001


> MSS clamping is non-negotiable with tunnels. Every layer of encapsulation eats into the MTU.

Can this tunnel be avoided somehow? If I have to choose between owning my prefix and having 1500 MTU, I'd probably take the latter: MTU issues are so annoying to deal with, and MSS-clamping doesn't solve all of them.


Kind of but not really.

The whole point of BGP is to influence your routing tables. This fundamentally makes very little sense to do when you have a bunch of routers, whose routing policy you don't control, between you and whoever you're speaking BGP to. eBGP is just TCP and supports knobs to run over multiple hops (up to 255), but at that point you can't really do anything with the routing information you exchange, because the moment you hand the traffic off, the other party can do with it as it pleases. Also, very few people have enough public IP addresses for this, and on the Internet you obviously can't route RFC 1918 space. Therefore, you need tunnels, so that you can be one hop away even if the tunneled traffic is traversing the Internet, and so that you can reach peers that let you announce whatever IP space you want.

The other thing you can do, of course, is to just do the same thing internal to your lab. You can absolutely stand up multiple ASN at home. I'd even argue that if you really want to learn BGP, this is a great way to do it, especially if you use two different platforms (say, FRR on FreeBSD peering with a cheap Mikrotik running RouterOS). That way you learn the underlying protocol and not a specific implementation, which is something that is very hard to undo in junior network engineers that have only ever been exposed to one way of doing things.

That's different from some of the goals outlined in the article, but if your goal is to learn this stuff rather than have provider-independent IP space (which even for home labs isn't very valuable to most people), doing it all yourself works fine.
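As a sketch of what the FRR side of such a home-lab peering might look like (the ASNs, addresses, and prefix here are made up, using private ASNs and RFC 1918 space):

```
! /etc/frr/frr.conf -- eBGP session to the lab's second router
router bgp 64512
 neighbor 10.0.0.2 remote-as 64513
 !
 address-family ipv4 unicast
  network 10.10.0.0/24
 exit-address-family
```

The other box (the Mikrotik in this example) would mirror this with `remote-as 64512`, and you can then watch the routes propagate with `show ip bgp` in vtysh.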


You can peer with whoever you're physically connected to. If you have a physical or point-to-point connection to iFog or Lagrange Cloud, you don't need tunnels to reach them. Both of these companies offer VPS services.

If your goal is to learn this stuff, join dn42, the global networking lab, instead of wasting money on real allocations.


Yes, this can be avoided. All the standard advice and examples are tailored toward avoiding IP packet fragmentation entirely even when the tunnel transport can encapsulate and transmit packets larger than the underlying path MTU. Mostly this is justified for performance reasons, but it also tends to avoid even more difficult to debug situations where there's an MTU or ICMP issue between tunnel endpoints.

I haven't used WireGuard before, but I believe that if you force the wg interface MTU to 1500, things will just work. I use IPsec, where the solution would be to use something like link-layer tunneling, which, ironically, adds another layer of encapsulation to the equation. Most tunnel solutions don't directly support fragmentation as part of their protocol, but you get it for free if they use, e.g., UDP or another disjoint IP protocol for transport and don't explicitly disable fragmentation (e.g. by setting the Don't Fragment (DF) flag).

If I were to do this (and I keep meaning to try), I might still lower the MSS on my server(s) just for performance reasons, but at least the tunnel would otherwise appear seamless externally.
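For concreteness, the MTU/MSS arithmetic can be sketched out; the per-packet overheads below are the commonly cited figures for WireGuard over UDP, not measured values:

```python
# Back-of-envelope MTU/MSS arithmetic for a WireGuard-style UDP tunnel.
LINK_MTU = 1500
WG_OVERHEAD = 32             # assumed: WireGuard data header + Poly1305 tag
UDP, IPV4, IPV6, TCP = 8, 20, 40, 20

# Largest inner packet that fits in one outer packet without fragmentation:
wg_mtu_v4 = LINK_MTU - IPV4 - UDP - WG_OVERHEAD   # tunnel over IPv4
wg_mtu_v6 = LINK_MTU - IPV6 - UDP - WG_OVERHEAD   # tunnel over IPv6 (the usual wg default)

# TCP MSS to clamp to so inner IPv4 segments fit the tunnel MTU:
mss_v4 = wg_mtu_v4 - IPV4 - TCP

print(wg_mtu_v4, wg_mtu_v6, mss_v4)  # 1440 1420 1400
```

Setting the wg MTU to 1500 instead, as suggested above, trades this arithmetic for outer-packet fragmentation on the path between the endpoints.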


Traceroutes are already notoriously hard to interpret correctly[1] and, yes, they can be trivially spoofed. Remember the stunt[2] pulled by tpb to "move" to North Korea? If you are an AS, you can also prepend fake AS numbers to your BGP announcements and make the spoofed traceroute look even more legitimate.

I wonder if this thing will start a cat-and-mouse game with VPNs.

[1]: https://old.reddit.com/r/networking/comments/1hkm4g/lets_tal...

[2]: https://news.ycombinator.com/item?id=5319419
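The prepending part is a completely standard knob. A hedged FRR sketch (all ASNs and the neighbor address here are invented; in the tpb-style stunt the prepended ASN would be one you don't actually own):

```
! Outbound route-map that pads the AS path before announcing
route-map PREPEND-OUT permit 10
 set as-path prepend 65001 65001
!
router bgp 64512
 neighbor 192.0.2.1 remote-as 64500
 !
 address-family ipv4 unicast
  neighbor 192.0.2.1 route-map PREPEND-OUT out
 exit-address-family
```

Legitimately, this is used for traffic engineering (making a path look longer to deprioritize it); the spoofing case is just the same mechanism with an ASN that isn't yours.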


Barely? It's more than twice the median wage in Poland.


I had a lot of fun playing around with Antimony (also from Keeter) a few years ago, but unfortunately it has been mostly abandoned. I heard libfive is supposed to be the next generation, but I haven't experimented with it yet.

Do you know how it compares?


> this could invalidate the information contained in that in the man file.

No, it doesn't. The point of modetc is precisely to keep both myself and the programs happy: the files are actually stored where I like to keep them, but they can be accessed as if they were stored where the developer intended.


Well, it's just the natural extension of the FHS convention to the home directory.

I didn't come up with this idea, though; I think I saw it in a Reddit thread and started doing it myself. I like that the directories are visible and follow the usual structure.


Why not push it under a hidden directory? Like ~/.local/etc? If we're reconstructing some of the hierarchy I think it makes sense to group and hide. Isn't the problem that the home folder is getting cluttered?


Why would I hide them? They're not really special and since I'm organising them with modetc they're not cluttered. For reference, my home looks something like this

    ~
    ├── bin         binaries and scripts
    ├── etc         configuration files
    ├── var
    │   ├── lib     program data
    │   └── cache   program caches
    ├── src         git repositories
    ├── img         pictures
    ├── mail        email in maildir format
    ├── note        text notes, todo
    ├── doc         documents
    └── down        downloads


I mean, we hide them in the first place because they're configs and we don't want clutter.

But more to the point, I was thinking that having ~/bin, ~/etc, ~/src and so on is just clutter. I use ~/.local/{bin,build,lib}, so it's compact and reduces clutter in my home.


But why would I want those directories visible in my home dir?


Why would I want them hidden? I access files in ~/.config almost daily, I think this is a really good idea

