There is an official (albeit experimental) NaCl implementation for Go which would probably have both been simpler and stronger to use: https://godoc.org/golang.org/x/crypto/nacl
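For anyone curious, here's a rough sketch of what sealing a packet with nacl/secretbox looks like in Go. The key handling and the nonce-prefixing convention are just illustrative assumptions, not anything Meshbird actually does:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/secretbox"
)

func main() {
	// 32-byte symmetric key; in practice this would come from a key exchange,
	// not be generated on the spot. Done here only to keep the example self-contained.
	var key [32]byte
	if _, err := rand.Read(key[:]); err != nil {
		panic(err)
	}

	// secretbox requires a unique 24-byte nonce per message under the same key.
	var nonce [24]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		panic(err)
	}

	plaintext := []byte("tunnel payload")

	// Pass the nonce as the destination slice so Seal appends the
	// ciphertext+MAC after it; the wire format is then nonce || box.
	sealed := secretbox.Seal(nonce[:], plaintext, &nonce, &key)

	// Receiver side: split off the nonce, then authenticate and decrypt in one step.
	var recvNonce [24]byte
	copy(recvNonce[:], sealed[:24])
	opened, ok := secretbox.Open(nil, sealed[24:], &recvNonce, &key)
	if !ok {
		panic("message forged or corrupted")
	}
	fmt.Printf("%s\n", opened)
}
```

secretbox gives you authenticated encryption in a single call, so there is no separate HMAC step to forget.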
Development seems to be... on hold, at the very least. The last commit was almost six months ago (2016-08-23), the most recently closed issue was closed over eight months ago (2016-06-06), and the one before that about a year ago (2016-02-22).
Not based on the open issues and roadmap. I realize you were probably joking, but it's a valid point anyway; I considered addressing it in my original comment.
How is this doing discovery of other nodes? It says it is fully decentralized, but just running `meshbird new` to get a key and then `MESHBIRD_KEY="key" meshbird join` doesn't explain the discovery mechanism to me. I haven't dug into it much, though.
Good point. We are going to implement AES-GCM-based encryption for data transfer. Why AES-GCM to address the missing HMAC? Because Go has low-level asm optimizations for it, which opens the way to fully utilizing 10G/40G networks.
Seriously, I'd advise that you implement an HMAC today, and implement GCM tomorrow — using raw CTR mode really is that dangerous. And make sure that you never ever ever reuse IVs, ever.
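For what it's worth, here's a minimal sketch of AES-GCM with Go's standard library. The key generation and random-nonce strategy are illustrative assumptions, not a drop-in for Meshbird's transport:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

func main() {
	// 32-byte key => AES-256. In a real tunnel this comes from a key exchange.
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}

	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	// A nonce must never be reused under the same key. Random 96-bit nonces
	// are fine for a modest number of messages; a counter is safer at scale.
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	plaintext := []byte("tunnel payload")

	// Seal encrypts and authenticates; prefix the nonce so the peer can decrypt.
	sealed := gcm.Seal(nonce, nonce, plaintext, nil)

	// Receiver: split off the nonce; Open authenticates before returning plaintext.
	opened, err := gcm.Open(nil, sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():], nil)
	if err != nil {
		panic(err) // authentication failed: the packet was tampered with
	}
	fmt.Printf("%s\n", opened)
}
```

The important part is that Open fails loudly when a packet has been modified, which is exactly what raw CTR mode doesn't give you.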
This sort of thing is incredibly dangerous. Props to you for coming up with a great UX, but crypto is very, very difficult to get right.
It's a bit misleading to say no gateways are required. As far as I can tell it uses STUN/TURN for NAT busting. When NAT busting does not work (behind a corporate firewall, for example), communication falls back to the TURN server as a relay.
IIRC, according to Google a few years ago, something like 10-20% of STUN/TURN traffic needs to be routed over the TURN relay server.
I don't fully understand the purpose of this project after visiting the website and the other links provided in the comments. How is it different from a regular VPN?
ZeroTier: it connects my laptop, my VDS, and my behind-two-NATs home machine pretty efficiently. That is, e.g., when my laptop is connected to the home wi-fi, the zerotier interface seems to exchange packets directly with the home machine.
The presence of an Android client is important for me. Auto-reconfiguration in a new network (laptop on a public wi-fi, phone on mobile networks) is nice.
"Peer-to-peer discovery" is not important for me, that is, I'm OK with my nodes discovering the network via a control center. (You can self-host the control center.)
Running a VPN between only 4 machines wasn't that useful, and it needs a central server. I quite like Meshbird's idea of using a DHT instead. If it ever evolves to improve its crypto and setup, I might switch to it.
There is a PDF by the original author that explains the difference from "VPNs".
A reachable IP address and a TAP device are the only requirements.
For example, two edges can also be supernodes. A third-party supernode would only be needed for the initial connection; once connected, each can use the supernode run by the other, and the third party is no longer needed. No central server.
As for DHT, who runs the DHT bootstrap server?
Do DHT users run their own bootstrap servers?
Do users exercise any control over the DHT? Who does?
By central server I mean a referral hub, yes. But I just couldn't keep one up reliably (in the sense that I didn't want to, since the mesh had to be fairly dynamic in my case).
"curl ... | sh" is absolutely fine. If you want to complain about something, complain about the fact that the URL being used is an http URL instead of an https one.
If you want to complain about something, complain about the completely pointless fragmentation of the Linux ecosystem that pretty much mandates "curl|bash" to ship software for "Linux."
The problem is tools -- plural. Users think "I have Linux, where do I get the Linux version?" You have to provide arcane instructions for how to add a package repository on Every. Single. Linux. Distribution.
Or you can script it and users can run a command. Still painful, but less so for the user.
As far as the distributions themselves go: they are harder to get software into than the Apple App Store. The rules are arcane, and the docs either barely exist or are on wikis that have not been updated in over a decade. The whole process is convoluted beyond belief.
"curl | sh" is not in itself any less secure than "npm install" or "go get", but it is often a good indicator of a project that takes usability more seriously than security. IMO, it's also seen as "the new way" to do installs, and implies a lack of respect for the fodgy old way to do things (e.g. with a package manager).
> … is not in itself any less secure … takes usability more seriously than security.
You're contradicting yourself. If it's not any less secure, then how does using it mean you're not taking security seriously? And you're also treating usability as if it's not important, when in fact usability is very nearly the most important part. If your software isn't usable, then nobody will use it, and if nobody is using it, then it doesn't matter how secure it is.
Many of these scripts actually install through package managers. Docker's curl | sh, as of a year or two ago, basically just set up an apt repo and ran some apt commands. I think the hurdle they're trying to get around is supporting X distro targets, plus the explanation cost for a user who just wants to jump in.