Basically, specify as ProxyCommand whatever command needs to be run to give you I/O to the remote sshd - in this case, sshing to a bastion host and running netcat. This allows me to do, for example:
ssh internal-dev.example.net
Which will (in background) ssh to the bastion host external.example.net. I can even do port forwards to internal hosts using -L or LocalForward directives. It's a huuuuge timesaver.
ssh even automatically replaces %h and %p in the ProxyCommand with a host and port, though you can of course replace those tokens with static values if it works better.
(Also, note above that one can use wildcards in Host declarations.)
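Reconstructed from the description above (the post doesn't show the exact config, so take this as a sketch using the example hostnames):

```
# ~/.ssh/config -- jump through the bastion; ssh expands %h/%p to the
# target host and port. Note the wildcard in the Host declaration.
Host internal-*.example.net
    ProxyCommand ssh external.example.net nc %h %p
```

With that in place, `ssh internal-dev.example.net` transparently hops through external.example.net.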
I use the same technique to access VPS machines and mobile devices, except instead of a fixed bastion host I relay the connections through PageKite (disclosure: I'm the author). It feels a bit magical to just be able to ssh into a laptop or Android phone, no matter which network it is on. :-)
2. Scripting an scp, rsync etc from the internal machine is easier now, since the ssh from the external to internal machine is handled transparently for you.
> Personally, I use quite a few public/private keypairs for the various servers and services that I use, to ensure that in the event of having one of my keys compromised the damage is as restricted as possible.
If you keep all those private keys on the same machine and tend to load them all into ssh-agent frequently, then there's little point in that. People forget that keypairs are not like passwords -- if Github gets compromised, nobody can do anything with the public key you gave them.
Unless you treat the keys very differently (like having a special key that you rarely ever decrypt), there's no reason to have more than one per device.
My first thing with SSH is to generate unique keys per machine which never leave that machine (except in backups encrypted with a backup key unique to that machine). If my mba13 gets jacked, I'll be able to revoke all mba13 ssh keys without locking out mbp17 or imac27 or iphone5 or ipad3 keys. This is related to the "private keys on same machine" thing you mention.
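As a sketch, a per-machine key can be generated like this (the file name and comment are illustrative; the `-C` comment tags which device the key belongs to, which makes later revocation easy):

```shell
# Generate a per-machine RSA keypair; ssh-keygen will prompt for a
# passphrase, which you should set. The comment ("mba13") marks the
# device so you can find and revoke its public keys later.
ssh-keygen -t rsa -b 4096 -C "mba13" -f ~/.ssh/id_rsa_mba13
```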
However, the other reason for segmenting keys is to do agent forwarding.
I might have a CLIENTA key and then allow ssh auth forwarding from a bastion host at CLIENTA to other CLIENTA machines. I also have a CLIENTB key and allow ssh auth forwarding from a bastion host at CLIENTB to other CLIENTB machines. (or, prod/dev at the same company, or personal/work, or whatever).
I don't want anything CLIENTA does on a subverted bastion host or other host to be able to affect CLIENTB in any way.
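A sketch of that per-client split (domain and key names are invented). `IdentitiesOnly` stops ssh from offering every loaded key to every host; note that a forwarded agent still exposes every key it currently holds, so real isolation also means loading only one client's key into the agent at a time:

```
Host *.clienta.example.com
    IdentityFile ~/.ssh/id_rsa_clienta
    IdentitiesOnly yes

Host *.clientb.example.com
    IdentityFile ~/.ssh/id_rsa_clientb
    IdentitiesOnly yes
```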
I also keep some keys totally offline (to manage logging servers, which are normally read-only); ideally with some better way to do 2 party control as well.
> People forget that keypairs are not like passwords – if Github gets compromised, nobody can do anything with the public key you gave them.
Oh, I'm well aware of the difference between a public/private asymmetric encryption scheme and a symmetric one.
My concerns are more along the lines of my laptop/desktop being stolen, or my home being robbed and my backup disks/USB keys being taken, or even my computer being seized at the US border. There are ways to mitigate those concerns (e.g. full-disk encryption), but I'm very much a proponent of defence-in-depth whenever possible.
I should probably clarify that in the post itself, so that readers aren't misled as to the reasoning behind password-protecting your private keys.
The arguments you've presented so far in favor of multiple keypairs on one computer (different keypairs for different remote services) make no sense.
Typical ssh usage is one keypair per account per machine (or one keypair per key type, e.g. I have an RSA keypair and an ECDSA keypair). It doesn't matter if you use the same keypair for Github and EC2 instances [1]. The only way for the key to be compromised is if your local machine is compromised. And if the local machine is compromised, you can't trust any key stored on it unless you know when the compromise happened and you know you haven't entered the passphrase for some of the private keys since then. More than likely you won't know that, so you will have to treat all keys on the compromised machine as compromised. You'll have to regenerate and redistribute N keys instead of 1.
In your parent post, you identified physical theft as your main concern. Assuming you have a good passphrase, physical theft is a non-issue. Border crossing seizures and court proceedings are different; in some cases they can demand that you enter your passphrase(s), but multiple keypairs will not help you there.
[1] caveat: of course if you use unprompted authentication forwarding, this becomes an issue... a compromise at github for instance could allow the github hacker to ssh into your EC2 instances using forwarded credentials, but that's a time-limited attack and only works while you're connected to github. Private keys never leave the machine(s) they're hosted on.
You make some good points that are making me rethink my key-per-service approach. Though, other than the need to replace N keys when my machine(s) is compromised, there's not that much key management overhead.
I now use Mosh exclusively over ssh. It's great on slow connections as well as on fast ones. For example, I can start an ssh connection at home on my laptop, drive to the office and resume like nothing ever happened. One of the best discoveries of the past year for me.
I haven't heard of Mosh before, so I can't comment (I'll try it soon), but I just wanted to point out that ssh+screen (or ssh+tmux) gives you exactly the same, and is an apt-get/yum-install/pacman-S away.
From reading about mosh, it seems to require a UDP connection, thus non-trivial routing. I forward ssh connections through ssh tunnels (sometime multiple layers), and it works great. Can mosh do that?
It's not quite the same to use ssh + screen/tmux. mosh resumes automatically (no need to log in again), and it also will show your keystrokes as you type them, even if the remote machine hasn't yet received them and then sent back to your local machine the updated text. This "buffered" text is displayed with an underline and when your computer receives communication from the server, it gets updated to the correct text. This makes the terminal feel a lot more responsive, in my experience. mosh can also be installed with apt-get, at least in Ubuntu.
I'll try it for the predictive text ... but, what do you mean "you don't have to login again"? I use a public key login on ssh (so login is invisible), and you can set up your ssh command line in your config file to do so, e.g. I often use
ssh beagle3@remote.host -t 'screen -x || screen'
And it works beautifully. (I'm heavy screen user, so even if I switch to mosh, there will be screen underneath...)
Ah, but I don't want it detached! I often do pair programming or pair sysadminning through screen. Is there an equivalent that works with screen sharing ? ( -x )
I use mosh from our office in Sydney to work on a server in Vancouver. The character prediction makes this much easier, even if you do see the occasional literal character in a vim session before the server responds. It's so much better than the alternative!
IMO terminal protocols are more suited to UDP than TCP and I use ssh over UDP all the time. There are three main advantages:
* There is no connection session, so you can close your laptop or put it to sleep and open it up again and the connection will still be there.
* You don't have the round-trip lag of send-then-respond; typed input appears locally immediately
* It is much easier to get UDP around firewalls and it can't be blocked easily in the same way most VPN protocols or SSH can. I have yet to find a network where I can't get my terminal UDP packets through.
The alternative to Mosh is setting up OpenVPN[1]. It is especially worthwhile if you have a network of public servers that you administer. It is easy to set up[2] and works on Windows, Mac, Linux, BSD, etc.
The best tip is to add a second interface to all your machines and set up a private VLAN across them. That way, if you are experiencing a DoS attack or heavy traffic, you can still log in and administer the machine (this also applies with standard ssh - put it in a different IP range and leave only 80 and 443 open on your public machines).
EC2[3], Linode[4] et al. all support adding a second network interface to each machine (or to just one of the machines, which is then used as a gateway to the remainder), which can be assigned an IP address in a different range. You then set up a separate hostname for this network, or even register a separate administrative domain name (e.g. company-admin.com) which you keep on a different registrar, whois record, etc.
> It is much easier to get UDP around firewalls and it can't be blocked easily in the same way most VPN protocols or SSH can. I have yet to find a network where I can't get my terminal UDP packets through.
Err, not in my world. Many hotels block udp. Amtrak's on train internet does too (they block a lot of tcp ports as well, and proxy http to stop videos).
I haven't tried recently, but most guest networks (at conferences, companies I visited, etc) did not let UDP through.
I started using Mosh a month or two ago, and I can never, ever go back. The automatic connection resumption is pure gold! I can close my laptop at work, go home, open it back up and resume work. And if I switch between wifi and ethernet, that's no problem either. Flaky office network connections are no longer a bother.
I really can't say how much mosh has improved my work life. You owe it to yourself to give it a try.
This looks great, would love to use this instead of ssh for my AWS servers, but not so sure about opening firewall ports 60000-61000. Currently only 80, 443, and 22 are open, that's a big addition.
Is that SOP for mosh, or do you guys proxy through a mosh-only server to your actual servers, or something else?
You only need to open 1 udp port afaik. That's what I do anyways.
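For what it's worth, mosh lets you pin the server to a single known UDP port with `-p`, so only one firewall rule is needed (host name invented for illustration):

```
# Ask mosh-server to use one fixed UDP port instead of picking
# from the default 60000-61000 range
mosh -p 60001 user@server.example.com
```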
Opening a high port in the 60000s is less risky than 22. You should probably remap ssh to something else. Or you can use ec2 security groups to limit access to certain ips.
Except anyone on the machine can bind to a port above 1024, so if your mosh server process ever exits for any reason, a compromised account could bind a backdoored version in its place.
If they're trying to sell us on this free software, why don't they provide a diagram of the SSP packet instead of telling us about UTF-8 support? I can tell whether I'm seriously interested in this just from looking at how they've structured the packet. Do I have to go digging in the source code just to get a preview? I looked at the USENIX paper and nothing in there either. When I have time I'll take a look at the src.
It makes the connection appear completely lagless, since the Mosh client echos typed characters locally instead of waiting for a round-trip with the server.
It also buffers all command output server-side and doesn't send more output than the network connection allows. This means that even if you have a runaway process dumping lots of output, you can still immediately Ctrl+C it.
A great option to enable for servers where you're constantly SSHing to (either opening a shell or pushing a repo) is ControlMaster, which lets you multiplex a single connection and cut down on the initial connection time (including authentication).
If you have a new enough ssh client, I'd personally recommend setting ControlPersist yes.
This fixes the UI wart where your first ssh connection to a server has to stay open for the duration of all your others (or your others all get forcibly disconnected).
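A minimal config enabling this (the socket directory name is an assumption; create it with mode 700 first):

```
# ~/.ssh/config -- multiplex connections over one master socket
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist yes
```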
It's not perfect. If the name of a server changes but you already have a control socket, it'll use the socket and connect to the old server. And it also takes it a while to pick up networking changes that break your connectivity, though I've hacked around that with a script I keep running in the background (Linux only, at the moment; requires gir1.2-networkmanager-1.0):
#!/usr/bin/python
import os
from gi.repository import GLib, NMClient

# Whenever NetworkManager's set of active connections changes, delete all
# ControlMaster sockets so that stale multiplexed connections die and ssh
# reconnects cleanly on the new network.
def active_connections_changed(*args):
    sockdir = os.path.expanduser('~/.ssh/sockets')
    for sock in os.listdir(sockdir):
        os.unlink(os.path.join(sockdir, sock))

c = NMClient.Client.new()
c.connect('notify::active-connections', active_connections_changed)
GLib.MainLoop().run()
There's some contention with my coworkers about whether ControlPersist is actually desirable given the tradeoffs, but I personally think it's a huge improvement.
...quick and to the point article explaining it, and comments to get you going further with it: http://www.debian-administration.org/articles/290 (I know, it sucks when RTFMP gets you into a sort of dead end...)
I've run into a few local networks that have routers, or other network security appliances, that are configured in such a way where my SSH connection would get dropped after XX seconds of inactivity.
Placing the following wildcard entry in my SSH config resolved the issue for those times when I had to use one of these networks...
# Set Global KeepAlive to avoid timeouts
Host *
ServerAliveInterval 240
Caveat, though, is that this will force connections to break if you have a transient network outage that SSH could otherwise cope with. Which can be annoying if you're somewhere with a flaky connection. Putting it on the specific hosts you need rather than globally keeps it from causing you grief anywhere you don't need it.
While that's true, it tries 3 times (default) before it gives up - so, setting to 240 would require 12 minutes of no connection before disconnecting.
In any environment that I've been working in recently, if I lose connection for more than 5 minutes, I've lost it for far longer - I might as well break and reconnect anyway.
A few other useful things about SSH aliases, especially w.r.t. not just using shell aliases:
They set you up with a layer of indirection that you can change later. Git-svn doesn't like having the URL to the SVN server changed, but if you set up a git alias to "svn" instead, when the SVN server moves for some reason you won't have to do anything except change the svn alias contents. You can also share the resulting tree between multiple people easily because they can plop in their own "svn" alias that uses their own user instead. In general you can safely reference the SSH alias in any number of places (beyond just shell scripts) and know that you can trivially change the alias later without having to change all those things.
There are many things that will use SSH, but won't accept any parameters, or will accept only a small subset. Emacs can use SSH to access remote file systems by opening "/ssh:username@ip:port:/file", but it will only take username, ip, and port (AFAIK). With SSH aliases, you have the full power of SSH available to you, so you can use all these other nifty things people are talking about. I've also been using ddd to remotely debug perl lately and that pretty much seems to demand 'ssh host' with passwordless login and nothing else.
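As a concrete sketch of that indirection (host, user, and port are invented), the "svn" alias from the git-svn example might look like:

```
# ~/.ssh/config -- "svn" becomes a stable name you can reference from
# git-svn, shell scripts, Emacs, etc.; edit it here when the server moves
Host svn
    HostName svn.internal.example.com
    User alice
    Port 2222
```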
Author here. Glad to see that this post was useful; I wrote it a while ago when I realized that people weren't password-protecting their Github private keys because it was "too complicated".
I've been meaning to start writing again, and my post showing up on the HN front page is a pretty good motivator. Thanks for that, everyone :-).
I just switched over from using bash aliases (as described in the article) to an SSH config file last week. The best thing for me is that it doesn't just make ssh easier to use, it makes all the ssh family of tools easier. scp, sshfs, rsync etc all suddenly require less typing to use.
~? shows the few things you can do over the admin channel of ssh.
~ $ ~?
Supported escape sequences:
~. - terminate connection (and any multiplexed sessions)
~B - send a BREAK to the remote system
~C - open a command line
~R - Request rekey (SSH protocol 2 only)
~^Z - suspend ssh
~# - list forwarded connections
~& - background ssh (when waiting for connections to terminate)
~? - this message
~~ - send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)
I wish the damn thing would support DNS. We have a bunch of servers to SSH into and I have to use the fully qualified domain name unless I want to hardcode all of them (and there are too many).
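One partial workaround, assuming all the servers share a single domain suffix (names here are made up): a wildcard Host block can append the domain via the %h token, so short names work without hardcoding each server.

```
# ~/.ssh/config -- expand short names like "dev-web1" to FQDNs
Host dev-*
    HostName %h.internal.example.com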
Considering the large percentage of Linux users who use SSH frequently, I'd stick a link to this (and to http://www.debian-administration.org/articles/290) in all newbie-targeted Linux tutorials... I just hate the world for letting me live without this knowledge for close to a year since diving into Linux.
Nice article. If only I had seen this last week, it would have saved me some time. I was trying to configure multiple github accounts last week, but I don't have enough experience with ssh.
...but now that I did manage to configure it, I wonder if it was really necessary. Github has a nice identity control, I think it was foolish of me to think I needed both a personal and a work account.
A "rather large online retailer based in Seattle" is a Linux shop and started using Kerberos for ssh to excellent results a few years back. Once properly set up, the ability to ssh around between Linux and Windows boxen is just pure bliss. Once you're kerberized, just click an ssh: link in a Nagios report and boom, you're on the host.
I find myself wishing ~/.ssh/config had include statements, so I could mix and match blocks which are only useful on certain networks / in certain contexts.
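For what it's worth, later OpenSSH releases (7.3 and up) did grow an Include directive that allows exactly this kind of mix-and-match:

```
# ~/.ssh/config -- pull in per-context fragments (directory name invented)
Include ~/.ssh/config.d/*
```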
I really wish this was an as-shipped feature with OpenSSH. Even if it is the config file being able to have multiple Hostnames to try for a Host.
That way I can just specify two hostnames/IP addresses to try, and ssh can automagically do the right thing to get to an internal machine depending on whether I am at home or at work.
On a Mac I used to use Marco Polo to detect my location and it would change my symlinks to ~/.ssh/config. You can use Marco Polo to detect SSIDs or various other changes.
Good old HN, where building a website with Twitter Bootstrap on Heroku's free tier makes you a startup founder, and learning to use one of the oldest and most basic features of one of the simplest packages in your operating system makes you a hacker.