Simplify your life with an SSH config file (nerderati.com)
330 points by koide on Oct 20, 2012 | hide | past | favorite | 83 comments


This overlooks ProxyCommand, the single most useful reason for using an ssh config file.

e.g.:

    Host internal-*.example.net
        ProxyCommand ssh -T external.example.net 'nc %h %p'

Basically, specify as ProxyCommand whatever command needs to be run to give you i/o to the remote sshd - in this case, sshing to a bastion host and running netcat. This allows me to do, for example:

    ssh internal-dev.example.net
Which will (in background) ssh to the bastion host external.example.net. I can even do port forwards to internal hosts using -L or LocalForward directives. It's a huuuuge timesaver.

ssh even automatically replaces %h and %p in the ProxyCommand with a host and port, though you can of course replace those tokens with static values if it works better.

(Also, note above that one can use wildcards in Host declarations.)
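In practice the same wildcard block can carry forwards too. As a sketch, tunneling a database port through the bastion might look like this (the hostname and port are illustrative):

    Host internal-db.example.net
        ProxyCommand ssh -T external.example.net 'nc %h %p'
        LocalForward 5432 localhost:5432

After `ssh internal-db.example.net`, local port 5432 reaches the service on the internal host, with the bastion hop handled transparently.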


You can also use ssh -W [1] instead of relying on nc, i.e.:

   Host madeup
       Hostname internal_name
       ProxyCommand ssh external.example.net -W %h:22

[1] - http://manpages.debian.net/cgi-bin/man.cgi?query=ssh&apr...


Note there appears to be a bug where the use of ControlPersist breaks the use of '-W'. You end up with 'bad packet'.

The openssh-6.0 release notes have this:

   * sshd(8): unbreak stdio forwarding when ControlPersist is in use; bz#1943
I haven't tested whether that resolves the issue yet, though (I'm on OS X, which of course does not have 6.0).

However, the netcat version works fine.


A generalized version of that:

https://github.com/ryancdotorg/ssh-chain

It will let you do

    ssh internal-dev.example.net^external.example.net


I use the same technique to access VPS machines and mobile devices, except instead of a fixed bastion host I relay the connections through PageKite (disclosure: I'm the author). It feels a bit magical to just be able to ssh into a laptop or Android phone, no matter which network it is on. :-)

The PK/SSH HowTo: https://pagekite.net/wiki/Howto/SshOverPageKite/


Why not directly ssh into "external.example.net"? Why is this a time saver? I'm not sure what you use this for...


1. Two commands instead of one. :)

2. Scripting an scp, rsync etc from the internal machine is easier now, since the ssh from the external to internal machine is handled transparently for you.


I've seen this used for security: you can require access to an extra external server in order to reach the internal ones.


Ah ok, this makes sense, especially if some of your machines are on the same LAN but don't have an external IP.


Ah, very cool. I remember stumbling upon ProxyCommand in some SSH docs, but didn't really see an immediate use for it. Thanks!


That is freaking awesome.


> Personally, I use quite a few public/private keypairs for the various servers and services that I use, to ensure that in the event of having one of my keys compromised the damage is as restricted as possible.

If you keep all those private keys on the same machine and tend to load them all into ssh-agent frequently, then there's little point in that. People forget that keypairs are not like passwords -- if Github gets compromised, nobody can do anything with the public key you gave them.

Unless you treat the keys very differently (like having a special key that you rarely ever decrypt), there's no reason to have more than one per device.


My first thing with SSH is to generate unique keys per machine which never leave that machine (except in backups encrypted with a backup key unique to that machine). If my mba13 gets jacked, I'll be able to revoke all mba13 ssh keys without locking out mbp17 or imac27 or iphone5 or ipad3 keys. This is related to the "private keys on same machine" thing you mention.

However, the other reason for segmenting keys is to do agent forwarding.

I might have a CLIENTA key and then allow ssh auth forwarding from a bastion host at CLIENTA to other CLIENTA machines. I also have a CLIENTB key and allow ssh auth forwarding from a bastion host at CLIENTB to other CLIENTB machines. (or, prod/dev at the same company, or personal/work, or whatever).

I don't want anything CLIENTA does on a subverted bastion host or other host to be able to affect CLIENTB in any way.

I also keep some keys totally offline (to manage logging servers, which are normally read-only); ideally with some better way to do 2 party control as well.


> People forget that keypairs are not like passwords – if Github gets compromised, nobody can do anything with the public key you gave them.

Oh, I'm well aware of the difference between a public/private asymmetric encryption scheme and a symmetric one.

My concerns are more along the lines of my laptop/desktop being stolen, or my home being robbed and my backup disks/USB keys being taken, or even my computer being seized at the US border. There are ways to mitigate those concerns (e.g. full-disk encryption), but I'm very much a proponent of defence-in-depth whenever possible.

I should probably clarify that in the post itself, so that readers aren't misled as to the reasoning behind password-protecting your private keys.


I don't think I understand you here. Are you not keeping these different keys on the same machine?


At the time when I wrote this, I had three separate machines that I used regularly: personal laptop, work laptop, work desktop.


The arguments you've presented so far in favor of multiple keypairs on one computer (different keypairs for different remote services) make no sense.

Typical ssh usage is one keypair per account per machine (or one keypair per type, e.g. I have an rsa keypair and ecdsa keypair). It doesn't matter if you use the same keypair for github and ec2 instances [1]. The only way for the key to be compromised is if your local machine is compromised. If the local machine is compromised, you can't trust any keys stored on it unless you know when it was compromised and you know you haven't entered the passphrase for some of the private keys since the compromise. More than likely, you won't know that, so you will have to treat all keys on the compromised machine as compromised. You'll have to regenerate and redistribute N keys instead of 1.

In your parent post, you identified physical theft as your main concern. Assuming you have a good passphrase, physical theft is a non-issue. Border crossing seizures and court proceedings are different; in some cases they can demand that you enter your passphrase(s), but multiple keypairs will not help you there.

[1] caveat: of course if you use unprompted authentication forwarding, this becomes an issue... a compromise at github for instance could allow the github hacker to ssh into your EC2 instances using forwarded credentials, but that's a time-limited attack and only works while you're connected to github. Private keys never leave the machine(s) they're hosted on.


You make some good points that are making me rethink my key-per-service approach. Though, other than the need to replace N keys when my machine(s) is compromised, there's not that much key management overhead.


But three keypairs (one for each machine) is still not "quite a few"... And if you had more than one per machine, can you clarify why?


> even my computer being seized at the US border

If your computer is seized at the US border, the security of your SSH keys is the last thing you need to be worrying about: http://xkcd.com/538/


I now use Mosh exclusively instead of ssh. It's great on slow connections as well as on fast ones. For example, I can start a connection at home on my laptop, drive to the office and resume like nothing ever happened. One of the best discoveries of the past year for me.

http://mosh.mit.edu/


I haven't heard of Mosh before, so I can't comment (I'll try it soon), but I just wanted to point out that ssh+screen (or ssh+tmux) gives you exactly the same, and is an apt-get/yum-install/pacman-S away.

From reading about mosh, it seems to require a UDP connection, thus non-trivial routing. I forward ssh connections through ssh tunnels (sometime multiple layers), and it works great. Can mosh do that?


It's not quite the same as ssh + screen/tmux. Mosh resumes automatically (no need to log in again), and it will also show your keystrokes as you type them, even before the remote machine has received them and sent back the updated text. This "buffered" text is displayed with an underline, and when your computer receives communication from the server, it gets updated to the correct text. This makes the terminal feel a lot more responsive, in my experience. Mosh can also be installed with apt-get, at least on Ubuntu.


This. I love the predictive typing feature when I ssh overseas. For this feature alone it's worth using Mosh.


I'll try it for the predictive text... but what do you mean, "you don't have to log in again"? I use public key login on ssh (so login is invisible), and you can set up your ssh command line in your config file to do so, e.g. I often use

    ssh beagle3@remote.host -t 'screen -x || screen'
And it works beautifully. (I'm heavy screen user, so even if I switch to mosh, there will be screen underneath...)


According to the manpage, `screen -DR` will detach the remote screen if it exists and reattach your session. No need to use shell conditionals or ||.


Ah, but I don't want it detached! I often do pair programming or pair sysadminning through screen. Is there an equivalent that works with screen sharing ? ( -x )


I have my .profile set up to auto-attach to a default screen session, which works with -x.

http://blog.ryanc.org/?p=5


I use mosh from our office in Sydney to work on a server in Vancouver. The character prediction makes this much easier, even if you do see the occasional literal character in a vim session before the server responds. It's so much better than the alternative!


> it seems to require a UDP connection

What is a UDP connection?


By "it seems to require a UDP connection", OP really meant that Mosh requires the server to have an open UDP port so the client can send it packets.

Practically, this means that if your server is behind a firewall or NAT, you need to poke a hole in the firewall to be able to connect with Mosh.


IMO terminal protocols are more suited to UDP than TCP and I use ssh over UDP all the time. There are three main advantages:

* There is no connection session, so you can close your laptop or put it to sleep and open it up again and the connection will still be there.

* You don't have the lag of sending and then waiting for a response; it appears locally immediately

* It is much easier to get UDP around firewalls and it can't be blocked easily in the same way most VPN protocols or SSH can. I have yet to find a network where I can't get my terminal UDP packets through.

The alternative to Mosh is setting up OpenVPN[1]. It is especially worthwhile if you have a network of public servers that you administer. It is easy to setup[2] and works on Windows, Mac, Linux, BSD etc.

The best tip is to add a second interface to all your machines and set up a private VLAN across them. This way, if you are experiencing a DoS attack or high traffic, you can still log in and administer the machine (this also applies to standard ssh: you put it on a different range of IPs, and your public machines then only have 80 and 443 open).

EC2[3], Linode[4] et al all support adding a second network interface to each machine (or to just one of the machines, which is then used as a gateway to the remainder) which can be assigned an IP address in a different range. You then set up a separate hostname on this network, or even register a separate administrative domain name (e.g. company-admin.com) which you keep on a different registrar, whois record, etc.

[1] http://openvpn.net/

[2] http://openvpn.net/index.php/open-source/documentation/howto...

[3] http://aws.typepad.com/aws/2012/07/multiple-ip-addresses-for...

[4] http://www.linode.com/wiki/index.php/Multiple_IPs


> It is much easier to get UDP around firewalls and it can't be blocked easily in the same way most VPN protocols or SSH can. I have yet to find a network where I can't get my terminal UDP packets through.

Err, not in my world. Many hotels block udp. Amtrak's on train internet does too (they block a lot of tcp ports as well, and proxy http to stop videos).

I haven't tried recently, but most guest networks (at conferences, companies I visited, etc) did not let UDP through.


You can proxy UDP over HTTP proxies, which is supported by OpenVPN and is what I do. So if you have web access, you have SSH-over-UDP access.


Of course you can do that. But in the context we are discussing - that of Mosh - does it still provide any benefit over regular ssh/tcp?


I started using Mosh a month or two ago, and I can never, ever go back. The automatic connection resumption is pure gold! I can close my laptop at work, go home, open it back up and resume work. And if I switch between wifi and ethernet, that's no problem either. Flaky office network connections are no longer a bother.

I really can't say how much mosh has improved my work life. You owe it to yourself to give it a try.


The nice thing is that most ssh config settings are also used by mosh.


This looks great, would love to use this instead of ssh for my AWS servers, but not so sure about opening firewall ports 60000-61000. Currently only 80, 443, and 22 are open, that's a big addition.

Is that SOP for mosh, or do you guys proxy through a mosh-only server to your actual servers, or something else?


You only need to open 1 udp port afaik. That's what I do anyways.

Opening a high port in the 60000s is less risky than 22. You should probably remap ssh to something else. Or you can use ec2 security groups to limit access to certain ips.
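For reference, mosh can be pinned to a single server-side UDP port with -p, so only one port needs opening instead of the whole 60000-61000 range (the port number here is just an example):

    mosh -p 60001 user@host

Then only udp/60001 has to be allowed in the security group.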


Except anyone on the machine can bind to a port above 1024, so if your mosh server process ever exits for any reason, a compromised account could bind a backdoored version.


If they're trying to sell us on this free software, why don't they provide a diagram of the SSP packet instead of telling us about UTF-8 support? I can tell whether I'm seriously interested in this just from looking at how they've structured the packet. Do I have to go digging in the source code just to get a preview? I looked at the USENIX paper and nothing in there either. When I have time I'll take a look at the src.


What does that get you over ssh and screen?


It makes the connection appear completely lagless, since the Mosh client echoes typed characters locally instead of waiting for a round-trip with the server.

It also buffers all command output server-side and doesn't send more output than the network connection allows. This means that even if you have a runaway process dumping lots of output, you can still immediately Ctrl+C it.


You can do that with client-side rlwrap too.


Auto-reconnect is another advantage.


If you add the following to your .bash_profile, you'll get command line completion of your hosts:

  function _ssh_completion() {
    perl -ne 'print "$1 " if /^[Hh]ost (.+)$/' ~/.ssh/config
  }
  complete -W "$(_ssh_completion)" ssh


I use ZSH, which if configured properly will also do this for you. Huge timesaver when I need to connect to a half-dozen hosts.


The standard bash_completion[0] package includes SSH host completion.

[0] http://bash-completion.alioth.debian.org/


A great option to enable for servers you're constantly SSHing to (either opening a shell or pushing to a repo) is ControlMaster, which lets you multiplex a single connection and cut down on the initial connection time (including authentication).


Syntax for anyone looking for it:

  Host *
  ControlMaster auto
  ControlPath ~/.ssh/cm_socket/%r@%h:%p
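One gotcha: ssh won't create the socket directory for you, so make it first (and keep it private, since the sockets grant access to your live sessions):

```shell
# Create the directory that ControlPath points at; ssh errors out if it's missing.
mkdir -p ~/.ssh/cm_socket
chmod 700 ~/.ssh/cm_socket
```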


If you have a new enough ssh client, I'd personally recommend setting ControlPersist yes.

This fixes the UI wart where your first ssh connection to a server has to stay open for the duration of all your others (or your others all get forcibly disconnected).
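With a recent client, the whole multiplexing setup might look roughly like this ("yes" keeps the master up indefinitely; a duration like the 10m here is an alternative and an arbitrary choice):

    Host *
        ControlMaster auto
        ControlPath ~/.ssh/cm_socket/%r@%h:%p
        ControlPersist 10m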

It's not perfect. If the name of a server changes but you already have a control socket, it'll use the socket and connect to the old server. And it also takes it a while to pick up networking changes that break your connectivity, though I've hacked around that with a script I keep running in the background (Linux only, at the moment; requires gir1.2-networkmanager-1.0):

  #!/usr/bin/python
  # Watch NetworkManager for connection changes and delete all ControlMaster
  # sockets, so ssh re-establishes connections instead of reusing dead ones.
  import os
  from gi.repository import GLib, NMClient

  def active_connections_changed(*args):
      # Remove every control socket; the next ssh invocation recreates them.
      sock_dir = os.path.expanduser('~/.ssh/sockets')
      for sock in os.listdir(sock_dir):
          os.unlink(os.path.join(sock_dir, sock))

  c = NMClient.Client.new()
  c.connect('notify::active-connections', active_connections_changed)
  GLib.MainLoop().run()
There's some contention with my coworkers about whether ControlPersist is actually desirable given the tradeoffs, but I personally think it's a huge improvement.


I'm an apologist of RTFMP, which is why I didn't include it ;)


It's generally appreciated anyway, since man pages don't always have examples [1]:

  $> man controlmaster
  No manual entry for controlmaster

  $> man ssh | grep -i controlmaster
  required before slave connections are accepted. Refer to the description of ControlMaster in ssh_config(5) for details.
  ControlMaster description of ControlPath and ControlMaster in ssh_config(5) for details.

  $> man ssh_config | grep -i controlmaster
  ControlMaster
  with ControlMaster set to "no" (the default). These sessions will try to reuse the master instance's network connection rather than
  Specify the path to the control socket used for connection sharing as described in the ControlMaster section above or the string
  When used in conjunction with ControlMaster, specifies that the master connection should remain open in the background (waiting for

[1]: http://www.reddit.com/r/fossworldproblems/comments/v7hi1/man...


A quick and to-the-point article explaining it, with comments to get you going further: http://www.debian-administration.org/articles/290 (I know, it sucks when RTFMP gets you into a sort of dead end...)


Oh wow, this is great. I had no idea that existed.

Thanks!


I've run into a few local networks with routers, or other network security appliances, configured in such a way that my SSH connection would get dropped after XX seconds of inactivity.

Placing the following wildcard entry in my SSH config resolved the issue for those times when I had to use one of these networks...

  # Set Global KeepAlive to avoid timeouts
  Host *
    ServerAliveInterval 240


Caveat, though, is that this will force connections to break if you have a transient network outage that SSH could otherwise cope with. Which can be annoying if you're somewhere with a flaky connection. Putting it on the specific hosts you need rather than globally keeps it from causing you grief anywhere you don't need it.


While that's true, it tries 3 times (default) before it gives up - so, setting to 240 would require 12 minutes of no connection before disconnecting.

In any environment that I've been working in recently, if I lose connection for more than 5 minutes, I've lost it for far longer - I might as well break and reconnect anyway.
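The arithmetic comes from ServerAliveCountMax (default 3): the connection is dropped after interval times count seconds of silence. To fail faster on a known-flaky host, you might shorten both per-host (values here are illustrative):

    Host flaky.example.net
        ServerAliveInterval 15
        ServerAliveCountMax 2

That gives up after roughly 30 seconds instead of 12 minutes.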


A few other useful things about SSH aliases, especially w.r.t. not just using shell aliases:

They set you up with a layer of indirection that you can change later. Git-svn doesn't like having the URL to the SVN server changed, but if you set up a git alias to "svn" instead, when the SVN server moves for some reason you won't have to do anything except change the svn alias contents. You can also share the resulting tree between multiple people easily because they can plop in their own "svn" alias that uses their own user instead. In general you can safely reference the SSH alias in any number of places (beyond just shell scripts) and know that you can trivially change the alias later without having to change all those things.

There are many things that will use SSH, but won't accept any parameters, or will accept only a small subset. Emacs can use SSH to access remote file systems by opening "/ssh:username@ip:port:/file", but it will only take username, ip, and port (AFAIK). With SSH aliases, you have the full power of SSH available to you, so you can use all these other nifty things people are talking about. I've also been using ddd to remotely debug perl lately and that pretty much seems to demand 'ssh host' with passwordless login and nothing else.
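As a concrete sketch of that indirection (all names invented), the alias lives in ~/.ssh/config and everything else references only the alias:

    Host svn
        HostName svn.example.com
        User alice
        Port 2202

Then git-svn URLs like svn+ssh://svn/repo/trunk, Emacs paths like /ssh:svn:/path/to/file, and plain shell scripts all keep working when only the HostName line changes, and each coworker can point their own "svn" alias at their own user.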


Author here. Glad to see that this post was useful; I wrote it a while ago when I realized that people weren't password-protecting their Github private keys because it was "too complicated".

I've been meaning to start writing again, and my post showing up on the HN front page is a pretty good motivator. Thanks for that, everyone :-).


Good article, especially the LocalForward config was new to me.

One real usage that is invaluable for me is that the config is also used by scp. This saves a lot of typing.

With a key based login set up, copying files to a server is a matter of

  scp file dev:~/


Usually $HOME is the default path, so you don't even need the ~/, this works fine:

  scp file dev:


Great tips but this one in my opinion is pure gold http://blogs.perl.org/users/smylers/2011/08/ssh-productivity...


I just switched over from using bash aliases (as described in the article) to an SSH config file last week. The best thing for me is that it doesn't just make ssh easier to use, it makes the whole ssh family of tools easier: scp, sshfs, rsync etc. all suddenly require less typing.


You can set up port forwards on the fly with ~C.

~? shows the few things you can do over the admin channel of ssh.

  ~ $ ~?
  Supported escape sequences:
    ~.  - terminate connection (and any multiplexed sessions)
    ~B  - send a BREAK to the remote system
    ~C  - open a command line
    ~R  - Request rekey (SSH protocol 2 only)
    ~^Z - suspend ssh
    ~#  - list forwarded connections
    ~&  - background ssh (when waiting for connections to terminate)
    ~?  - this message
    ~~  - send the escape character by typing it twice
  (Note that escapes are only recognized immediately after newline.)
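For example, to add a forward to a session that's already open (port numbers invented), press Enter, type ~C, then enter the forward at the prompt:

    ~C
    ssh> -L 8080:localhost:80

The new forward takes effect without reconnecting.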


I wish the damn thing would support DNS. We have a bunch of servers to SSH into and I have to use the fully qualified domain name unless I want to hardcode all of them (and there are too many).


What damn thing? .ssh/config? 'Host foo' and the domain in the list in the 'search' option in /etc/resolv.conf works fine for me.
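That is, with something like this in /etc/resolv.conf (domain invented):

    search corp.example.net

`ssh db1` will resolve db1.corp.example.net without any per-host entries in ~/.ssh/config.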


Considering the large percentage of Linux users who frequently use SSH, I'd stick a link to this (and to http://www.debian-administration.org/articles/290) in all newbie-targeted Linux tutorials... I just hate the world for letting me live without this knowledge for close to a year after diving into Linux.


Nice article. If only I had seen this last week, it would have saved me some time. I was trying to configure multiple github accounts last week, but I don't have enough experience with ssh.

...but now that I did manage to configure it, I wonder if it was really necessary. Github has a nice identity control, I think it was foolish of me to think I needed both a personal and a work account.


Yeah, IdentityFile + IdentitiesOnly is useful for those hosts.
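For the curious, the usual multiple-GitHub-accounts setup looks roughly like this (the alias name and key path are whatever you choose):

    Host github-work
        HostName github.com
        User git
        IdentityFile ~/.ssh/id_rsa_work
        IdentitiesOnly yes

Then you clone with git@github-work:org/repo.git instead of git@github.com:...; IdentitiesOnly stops ssh from offering every key in your agent before the one you intended.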


Here's another reason why: for specific hosts, ssh can sometimes feel terribly slow, especially when connecting, and especially on a Mac!

  Host -host-name-here-
      GSSAPIAuthentication no
      GSSAPIKeyExchange no

That fixes the issue. Source: http://hints.macworld.com/article.php?story=2011102011541796...


Been doing this for some time now—getting my precious seconds back one login at a time.


no one ever uses kerberos outside of windows shops anymore?


A "rather large online retailer based in Seattle" is a Linux shop and started using Kerberos for ssh to excellent results a few years back. Once properly set up, the ability to ssh around between Linux and Windows boxen is just pure bliss. Once you're kerberized, just click an ssh: link in a Nagios report and boom, you're on the host.


I find myself wishing ~/.ssh/config had include statements, so I could mix and match blocks which are only useful on certain networks / in certain contexts.


Try my fugly script, http://cmad.github.com/sshfu/ you can do imports for example, and things like

  inside public_place or home {
      host web
          address 192.168.10.1
          gw office_fw
  } otherwise {
      host web
          address 192.168.10.1
          user $MY_USER
  }

or whatever, like host .... agent yes port 8


I really wish this were an as-shipped feature in OpenSSH, even if it were just the config file being able to have multiple Hostnames to try for a Host.

That way I could just specify two hostnames/IP addresses to try, and ssh could automagically do the right thing to get to an internal machine depending on whether I am at home or at work.


On a Mac I used to use Marco Polo to detect my location, and it would swap out my ~/.ssh/config symlink. You can use Marco Polo to detect SSIDs or various other changes.


Oh man, I have been thinking about this problem for a while now. Glad to see the start of some simple solutions to make this more bearable.


  man ssh_config


But this has been known for years. I've been using it since my second week of using Linux.


Good old HN, where building a website with Twitter Bootstrap on Heroku's free tier makes you a startup founder, and learning to use one of the oldest and most basic features of one of the simplest packages in your operating system makes you a hacker.


I was very pleased when I discovered this. I was thrilled when I discovered that Emacs tramp mode makes use of this! :D



