Re: [Fab-user] EC2 host keys


From: Patrick J McNerthney
Subject: Re: [Fab-user] EC2 host keys
Date: Sat, 09 May 2009 15:39:45 -0400
User-agent: Thunderbird 2.0.0.21 (X11/20090409)

Jeff,

Many of Christian's suggestions could be leveraged and, more importantly, probably *should* be leveraged when the server involved has been configured into a known state and should always remain in that known state. One could then launch multiple instances of these servers, all sharing the same ssh keys.

The wrinkle in my scenario is getting to that known configured state. To do this, I am launching a brand new, generic Linux (technically, in my case, an Ubuntu 8.10 Server) EC2 instance. This is pretty much exactly like installing Linux on a new piece of hardware for the first time. The ssh installation on this new EC2 Linux instance initializes itself, which means that it generates a clean set of ssh keys as its identity.

I was looking over the paramiko source and noticed that one can have a "local host file". I think that maybe I can do something with that to have my cake and eat it too. What is needed is the explicit clearing of a particular server's key, only at the time of EC2 instance creation. Once the instance is up and running, I do in fact want to enforce host key validation on all subsequent accesses of that server.
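Roughly, what I am picturing (a sketch only; the file path and helper names are hypothetical, and it assumes a recent paramiko where HostKeys behaves like a dict):

    import os
    import paramiko

    # Hypothetical path for a writable, Fabric-managed host keys file,
    # kept separate from the read-only ~/.ssh/known_hosts.
    LOCAL_KEYS = os.path.expanduser("~/.fabric_host_keys")

    def forget_host(hostname):
        # Called once, at EC2 instance creation time: drop any stale
        # key stored for this hostname so the fresh key can be learned.
        if not os.path.exists(LOCAL_KEYS):
            return
        keys = paramiko.HostKeys(LOCAL_KEYS)
        if keys.lookup(hostname) is not None:
            del keys[hostname]  # dict-style delete on HostKeys
            keys.save(LOCAL_KEYS)

    def connect(hostname, **kwargs):
        client = paramiko.SSHClient()
        if os.path.exists(LOCAL_KEYS):
            # load_host_keys (unlike load_system_host_keys) marks the
            # file writable, so AutoAddPolicy saves new keys back to it.
            client.load_host_keys(LOCAL_KEYS)
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        # A *changed* key for a known host still raises
        # BadHostKeyException, so validation stays enforced afterwards.
        client.connect(hostname, **kwargs)
        return client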

I'll work with that and see where I get.

Short term, it makes sense to make ignoring the system host file an explicit override, so that it is not the default behavior.

Thanks,
Pat

Jeff Forcier wrote:
Christian raises some good points; I had been wondering the same
thing, but was willing to overlook "how things should work" in favor
of "how things apparently work, in some cases" since I tend to prefer
pragmatism over dogma.

However, I was just writing up some docs about this aspect of
Paramiko/Fabric, and realized something that should have struck me
before: the recent change means that man-in-the-middle attacks will
now *not be detected by default* and that makes me pretty
uncomfortable.

I think I will split this out into two distinct settings, because it
really is two distinct problems being solved (adding new/unknown, but
valid, hosts, versus detection of changed host keys for known hosts).
Thus, Pat would still be able to control the loading of host keys, but
the default behavior will still honor the security spirit of SSH.
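
Roughly, the split might look something like this (setting names are provisional, not a final API):

    import paramiko

    def make_client(env):
        client = paramiko.SSHClient()
        # Provisional setting 1: whether to load known_hosts at all.
        # Loading it is what enables detection of *changed* keys for
        # hosts we already know (the man-in-the-middle case).
        if env.get('load_system_host_keys', True):
            client.load_system_host_keys()
        # Provisional setting 2: what to do about hosts with *no*
        # stored key; this only ever applies to unknown hosts.
        if env.get('reject_unknown_keys', False):
            client.set_missing_host_key_policy(paramiko.RejectPolicy())
        else:
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        return client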

Thoughts?

-Jeff

On Sat, May 9, 2009 at 2:31 PM, Christian Vest Hansen
<address@hidden> wrote:
On Sat, May 9, 2009 at 7:42 PM, Patrick J McNerthney
<address@hidden> wrote:
Jeff,

Thanks for the response.

Sorry, I did use an IP address in my original message, but probably should have used a DNS name. It actually does not matter; the problem exists for both. You are correct that I could disable the additional checking of IP address conflicts, but I have the exact same problem with the DNS name.

The problem is that there is an entry in my known_hosts file for a particular DNS name whose key does not match the server currently at that DNS name. This is caused by the recycling of both DNS names and IP addresses by Amazon's EC2.
Out of curiosity: do the servers get new IPs while they are running,
or does the recycling only happen when servers are rebooted?

Also, is there a way to partition the servers? I mean, you may have
two kinds of servers, like front-end and back-end, and then you say
all front-ends are in this IP range and all back-ends are in that
range. Is that possible?

Where this all leads is: I'm wondering if it might make sense for all
servers of a kind to share the same key pair, so it wouldn't make any
difference if they swapped IP addresses amongst one another.

This way, I think, the servers could swap IPs all they want and we
could still connect to them in a manner that is both reliable and
resistant to DNS cache poisoning (once their fingerprints are in
known_hosts, that is).
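
Something like this, perhaps, to generate the shared host key that gets baked into the server image (a paramiko sketch; the file name is illustrative):

    import paramiko

    # One host key pair per server class, baked into the machine image,
    # so every instance of that class presents the same fingerprint.
    key = paramiko.RSAKey.generate(2048)
    key.write_private_key_file('ssh_host_rsa_key')
    # Install the private key as /etc/ssh/ssh_host_rsa_key in the image;
    # the public half goes into each client's known_hosts once.
    print('%s %s' % (key.get_name(), key.get_base64()))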


To replicate, I did the following:

1. Removed both my known_hosts file and my .ssh config file.
2. ssh'ed into ServerA, then exited. There are now two lines in the known_hosts file, one for the dns name and one for the ip address.
3. ssh'ed into ServerB, then exited. There are now four lines in the known_hosts file.
4. Modified the known_hosts file so that ServerA's entries contain ServerB's key.
5. Attempted to ssh to ServerA and was rejected because the host key had changed.
6. Attempted to ssh to ServerA with "StrictHostKeyChecking no"; this worked.

So with this setup, where the known_hosts file contains the wrong key for a server, I am unable to use Fabric against that server. What I want to do is replicate the effect of "StrictHostKeyChecking no" in Fabric.

In this scenario, this is a case of an invalid server key, not a missing server key. The missing host key policy never gets called. This is why env.reject_unknown_keys currently has no effect.
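
You can see the distinction directly in paramiko (the hostname below is made up):

    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    # This policy is only consulted for hosts with *no* entry in
    # known_hosts; a host whose stored key does not match raises
    # BadHostKeyException before the policy is ever asked.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect('server-a.example.com')
    except paramiko.BadHostKeyException as e:
        print('known_hosts holds a different key for this server:', e)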

Any clearer?

Pat


Jeff Forcier wrote:
Hi Pat,

First, I think this partially falls under an existing TODO item, which
I plan to have in place for 1.0 and hopefully 0.9: honoring any
.ssh/config options that we have functionality for. In this case,
Fabric would probably check your StrictHostKeyChecking option and use
that to override the default value of env.reject_unknown_keys.

Secondly, I've read over your use case and I may be missing something:
as things currently are, why isn't setting env.reject_unknown_keys to
False good enough? Simply loading the host key list is not what drives
the reject/don't-reject decision: that's driven by the policy given to
set_missing_host_key_policy.

I just double checked this by tweaking a (static) IP in my known_hosts
file so that it was incorrect, then ensuring my fabfile had
env.reject_unknown_keys = False. When connecting to that server, the
"new" IP was added as a new entry to my known_hosts and the connection
was created without issue.

Is the problem that you don't want the new IPs added to your host list, or
what?

Best,
Jeff

P.S. In checking "man ssh_config" I found that there's an even more
specific setting, CheckHostIP, which sounds like it fits your
situation better than StrictHostKeyChecking. Unfortunately, Paramiko
doesn't appear to support that level of granularity, so we're out of
luck with that for now. Wanted to mention it anyways, though, in case
you weren't aware of it, for non-Fabric use.
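
For plain ssh use, that would be something like the following in ~/.ssh/config (the Host pattern is just an example):

    Host *.compute-1.amazonaws.com
        CheckHostIP no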

On Sat, May 9, 2009 at 11:10 AM, Patrick J McNerthney
<address@hidden> wrote:

I have an issue with Amazon EC2 instances where ssh host keys have been saved in .ssh/known_hosts but are incompatible with an EC2 instance ip address. This occurs when the ip address has been reassigned to a new EC2 instance. So the basic sequence of events is:

o Start an EC2 instance which is assigned an ip address.
o ssh to that ip address and that server's ssh key is associated with that ip address in the known_hosts file.
o Terminate that EC2 instance.
o A new EC2 instance is started and it happens to get assigned the same ip address.

At this point, if I first ssh to it, I have ssh configured with StrictHostKeyChecking set to no, so ssh emits a warning about this ip address having a new key but still allows me to continue.

However, if at this point I try to use Fabric to execute some commands, it always fails. This is because SSHClient.load_system_host_keys is always called, causing the connection to fail if there is an incompatibility between the ip address and the server key.

I have addressed this in my own fork here:

    http://github.com/iciclespider/fabric/commit/08ad1c491e5643990c2a35e865784d2b61aa742f

What this does is replace this:

    client.load_system_host_keys()
    if not env.reject_unknown_keys:
        client.set_missing_host_key_policy(ssh.AutoAddPolicy())

with this:

    if env.reject_unknown_keys:
        client.load_system_host_keys()
    else:
        client.set_missing_host_key_policy(ssh.AutoAddPolicy())

I also considered using another env setting value to control this, but my conclusion is that this behavior is in fact in line with the implied behavior of the "reject_unknown_keys" name. In other words, the list of known keys should only be loaded if the intention is to reject those keys that are not known.

Pat McNerthney
ClearPoint Metrics, Inc.




--
Venlig hilsen / Kind regards,
Christian Vest Hansen.
