SSH Start to Finish Architecture – GnuPG keys generated on the Yubikey 4

To get GnuPG gpg-agent to work on the Yubikey 4, we need to put the keys on the device.  We can either generate them off of the device, and then copy them up, or we can generate them directly on the device.  We will do both versions of this before we are through, but today is “generate directly on the device.”

As I was getting set up to work on this again this weekend, I gave this a try on the new Beaglebone Black Wireless, on a whim.  The last time I had tried this on the BBBW, it didn’t go so well.  There were library issues that prevented GnuPG from accessing the card correctly, and the whole thing was an exercise in frustration.  Then I “did the dumb” and managed to brick the device while working on a project.  I already wrote up the procedure I used to unbrick it, which worked fine.  Apparently something in the unbrick firmware is different from what I had before, because when I tried this “on a whim” I had no issues.

Here are the steps I used, and I’ll link the articles I followed myself at the bottom.

In order to use the Yubikey with GnuPG, we first need to generate the keys on the device (or import them).  Unfortunately, when I was following this, the largest key I could actually generate for all three slots was 3072 bits, not 4096, even though GnuPG supports 4096, and the specs for the Yubikey 4 state it can handle 4096 bit keys.  Still, 3072 is larger than the 2048 limit imposed by the PIV SmartCard standard.  I believe this may be because the GnuPG binary in use is GPG rather than GPG2.  I’ll research the version question more in a later update.
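The on-device generation itself happens inside GnuPG’s interactive card editor.  Roughly, the session looks like the sketch below (the key size prompts, and the separate `key-attr` command for changing key attributes, are only available in newer GnuPG 2.x builds, which may explain the 3072-bit ceiling I hit):

```
$ gpg --card-edit

gpg/card> admin          # enable administrative commands
gpg/card> generate       # generate the Signature, Encryption, and
                         # Authentication keys directly on the card
                         # (prompts for Admin PIN, user PIN, key size,
                         #  and expiry follow)
gpg/card> quit
```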

The top section of this article is what I followed for this.

Of course, when I was finished, I found that the Debian Jessie image didn’t include gpg-agent.  I had to configure the wireless with connmanctl, turn off “wifi tether” (it was on by default in the unbrick firmware, and was preventing the wifi scan from working), and then run an apt-get update before I could apt-get install gnupg-agent.

Then I ran into the issue of actually loading gpg-agent.  I got assertion errors when trying to run gpgkey2ssh.  I double checked the card: it was missing the “Encrypt” key, but had the “Sign” and “Authenticate” keys listed.  I tried re-generating and got an assertion error during the generation.  Things went downhill from there.

After reading several posts, including seemingly ignored bug reports, regarding these assertions, I am beyond frustrated with this side of the SmartCard options.  I will of course continue to attempt to make this work, but at this time the only recommendation I can make is to use the PIV SmartCard solution when possible.  It was beyond painless.

While every document I can find from Yubico says that this can “generate the keys on the device,” everything I am reading about actually getting the public key off of it for SSH use seems to want to “fetch” (which pulls from one of the public key servers, such as those used for the Web of Trust).  This makes me think there is some pre-setup that needs to happen with GnuPG first, so I will work on that (I have the book) before I make another attempt.  Also, I can’t seem to ssh back into the BBBW since my last attempt.  It may be unrelated, but I think another unbrick event is due, which will give me a clean slate to work from, anyway.

I just wanted to share what has been done thus far, what speed bumps have been encountered, and what questions those have garnered.  I’ve been banging on this all weekend, so I’ll leave it until another week.



SSH Start to Finish Architecture – very broad overview discussion of gpg-agent

Thank you to everyone who sent get well notes.  I appreciate it.  I’m doing slightly better.  Since I’m still not 100%, I haven’t gotten into the deeper details of how the Yubikey 4 will work (as in, compiling software to use it, configuring it to be used, and then explaining step by step how to use it.)

I have, however, learned enough to give a broad overview of the expected behavior, so we’ll start with that.

Earlier in this series, we talked about loading our private ssh key using ssh-add.  This required another service to also be running: ssh-agent.  The same is still true of ssh-add, but we need to use a different agent when dealing with using OpenPGP keys as an authentication mechanism.  For GnuPG, the tool is gpg-agent.

The general gist is that GnuPG needs to be configured to support SSH keys in its configuration file.  Once this is done, we can switch to using gpg-agent instead of ssh-agent.  This agent is capable of loading standard SSH private keys as normal, but it also allows for presenting an OpenPGP key as an authentication mechanism for SSH private key logins.
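The enabling piece is a one-line option in gpg-agent’s configuration.  A minimal sketch, assuming a default ~/.gnupg directory and GnuPG 2.1 or later for the gpgconf socket lookup:

```
# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# In the shell profile, hand ssh the agent's socket instead of ssh-agent's.
# The socket path varies by GnuPG version; on 2.1+ gpgconf can report it:
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
```

With that in place, ssh-add and ssh both talk to gpg-agent without any further changes on the client side.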

One thing we need to remember about OpenPGP is that it handles multiple keys that have multiple uses.  One use is “signing” which we might use for say… email.  One use is “encrypting” which we again might use for say… email.  When we load a key with gpg-agent, we want to make sure that the signing and encrypting capabilities of the key being presented are turned OFF.  Instead, we want only the “authentication” capability to be turned ON.  We are, after all, using this to authenticate to a server, not using it to encrypt or sign static files.
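When generating such a key manually with GnuPG, the capability toggling happens in expert mode.  A rough sketch of the interactive session (menu numbers and prompts can differ between GnuPG versions):

```
$ gpg --expert --full-gen-key

Please select what kind of key you want:
   ...
   (8) RSA (set your own capabilities)
Your selection? 8

# Toggle Sign and Encrypt OFF, toggle Authenticate ON, so the key's
# only allowed action is authentication:
Your selection? S
Your selection? E
Your selection? A
Your selection? Q
```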

There are also two ways to handle the private key in general.  It can be generated “on the device” or loaded “to the device” after being created “off the device.”  The desired solution is to generate this “on the device” so that the private key never touches a hard drive where it can be retrieved via forensic tools.  This does, however, tie the key to that specific device, so if the physical key is lost, the keyring is lost, and the web of trust for that would be difficult to re-build.  Except we’re talking about using this device for nothing more than authentication.  We should not ever actually use the OpenPGP keys on this device for signing or encrypting emails.  It has no reason to be in the OpenPGP concept of web of trust.  It should ONLY be used for authenticating ssh connections.

The other argument is that you can generate a sub-key from the primary OpenPGP key that everyone knows you by in the web of trust, assign this sub-key the authentication role, and then upload it to the Yubikey device.  If the device is lost, the sub-key can be revoked, and a new sub-key generated to go onto the replacement device that would surely be purchased.

My thoughts are… go with whatever you are more comfortable with.  I personally feel it is better to generate the private key on the device and simply not include it in the web of trust, since its sole purpose is authentication.  However, if you handle a key ring like a pro because you use OpenPGP for email correspondence on the regular, and you’re more comfortable using your single OpenPGP keyring for everything, by all means, generate the sub-key and upload it to the device.  You’ve already got a feel for handling your keys in a sanitary environment if you’ve been doing that a while, right?

While the OpenPGP authentication keys were my primary reason for purchasing and testing the Yubikey 4, there are other capabilities that may also tie in for a more robust secure login regime.

The server can be configured to take Yubikey One Time Password (OTP) logins, if there is a PAM module (or BSD Auth module) available for your OS.  Linux has a PAM module, and I believe FreeBSD does as well.  OpenBSD has a BSD Auth function, but it is local authentication only.  This means it doesn’t report to a centralized server when the OTP is used, and therefore it doesn’t keep things synchronized across multiple environments.
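On Linux, the PAM side is typically handled by Yubico’s own module.  A hedged sketch of what the sshd PAM entry might look like (the client id and mapping file here are placeholders; the real values come from Yubico or your own validation server):

```
# /etc/pam.d/sshd -- require a Yubikey OTP in addition to other factors
auth required pam_yubico.so id=12345 authfile=/etc/yubikey_mappings
```

The authfile maps local usernames to the Yubikey public IDs allowed to authenticate as them.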

The device also can be configured to send a static password much like a simple two line USB Rubber Ducky payload.  You can configure this to be a “password” or you can put a pass-phrase on it.  If you do this, Yubico recommends that you only store part of the password or passphrase on the device.  Type part of it manually, and then press the button for the correct duration to trigger the rest of the passphrase to be sent via the Yubikey.

There is also reference to a PIV SmartCard capability, which seems to be an OpenSC implementation that may also work for SSH authentication using PKCS#11 instead of the OpenPGP standards.  I will make an attempt to configure each of these and demonstrate both before this series is finished.  Of course, I retain the right to state I may be confused, and the PIV SmartCard and OpenPGP SmartCard functions may be the same thing on this device.  I’ll know for sure when I dig deeper and try both.

SSH Start to Finish Architecture – A Look at Standards Recommendations

Standing up a service, getting a handle on controlling key distributions, and configuring things to make life easier are all great, but there may be restrictions on what you can and cannot do in your environment.  These are sometimes imposed by outside bodies, not just corporate policy.

For example, Payment Card Industry standards (PCI), National Institute of Standards and Technology Internal/Interagency Reports (NISTIR), and the various governance requirements from Sarbanes-Oxley (SOX), ISO 27001, and so on all give broad guidelines for server hardening.  Some of them get less broad (more specific), such as the upcoming PCI DSS requirement to have multi-factor authentication.  Whatever your requirements, you should get intimate with the policies that define what you must, may, and must not do, whether they are internal corporate policies or broader-reaching standards body policies such as those listed above.

We’re going to take a quick look at PCI DSS today, and also briefly mention NIST IR 7966, which deals with SSH key management.

Whether it is required in your environment or not, I recommend at least a good once-over of the NIST IR 7966 document. Sections 4 and 5 are useful in helping to understand the magnitude of a poorly designed SSH deployment. For example:

Most importantly, SSH key-based trust relationships can enable an attacker who compromises one system to quickly move (or pivot) from one system to another and spread through an organization—individual trust relationships often collectively form webs of trust across an organization’s systems.

Imagine an attacker gains access to a little used server. A lazy systems administrator generated a public private key pair there, and sometimes uses it to bounce to other servers in the same subnet, but isn’t really supposed to do that. Now the attacker gains root, and rummages in home directories looking for anything juicy. SSH keys are definitely juicy. A quick perusal of known_hosts and attempt after attempt come up gold when testing this private key that was discovered. Because the admin didn’t lock down the public key at the end points, the private key would work from anywhere. The attacker makes a copy for himself, and goes about his business.

Attackers tend to think in “graphs.” Defenders tend to deal with lists of assets. Yes, those are best organized as graphs, but most defenders don’t treat them that way. A “graph” is organized by a “source” asset, a vector (direction), and a “target” asset that can be affected by that source. Trust relationships are graphs by nature. “Tom” on “Workstation_A” can log into “Tom” on “Server_B.” But “Tom” on “Server_B” can’t log into “Tom” on “Workstation_A.” The trust is one way. Some trusts will be bidirectional. Mapping these trusts is how attackers are able to plant deep roots into a compromised network.

As for the PCI requirements mentioned before, a new requirement is coming that sets up a policy of required multi-factor authentication. In the Scoping and Segmentation guidance document, we can find an example of using a Jump Box for accessing the sensitive systems where we don’t want to apply multi-factor authentication directly to those systems.  That jump box is required to run its own firewall as part of hardening of the system.  This is called a “bastion host.”  The systems administrators are heavily restricted in how they can manage that host, per the rules laid out.  An excellent example of a company already running a Jump Box work flow, utilizing SSH Certificate Authorities generated via that work flow, is brought to us by Facebook.

This article shows how they are using it to meet these PCI (or similar) requirements, but are still able to maintain a high level of efficiency in their daily administrative tasks.   Pay close attention, because they also talk about “zones” in their set up.  You may or may not want to try something similar if you need to stand one up for your company.

Another requirement from that same PCI document is for ONE of the authentication factors to be “something you have.”  It requires some kind of security token.  I’ve used several in my career.  Some are as simple as “my phone” running a one time password app.  Others have been as complex as a RADIUS token that generated a one time password for each VPN login session.

Since I’ve been researching how to best incorporate tokens into an SSH deployment, I settled on making my first purchase for the lab a Yubikey 4.  It arrived in the mail over the weekend, and next Monday I will cover how to get it working for ssh public key authentication.

I also want to take the time to thank 0ddn1x and TechRights for the recent pingbacks.  I meant to include a shout out last week, but I was up late, got frustrated with the recording situation, and frankly, forgot.

The new recording should be available this week.  (Probably Tuesday.)

SSH Start to Finish Architecture – Standing up the CA

Before we get to the meat of the discussion, we need to set up some definitions.  Last week we mentioned that the Certificate Authority can produce certificates for both hosts and users.  We’re going to cover both today.  If it looks like we’re being repetitive, we’re really not.  Pay attention to which section you are in when following along, since flags will vary.


  • Certificate Authority (CA) – The trusted third party that signs keys to produce certificates.
  • User Key – The user public key that will be signed by the CA to produce a user certificate.
  • Host Key – The host public key that will be signed by the CA to produce a host certificate.
  • User Certificate – The certificate generated by the CA from the user key provided.  This reduces the need for AuthorizedKeysFile or AuthorizedKeysCommand.
  • Host Certificate – The certificate generated by the CA from the host key provided.  This simplifies the known_hosts management, and makes this process more secure.
  • Principal – A means of restricting validity of the certificate to a specific set of user/host names. By default, generated certificates are valid for all users or hosts.
  • Trust – In order for a CA issued certificate to work, the server needs to be told to trust the CA before it will accept user certificates, and the client needs to be told to trust the CA before it will accept host certificates.
  • Key Revocation List – A means of revoking keys and certificates when they are no longer valid.
  • Validity Lifetime – A means of restricting the lifetime of a certificate’s validity.  If a certificate becomes invalid after a limited time frame, it will need to be re-issued with a new validity lifetime.  This allows for automatic revocation of certificates in case managing the Key Revocation List overlooks an intended removal.
  • Additional Limitations – Further restrictions can be applied to the certificates along the same lines as the public key prefix options discussed in a previous blog post.

The first thing we need to do after standing up and hardening the machine where the CA will live is add the unprivileged user that will be used for signing keys to issue certificates.

sudo groupadd -g 3000 sshca
sudo useradd -m -u 3000 -g sshca -G sshca -c "SSH Certificate Authority Signing User" -s /bin/bash -d /home/sshca sshca

Now we need to build out the directory structure.

sudo -i -u sshca
mkdir -p {hostca,userca}

Next, we need to create the key that will be used for issuing HOST certificates.

cd hostca
ssh-keygen -t rsa -b 4096 -f host_ca -C "Host Certificate Signing Key"

We also need to create the key that will be used for issuing USER certificates.

cd ../userca
ssh-keygen -t rsa -b 4096 -f user_ca -C "User Certificate Signing Key"

At this point, there are two files in each directory. The private key file will have no extension, and the public key file will have the “.pub” extension. All certificates will be signed using the private key file, but we also need that public key file, so don’t discard it.

In order to create the TRUST needed for a server to recognize USER CERTIFICATES signed by our CA, we need to push that USER CA public key to each host, and set a configuration option.  You can place it anywhere, but I recommend making a subdirectory under the /etc/ssh directory to store these keys.

sudo mkdir -p /etc/ssh/sshca

Then copy the pub file over from the CA and stick it in this directory. Edit the /etc/ssh/sshd_config file to include this directive:

TrustedUserCAKeys /etc/ssh/sshca/user_ca.pub

Restart sshd (or force it to reload its configuration file) and this trust should now be created.

In order to take advantage of this trust, the users logging into the server need their public keys to be signed by the USER CA.  This issues a certificate that will need to be given back to each user.

The syntax for signing a key looks like this:

ssh-keygen -s <ca_key> -I <certificate_identity> [-h] -n <principals> -O <options> -V <validity_interval> -z <serial_number> <public_key_to_be_signed>

The “ca_key” is the private key for the USER CA when signing user public keys, or the private key for the HOST CA when signing host public keys.

The “certificate_identity” is a “key identifier” that is logged by the server when the certificate is used for authentication. It is a good idea to use a unique identifier for this that is recognizable by your organization, since you can set up trust for multiple CAs.  For our example, the certificate_identity will be “unixseclab.”

If this is a HOST KEY being signed, ensure that you include the “-h” flag.

The “principals” are a list of users that can be authenticated with this USER CERTIFICATE.  Alternatively, it is a list of hosts that can be authenticated with this HOST CERTIFICATE.  Multiple principals may be allowed, separated by commas.  It is highly recommended that you actually set the principal to the username of the user or hostname of the server it is being issued for.  Blanket authentication can create forensic issues.

The “options” are a list of restrictions that can be applied.  These are like the prefixes we mentioned before.  Be aware that the newest versions of OpenSSH have changed one behavior regarding forced commands. Also note, that “options” are only valid for USER CERTIFICATES.  You would leave off the “-O <options>” when issuing HOST CERTIFICATES.

From Undeadly:

As of OpenSSH 7.4, when a forced-command appears in both a certificate and an authorized keys / principals “command=” restriction, sshd will now refuse to accept the certificate unless they are identical.  The previous (documented) behavior of having the certificate forced-command override the other could be a bit confusing and error-prone.

The “validity_interval” is used to set not only the expiration date of the issued certificate, but to also set a beginning date in case it should only become valid in the future.

Finally, the “serial_number” is an arbitrary number that can be assigned to make KEY REVOCATION easier.
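Putting those flags together, here is a hypothetical end-to-end signing run using throwaway keys in a scratch directory (the names “alice” and “unixseclab” are just examples):

```shell
# Work in a scratch directory with throwaway keys
tmp=$(mktemp -d)
cd "$tmp"

# Hypothetical USER CA key, and a user key pair to be signed
ssh-keygen -q -t rsa -b 2048 -f user_ca -N "" -C "User Certificate Signing Key"
ssh-keygen -q -t rsa -b 2048 -f id_rsa -N "" -C "alice user key"

# Sign it: identity "unixseclab", principal "alice",
# valid for 52 weeks from now, serial number 1
ssh-keygen -s user_ca -I unixseclab -n alice -V +52w -z 1 id_rsa.pub

# The certificate is written alongside the key as id_rsa-cert.pub;
# inspect it with -L
ssh-keygen -L -f id_rsa-cert.pub
```

The -L output shows the key ID, serial, principals, and validity window, which is a handy sanity check that you signed what you intended.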

The HOST CERTIFICATE that gets issued should go in the same directory as the HOST KEYS.  The sshd_config file needs to be modified to include a new “HostCertificate” for each new HOST CERTIFICATE issued.  The HOST KEY must also still exist, and must have its own “HostKey” entry in the sshd_config file.  Don’t remove them in exchange for the certificate entries.

HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
HostCertificate /etc/ssh/ssh_host_ecdsa_key-cert.pub
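For completeness, the host-side signing that produces a -cert.pub certificate file could look like this sketch (throwaway keys in a scratch directory, hypothetical hostname):

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Hypothetical HOST CA key, and a host key to be signed
ssh-keygen -q -t rsa -b 2048 -f host_ca -N "" -C "Host Certificate Signing Key"
ssh-keygen -q -t rsa -b 2048 -f ssh_host_rsa_key -N ""

# -h marks this as a HOST certificate; the principal is the host's name
ssh-keygen -s host_ca -I unixseclab -h -n server1.example.com ssh_host_rsa_key.pub

# The certificate lands next to the host key
ls ssh_host_rsa_key-cert.pub
```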

When the server has been configured to offer a HOST CERTIFICATE, the client side needs to also be configured to TRUST the CA that signed it.  To do this, we need to add the following entry to the user’s “known_hosts” file:

@cert-authority * <public key of the HOST CA that signed the host keys>

It may be necessary to remove existing host key entries in the known_hosts file for this host if it was recently migrated to use certificates.  A clean way to handle this is to make a back up copy of your known_hosts, zero the file out, and add only the certificate lines (by hand.)  Then any time you run into a non-certificate host, you can compare the offered key to your known good key in your backup copy, and accept if it’s good for the hosts that don’t use certificates, yet.
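The removal step doesn’t have to be entirely by hand: ssh-keygen -R deletes the entries for one host and keeps a .old copy of the file.  A self-contained sketch, using a freshly generated fake key and a hypothetical hostname:

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Build a known_hosts containing one plain (non-certificate) entry
ssh-keygen -q -t rsa -b 2048 -f hostkey -N ""
printf 'server1.example.com %s\n' "$(cut -d' ' -f1,2 hostkey.pub)" > known_hosts

# Keep a full backup, then strip the stale entry for the migrated host
cp known_hosts known_hosts.bak
ssh-keygen -R server1.example.com -f known_hosts
```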

This is a good stopping spot for today’s post.  It ran longer than I expected, so next week we’ll cover Key Revocation Lists, Certificate Inspection, and run the actual example of generating our initial CA, signing a host key, and signing a user key, then using them to allow a client to connect to a server.  I wanted to include that recording this week, but I didn’t realize how long this post was going to get before I planned that out.

Thanks for reading, and a quick “thank you/shout out” to the folks at the CRON.WEEKLY newsletter for the pingback on last week’s article in this series for their issue #63!

SSH Start to Finish – Certificate Authority Basics

The way the OpenSSH Certificate Authority works depends on a few components.  First, there needs to be one trusted signing authority.  This can be any system, and it does NOT have to actively be connected to the network for the client/server handshake to take place using the CA signed keys.  There should also be a Key Revocation List, as well as a means for keeping the KRL updated.  A proper Identity and Access Management (IAM) platform could possibly handle this.  A close second would be a proper Configuration Management / Server Automation tool such as Puppet, Chef, Salt, or Ansible.  We will not cover using either of these tools in this series, but we will (most likely) cover an alternative solution when neither of the prior recommendations is available.  That’s for another day, though.  Today, we’re only going to introduce the basic concepts and fundamentals of how the OpenSSH Certificate Authority works.

Let’s set up the players.  There is a person (User_A) that needs to log into a target machine (Server_A) as himself.  He is coming from his laptop (Workstation_A.)  Normally, User_A would generate his key pair, log into Server_A as himself, and place the public key into the authorized_keys file in his home directory.  Instead, we’re going to pull in a new player that acts as a trusted third party.  This will be the Certificate Authority (CA.)  The CA should be run by a non privileged user on a server that is either not directly connected to the network, or is heavily protected.  The actual privilege of signing should also be restricted to a small team of people with a job role title that allows for this type of provisioning.  For our example, we will assume it is network isolated.

We are assuming the CA is already set up, but here are the steps that should have been taken to do so.  Create the non privileged user (and group) for signing.  Switch to that user and create the CA signing directory structure(s.)  Use ssh-keygen to create the certificate signing key(s.)

There are two types of certificates that can be signed.  The user certificate authenticates users to servers.  The host certificate authenticates hosts to users.  Why do we need both?

A host certificate gives us the ability to stand up a new server in our environment, sign its host keys with the certificate authority, and then the client knows that the new key is okay without prompting the user to trust the key first.  This reduces some of the issues with managing the known_hosts file.

A user certificate gives us the ability to tell the server that our key is okay without having to place the key on the server first.  This removes some of the issues with managing key distribution.

A signed user certificate can place restrictions on the signed public key, including all of the restrictions we discussed in the preamble section of the authorized_keys entries.

Let’s look at the broad overview work flow for today.  Next week, we will cover the commands needed to stand up that certificate authority structure listed above, plus the commands to sign both host and user certificates.

Work flow scenario: a new machine is built.  The host keys are regenerated (if for example this is a cloned virtual machine) and signed by the Certificate Authority.  This signed certificate is placed back onto the new machine, and that’s all that is needed, as long as the clients are configured correctly.  For the client to take advantage of this, the client needs a special known_hosts entry that begins with @cert-authority and is followed by the public key for the signed host certificates.  When the user goes to log into the new machine, the connection flow will include the server presenting a host certificate to the client, who then checks that the known_hosts “@cert-authority” entry can decipher the host certificate, and the connection is then accepted on success.  This helps prevent confusion on intentionally newly built systems when IP or hostnames change regularly.

Work flow scenario: a new user needs access to a system.  The user generates their key, sends the public key to be signed, and when the certificate is received, places it in their .ssh directory with the rest of the key related files.  The host machines have already been configured to trust the certificate authority in the sshd_config file.  When the user goes to connect with ssh, the client presents the signed certificate to the target machine.  The target machine’s sshd opens the TrustedUserCAKeys entry to open the appropriate public key to decode the certificate.  If this is successfully decoded, the connection is trusted as if the key were in authorized_keys for that user.  This helps reduce the heavy work load of managing multiple authorized_keys files per user.

Of course, there is more to it than this, but we’ll go into the finer details over the next few weeks.  Next week will be an explanation of the commands needed to set up the CA, (including revocation lists, and why they are important.)

Thanks for reading!

SSH Start to Finish Architecture – AuthorizedKeysCommand

Last week we brushed upon this briefly, but this week, I’ve stood up a scenario in my lab, and am digging into the details a bit deeper.

So to set this up, there are three systems involved, currently.
1) The Windows 10 laptop with the “ubuntu on Windows 10 on crack” option. Using the bash shell, I created an ssh key pair, and stopped there until everything else was ready.
2) The “target” system to log into. This is an OpenBSD server that I stood up to play with asciinema earlier this weekend, and decided to utilize for this particular lab. This is the machine that will be configured to use AuthorizedKeysCommand and AuthorizedKeysCommandUser instead of AuthorizedKeysFile. On this server, I created two new groups, and two new users:
groupadd testgrp
groupadd sshpub
useradd -m -g testgrp -G testgrp -c "Test User" -s /bin/ksh -d /home/testuser testuser
useradd -m -g sshpub -G sshpub -c "SSH Public Key Provider" -s /bin/ksh -d /home/sshpub sshpub

I also created a new script: /usr/local/bin/query_ssh_pub.ksh with permissions 750 and owned by root:sshpub.

#!/bin/ksh
HOSTNAME=$(hostname -s)
ssh -i ~sshpub/.ssh/id_rsa sshpub@ "/usr/local/bin/query_ssh_pub_keys.ksh ${USER} ${HOSTNAME}"

3) The “query server” system to be a central repository of ssh keys for system accounts and/or human users (hypothetically.) I created the same sshpub user and group on this system, but added a new script: /usr/local/bin/query_ssh_pub_keys.ksh with permissions 750 and owned by root:sshpub as well.

#!/bin/ksh
if [ $# -ne 2 ]; then
    exit 255
fi

USER=$1
TARGET=$2

echo ${USER} | grep -q -E -e '^[a-zA-Z0-9]+$' || exit 255
echo ${TARGET} | grep -q -E -e '^[a-zA-Z0-9]+$' || exit 255

ls /home/sshpub/key-store/ | grep -q "^${TARGET}\.${USER}\.pub\$" || exit 255
cat /home/sshpub/key-store/${TARGET}.${USER}.pub

I generated an ssh key pair from sshpub on the “TARGET” server, then copied the public key file over to the “QUERY” server so that it could do a remote ssh call. If I were going to use a system like this in production, I would apply a few more sanity checks on all of the inputs, as well as consider a force command for this user either by sshd_config or by modifying the public key file, but that’s neither here, nor there. This is not an ideal way of retrieving the keys, but it demonstrates how it works in a simple manner.

Once everything was in place, I dropped a copy of the public key from the “CLIENT” laptop into the /home/sshpub/key-store/ directory on the “QUERY” server (named for the target host and user, as the script expects), then tested that all commands worked as intended.

Finally, I updated sshd_config to use the following entries on the “TARGET” server, and restarted sshd:
AuthorizedKeysCommand /usr/local/bin/query_ssh_pub.ksh
AuthorizedKeysCommandUser sshpub

After all of this was done, I was able to test that I could log in as “testuser” to the “TARGET” machine, and it retrieved the public key from the “QUERY” machine successfully, allowing login as expected.

The query script can call any service, really. You can call keys stored in LDAP, SQL, or any other database. The final returned result from the script should be zero or more public keys, and nothing else. The most common use of this is to query LDAP, and there are examples of an LDIF file for OpenLDAP floating around freely on the internet, if you choose to go that route. Just make sure your LDIF works for your particular LDAP service, and that you are able to sanitize the output to only present the keys in the end, when you query.

Later in the day on Monday, I may modify this post to include the asciinema player embedded to show off how this whole thing works. I’m still working on getting WordPress to let me do that, but it has gotten late on me.

I would like feedback if anyone likes this kind of thing, though. Just leave a comment!

Fun-Day Friday – Not really

I am exhausted. This on call week has been rough. Not “bad” or anything, just rough. The mobile alerting device has been pretty quiet for the most part, but when it does decide to go off, it likes to do so at short intervals right in the middle of my normal sleep window. Interrupted sleep makes me more tired than a total lack of sleep, so this is taking its toll.

You didn’t come to read my blog on that, though. So here’s some other sad news, though I can’t recall if I might have shared already. I did not complete NaNoWriMo this year. I had some issues one night that meant there was no way I would get my quota for the day in. I introspected about the coming week on that day, and made the executive decision that this year was not a good year to try. The days my family were gone visiting other friends and family, I spent home alone. I got quite a few Honey Do items taken care of in their absence, but not as many as I would have liked. They are back home, and routines that got disrupted from the trip are just now really starting to get back into a rhythm.

The one good thing I can report is that I’m re-focusing on a product I started around the same time I started this blog. My small e-book on Dancer’s shell/Distributed shell is getting another work over before I go live with it as a product. It’s going to be in the ten dollar range when it goes live, but it’ll also be a “beta” launch, so the folks that buy during the beta window will help me make it better. We’ll see how that goes.

I’m not sure how much interest there will be, but after that product is up, I’m going to start on a second one. I’m thinking either a product on sudo or tmux, but I’m undecided. If you have any interest in one over the other, let me know in the comments. I’ll also be asking on social media, so hopefully I’ll get enough responses to make a clear decision.

That’s all for now. Monday we get back to the SSH stuff, and dip our toes into the land of making public keys more manageable.

SSH Start to Finish Architecture – Dealing with key distribution

This is going to be a brief post with not as much content, but it will explain where we’re heading from here.

One of the biggest advantages of SSH is the added encryption layer of security it provides. A second major benefit is the lack of a need for a password: passwords can be replaced by pass phrases on the key, which helps make it even more secure. However, there is a risk involved with the vanilla authorized_keys approach.

The public keys must be managed. What does that mean? The public key needs to be provisioned manually to the appropriate authorized_keys files for every user that takes advantage of this system. When a user needs access revoked, the authorized_keys files must also be audited to make sure there isn't a key sitting out for an account where that user shouldn't have access any longer. When you de-provision a user in your production environment, their own authorized_keys file gets deleted with the home directory, if you use a standard "userdel -r" command. However, if they had keys to access a shared service account, such as www, apache, or similar, there needs to be an extra step involved: checking every system where that key might exist.

This can be handled with something like Puppet, Chef, or one of the other common provisioning and configuration management tools, or it can be done with something as simple as a distributed ssh (dsh) sweep, but having keys scattered across multiple files is a provisioning nightmare. Assume a system is down when the user is de-provisioned. Without a configuration management / identity access management solution, your manual checks might miss a single key floating out in an authorized_keys file.
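As a sketch of what that manual sweep looks like, the core of it is just a recursive grep for the departed user's key comment (or a chunk of the key material itself) on each host. The pattern and the hosts file below are illustrative.

```shell
#!/bin/sh
# List files under a directory tree that still contain a given key pattern,
# e.g. a key comment like "jdoe@laptop".
find_stray_keys() {
  pattern=$1
  root=$2
  grep -rl -- "$pattern" "$root" 2>/dev/null
}

# Remote sweep sketch (hosts.txt is one hostname per line, illustrative;
# assumes working key-based root ssh to each host):
# while read -r host; do
#   ssh "$host" "grep -l 'jdoe@laptop' /home/*/.ssh/authorized_keys" 2>/dev/null |
#     sed "s|^|$host:|"
# done < hosts.txt
```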

So how do we handle this? There are multiple ways, and they don't all involve configuration management tools. One way is the "AuthorizedKeysCommand" option in sshd_config. It requires a second option ("AuthorizedKeysCommandUser") in order to work. Together these let you set a script, run as a (hopefully) non-privileged user, that queries some service to retrieve the keys for a given user. That service can be LDAP, SQL, a flat file on a remote machine… anything that returns the expected results. We'll look at this more in depth next week.
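Wiring this up in sshd_config takes just the two options. The script path and user name below are illustrative; on newer OpenSSH versions the %u token expands to the username being authenticated, while older versions simply pass the username as the final argument. The script must print zero or more public keys on stdout and nothing else.

```
# sshd_config – the command runs as the named, unprivileged user
AuthorizedKeysCommand /usr/local/sbin/fetch-authorized-keys %u
AuthorizedKeysCommandUser nobody
```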

Another option is to set up a Certificate Authority for OpenSSH, and sign your users' public keys. For provisioning, this does not require a configuration management / identity access management tool, but for de-provisioning, it might. However, you can potentially automate the de-provisioning on a time schedule by issuing certificates with expiration dates, avoiding the CM/IAM solution, and we'll look at that kind of setup when we get into the Certificate Authority material in the weeks after next Monday.

Ideally, you might consider combining these methods in the end, but we’ll present them initially as separate philosophies and methods for managing the keys.

Thanks for reading

SSH Start to Finish Architecture – Dealing with laggy networks

One of the aspects of servers and clients speaking to each other over a network that sometimes needs handling is how to keep the connection open. Sometimes the network is flaky, for lack of a better term, and that means packets can get dropped. OpenSSH is no exception, and it has several options that can be set in the sshd_config, ssh_config, and local ~/.ssh/config files to determine how to handle such cases.

Both the server and client share the "TCPKeepAlive" option. This option determines whether or not to send TCP keepalive messages. It helps terminate a connection if a client or server crashes, but can be problematic on a network that has considerable lag. If it's not set, though, you can end up with lots of ghost users in the connection pool on the server, so it's recommended to turn this on unless you have a very bad network. The values are "yes" or "no," depending on whether to turn it on or off.

There are also "ClientAliveCountMax" (sshd_config) and "ServerAliveCountMax" (ssh_config and ~/.ssh/config). These set the number of alive messages that may be sent without the client or server receiving a response back from the other end. These messages are different from TCP keepalive messages: the alive messages are sent through the encrypted channel, and thus cannot be spoofed. TCP keepalive messages, on the other hand, can be spoofed, so the alive messages may be a better option in a (potentially) more hostile environment. This setting is a number, and defaults to "3." Remember that this is the COUNT, so it represents the number of times a message will be sent before a session is terminated due to lack of reply from the other end.

The other variable needed to make the above work is the “ClientAliveInterval” (sshd_config) and the “ServerAliveInterval” (ssh_config and ~/.ssh/config.) These determine the number of seconds between messages to be sent. The default is “0” which means nothing is sent, so you must give this a value greater than 0 to turn this option on. If this value is set to “5” on either end, and the alive count max is left at a default of “3,” then the connection would be terminated after 15 seconds if the messages don’t get a response.
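Putting the interval and count together on both ends might look like the following, matching the 5-second example above: with the default count of 3, a dead peer is dropped after roughly 15 seconds.

```
# sshd_config (server side)
TCPKeepAlive yes
ClientAliveInterval 5
ClientAliveCountMax 3

# ssh_config or ~/.ssh/config (client side)
Host *
    ServerAliveInterval 5
    ServerAliveCountMax 3
```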

Outside of TCPKeepAlive and the *AliveCountMax and *AliveInterval settings, there is also "LoginGraceTime" (sshd_config), which determines how long to wait for a user to successfully log in. To keep people from making the initial ssh handshake but then never logging in and tying up a socket, set this and the connection will be dropped after that amount of time. The default value is 120 seconds, but if you don't want this limit at all, you can change it to 0 to turn it off, just like the *AliveInterval options.
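The setting in question is "LoginGraceTime" in sshd_config; tightening it from the default is a one-line change:

```
# sshd_config – drop unauthenticated connections after 60 seconds;
# the default is 120, and 0 disables the limit entirely
LoginGraceTime 60
```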

That’s all we’re going to cover for today. We’re getting closer to the certificate authority stuff, which makes maintaining keys so much better.

Thanks for reading!

SSH Start to Finish Architecture – X11 Forwarding

I was reviewing previous posts, and realized I haven’t really covered this aspect of forwarding, yet. I would be remiss to leave it out.

X11 is a client/server protocol, which means you can run software on one machine and display its graphical output on another. It also has some inherent security risks, so a common way to mitigate some of those risks is to let SSH forward the X11 client connection to your local X11 server when you start a client on a remote system.

It seems backwards for some people, but you run the server on your workstation, and you run the remote graphical command as a client that calls back to your server. The server generates the graphics on behalf of the client. If you’re running a workstation with Linux, a BSD derivative, or something like one of the OpenSolaris forks, you are likely already running X11 for your desktop needs. We will make the assumption that you are for this process.

In order to do X11 forwarding, the remote server needs to be configured to allow it. The settings that matter are: "X11Forwarding yes" to turn on the forwarding, "X11DisplayOffset 10" (default) to determine the offset for the display number that will be used, "X11UseLocalhost yes" (default) to tell sshd to bind the forwarding server to the loopback device, and "XAuthLocation /usr/X11R6/bin/xauth" (default) if you need to provide a path to the xauth program because it's not in the default location on your system.

It may be that the only setting you need to adjust is “X11Forwarding” from “no” to “yes” on your target system.
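A minimal server-side stanza, with the defaults written out explicitly, might look like this (the xauth path varies by OS; on most Linux systems it is /usr/bin/xauth):

```
# sshd_config – X11 forwarding with the defaults spelled out
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
XAuthLocation /usr/X11R6/bin/xauth
```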

Once you’ve done this, you can make your connection to the target system by passing the -X or -Y flag to the ssh client. The -X flag requests untrusted forwarding (ForwardX11), which places higher restrictions on how the forwarded connection can be used, and by default sets a 20 minute expiration timer on the xauth token. The -Y flag requests trusted forwarding (ForwardX11Trusted), which does not set these restrictions. It’s up to you to decide which flag you want to use.

After you connect, you can check that your environment is set up appropriately. You should have a “.Xauthority” file in your home directory, and you should have an environment variable already set called ${DISPLAY} that should probably show “localhost:10.0” when you echo it out.
ls -ld .Xauthority
echo ${DISPLAY}

After you’ve confirmed these, you can test your forwarding with something simple, such as xeyes, or xclock, if either of those are installed on the target machine. If they are not, go ahead and try whatever X11 program you intended to run. You should see the program show up on your desktop once you do.

Finally, if you have need of running an X11 program as a different user, you can use the xauth command to merge your .Xauthority tokens with the other user’s environment and then switch to that user to run your command. You will need to extract your xauth token for your DISPLAY, and merge it for the other user’s environment. The standard way to do this is with “xauth extract” piped to “xauth merge” as shown in the full session example below.

ssh -Y User_A@Server_B
ls -ld .Xauthority
echo ${DISPLAY}
xauth extract - ${DISPLAY} | sudo -i -u User_B xauth merge -
# OR: xauth extract - ${DISPLAY} | sudo su - User_B -c 'xauth merge -'
sudo -i -u User_B   # (or: sudo su - User_B)
echo ${DISPLAY}     # (may need to set this manually to what you were given at login)

The client configuration has several settings to always or never turn this on for you, plus a timeout for the untrusted tokens. These should probably be set in Match or Host blocks for just the servers you need to run X programs on regularly, and not set at all otherwise.

ForwardX11 yes/no
ForwardX11Trusted yes/no
ForwardX11Timeout 20m

The time format for "ForwardX11Timeout" is a number followed by a modifier unit: "S" or "s" for seconds, "M" or "m" for minutes, and so on, all the way up to weeks. No unit means seconds by default.

When you are done, you can delete your tokens manually with "xauth remove ${DISPLAY}," if you so desire.

Hopefully this helped shed some light on how to get X11 forwarding working, from basic to more complex scenarios. This is one of the most commonly asked questions I’ve had in the past, and I’m sorry it wasn’t covered sooner.

If you have any questions on this, leave a comment. Thanks for reading!