SSH Start to Finish Architecture – very broad overview discussion of gpg-agent

Thank you to everyone who sent get well notes.  I appreciate it.  I’m doing slightly better.  Since I’m still not 100%, I haven’t gotten into the deeper details of how the Yubikey 4 will work (as in, compiling software to use it, configuring it to be used, and then explaining step by step how to use it.)

I have, however, learned enough to give a broad overview of the expected behavior, so we’ll start with that.

Earlier in this series, we talked about loading our private ssh key using ssh-add.  This required another service to also be running: ssh-agent.  The same is still true of ssh-add, but we need a different agent when using OpenPGP keys as an authentication mechanism.  For GnuPG, that tool is gpg-agent.

The general gist is that GnuPG needs to be configured to support SSH keys in its configuration file.  Once this is done, we can switch to using gpg-agent instead of ssh-agent.  This agent is capable of loading standard SSH private keys as normal, but it also allows for presenting an OpenPGP key as an authentication mechanism for SSH private key logins.
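A minimal sketch of that configuration (file locations are the common defaults, and the socket lookup works on GnuPG 2.1 and newer, so adjust for your version):

```
# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# In your shell startup file, point SSH at gpg-agent's socket
# instead of ssh-agent's (GnuPG 2.1+):
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
```

With that in place, ssh-add loads standard SSH private keys into gpg-agent just as it did with ssh-agent.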

One thing we need to remember about OpenPGP is that it handles multiple keys that have multiple uses.  One use is “signing” which we might use for say… email.  One use is “encrypting” which we again might use for say… email.  When we load a key with gpg-agent, we want to make sure that the signing and encrypting capabilities of the key being presented are turned OFF.  Instead, we want only the “authentication” capability to be turned ON.  We are, after all, using this to authenticate to a server, not using it to encrypt or sign static files.
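For illustration, GnuPG’s expert mode is what exposes those capability toggles at key creation time (the prompts differ a bit between GnuPG versions, and older releases use --gen-key instead):

```
# Choose "RSA (set your own capabilities)" at the key type prompt,
# then toggle Sign and Encrypt OFF and Authenticate ON:
gpg --expert --full-generate-key

# Verify afterward: the usage field for the key should show
# only "a" (authenticate), not "s" (sign) or "e" (encrypt):
gpg --list-secret-keys --with-colons
```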

There are also two ways to handle the private key in general.  It can be generated “on the device” or loaded “to the device” after being created “off the device.”  The desired solution is to generate this “on the device” so that the private key never touches a hard drive where it can be retrieved via forensic tools.  This does, however, tie the key to that specific device, so if the physical key is lost, the keyring is lost, and the web of trust for that would be difficult to re-build.  Except we’re talking about using this device for nothing more than authentication.  We should not ever actually use the OpenPGP keys on this device for signing or encrypting emails.  It has no reason to be in the OpenPGP concept of web of trust.  It should ONLY be used for authenticating ssh connections.

The other argument is that you can generate a sub-key from your primary OpenPGP key, the one everyone knows you by in the web of trust, assign this sub-key the authentication role, then upload it to the Yubikey device.  If the device is lost, the sub-key can be revoked, and a new key generated to go onto the new device that would surely be purchased to replace it.

My thoughts are… go with whatever you are more comfortable with.  I personally feel that it is better to generate the private key on the device and just not include it in the web of trust, since its sole purpose is authentication.  However, if you handle the key ring like a pro because you use OpenPGP for email correspondence on the regular, and you’re more comfortable using your single OpenPGP keyring for everything, by all means, go ahead and generate the sub-key and upload it to the device.  You’ve already got a feel for handling your keys in a sanitary environment, if you’ve been doing that a while, right?

While OpenPGP keys for authentication were my primary reason for purchasing and testing the Yubikey 4, there are other capabilities that may also tie in for a more robust secure login regime.

The server can be configured to accept Yubikey one-time password (OTP) logins, if there is a PAM module (or BSD Auth module) available for your OS.  Linux and, I believe, FreeBSD both have PAM modules.  OpenBSD has a BSD Auth function, but it’s local authentication, only.  This means it doesn’t report to a centralized server when the OTP is used, and therefore it doesn’t keep things synchronized across multiple environments.
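For the Linux PAM case, the usual route is Yubico’s pam_yubico module.  A rough sketch, with placeholder values (the client id, the mapping file path, and which /etc/pam.d file you edit all depend on your setup):

```
# /etc/pam.d/sshd (fragment): require a Yubikey OTP.
# "id" is your Yubico API client id; "authfile" maps users to keys.
auth required pam_yubico.so id=12345 authfile=/etc/yubikey_mappings

# /etc/yubikey_mappings format, one user per line:
#   someuser:ccccccbchvth
```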

The device also can be configured to send a static password much like a simple two line USB Rubber Ducky payload.  You can configure this to be a “password” or you can put a pass-phrase on it.  If you do this, Yubico recommends that you only store part of the password or passphrase on the device.  Type part of it manually, and then press the button for the correct duration to trigger the rest of the passphrase to be sent via the Yubikey.

There is also reference to a PIV SmartCard capability, which seems to be an OpenSC implementation that may also work for SSH authentication using PKCS#11 instead of OpenPGP standards.  I will make an attempt to configure each of these and demonstrate both before this series is finished.  Of course, I retain the right to state I may be confused, and the PIV SmartCard and OpenPGP SmartCard functions may be the same thing on this device.  I’ll know for sure when I dig deeper and try both.

Fun-Day Friday – Still sick

So some of you already know that I got sick.  We checked my temps right after I finished up yesterday’s email content for the mail subscribers, and I was running 101.1F, apparently.

The best part about having a large family is the puppy pile of kids you get when you complain about feeling cold.

And I’m going to cut this post short, before I get completely incoherent with it, since I’m still running a low-grade fever as I type, and I’m a touch light headed.  The lemonade helps.

The Lab – Gear Check – Unbricking the Bricked BeagleBone Black Wireless

Unbricking the bricked BeagleBone Black Wireless was mostly painless.  I needed a power source.  I chose to use the USB/microUSB cable that came with it for communicating over the HOST USB port.  This is the port you log in through if you are using the stock Debian install.

I also needed the USB TTL serial cable, so that I could watch the console for the boot/reboot process.  This wasn’t absolutely needed, but it was very useful.  I highly recommend that you use one if you need to do this procedure yourself.  I used “cu” to connect to the console like this:

cu -l ttyUSB0 -s 115200

The first step was to download the correct recovery image.  I tracked it down based on the board I had on hand: I started at the troubleshooting page and worked my way to the latest images link to grab the image I needed.

Once I downloaded the .img.xz file, I ran unxz to unpack it, then copied it to the microSD card via the SD card adapter:

# unpack the compressed image
unxz bone-debian-8.6-lxqt-4gb-armhf-2016-11-06-4gb.img.xz
# write it to the microSD card (double-check the output device first)
sudo dd if=./bone-debian-8.6-lxqt-4gb-armhf-2016-11-06-4gb.img of=/dev/mmcblk0
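If you’re unsure which device node is the card (it won’t always be /dev/mmcblk0), it’s worth confirming before running dd, with something like:

```
lsblk            # list block devices and their sizes
dmesg | tail     # or watch the kernel log right after inserting the card
```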

Once this was done, I put the microSD card into the BeagleBone Black Wireless, hooked up the TTL serial cable, connected to it with cu, and plugged in the other USB cable to power it on.  I had already booted the device while pressing the button that tells it to boot from microSD instead of eMMC, but if you are in this pickle and haven’t done that, make sure you do so now.

Over the console, I watched it boot until it gave a login prompt, and then I logged in as root (no password.)  Then I checked the flashing the eMMC page to get the instructions on what file to modify, and uncommented this line in the /boot/uEnv.txt file:
cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh

A reboot from there, and the console took a while to flash the eMMC, but once it was done, everything was working again.  I’ll do another write up on getting OpenBSD to work on either the wireless or the RevC in a later post.

A case of the Mondays

Once again, the SSH stuff is delayed.  What I’ve figured out so far is this:

  1. The Yubikey 4 is too new for the default installed package of GnuPG and underlying libraries that talk to it as an OpenPGP card.
  2. I have tried the default installation on Debian 7, Debian 8, Linux Mint 17, and several others.  I need to compile stuff from source to try to make this work.
  3. I have limited time at the moment due to some major in-house projects (honey-do list) over the weekends, and so I will have to try to get the custom compile taken care of a little bit here and a little bit there.

On top of this, one of my bonus children has had issues with the SD card in her handheld game.  I tried to donate one of my microSD cards to replace it, but it wasn’t working.  I may need to get a friend that has done work on NES type games in the past to take a look at the card slot.

In the process of trying to get the donation card formatted, I think I bricked the new Beaglebone Black Wireless.  I need to go through the unbricking process this week.  Yay me.

I will try to document what happens, but in light of all of the frustrations from this weekend, I have to apologize for the lack of an actual technical post for today.  I will try to post short updates on Twitter throughout the week to keep people in the loop on progress, and hopefully I’ll have things ready to pick back up with the Yubikey SSH client keys discussion NEXT Monday.

Fun-Day Friday – Food Forest card game

This week’s family game time discussion revolves around a set of playing cards that were initially sold via a Kickstarter campaign.  The theme of the cards (and the game) helps to teach Permaculture principles, especially tying elements together to form a system.  The decks I received each have 66 cards that are already defined elements with pictures, descriptions, and clear markings for how that element may interact with other element cards.  Each deck also contains two “blank” cards so that you have the option to fill in your favorite element, (plant, animal, structure, etc) if it’s not already in the deck.  Finally, there were four non-playable cards.  One is a brief introduction to the deck with a description for basic card interaction.  One card is a “Key” card that explains the symbolism on each card for quick look up.  The other two cards explain various games that can be played with the cards.  Some of the playable cards are “disaster” cards, with a description of how to handle the disaster if one is drawn.

I bought three decks, initially, so I’m not sure what the packaging is like if you order a deck today, but the only complaint I have about these cards is the packaging.  The box is structurally okay, but the flaps were held in place by little “dot” stickers that didn’t hold their sticky.  This means the flaps come open fairly easily at either end of the box.  I played a game of 72 card pick up when I picked up the first deck from the shipping package, because the flap came open and the cards fell out so easily.

The games that are suggested tend to run along the line of matching cards based on their inputs/outputs.  I also like the way that you can pull some pairs and set up games of “memory” for younger players.  The cards lend themselves to a variety of game types, if you’re tired of the basic games suggested by the creators.

Similarly to the Wildcraft! board game, these cards are educational, the art is well designed, and the games that can be played are fun.

If you’re interested in these, you can pick up your own deck at the Food Forest Card Game website.  The site has dates of “2016” on it, so if you want to be sure that things are up to date, try their FaceBook Page or Karl Treen (one of the creators) on Twitter.

General Rant – If you work for a software vendor read this

Today’s Lab Gear post has been delayed to next week.  Hopefully you stick around for this, but I understand if this post turns a lot of people off.  Still, I feel strongly enough about this that it needs to be shared.

I have an hour commute to work every day.  That means I also have an hour commute home every day.  This extra two hours out of my day is often spent chewing on how to deal with work tasks for the week, or what to share on this blog for the week.  Today’s drive home was spent fuming, instead.

I will not name the vendor, but we have a product we’ve been wrangled into purchasing due to certain certification requirements we are required to maintain.  As such, we went through a small array of vendors for this kind of product, and settled on “The One That May Not Be Named.”  All was well and good for the on site product demonstration.  We bought the product, and then made arrangements to have professional services on site for installation.

The problem is there were two products purchased.  We got two weeks of services.  One week was used for the installation of each product.  Normally this would not be a problem.  In fact, it’s pretty standard practice in the industry to take about that long to get things stood up, configured, and turned over for production.  In this case, it was handled poorly.

Over the five day period that the vendor representative was on site, at least four of those were used to do what SHOULD have been done prior to engaging us in person.  Architecting the solution.

I think I’ve mentioned this before, but it might have only been in the bonus content email my subscribers get.  I’ll say it again, just in case.

My job title at a previous company was Unix Systems Engineer/Architect/Admin.  Those are three distinct roles, and most people in this field will wear all three hats.  An “Admin” deals with day to day mundane configuration tasks, such as user provisioning (add/remove/modify) and configuration file changes to services such as SSH.  An “Engineer” deals with break/fix scenarios, troubleshooting, and engaging vendor provided support when necessary.  An “Architect” deals with vendors, end users, and other teams to design and provide a solution that meets the end users’ goals.

This particular company, (The Vendor,) dodged almost every attempt at architecting the solution before coming on site.  The end result was that by the end of the 5 day on site professional services visit (for BOTH products… happened twice) we ran out of time to finish configuration.  Things weren’t working 100% when the services people got on the plane.  Emails to keep them engaged were less than helpful, and a lot of our questions were left unanswered.

Then we ran into a lot of technical issues.  Again, these issues should have been identified during an ARCHITECTURE pre-on-site-meeting.  This project has stagnated several times, sometimes due to internal company politics at a level or three above my head.  This means every time I get the green light to go back to installation, I have to refresh myself on what all has been done.

The Vendor has been on site a few times to try to help make things right with us.  They see our frustration (as a customer) and I feel that they honestly want to help, but today I just got a bad taste in my mouth about the whole deal.  One person responded to an email I sent regarding one more technical road block with “should we get you on the line to purchase more professional services?”

I was as polite as I could be in my response, though I fear I still came across a bit caustic.

Part of my frustration is this whole deal feels almost like “The Dale.”  If you have Netflix streaming, check out the “White Rabbit Project” season 1, episode 4.  In it, they talk about a car called “The Dale.”  It was the most hyped car the year it was supposed to be released.  Only three were produced in the end, and none of them really worked.  While this software doesn’t quite meet that bill, the way it was marketed certainly does.  Getting it installed using the custom script that the vendor provided works well, when it works, so I can’t do a direct comparison.  When it doesn’t work, though… digging into the guts to figure out why quite literally sucks.  It sucks my soul and my give a damn, and I’m running really short on both these days.

I have until the end of this month to finish rolling this thing out.  I’ll keep rolling with it, and I’ll keep engaging them as best I can, but the damage is done.  I’m less likely to tell people how “great” this vendor is if asked, simply because of how the services for standing this thing up were handled.  It’s hard to make things right when there’s a missing limb spurting blood in the mix.

If you work for a software vendor, please, please, PLEASE make sure you review how you engage your customers.  Learn about their environment.  Let them explain to you how things are done “today” so that you can understand how to help them use your product to make those things easier, more efficient, and better in general.  Don’t take advantage of them by sending “professional services” to architect the solution when those services really should be focused on just installing/configuring in general.  Don’t be “The Vendor.”

General Rant signing off.

Sudo Policy Fixes and Fails – Importance of auditing your sudoers policy files.

Wait, what?  Yes.  That is correct.  This week is skipping the SSH series to begin discussions on Sudo policy review.  We will return to the SSH series soon, but I really, really, really want the next post to cover the use of token devices (using the Yubikey 4 for the first discussion on this,) and I’m simply not prepared for it.  I’ve run into several road blocks on getting my new key programmed, all of which will be covered when I write that post.

Since I don’t want to skip a day of content, I decided to introduce the next major topic I’ll go into after wrapping up the SSH series.

Sudo isn’t the only privilege elevation policy engine available.  It isn’t even the only open source one.  It is, however, one of the most popular, and it is powerful in its own right.  It is also one of the most easily misconfigured tools, and that can (and does) lead to very dangerous policy.

For our first foray into what does and does not constitute “Bad Sudo,” we will look at one of the most overlooked utilities provided by the sudo package: sudoedit.

Before we discuss “sudoedit,” we should look at the problem of granting elevated privileges to editors in general.

Let’s start with the classic “edit a file with vi” entry.

Cmnd_Alias EDITFILE = /usr/bin/vi /path/to/file
someuser ALL = (root) EDITFILE

The user “someuser” now has permission to call /usr/bin/vi to edit the file /path/to/file as root.  When this happens, sudo calls the /usr/bin/vi command as the root user.  Why is this dangerous?  The “vi” editor allows for calling out to the shell to execute commands.  You can do this by hitting the escape key, then “:!/path/to/bad/command.”
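You can see the problem in about five seconds with the entry above in place:

```
$ sudo vi /path/to/file
# inside vi, press Escape and type:
:!id
# the command reports uid=0(root), because vi is running as root,
# and so is anything vi spawns -- including a full shell via :!/bin/sh
```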

There is a tag that is often used to try to suppress this command from being able to do harm.  The NOEXEC tag can often be seen in configurations where this was attempted.

Cmnd_Alias EDITFILE = /usr/bin/vi /path/to/file
someuser ALL = (root) NOEXEC: EDITFILE

Unfortunately, this is a bandaid that may or may not work as intended.  It relies on intercepting a program that was built with dynamically linked libraries, and it relies on the sudo command itself being compiled with NOEXEC support built in.  If the command in question were “ed” instead of “vi” (or, on older systems, “/bin/vi” instead of “/usr/bin/vi”, since “/bin/vi” was often statically compiled so that it could be used to help repair a bad /etc/fstab for boot issues when /usr wasn’t mounted properly), the tag would be useless.  It would do nothing to prevent calling out to the shell.

Now let’s look at the correct way to handle this.

Cmnd_Alias EDITFILE = sudoedit /path/to/file
someuser ALL = (root) EDITFILE

As you can see, instead of calling “/usr/bin/vi” we call “sudoedit” to modify that file.  What does this do?  It’s simple.  The user runs “sudoedit /path/to/file” instead of “sudo vi /path/to/file.”  The sudoedit command obtains elevated privilege to access the file, makes a copy of it to a temp file, then opens an editor for the user AS THAT USER.  In our example, this means the editor is launched as if “someuser” had called it himself.  The user can call out to the shell for command execution all day, but those commands will run with no more privilege than the user already had.
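The flow looks roughly like this (the temp file name is made up for illustration):

```
$ sudoedit /path/to/file
# 1. sudo, as root, copies /path/to/file to a temp file such as
#    /var/tmp/fileXXXXXX, owned by the invoking user
# 2. the editor opens that temp copy as the invoking user, not root
# 3. on exit, sudo copies the temp file back over /path/to/file,
#    but only if it was actually modified
```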

How does sudoedit know which editor to use?  From the man pages:

The editor specified by the policy is run to edit the temporary files.  The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR environment variables (in that order).  If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor sudoers(5) option is used.

So we need to set a policy allowing a handful of editors for our users, and then clean up any rules that grant access directly to editors.  In order to do this, we set a Defaults directive listing the permissible editors.

Defaults editor="/usr/bin/vi:/usr/bin/emacs:/bin/ed:/usr/bin/nano"
Cmnd_Alias EDITFILE = sudoedit /path/to/file
someuser ALL = (root) EDITFILE

This lets users pick vi, emacs, ed, or nano, depending on preference.  There is also an “env_editor” option, but if it is set, users can name any program they like as their editor.  That could be something that isn’t actually an editor, so the safer approach is to stick to an explicit list of approved editors in the “editor” option, as shown above.

I will cover some more “gotchas” like this eventually, but I’m hoping I get all of the kinks worked out of my Yubikey adventure before next week, so I can get back to the SSH posts, first.

I hope this was useful to some of you!

Fun-Day Friday – The Classic Board Game (Checkers)

I’m sure everyone reading this has played Checkers at least once or twice.  It’s a classic game of strategy that doesn’t require as much mental stress as games like Chess.  We’re going to cover it today (and this will be a short post) because a Checker set is a very versatile way to pass the time on a rainy day.

The classic game arranges twelve pieces for each color (red vs. black) at opposite sides of the 8×8 board, all on the same color of square (red or black) so that pieces are diagonal.  Each player moves diagonally across the board with the option to jump (and remove) the opponents checkers if the space behind that checker is empty.  If you get to the opposite side, that checker gets “kinged” and can now move backward as well as forward around the board.

The next most commonly recognized game with this board is “give away.”  This is played exactly like the classic game, except that the object is to be the first player to lose all of the pieces.  Also, if a jump can be made, it MUST be made, so a player can set up the opponent to force them to take pieces.

A less common game is sometimes called “fox and hounds,” but the locals called it “fox and geese” when I was growing up, even though that’s technically a different kind of board layout, and played slightly differently.  One player places four of the same color checker on the row closest to himself (again, all the same color, and moves are diagonal.)  The other player places the other colored checker anywhere on the row closest to herself.  The four are the “geese” (or “hounds”) and the one is the “fox.”  There is no jumping allowed, and pieces are not removed.  The object for the fox player is to get to the other side.  The object for the geese is to corner the fox until it can no longer move, thus preventing it from reaching the other side.  The fox moves like a king, any direction diagonally.  The geese move like normal checkers, one direction down the board diagonally.  This game is not balanced in the favor of the fox.  A perfect game can be played if you know the correct pattern for the geese in chasing the fox.  Since this is not a “fair” game, I prefer not to play this often unless I’m playing the fox, to let the other side learn how to develop the strategy (what patterns of movement make a perfect game?)  I won’t put the solution for the geese in this post, but if someone strongly wishes to see it, I don’t mind sharing privately.  You can leave a comment, and I’ll respond to the email address provided with the solution if asked.

Other names for this game include the word “Draughts.”  Board sizes and rules vary from country to country, but the general idea is mostly the same.  And there are other games that can be played using the standard 8×8 board.  You can even easily develop new games and challenge friends and family, such as “treat a piece like a knight from Chess and, without hitting the same square twice, hit every square on the board.”  Or perhaps, “treat each piece as if it can move like a queen from Chess, and place as many on the board as you can without being captured by any other piece.”  It is possible to threaten every square on the board eventually, but it takes some thought.  Set aside a 3×3 section and play tic-tac-toe.  This last game is trivial to force a tie.  Or play “connect four” by placing your pieces along one side of the board, building out from there (as if they were dropped from the opposite side.)  Add your own house rules to the basic classic game.  Just learn to think outside the box and make things more challenging over time, and you can find enough different variations on this basic game to keep your mind busy while enjoying time with friends or family.

Sorry this was short and sweet, but this has been a rough week.  I will (again) try to get the recording done over the weekend for the SSH CA thing that I keep promising.  Hopefully I don’t get overwhelmed with honey-do tasks, and can get caught up.

Thanks for reading, and I hope I have inspired some of you to try new things with an old game.

The Lab – Gear Check – New Arrival (another Bone)

Last week, I obtained a new BeagleBone Black in the mail.  This is the newest revision of the device, and it replaces a few components for newer ones.  This is the BeagleBone Black Wireless.

Instead of the RJ45 ethernet jack, it has on-board 802.11 wifi.  Instead of the miniUSB, it has microUSB for the Host USB connection (the one that you plug in to get ethernet over USB).

It also comes with a newer version of Debian.  Instead of Wheezy (7), you get Jessie (8).  This means it comes with the dreaded systemd software, but that does give me one box to bang around on with that monster installed.

Beyond that, this machine is much like the last, and as long as you can find a place to orient the antennas, you should be gold.

The price is higher, but the on board wifi might be worth it.  I certainly felt it was worth the purchase to try.  So far I haven’t been disappointed.

The same serial cable works for this board as for the Rev C board, so if you need one, use the link from the previous article.

The new board was available as a kit with case, microUSB cable (for the Host USB connection,) and pre-installed antennas for the wifi, plus a power brick (same as the old board) from the same folks that provided the last kit I listed.  Here’s the link for the new one.

I will likely do a demonstration of using the serial connection to install OpenBSD onto a microSD card for this machine at some point, assuming the wifi works with this board.  I want to play with it some to be sure before I commit to that, though.  If not, I’ll likely at least demonstrate on the old board, where I know it works.

Thanks for reading!