Pi-Top Kali – The Idea

As recently mentioned, I’ve been working on a few projects of late.  In preparation for an OpenSSH based class I might offer, I found myself wanting to offer a shorter class on OpenBSD’s VMM/VMD virtual machine hypervisor system.  While researching VMM/VMD, one of my tests involved booting a Linux live disk, and I chose Kali for this.  Getting it to boot wasn’t straightforward, due to the lack of a graphical KVM style console.  The VMM/VMD hypervisor uses serial connections to the guest operating systems, so I had to find all of the bells and whistles to pass to the Kali boot loader to make it boot to a usable login prompt.

Secondary to the above, and partially why Kali was chosen, is the fact that my GCIH is half way through its lifetime.  It’s a 4 year certification, and I’ve had it for 2 years now.  I got a reminder that renewal is coming up, and I began researching what’s involved in maintaining the certification through the renewal process.  One option is to take another SANS course and earn a new certification from it.  While I would love to do this at some point, their courses are very expensive.  I also have an itch to try a different certification provider, and one of those stands out above the rest to me.  I’ve decided I will likely go for the PWK (Penetration Testing with Kali) class from Offensive Security, and take the OSCP (Offensive Security Certified Professional) exam.  This certification has “teeth” in that you don’t memorize a question/answer pool in order to answer a bunch of questions that are similar, but not exactly the same.  Instead, they give you about 48 hours (2 days) for the total exam.  The first day is an actual penetration test of a 5 machine environment, and the second day gives you time to do a professional quality write up/report of the pentest, as if you were presenting it to a client.  The cost is within reason, and my family supports me in this endeavor.  To that end, Kali is on my radar as a “use this frequently” system this year.

I have several options for running Kali moving forward, and I will cover many of them as I go on this journey.  I will eventually go over running it in virtual machine environments, including VMware Workstation, Oracle VirtualBox, a Proxmox guest, and of course, through the serial console as an OpenBSD VMM/VMD guest.  I may or may not get around to covering running it as a live bootable USB stick, or as a physical install to a typical x86_64 laptop.  All of these are things I’m looking at, but the first thing I’ll cover is installation and use on a Raspberry Pi.

I’ve made a few attempts at using Kali on a Raspberry Pi before.  I had trouble getting the TFT displays working satisfactorily, and I benched those projects due to the level of hassle and my own time constraints.  I knew that when I circled back around to this idea, I’d want a bigger screen than either of those TFT displays offered.  I want the device to be portable enough that I can take it almost anywhere and set up shop, but I need a display that gives me enough work space to actually … work.

The smallest displays I was willing to consider were the available 7 inch ones, but my wife has a 7 inch tablet, and it’s only a little larger than a modern day smart phone.  My latest failed Kali attempt was on my own tablet, where Kali NetHunter never seemed to install properly no matter how many times I went through the process.  I do like that tablet’s screen size, though, and there are a few 10 inch displays available.  I had almost settled on a device that used one of these when I discovered that there is actually a kit that turns a Raspberry Pi into a laptop form factor.

The two versions of the kit available on Adafruit are the first iteration of this product.  One is green, and one is grey, but the kit itself is otherwise the same.  The project site has an updated “pi-top 2” design, which moves the trackpad down below the keyboard and makes room for a full size keyboard, which works better for me.  I never liked trackpads in general, because I tend to brush them while I’m typing, but I’m sure I’ll work around this limitation somehow.  This case only comes in green, with no grey option available.  I would prefer grey, but I can live with the green case as long as it is as functional as I hope it will be.

After all of the research I’ve done, I have decided on the pi-top as my next Kali attempt.  I’ve made the purchase for the pi-top 2 style case, and will cover the experience of how the order/tracking went, unboxing, setting it up, running the pi-top polarisOS that comes with it, and getting Kali installed and running on the new machine.

The order arrived today, but the write up for that will be next week.

Back from the ether – sort of

UnixSecLab fell off without warning or explanation last quarter.  There were several factors involved in this sudden disappearance, but I won’t list them all.  Some were family related, and some were “hey everyone, I have a cool new project in the works, and I want to announce it to the world in a big way when it’s done” related.  So here’s the skinny on what was relevant last quarter that I didn’t report.

  • The first big project I started was for a class on OpenSSH.  I’m working on breaking down the man pages, re-organizing them into related / relevant sections, and writing up a presentation on each section to go into deep detail on even the most esoteric settings, plus discuss security implications of some of the potentially dangerous ones.  This has been bouncing around in my head for a while, and is part of why one of my first organized series of posts was SSH Start to Finish Architecture.  This project tapered off over the quarter due to the above mentioned family issues, and the inspiration for a smaller product offering…
  • The smaller second project was to develop a class on OpenBSD’s virtual machine hypervisor.  The VMM/VMD class idea was due to how new this software is to the OpenBSD ecosystem, and the lack of documentation on its use and setup outside of (the excellent) man pages.  The man pages do make it seem straightforward to use, but one of my first hurdles was getting an off the cuff live Linux CD running.  I chose Kali (since I’m also doing security related stuff on the side unrelated to the OpenSSH class I intend to use this for.)  The first hurdles involved figuring out how to make Kali boot to a root prompt in multi-user mode without getting hung up on trying to load the graphics.  It’s not a VMM/VMD issue, it’s a Linux boot options issue I had to research.
  • Since I started both of those (still in progress) projects, Michael W. Lucas has put a brand new edition of SSH Mastery into sponsorship, and I’ve learned that there may be another author working on a book about the OpenBSD hypervisor software.  This author’s Twitter is @pb_double.
  • Hak5 announced a new device that I will want to cover a bit near the end of the year, as well.  The Packet Squirrel is a nifty hardware man-in-the-middle device that has a switch similar to the Bash Bunny so that you can set it to different modes on the fly without having to reprogram it every time you want to use it.  It comes with three pre-programmed modes, including a raw tcpdump mode, an OpenVPN mode, and a DNS spoof mode.  Some setup is required for the last two.
  • I got a notice that my GCIH certification will expire in two years.  I knew this already, but it reminded me that I need to get some continuing education credits, and possibly look for a new certification, as well.  The SANS institute’s on demand classes are a steep price for an individual, and while obtaining a new SANS/GIAC cert would meet all of the requirements to renew the GCIH, I’m looking at other options.  One of those is the Offensive Security Certified Professional.  This is the cert for their PWK class (Penetration Testing with Kali.)  From what I’ve been reading, it’s a rigorous class with a lab full of 50+ target machines, and the certification exam is a 5 machine live pentest.  A little under 24 hours are spent testing these machines, and then another 24 hours are given to finish and submit a report on the findings.  I’m strongly leaning in this direction.
  • Since I’m leaning that direction, I need to brush up on my offensive skills a bit.  I found an article that covers a bit on how to prepare for the OSCP.  It has some suggested links to online capture the flag sites, as well as some general advice and resources on brushing up.  The last two days I’ve done two full CTF machines from Over The Wire, and it was a lot of fun.  I completed Bandit and Leviathan.
  • Another new find (for me) is an online security training site that doesn’t cost anything for the classes.  Cybrary.it has a lot of good content, from what I can gather thus far, and it’s worth a look if you’re on a budget and trying to get a foothold into this space.

The posts will still be a little sporadic for a bit, but we’re back, and we’re going to catch up on some lost work.  I’ll share some tidbits of things I’ve learned while doing the CTFs (without doing any walkthroughs or mentioning any specific machines) as well as try to wrap up some of the dangling series posts from last year.

Happy New Year (2018) and thanks for sticking with us during the information drought!

Book Review – Networking for Systems Administrators chapter 8

There are 5 chapters left, including this one, and I would like to finish my chapter by chapter review before next year, which means doing more than one of these in at least one month.  I’ll probably do two a month to accelerate the process.  I apologize for the interruption to the current series, but we’ll return to it next week.

Chapter 8 is about DNS, or the Domain Name System.  As Michael W. Lucas states, there are books much larger than this one that cover this single topic alone, so this chapter is a very brief overview into the fundamentals of viewing it from a troubleshooting perspective.

The service runs on port 53, both UDP and TCP.  Many organizations only allow port 53 UDP traffic, but TCP is required for larger requests.  The chapter discusses how DNS servers keep a mapping of name and IP address relationships for translating requests between the two.

The name mappings are defined within zones.  Each layer of an address (read right to left) represents another zone.  For example, the “.net,” “.com,” and “.org” endings we often use are top level zones.  The book’s examples include “google.com” and “michaelwlucas.com” as child zones of the top level “.com” zone.  Any zone inside another zone is a child zone.  So a name like a1.www.mysite.noip makes www.mysite.noip a child zone of mysite.noip, which is in turn a child zone of .noip.

DNS servers are either authoritative or recursive.  Authoritative nameservers contain the information for specific domains.  Recursive nameservers provide DNS lookups for clients.  These servers find the appropriate authoritative server, query it, and return the result to the client.

Ideally, authoritative and recursive nameservers should be on separate machines.  This is for security reasons, as well as simplification of configuration.

Next, the author covers the DNS hierarchy, explaining how DNS is a distributed database, and how queries work their way up the chain until a server is capable of providing an authoritative response.  Then, he goes into forward and reverse lookups.  A forward lookup is the response given when querying what IP(s) belong to a name.  A reverse lookup is the response when querying what name belongs to an IP, also known as a PTR record.  The protocol allows for multiple PTR records for the same address, but in practice, this can break things.

The next section covers the different types of records that are relevant to most situations, such as A (name to IPv4,) AAAA (name to IPv6,) SOA (start of authority,) PTR, CNAME (canonical name… name to name alias mapping,) and MX (mail exchange) records.

A brief discussion of caching follows, which explains that changes can take time to propagate.  Then he covers why checking DNS is important.  If a server is responding with incorrect or even inconsistent information, it will likely cause issues with other troubleshooting steps.

He suggests using “nslookup” on Windows and “host” on Unix systems, but the “host” command may not be available.  He covers both of these tools in detail before briefly introducing the more advanced “dig” and “unbound-host” commands.  Finally he explains the “hosts” file for local name to IP mappings that may override responses from DNS, depending on how a system is configured.
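
To make the tool discussion concrete, here is roughly what those lookups look like (the names and addresses below are placeholders, not examples from the book):

host www.example.com        # forward lookup: name to IP
host 192.0.2.10             # reverse lookup: IP to name, via the PTR record
nslookup www.example.com    # the rough equivalent on Windows
dig www.example.com AAAA    # ask for a specific record type, IPv6 here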

I’ll wrap this review with a sentence from the book that leads to a footnote, followed by the footnote itself.

“A few failed DNS requests can drag some server software to a crawl or make it entirely fail.”

Footnote:
“Should software be written such that it handles DNS failures gracefully? Of course. And in that world, I have a pony. No, wait — a unicorn. No, better still — a ponycorn!”

Persistence through job control – RC scripts

For SystemV style systems, the next phase of the boot process after inittab is to kick off the rc scripts.   This is often one of the last entries in inittab, even.  The rc scripts on these systems typically begin with a script called “rc” that does some initial environmental setup, then it goes through and calls the different runlevel scripts based on which runlevel the system is booting into.

The rc scripts described here will be the same on both SystemV style systems, and “Upstart” init systems such as on Red Hat Enterprise Linux 6.  The “systemd” affliction does things differently, and we’ll cover it next week.

These runlevel rc scripts live in a structure that varies from system to system, but is often either directly under /etc or under /etc/rc.d as the parent directory.  The structure often looks like this:

/etc/rc.d/init.d
/etc/rc.d/rc#.d (where "#" is the runlevel number.)

The init.d directory contains the actual scripts that start, stop, restart, or show status of various services.

The rc#.d directories contain symbolic links which point to the scripts in the init.d directory.  The names of these links determine whether the service is started or stopped, and in what order.

For example, we might have a script called “httpd” that starts our web service.  We want this to be one of the last things started, and one of the first that gets stopped, so we might have a structure like this:

/etc/rc.d/init.d/httpd (the actual start/stop script)
/etc/rc.d/rc2.d/S99httpd (symbolic link to ../init.d/httpd)
/etc/rc.d/rc2.d/K01httpd (symbolic link to ../init.d/httpd)

The “S99httpd” link says to “S”tart it, and the high number puts it at a lower priority when starting services.  The “K01httpd” says to “K”ill it, (or stop it,) and the lower number gives it a higher priority when stopping services.  The standard rc script that parses these directories will use the name to figure out what order to do things in, and will pass either a “start” or a “stop” based on the “S” or “K” at the beginning of the name.  The capitalization of the “S” and “K” is important.  An easy way to disable one of these temporarily is to rename it with a lower case first character, keeping the rest of the name the same.  This way you know what order it SHOULD be started or stopped in, if and when you want to re-enable it.
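
As a concrete sketch, temporarily disabling and later re-enabling the httpd link from the example above might look like this:

mv /etc/rc.d/rc2.d/S99httpd /etc/rc.d/rc2.d/s99httpd    # lower case "s" means rc skips it
mv /etc/rc.d/rc2.d/s99httpd /etc/rc.d/rc2.d/S99httpd    # restore the name to re-enable it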

On complex runlevel systems, those numbers will vary depending on single or multi user mode, graphical environment mode, and so on.  On AIX, there are only two runlevels, so most things will be in rc2.d.  On BSD style systems, there is no actual concept of “runlevels” so much as there is a “local” rc file that has all of the settings inside of it, and the order is based on where they fall within the monolithic script.

In order to take advantage of the rc scripts for persistence, we would want to inject a call to our persistent shell within an existing script, or add one of our own.  Remember to put it after networking stuff is started up.
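
As a sketch of what such an injection might look like (the “sysstatd” name and the persist.sh path are invented for illustration, not a real service):

cat > /etc/rc.d/init.d/sysstatd <<'EOF'
#!/bin/sh
# masquerades as a stats daemon, actually backgrounds the persistent shell
/usr/local/bin/persist.sh &
EOF
chmod 755 /etc/rc.d/init.d/sysstatd
ln -s ../init.d/sysstatd /etc/rc.d/rc2.d/S98sysstatd    # S98 lands late, after networking is up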

When we suspect this has been done, the routine is similar to inittab inspection.  Review all of the rc scripts, including “rc” itself, and for every call made, check that the file exists, is executable, and only contains what you expect it to contain.  A comparison against a known clean system (such as a fresh install on another machine) is a fast way to check the common items.  Anything that exists on our suspect machine but not on the clean one, or any existing file that differs from what was originally delivered, is worth digging deeper into.  Use diff, sdiff, and the like to make fast work of this.
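
A minimal sketch of that comparison, assuming the clean system’s copy is mounted at /mnt/clean:

diff -r /etc/rc.d /mnt/clean/etc/rc.d    # flags files that are new or modified
sdiff /etc/rc.d/init.d/httpd /mnt/clean/etc/rc.d/init.d/httpd    # side by side view of one script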

Persistence through job control – inittab replacements

Last week we looked at a traditional “inittab,” using AIX’s inittab as an example.  This week, we’ll look at some of the “inittab” replacements that have popped up on various flavors of Linux.

As we mentioned last week, “Upstart” replaced “inittab” in Red Hat Enterprise Linux version 6, and “SystemD” unit files replaced it in RHEL7.

For Upstart, instead of a single “inittab” file, there is a directory called “/etc/init” that contains individual files, each controlling a single program to run.  There is no ordering specified by file name, and the configuration files don’t contain any ordering themselves, other than to say “start me after this other process.”  With inittab, the order is controlled by where an entry falls within the file, so you get less fine-grained control with Upstart, but you can at least be sure that a process that depends on another process already being up will wait.

An interesting note about Upstart is that it doesn’t just check “/etc/init.”  Job files can also live in “~/.init/,” which starts jobs for a user rather than for the system, and these jobs don’t run as children of PID 1.  This gives some flexibility in dropping persistence in less explored locations on the file system, which means more places to audit for these kinds of persistence scripts.

An Upstart init file contains directives that respond to “emitted events.”  The basic events are “starting,” “started,” “stopping,” and “stopped.”  You can also define a custom event and emit it manually.
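
A minimal Upstart job file might look something like this (the job name and script path are invented for illustration):

# /etc/init/persist.conf
description "example job tied to networking events"
start on started networking
stop on stopping networking
respawn
exec /usr/local/bin/persist.sh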

The program for sending events to the Upstart jobs is “initctl,” rather than “telinit.”  Instead of a “telinit q,” you would use “initctl” subcommands to start, stop, and check jobs, and “initctl emit” to fire events.
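
Using the hypothetical job file above, that looks like:

initctl start persist              # also: stop, restart, status
initctl emit some-custom-event     # fire a custom event for jobs that "start on" it
initctl reload-configuration       # the closest analog to "telinit q"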

For systemd, there is no inittab, either.  Everything is a Unit file, and lives in the same directory structure for both “init” type processes and “rc” type processes.  For that reason, we won’t go into systemd too much today, but we will say this much that’s relevant here:

In the [Service] section of the Unit file, include the ExecStart directive to call the process you want started, the Restart directive to say whether restarting should happen “always” or only “on-failure,” and a RestartSec directive to throttle the restart attempts so that it doesn’t respawn as fast as possible.
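
A minimal sketch of such a unit (the service name and script path are invented again):

# /etc/systemd/system/persist.service
[Unit]
Description=example persistent service

[Service]
ExecStart=/usr/local/bin/persist.sh
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target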

SystemD also includes a different command from “telinit” for controlling the Unit files.  The command “systemctl” will allow for starting, stopping, and status of the services controlled by Unit files.
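
For the hypothetical unit above, that looks like:

systemctl start persist.service      # start it now
systemctl enable persist.service     # create the symlinks that start it at boot
systemctl status persist.service     # check on it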

Digging through all of the Unit files for the Exec lines, and for whether and how the services are restarted, is the key to looking for potentially persistent shells dropped in this manner.  Remember, though, that just because a unit has no “Restart” directive doesn’t mean it isn’t calling a script that loops on itself, similar to other job types we will cover later.
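
A quick first pass over the unit files might look like this (the two paths cover the usual unit file locations on Red Hat style systems):

grep -r "^ExecStart" /etc/systemd/system /usr/lib/systemd/system
grep -r "^Restart" /etc/systemd/system /usr/lib/systemd/system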

Persistence through job control – inittab

The inittab file is often one of the first things loaded during the initialization process.  This file contains a list of processes that need to be run very early in the boot process, may need a “watchdog” to restart them on crash, and so on.  It also often contains the settings for defining and securing TTYs.  What we are concerned with is the “respawn” feature, since we want a place to drop a persistent process on the box.

Before we proceed, note that more modern varieties of Linux don’t use inittab, but have similar functions.  Red Hat Enterprise Linux (RHEL) version 6 has “upstart,” which replaces the inittab with a directory containing individual files per process to be started.  RHEL version 7 has “systemd,” which uses “unit” files to control just about everything.  We’ll cover those in next week’s article.  The “inittab” is a SystemV-ism, and OpenBSD doesn’t use it.  Instead of “inittab,” the startup process begins with the “rc” scripts, and the TTYs are defined and secured in “/etc/ttys” instead.  The reason for this is that inittab is tied to the concept of runlevels, and OpenBSD doesn’t really have those.

On AIX, the inittab is managed by a set of commands similar to many other base AIX commands (mk-, ch-, rm-, and ls- prefixed.)  The mkitab, chitab, rmitab, and lsitab commands create, change (modify,) remove, and list inittab entries (respectively.)  Let’s break down an inittab entry, using AIX’s inittab as a base line.

Each entry is colon “:” separated, similarly to /etc/passwd.  Lines are commented with a semi-colon “;” at the beginning of the line.  The first column is the label for the entry.  The second column is the run level that triggers the running of this command.  The third column is the “action” to take on the command.  This is where we define “run this one time at boot” vs. “run this again if it dies.”  The last column is “the command(s) to run.”  The reason I said “command(s)” is that this can be a chain of commands piped together, not just a singular command.
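
Putting those columns together, a representative entry looks like this (a getty line of this general shape appears on most SystemV style systems):

cons:2:respawn:/usr/sbin/getty /dev/console

That reads: label “cons,” runlevel 2, respawn it if it dies, and run “/usr/sbin/getty /dev/console.”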

To create an inittab entry (again, using AIX as an example,) we would do something similar to this:

mkitab "softaud:2:respawn:/usr/bin/audit start"

In the above example, we created an inittab entry labeled “softaud” that starts at runlevel 2, respawns on death, and calls the “/usr/bin/audit” command, passing it “start” as an argument.  Since “audit” lives at “/usr/sbin/audit” on AIX, this entry is suspect.  If we were to leave a persistent script named “audit” in /usr/bin and then add this inittab entry, our script would be respawned on death, and would likely be overlooked by the systems administrators reviewing the file.  If we wanted to inject this after the existing entry already labeled as “audit,” we could pass that mkitab command the “-i” flag followed by the label we want to append after.  Why “-i” for “append?”  You would have to ask IBM.

mkitab -i audit "softaud:2:respawn:/usr/bin/audit start"

The possible “actions” that can be passed to an inittab are listed in the man pages, but a few common ones are:
"once" - When the init command enters the run level specified for this record, start the process, do not wait for it to stop and when it does stop do not restart the process. If the system enters a new run level while the process is running, the process is not restarted.
"respawn" - If the process identified in this record does not exist, start the process. If the process currently exists, do nothing and continue scanning the /etc/inittab file.
"wait" - When the init command enters the run level specified for this record, start the process and wait for it to stop. While the init command is in the same run level, all subsequent reads of the /etc/inittab file ignore this object.

Some systems include commands for making changes to inittab (such as we see with AIX.)  Other systems require you to modify the file directly using your favorite editor (vi, for example.)

Whatever your system requires, after a change is made, it only takes effect if you tell the “init” process to re-read the inittab.  To do this, you need the “telinit” command, and pass it the “q” option.

telinit q

Once it has been re-read, it will happily respawn the persistent script that was injected. A thorough audit of the inittab would involve going through the list of all commands in the file, checking that those processes exist at the locations given, doing a file type check, checksum check, and when possible, a “contents” check to see that they are legitimate.
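
A rough sketch of the “do these commands exist and are they executable” pass might look like this (it assumes simple entries; piped command chains would need smarter parsing):

awk -F: '/^[^:#;]/ {print $4}' /etc/inittab |
while read -r cmd args; do
    [ -x "$cmd" ] || echo "suspect entry: $cmd"
done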

Persistence through job control – Introduction

The next few weeks will be a quick discussion of the different kinds of “job control” available, and how they can potentially be used for persistence post compromise.  This is the outside-the-box thinking that is needed when hunting for “persistence” issues if you believe your machine has been compromised.

Before we get to job control “proper,” though, I wanted to walk down the top level (from an OS perspective) options, and that means we need to look at what happens when you boot your server.  Typically there is a phase with some kind of power on self test, the BIOS or UEFI equivalent initializes hardware, then through magic smoke and mirrors finds and loads the boot loader.  The boot loader loads the kernel and starts the init process.

There are attacks at the various hardware/firmware levels.  We won’t look at those today.  There are attacks at the boot loader level.  We also won’t look at those today, though we may come back to this topic at a later date.  A rootkit can replace your kernel, and we’re not going to look at those today, either.  Instead, we’re going to start with “init” and work our way down from there.  The techniques and topics we’ll cover are things that are “less intrusive” (since they don’t replace or modify firmware, the kernel, or user land programs to hide activity.)

The init system has several components, and depending on the “style” of init, they are configured in different ways.  We’ll briefly cover these pieces today, then go into detail on each individual component later.

The first piece we’ll cover in more detail later is the ‘inittab’ component.  This is a file that controls respawning of critical processes if they die, among other things.

The next piece is the ‘rc’ system.  This includes SystemV style initialization scripts, systemd “units” (shudder,) and similar.  There are many variations on this, but I’ll try to cover the most common SystemV, BSD, and systemd styles.  Most systems use some variation on these, so if you’re familiar with what I present, you should have little trouble picking up one that’s similar, but not identical.

Finally, we’ll look at the ‘inetd’ or ‘xinetd’ systems, as well as the systemd equivalent.

After we get through the “init” system, we’ll continue the topics with actual job control and scheduling programs such as cron, at, and shell background jobs.

Book Review – Networking for Systems Administrators chapter 7

Since it’s been a few weeks since we did one of these, and I’d like to have this book review finished before the end of the year, it’s time for another chapter review of Networking for Systems Administrators.

Chapter 7 is fairly short, and focuses on “Network Testing Basics.”  It doesn’t cover tools so much as mindset.  From a Systems Administrator point of view, when troubleshooting network issues, our goal is to determine what is coming into or going out of the server.  Is the data we believe is leaving the server the data actually leaving the server?  Is the data coming in the data we expect?  There are plenty of tools to determine this at various levels, and in the end, what we’re looking for is performance and correctness.  The data should match, and the performance should be within our expected parameters.  Anything outside of the norm should be investigated.

If the issue is at our end, it could be something as simple as a configuration issue with the application, or a bastion host firewall rule that shouldn’t have been turned on.  If it’s not at our end, it could be related to firewalls, network access control lists (ACLs,) packet filters, or even proxy services.  Data can be blocked or mangled by some combination of the above, and once you determine that it’s not the fault of your server or application, you can show the evidence to the network or firewall teams, and engage them for assistance in troubleshooting upstream from your machine.  Don’t blame the firewall first.  Check your own stuff first, gather evidence, then engage.
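
The chapter is about mindset rather than tools, but “gather evidence” usually starts with watching the wire.  Something as simple as this (the interface and port are illustrative) answers “is my traffic actually leaving the box?”:

tcpdump -ni eth0 port 443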

TMUX concepts

There are two tools I use religiously at work to deal with sending commands to multiple servers at the same time.  I use DSH for the quick and dirty one-liners, re-usable functions, and most things that don’t require a TTY/PTY allocation to complete.  For everything else, I use TMUX.

The tmux command works on the concept of sessions, windows, and panes.  A session represents a single detachable “connection” to the windows you want to work with.  A window is a single screen inside of a session.  A window can be a single “pane” that takes up the entire window, or it can be broken up into multiple “panes,” each with its own “shell” instance.

While “screen” has the ability to open a serial connection and behave as a console session, much like cu, tip, or minicom, tmux does not.  If you want a serial connection from a tmux pane, you’ll need to set up your pane, then call the serial connection from cu or tip (or similar.)  The developer(s) didn’t want to make it anything more than just an (awesome) multiplexer.  It does what it does very well, and that’s enough.

If you’ve ever worked with “screen” you know there’s a “command” prefix that needs to be passed in order to tell the multiplexer that you’re talking to it, and not just typing input to whatever pane you’re working from.  In “screen” that’s “ctrl-a” by default.  In tmux, it’s “ctrl-b” instead.

In order to split a screen horizontally (panes side by side,) you would do “ctrl-b %” for example.  In order to split it vertically (panes over each other,) you would do “ctrl-b "” instead (that’s ctrl-b followed by the double quote key.)  If you want to detach a session, you use “ctrl-b d” to get back to the shell you launched the session from initially.  Once it is detached, you need to know the session name.  By default, the first session created is session “0” (zero.)  To re-attach you pass this name to the “-t” flag like so:

tmux attach -t 0

If you need to see how many (and names of) sessions, you can use “tmux ls” which will list each session along with some basic session information (including how many windows are in it.)

If you want to make a new window within an attached session, you can use “ctrl-b c” to create the next available window.  Split this into any panes you need to use, and set up your work flow from there.

In order to synchronize across panes, you can use the “ctrl-b :” and then “set synchronize-panes on” to tell it to send input to all panes simultaneously.  To turn that off, do the same, but change the “on” to an “off,” instead.

To move from one window to the next, you can use “ctrl-b n” and “ctrl-b p” (for next and previous.)  This lets you get to each window sequentially.

To move from one pane to the next, you can use “ctrl-b <arrow key>” to move input focus up, down, left, and right from the pane you are currently in.

Those are the bare essentials for dealing with tmux sessions, windows, and panes on a basic level.  More advanced features let you set hot keys (that may or may not require the prefix command,) change the prefix sequence, and so on.  And for scripting purposes, you can run “tmux” commands from the command line outside of an attached session, to control what’s going on inside that session.  This allows for some “expect” like behavior, if you’re inclined to dive deep and learn how to work with it.  I may dive deeper into the more advanced stuff, and provide samples of my “.tmux.conf” configuration file later.
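
As a small taste of that scripting, the following sketch (the session name and command are arbitrary) builds and drives a session entirely from the outside:

tmux new-session -d -s work           # create a detached session named "work"
tmux split-window -h -t work          # add a side by side pane
tmux send-keys -t work 'uptime' C-m   # type "uptime" into the active pane and press enter
tmux attach -t work                   # attach and see the result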

Appropriate Technology – FIFO (Named Pipes)

A week or three back, I posted about how I (stubbornly) managed to do a “one-liner” for generating an OpenSSL certificate signing request that included a subject alternative name section using X.509 v3 extensions.  The OpenSSL command does not read the extensions from standard input for piping purposes; it expects to open an actual file instead.  The correct thing to do in that case is create a custom configuration file containing the extensions you want to use, and pass that as an argument.  My “one-liner” to avoid that took advantage of BASH’s “process substitution” feature.  This is really just a temporary FIFO structure that gets used “on the fly” by the shell: the output of any commands within the “<()” structure is fed to that temporary FIFO, and the path to the FIFO is what actually gets handed to the program that needs it.  The program opens what it thinks is a file (really a named pipe) and pulls the contents that were dynamically generated.
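
The general shape of that trick, as a sketch (the file names and SAN value are placeholders, not the exact command from the earlier post):

echo <(true)    # prints something like /dev/fd/63, the FIFO path the shell hands over

openssl req -new -key server.key -out server.csr -reqexts SAN \
    -config <(cat /etc/ssl/openssl.cnf \
              <(printf '[SAN]\nsubjectAltName=DNS:www.example.com\n'))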

This was an abuse of the technology to fit my needs.  There are, however, appropriate times and places to use FIFO/Named Pipe structures.  Here are a few to help make it clear why they are a “good thing” to know how to create and utilize.

Sometimes an administrator may want to reboot a system after a major event, such as after a full system backup has completed successfully.  In this scenario, we have two scripts that get called by cron (or some other scheduler.)  The first script to get called would be the “reboot” script, which listens for an event that indicates the backup’s result.  The second script would be the backup script, which calls the backup software, monitors for successful completion, and then notifies the “reboot” script of the status.  If the reboot script receives a “backup failed” message, it ends gracefully, and perhaps sends an email stating things went south.  If it receives a “backup was successful” message, it sends an email stating things were successful, and then reboots the system.

“Why wouldn’t you use one script to just run the backup, then figure out whether to reboot or not?”  I can hear you asking this to yourself, but let’s complicate the scenario a bit.  Let’s say one of these scripts is kicked off from the backup server itself.  Let’s say the other script is kicked off by the machine that needs to be rebooted.  Let’s say that there is a firewall that only allows the backup server to open a connection to the client, but the client can’t open a connection to the backup server.  A touch file could be used, but that typically means polling to see if the file exists, with a specific timeout duration before giving up entirely.  With a pipe, the communication is instant, because the read from the pipe is a blocking call.  This also allows the backup server to make a call out to the monitoring service API to disable ping alerts for the client during the reboot, if you want to get fancy with it.

In essence, FIFO files are a poor man’s message broker service.  You set up a listening call on the pipe that wants to be triggered by the event of receiving data on the pipe, then when you’re ready for the event to happen, you send the data down the pipe to trigger the event.
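
Stripped to its bones, that pattern looks like this sketch (the path and messages are invented for illustration).  The listener side, in the reboot script:

mkfifo /var/run/backup.status
status=$(cat /var/run/backup.status)    # blocks here until the writer sends data
[ "$status" = "success" ] && shutdown -r now

And the writer side, run by the backup script when it finishes:

echo success > /var/run/backup.status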

FIFO structures can also stand in as files for programs that expect a file, as shown by the OpenSSL example, previously.  At one point, a teammate looked at using a FIFO for an nmon project he was working on, polling the servers for performance data at regular intervals.  I forget why he needed the FIFO, but it was a limitation with nmon itself that required the setup.