Book Review – Networking for Systems Administrators chapter 8

There are 5 chapters left, including this one, and I would like to finish my chapter by chapter review before next year, so I’ll have to do more than one of these per month, at least for one month.  I’ll probably do two a month to accelerate the process.  I apologize for the interruption to the current series, but we’ll return to it next week.

Chapter 8 is about DNS, or the Domain Name System.  As Michael W. Lucas states, there are books much larger than this one that cover this single topic alone, so this chapter is a very brief overview of the fundamentals from a troubleshooting perspective.

The service runs on port 53, both UDP and TCP.  Many organizations only allow port 53 UDP traffic, but TCP is required for larger exchanges, such as responses too big for a single UDP packet and zone transfers.  The chapter discusses how DNS servers keep a mapping of name and IP address relationships for translating requests between the two.

The name mappings are defined within zones.  Each layer of an address (read right to left) represents another zone.  For example, the “.net,” “.com,” and “.org” endings we often use are top level zones.  The book’s examples include “google.com” and “michaelwlucas.com” as child zones of the top level “.com” zone.  Any zone inside another zone is a child zone.  In a hypothetical a1.www.mysite.noip, for example, www.mysite.noip would be a child zone of mysite.noip, which in turn would be a child zone of the top level .noip.

DNS servers are either authoritative or recursive.  Authoritative nameservers contain the information for specific domains.  Recursive nameservers provide DNS lookups for clients.  These servers find the authoritative servers, query them, and return the results to the client.

Ideally, authoritative and recursive nameservers should be on separate machines.  This is for security reasons, as well as simplification of configuration.

Next, the author covers the DNS hierarchy, explaining how DNS is a distributed database, and how queries work their way up the chain until a server capable of providing an authoritative response is found.  Then, he goes into forward and reverse lookups.  A forward lookup is the response given when querying which IP(s) belong to a name.  A reverse lookup is the response when querying which name belongs to an IP, also known as a PTR record.  The protocol allows multiple PTR records for the same address, but in practice, this can break things.

The next section covers the different types of records that are relevant to most situations, such as A (name to IPv4,) AAAA (name to IPv6,) SOA (start of authority,) PTR, CNAME (canonical name… name to name alias mapping,) and MX (mail exchange) records.

A brief discussion of caching follows, which explains that changes can take time to propagate.  Then he covers why checking DNS is important.  If a server is responding with incorrect or even inconsistent information, it will likely cause issues with other troubleshooting steps.

He suggests using “nslookup” on Windows and “host” on Unix systems, but the “host” command may not be available.  He covers both of these tools in detail before briefly introducing the more advanced “dig” and “unbound-host” commands.  Finally he explains the “hosts” file for local name to IP mappings that may override responses from DNS, depending on how a system is configured.
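
For illustration, here are the flavors of queries these tools handle (the names and address below are placeholders, not examples from the book):

host www.example.com            # forward lookup: name to IP address(es)
host -t mx example.com          # query a specific record type (MX, in this case)
host 198.51.100.25              # reverse lookup: IP address to name (PTR)
nslookup www.example.com        # roughly the equivalent query on Windows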

And I’ll wrap up this review with a sentence that leads into a footnote, followed by the footnote itself.

“A few failed DNS requests can drag some server software to a crawl or make it entirely fail.”

Footnote:
“Should software be written such that it handles DNS failures gracefully? Of course. And in that world, I have a pony. No, wait — a unicorn. No, better still — a ponycorn!”

Persistence through job control – RC scripts

For SystemV style systems, the next phase of the boot process after inittab is to kick off the rc scripts.   This is often one of the last entries in inittab, even.  The rc scripts on these systems typically begin with a script called “rc” that does some initial environmental setup, then it goes through and calls the different runlevel scripts based on which runlevel the system is booting into.

The rc scripts described here will be the same on both SystemV style systems and “Upstart” init systems such as Red Hat Enterprise Linux 6.  The “systemd” affliction does things differently, and we’ll cover it next week.

These runlevel rc scripts live in a structure that varies from system to system, but is often either directly under /etc or under /etc/rc.d as the parent directory.  The structure often looks like this:

/etc/rc.d/init.d
/etc/rc.d/rc#.d (where "#" is the runlevel number.)

The init.d directory contains the actual scripts that start, stop, restart, or show status of various services.

The rc#.d directories contain symbolic links which point to the scripts in the init.d directory.  The names of these links determine whether the script gets started or stopped, and define which order they should be started or stopped in.

For example, we might have a script called “httpd” that starts our web service.  We want this to be one of the last things started, and one of the first that gets stopped, so we might have a structure like this:

/etc/rc.d/init.d/httpd (the actual start/stop script)
/etc/rc.d/rc2.d/S99httpd (symbolic link to ../init.d/httpd)
/etc/rc.d/rc2.d/K01httpd (symbolic link to ../init.d/httpd)

The “S99httpd” link says to “S”tart it, and the high number gives it a lower priority when starting services.  The “K01httpd” link says to “K”ill it (or stop it,) and the lower number gives it a higher priority when stopping services.  The standard rc script that parses these directories will use the name to figure out what order to do things in, and will pass either a “start” or a “stop” based on the “S” or “K” at the beginning of the name.  The capitalization of the “S” and “K” is important.  An easy way to disable one of these temporarily is to rename it with a lower case character, keeping the rest of the name the same.  This way you know what order it SHOULD be started or stopped in if and when you want to re-enable it.
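
To make the mechanics concrete, here is a simplified sketch of the dispatch loop the “rc” script effectively runs for a runlevel (real rc scripts handle more edge cases than this):

#!/bin/sh
# simplified runlevel dispatcher: run the K links first, then the S links
# the shell expands the globs in lexical order, so K01 runs before K99, S01 before S99
for link in /etc/rc.d/rc2.d/K*; do
  [ -x "$link" ] && "$link" stop
done
for link in /etc/rc.d/rc2.d/S*; do
  [ -x "$link" ] && "$link" start
done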

On complex runlevel systems, those numbers will vary depending on single or multi user mode, graphical environment mode, and so on.  On AIX, there are only two runlevels, so most things will be in rc2.d.  On BSD style systems, there is no actual concept of “runlevels” so much as there is a “local” rc file that has all of the settings inside of it, and the order is based on where they fall within the monolithic script.

In order to take advantage of the rc scripts for persistence, we would want to inject a call to our persistent shell within an existing script, or add a script of our own.  Remember to put it after the networking pieces are started up.

When we suspect this has been done, the routine is similar to inittab inspection.  Review all of the rc scripts, including “rc” itself, and for every call made, check that the file exists, is executable, and only contains what you expect it to contain.  A comparison against a known clean system (such as a fresh install on another machine) is a fast way to check the common items.  Anything that exists only on our suspect machine, or any existing files that differ from what was originally delivered, is worth digging deeper into.  Use diff, sdiff, and the like to make fast work of this.
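
For example, assuming the clean system’s files are mounted (or copied) under a hypothetical /mnt/clean:

diff -r /etc/rc.d /mnt/clean/etc/rc.d      # recursively flag extra files and changed contents
sdiff /etc/rc.d/init.d/httpd /mnt/clean/etc/rc.d/init.d/httpd      # side by side view of one suspect script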

Persistence through job control – inittab replacements

Last week we looked at a traditional “inittab,” using AIX’s inittab as an example.  This week, we’ll look at some of the “inittab” replacements that have popped up on various flavors of Linux.

As we mentioned last week, “Upstart” replaced “inittab” in Red Hat Enterprise Linux version 6, and “SystemD” unit files replaced it in RHEL7.

For Upstart, instead of a single “inittab” file, there is a directory called “/etc/init” that contains individual files that each control a single program to run.  There is no order specified by file name, and the configuration files don’t contain any ordering themselves, other than to say “start me after this other process.”  With inittab, the order is controlled by where an entry falls within the file, so you get less fine grained control with this method, but you can at least be sure that a process that depends on another process already being up will wait.

An interesting note about Upstart is that it doesn’t just check “/etc/init.”  Job files can also live in “~/.init/,” which starts jobs for a user rather than for the system, and these jobs don’t run as children of PID 1.  This gives some flexibility in dropping persistence in less explored locations on the file system, which means more places to audit for these kinds of persistence scripts.

An Upstart init file contains directives that respond to “emitted events.”  The basic events are “starting,” “started,” “stopping,” and “stopped.”  You can also define custom events that are emitted manually.
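
A minimal, hypothetical job file might look like this (the job name and path are made up for illustration):

# /etc/init/exampled.conf
description "example service"
# wait for the networking job to emit its "started" event before starting
start on started networking
stop on stopping networking
# restart the process if it dies
respawn
exec /usr/local/bin/exampled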

The program for sending events to the Upstart “inittab” scripts is “initctl,” rather than “telinit.”  Instead of a “telinit q” you would use “initctl <action> <job name>” (for example, “initctl start myjob” or “initctl stop myjob”), and “initctl emit <event>” for the custom events.

For SystemD, there is no inittab, either.  Everything is a Unit file, and lives in the same directory structure for both “init” type processes and “rc” type processes.  For that reason, we won’t go into SystemD too much, today, but we will say this much that relates:

In the [Service] section of the Unit file, include the ExecStart directive to call the process you want started, the Restart directive to indicate whether restarts happen “always” or only “on-failure,” as well as a RestartSec directive to throttle the restart attempts so that it doesn’t restart as fast as possible.
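
A minimal sketch of such a unit file (the service name and binary path are hypothetical):

# /etc/systemd/system/exampled.service
[Unit]
Description=Example service

[Service]
ExecStart=/usr/local/bin/exampled
# "always" respawns unconditionally; "on-failure" only respawns after a non-clean exit
Restart=on-failure
# wait five seconds between restart attempts instead of respawning as fast as possible
RestartSec=5

[Install]
WantedBy=multi-user.target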

SystemD also includes a different command from “telinit” for controlling the Unit files.  The command “systemctl” will allow for starting, stopping, and status of the services controlled by Unit files.

Digging through all of the Unit files for the Exec lines, and for whether and how those processes are restarted, is the key to looking for potentially persistent shells dropped in this manner.  Remember, though, that just because a process isn’t set to “Restart” by directive doesn’t mean the script it calls isn’t looping on itself, similarly to other job types we will cover later.
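
As a quick starting point for that digging (these are the usual unit file locations, but paths vary by distribution):

grep -r -e '^ExecStart' -e '^Restart' /etc/systemd/system /usr/lib/systemd/system 2>/dev/null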

Persistence through job control – inittab

The inittab file is often one of the first things loaded during the initialization process.  This file contains a list of processes that need to be run very early in the boot process, may need a “watchdog” to restart them on crash, and so on.  It also often contains the settings for defining and securing TTYs.  What we are concerned with is the “respawn” feature, since we want a place to drop a persistent process on the box.

Before we proceed, note that more modern varieties of Linux don’t use inittab, but have similar functions.  Red Hat Enterprise Linux (RHEL) version 6 has “upstart” which replaces the inittab with a directory containing individual files per process to be started.  RHEL version 7 has “systemd” which uses “unit” files to control just about everything.  We’ll cover those in next week’s article.  The “inittab” is a SystemV-ism, and OpenBSD doesn’t use it.  Instead, the startup files begin in the “rc” scripts, and the TTYs are defined and secured in “/etc/ttys.”  The reason for this is that inittab is tied to the concept of run levels, and OpenBSD doesn’t really have those.

On AIX, the inittab is managed by a set of commands similar to many other base AIX commands (mk-, ch-, rm-, and ls- prefixed.)  The mkitab, chitab, rmitab, and lsitab commands create, change (modify,) remove, and list inittab entries (respectively.)  Let’s break down an inittab entry, using AIX’s inittab as a base line.

Each entry is colon “:” separated, similarly to /etc/passwd.  Lines are commented with a semi-colon “;” at the beginning of the line.  The first column is the label for the entry.  The second column is the run level that triggers the running of this command.  The third column is the “action” to take on the command.  This is where we define “run this one time at boot” vs. “run this again if it dies.”  The last column is “the command(s) to run.”  The reason I said “command(s)” is that this can be a chain of commands piped together, not just a singular command.

To create an inittab entry (again, using AIX as an example,) we would do something similar to this:

mkitab "softaud:2:respawn:/usr/bin/audit start"

In the above example, we created an inittab entry labeled “softaud” that starts at runlevel 2, respawns on death, and calls the “/usr/bin/audit” command, passing it “start” as an argument.  Since “audit” lives at “/usr/sbin/audit” on AIX, this entry is suspect.  If we were to leave a persistent script named “audit” in /usr/bin and then add this inittab entry, our script would be respawned on death, and would likely be overlooked by the systems administrators reviewing the file.  If we wanted to inject this after the existing entry already labeled “audit,” we could pass that mkitab command the “-i” flag followed by the label we want to append after.  Why “-i” for “append?”  You would have to ask IBM.

mkitab -i audit "softaud:2:respawn:/usr/bin/audit start"

The possible “actions” that can be passed to an inittab are listed in the man pages, but a few common ones are:
"once" - When the init command enters the run level specified for this record, start the process, do not wait for it to stop and when it does stop do not restart the process. If the system enters a new run level while the process is running, the process is not restarted.
"respawn" - If the process identified in this record does not exist, start the process. If the process currently exists, do nothing and continue scanning the /etc/inittab file.
"wait" - When the init command enters the run level specified for this record, start the process and wait for it to stop. While the init command is in the same run level, all subsequent reads of the /etc/inittab file ignore this object.

Some systems include commands to make changes to inittab (such as we see with AIX.)  Other systems require you to modify the file directly using your favorite editor (vi, for example.)

Whatever your system requires, after a change is made, it only takes effect if you tell the “init” process to re-read the inittab.  To do this, you use the “telinit” command and pass it the “q” option.

telinit q

Once it has been re-read, it will happily respawn the persistent script that was injected. A thorough audit of the inittab would involve going through the list of all commands in the file, checking that those processes exist at the locations given, doing a file type check, checksum check, and when possible, a “contents” check to see that they are legitimate.
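
As a hedged sketch of the first part of that audit on AIX (this assumes the field layout described above, and a real audit would also checksum and inspect each file):

# pull the command field from every entry and verify each program exists and is executable
lsitab -a | grep -v '^;' | awk -F: '{print $4}' | awk '{print $1}' | sort -u |
while read cmd; do
  [ -x "$cmd" ] || echo "suspect entry: $cmd missing or not executable"
done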

Persistence through job control – Introduction

The next few weeks will be a quick discussion on the different kinds of “job control” available, and how to potentially use them for persistence post compromise.  This is the outside-of-the-box type of thinking that is needed when hunting for “persistence” issues if you believe your machine has been compromised.

Before we get to job control “proper,” though, I wanted to walk through the top level options (from an OS perspective), and that means we need to look at what happens when you boot your server.  Typically there is a phase with some kind of power on self test, the BIOS or UEFI equivalent initializes hardware, then through magic smoke and mirrors finds and loads the boot loader.  The boot loader loads the kernel and starts the init process.

There are attacks at the various hardware/firmware levels.  We won’t look at those today.  There are attacks at the boot loader level.  We also won’t look at those today, though we may come back to this topic at a later date.  A rootkit can replace your kernel, and we’re not going to look at those today, either.  Instead, we’re going to start with “init” and work our way down from there.  The techniques and topics we’ll cover are things that are “less intrusive” (since they don’t replace or modify firmware, the kernel, or user land programs to hide activity.)

The init system has several components, and depending on the “style” of init, they are configured in different ways.  We’ll briefly cover these pieces today, then go into detail on how to focus on any single component, later.

The first piece we’ll cover in more detail later is the ‘inittab’ component.  This is a file that controls respawning of critical processes if they die, among other things.

The next piece is the ‘rc’ system.  This includes SystemV style initialization scripts, systemd “units” (shudder,) and similar.  There are many variations on this, but I’ll try to cover the most common SystemV, BSD, and systemd styles.  Most systems use some variation on these, so if you’re familiar with what I present, you should have little trouble picking up what’s going on with one that’s similar to these, but not identical.

Finally, we’ll look at the ‘inetd’ or ‘xinetd’ systems, as well as the systemd equivalent.

After we get through the “init” system, we’ll continue the topics with actual job control and scheduling programs such as cron, at, and shell background jobs.

Book Review – Networking for Systems Administrators chapter 7

Since it’s been a few weeks since we did one of these, and I’d like to have this book review finished before the end of the year, it’s time for another chapter review of Networking for Systems Administrators.

Chapter 7 is fairly short, and focuses on “Network Testing Basics.”  It doesn’t cover tools so much as mindset.  From a Systems Administrator point of view, when troubleshooting network issues, our goal is to determine what is coming into or going out of the server.  Is the data we believe is leaving the server the data actually leaving the server?  Is the data coming in the data we expect?  There are plenty of tools to determine this at various levels, and in the end what we’re looking for is performance and correctness.  The data should match, and the performance should be within our expected parameters.  Anything outside of the norm should be investigated.

If the issue is at our end, it could be something as simple as a configuration issue with the application, or a bastion host firewall rule that shouldn’t have been turned on.  If it’s not at our end, it could be related to firewall, network access control lists (ACLs,) packet filters, or even proxy services.  Data can be blocked or mangled by some combination of the above, and once you can determine that it’s not the fault of your server or application, you can show the evidence to the network or firewall teams, and engage them for assistance in troubleshooting upstream from your machine.  Don’t blame the firewall first.  Check your own stuff first, gather evidence, then engage.

TMUX concepts

There are two tools I use religiously at work to deal with sending commands to multiple servers at the same time.  I use DSH for the quick and dirty one-liners, re-usable functions, and most things that don’t require a TTY/PTY allocation to complete.  For everything else, I use TMUX.

The tmux command works on the concept of sessions, windows, and panes.  A session represents a single detachable “connection” to the windows you want to work with.  A window is a single screen inside of a session.  A window can be a single “pane” that takes up the entire window, or it can be multiple “panes” that break the window up into multiple pieces, each with its own “shell” instance.

While “screen” has the ability to open a serial connection and behave as a console session, much like cu, tip, or minicom, tmux does not.  If you want a serial connection from a tmux pane, you’ll need to set up your pane, then call the serial connection from cu or tip (or similar.)  The developer(s) didn’t want to make it anything more than just an (awesome) multiplexer.  It does what it does very well, and that’s enough.

If you’ve ever worked with “screen” you know there’s a “command” prefix that needs to be passed in order to tell the multiplexer that you’re talking to it, and not just typing input to whatever pane you’re working from.  In “screen” that’s “ctrl-a” by default.  In tmux, it’s “ctrl-b” instead.

In order to split a screen horizontally (panes side by side,) you would do “ctrl-b %” for example.  In order to split it vertically (panes over each other,) you would do “ctrl-b "” instead (that’s ctrl-b followed by the double quote key.)  If you want to detach a session, you use “ctrl-b d” to get back to the shell you launched the session from initially.  Once it is detached, you need to know the session name.  By default, the first session created is session “0” (zero.)  To re-attach you pass this name to the “-t” flag like so:

tmux attach -t 0

If you need to see how many (and names of) sessions, you can use “tmux ls” which will list each session along with some basic session information (including how many windows are in it.)

If you want to make a new window within an attached session, you can use “ctrl-b c” to create the next available window.  Split this into any panes you need to use, and set up your work flow from there.

In order to synchronize across panes, you can use “ctrl-b :” and then type “set synchronize-panes on” to tell it to send input to all panes simultaneously.  To turn that off, do the same, but change the “on” to an “off,” instead.

To move from one window to the next, you can use “ctrl-b n” and “ctrl-b p” (for next and previous.)  This lets you get to each window sequentially.

To move from one pane to the next, you can use “ctrl-b <arrow key>” to move input focus up, down, left, and right from the pane you are currently in.

Those are the bare essentials for dealing with tmux sessions, windows, and panes on a basic level.  More advanced features let you set hot keys (that may or may not require the prefix command,) change the prefix sequence, and so on.  And for scripting purposes, you can run “tmux” commands from the command line outside of an attached session, to control what’s going on inside that session.  This allows for some “expect” like behavior, if you’re inclined to dive deep and learn how to work with it.  I may dive deeper into the more advanced stuff, and provide samples of my “.tmux.conf” configuration file later.
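
As a small taste of that scripting ability (the session name and command are just examples):

tmux new-session -d -s work          # create a detached session named "work"
tmux send-keys -t work 'uptime' C-m  # type "uptime" into it, then press Enter (C-m)
tmux attach -t work                  # attach to see the result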

Appropriate Technology – FIFO (Named Pipes)

A week or three back, I posted about how I (stubbornly) managed to do a “one-liner” for generating an OpenSSL certificate signing request that included a subject alternative name section using X.509 v3 extensions.  The OpenSSL command does not read the extensions from standard input for piping purposes; it expects to open an actual file, instead.  The correct thing to do in that case is create a custom configuration file containing the extensions you want to use, and pass that as an argument.  My “one-liner” to avoid that took advantage of BASH’s “process substitution” feature.  This is really just a temporary FIFO structure that gets used “on the fly” by the shell: the output of any commands within the “<()” structure is fed to that temporary FIFO, and the path to the FIFO is what is actually handed to the program that needs it.  The program then opens what it thinks is a file (really a named pipe) and pulls the contents that were dynamically generated.

This was an abuse of the technology to fit my needs.  There are, however, appropriate times and places to use FIFO/Named Pipe structures.  Here are a few to help make it clear why they are a “good thing” to know how to create and utilize.

Sometimes an administrator may have a desire to reboot a system after a major event, such as after a full system backup has completed successfully.  In this scenario, we have two scripts that would get called by cron (or some other scheduler.)  The first script to get called would be the “reboot” script that listens for an event that indicates the backup ran successfully.  The second script would be the backup script that calls the backup software, monitors for successful completion, and then notifies the “reboot” script of the status.  If the reboot script receives a “backup failed” message, it would end gracefully, and perhaps send an email stating things went south.  If it receives a “backup was successful” message, it would send an email stating things were successful, and then reboot the system.

“Why wouldn’t you use one script to just run the backup, then figure out whether to reboot or not?”  I can hear you asking this to yourself, but let’s complicate the scenario a bit.  Let’s say one of these scripts is kicked off from the backup server, itself.  Let’s say the other script is kicked off by the machine that needs to be rebooted.  Let’s say that there is a firewall that only allows the backup server to open a connection to the client, but the client can’t open a connection to the backup server.  A touch file can be used, but typically that means polling to see if the file exists, with a specific timeout duration before giving up entirely.  With a pipe, the communication is instant, because reading from the pipe is a blocking call; the backup server just opens its allowed connection (over ssh, say) and writes the status into the pipe on the client when the job finishes.  This also allows the backup server to make a call out to the monitoring service API to disable alerts on ping for the client during the reboot, if you wanted to get fancy with it.

In essence, FIFO files are a poor man’s message broker service.  You set up a blocking read on the pipe, to be triggered by the event of receiving data, then when you’re ready for the event to happen, you send the data down the pipe to trigger it.
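
A minimal sketch of that pattern (the pipe path and message are arbitrary):

mkfifo /tmp/event.fifo

# listener: the read blocks here until something is written to the pipe
read msg < /tmp/event.fifo && echo "received: $msg"

# trigger, from another shell or script: this write releases the listener instantly
echo "backup-ok" > /tmp/event.fifo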

FIFO structures can also stand in as files for programs that expect a file, as shown by the OpenSSL example, previously.  At one point, a teammate looked at using a FIFO for an nmon project he was working on, polling the servers for performance data at regular intervals.  I forget why he needed the FIFO, but it was a limitation with nmon itself that required the setup.

BASH nested while loop issues

A week or two ago, a teammate had an issue with a nested while loop in a BASH script.  It ran just fine under KSH, but when he ran the exact same script under BASH, it had “unexpected results.”  The script piped output into a while loop (to feed the loop), and there were two loops nested this way, both fed in that fashion.

Some of you may have already guessed what the issue was, but I wanted to go into a little detail here, because it is important to understand why two very similar shells behave so very differently sometimes.  In this case, it has to do with how BASH deals with an execution chain.

When you go full on Mario (chaining commands together with lots of pipes,) BASH has a unique feature (as compared to KSH) where it stores the exit code for EACH PIECE of the pipeline in an array.  To be able to grab each individual exit status, BASH runs each piece of the pipeline in its own subprocess fork.  When you have a loop that relies on receiving output from a pipe, that causes oddities to ensue: since the loop itself is one of those “fork” calls, the contents of the variable being manipulated aren’t what we expect inside the second loop, and we get output that doesn’t seem right unless we understand that it is forking.  KSH doesn’t do this (it runs the last piece of the pipeline in the current shell), so the loops run just fine.  After I realized what was going on, I suggested changing the command to not use a pipe to feed the loop, and a workable solution was found that works on both BASH and KSH seamlessly.  I don’t recall exactly, but I think I had them change it from an “echo ‘something’ | while … do” to a “while … do … done < <( echo ‘something’ )” to fix it.
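
A short demonstration of the difference, runnable under bash:

# the piped loop runs in a subshell, so its changes to "count" vanish
count=0
echo "a b c" | tr ' ' '\n' | while read word; do
  count=$((count + 1))
done
echo "piped loop saw: $count"             # prints 0 under bash

# feeding the loop with process substitution keeps it in the current shell
count=0
while read word; do
  count=$((count + 1))
done < <( echo "a b c" | tr ' ' '\n' )
echo "process substitution saw: $count"   # prints 3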

The BASH special array is called PIPESTATUS, and it is useful for troubleshooting individual steps in a complicated pipeline, but it can cause issues if you don’t know how the behavior affects the pipes in play.  In the case of the nested loop, losing the variable’s new value when the forked loop exited was the problem.
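
For example:

false | true | wc -l > /dev/null
echo "${PIPESTATUS[@]}"   # prints "1 0 0", one exit status per stage of the pipeline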

OpenBSD laptop follow up, OpenSSL SAN, and BASH process substitution

I got really busy this last week, so I didn’t show up on social media as much as I would have liked.  I also didn’t respond to the comment on last week’s post right away.  Since the question came up, I thought I’d mention it here, on top of my direct response about Google HangOuts on the OpenBSD laptop.

The short answer is, I don’t use HangOuts, so I didn’t test it before the question was raised.  The long answer is that Google has some restrictions about which browsers the Video chat will work on, and the newest versions of Firefox aren’t on the list.  I believe I read somewhere that Google is working on correcting this, but until they do, Video chat is a no go for this set up.  On the other hand, the text chat works fine.

I also wanted to mention the program that I was going to test for temporarily disabling the touchpad.  The program is “syndaemon” and if it worked, I could just drop a line in my .xinitrc file that has “syndaemon -d” on it.  Unfortunately, it doesn’t work on this laptop.  I get an error about it being “Unable to find a synaptics device.”  I’ll have to dig further into how the pointer is recognized and see if there are any alternatives.

Now that that’s out of the way, I thought I’d dive into the discussion about using “subject alternative names” with a certificate signing request.

In the past, I’ve recommended tacking on a “subjectAltName=DNS.1=host.domain.tld,DNS.2=althost.domain.tld” type of string onto the end of your -subj flag that contains the OU information for the request.  This apparently doesn’t generate a certificate that some authorities will recognize, so we’re forced to use the x509 extensions to pull it in.  Almost every recommendation out there says to create a temporary openssl.cnf file, append the SAN section to it, and then generate your certificate, pointing at that temporary config.  There is a good reason for this.  The way to do it without a hard coded temporary file is to take advantage of BASH’s “process substitution” ability.  KSH93 supposedly has this feature as well, but when I tested ksh93 on AIX, it didn’t work, so I will say to test your own ksh93 before just assuming this will benefit you.  Otherwise, stick with bash.

The temporary file would normally be a copy of whatever the openssl.cnf default configuration file is, plus the appended [SAN] section so that the extensions can be requested.  In order to find the location of the default configuration file, we would run this command:

openssl version -d | awk '{print $NF}' | tr -d '"'

This gives us the directory where it lives.  We then tack on the “openssl.cnf” for the full path like so:

$( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf

So, if we wanted to create that temporary file, we might do this:

( cat $( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf; printf "\n[SAN]\nsubjectAltName=DNS:%s.domain.tld,DNS:alt%s.domain.tld" ${HOSTNAME} ${HOSTNAME} ) > openssl.temp.cnf

Then we would point the -extfile or -config flag at this temporary file.  However, since we’re being stubborn, we’ll use BASH’s process substitution to do this, instead.

openssl req -nodes -newkey rsa:4096 -keyout ${HOSTNAME}.domain.tld.key -out ${HOSTNAME}.domain.tld.csr -sha256 -subj "/C=US/ST=Arkansas/L=Conway/O=UnixSecLab/OU=TheLab/CN=${HOSTNAME}.domain.tld" -config <( cat $( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf; printf "\n[SAN]\nsubjectAltName=DNS:%s.domain.tld,DNS:alt%s.domain.tld" ${HOSTNAME} ${HOSTNAME} ) -reqexts SAN -extensions SAN

Whew.  That’s a lot to take in.  The “<( cat … )” is BASH process substitution.  Instead of creating a variable that contains all of the output from the “cat” and “printf” commands, it sticks those into a file descriptor located at /dev/fd/## (where ## is the file descriptor number in use.)  Think of this as kind of a temporary named pipe/FIFO.  Since the openssl command requires an actual file it can do an “open” on when dealing with the -config or -extfile flags, we can’t pipe things in normally.  Our only option is to create an actual temporary file, or create a named FIFO to talk to (which is overkill, so temp file is better.)  BASH lets us kind of sort of create that with process substitution without having to clean up after ourselves by removing the FIFO file.

Is this practical?  Probably not.  It is less effort to do the temporary file and clean it up after, and more portable, as well.

Am I stubborn?  Absolutely.  That’s what led to my whipping up the abomination above.  Would I recommend this to others?  Not really.  Again, just go with what’s practical.  There’s a reason people recommend it in most of the online commentary on this Subject Alternative Names (SAN) discussion.