Book Review – Networking for Systems Administrators, Chapter 7

Since it’s been a few weeks since we did one of these, and I’d like to have this book review finished before the end of the year, it’s time for another chapter review of Networking for Systems Administrators.

Chapter 7 is fairly short, and focuses on “Network Testing Basics.”  It doesn’t cover tools so much as mindset.  From a systems administrator’s point of view, when troubleshooting network issues, our goal is to determine what is coming into or going out of the server.  Is the data we believe is leaving the server the data actually leaving it?  Is the data coming in the data we expect?  There are plenty of tools to determine this at various levels, and in the end what we’re looking for is correctness and performance.  The data should match, and the performance should be within our expected parameters.  Anything outside of the norm should be investigated.

If the issue is at our end, it could be something as simple as a configuration issue with the application, or a bastion host firewall rule that shouldn’t have been turned on.  If it’s not at our end, it could be related to a firewall, network access control lists (ACLs), packet filters, or even proxy services.  Data can be blocked or mangled by some combination of the above, and once you determine that it’s not the fault of your server or application, you can show the evidence to the network or firewall teams and engage them for assistance in troubleshooting upstream from your machine.  Don’t blame the firewall first.  Check your own stuff, gather evidence, then engage.

TMUX concepts

There are two tools I use religiously at work to deal with sending commands to multiple servers at the same time.  I use DSH for the quick and dirty one-liners, re-usable functions, and most things that don’t require a TTY/PTY allocation to complete.  For everything else, I use TMUX.
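
For the curious, a typical DSH run looks something like the line below.  Treat this as a hedged sketch: the group name is made up, and the flags here follow the dancer’s-shell flavor of dsh, so check your own implementation’s man page before relying on them.

dsh -M -g webservers uptime    # -g picks a host group, -M prefixes each output line with the machine name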

The tmux command works on the concept of sessions, windows, and panes.  A session represents a single detachable “connection” to the windows you want to work with.  A window is a single screen inside of a session.  A window can be a single “pane” that takes up the entire window, or it can be multiple “panes” that break the window up into multiple pieces, each with its own “shell” instance.

While “screen” has the ability to open a serial connection and behave as a console session, much like cu, tip, or minicom, tmux does not.  If you want a serial connection from a tmux pane, you’ll need to set up your pane, then call the serial connection from cu or tip (or similar).  The developer(s) didn’t want to make it anything more than just an (awesome) multiplexer.  It does what it does very well, and that’s enough.
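
For example, to get a serial console inside a pane, you might run something like the line below.  The device path and speed are assumptions (the path varies by OS; on OpenBSD it might be /dev/cuaU0, on Linux /dev/ttyUSB0), so substitute whatever your system actually exposes:

cu -l /dev/ttyUSB0 -s 115200    # open the serial device at 115200 baud from inside the pane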

If you’ve ever worked with “screen” you know there’s a “command” prefix that needs to be passed in order to tell the multiplexer that you’re talking to it, and not just typing input to whatever pane you’re working from.  In “screen” that’s “ctrl-a” by default.  In tmux, it’s “ctrl-b” instead.

In order to split a screen horizontally (panes side by side), you would do “ctrl-b %” for example.  In order to split it vertically (panes over each other), you would do “ctrl-b "” instead (that’s ctrl-b followed by the double quote key).  If you want to detach a session, you use “ctrl-b d” to get back to the shell you launched the session from initially.  Once it is detached, you need to know the session name to get back in.  By default, the first session created is session “0” (zero).  To re-attach, you pass this name to the “-t” flag like so:

tmux attach -t 0

If you need to see how many sessions exist (and their names), you can use “tmux ls”, which will list each session along with some basic session information (including how many windows it contains).
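
If you juggle several sessions, naming them up front beats remembering numbers.  A quick sketch (the session name here is made up):

tmux new -s builds     # create a session named "builds"
tmux attach -t builds  # re-attach to it by name later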

If you want to make a new window within an attached session, you can use “ctrl-b c” to create the next available window.  Split this into any panes you need to use, and set up your workflow from there.

In order to synchronize across panes, you can use “ctrl-b :” and then “set synchronize-panes on” to tell it to send input to all panes simultaneously (on older tmux versions you may need “setw synchronize-panes on” instead, since it is a window option).  To turn that off, do the same, but change the “on” to an “off,” instead.

To move from one window to the next, you can use “ctrl-b n” and “ctrl-b p” (for next and previous).  This lets you get to each window sequentially.

To move from one pane to the next, you can use “ctrl-b <arrow key>” to move input focus up, down, left, and right from the pane you are currently in.

Those are the bare essentials for dealing with tmux sessions, windows, and panes on a basic level.  More advanced features let you set hot keys (which may or may not require the prefix command), change the prefix sequence, and so on.  And for scripting purposes, you can run “tmux” commands from the command line outside of an attached session, to control what’s going on inside that session.  This allows for some “expect” like behavior, if you’re inclined to dive deep and learn how to work with it.  I may dive deeper into the more advanced stuff, and provide samples of my “.tmux.conf” configuration file later.
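
As a small taste of that scripting, here is a minimal sketch that builds and drives a session entirely from outside it.  The session name and the commands typed into the panes are made up for illustration:

tmux new-session -d -s demo                  # create a detached session named "demo"
tmux split-window -h -t demo                 # split its window into two side-by-side panes
tmux send-keys -t demo:0.0 'uptime' Enter    # type "uptime" into the first pane and press Enter
tmux send-keys -t demo:0.1 'df -h' Enter     # type "df -h" into the second pane
tmux attach -t demo                          # attach to watch the results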

Appropriate Technology – FIFO (Named Pipes)

A week or three back, I posted about how I (stubbornly) managed to do a “one-liner” for generating an OpenSSL certificate signing request that included a subject alternative name section using X.509 v3 extensions.  The OpenSSL command does not read the extensions from standard input for piping purposes; it expects to open an actual file instead.  The correct thing to do in that case is create a custom configuration file containing the extensions you want to use, and pass that as an argument.  My “one-liner” to avoid that took advantage of BASH’s “process substitution” feature.  This is really just a temporary FIFO structure that gets used “on the fly” by the shell, which means the output of any commands within the “<()” structure is fed to that temporary FIFO, and the path to the FIFO is what is actually passed to the program that needs it.  The program then opens what looks like a file (really a named pipe) and pulls the contents that were dynamically generated.

This was an abuse of the technology to fit my needs.  There are, however, appropriate times and places to use FIFO/Named Pipe structures.  Here are a few to help make it clear why they are a “good thing” to know how to create and utilize.

Sometimes an administrator may have a desire to reboot a system after a major event, such as after a full system backup has completed successfully.  In this scenario, we have two scripts that would get called by cron (or some other scheduler).  The first script to get called would be the “reboot” script that listens for an event indicating the backup’s outcome.  The second script would be the backup script that calls the backup software, monitors for completion, and then notifies the “reboot” script of the status.  If the reboot script receives a “backup failed” message, it would end gracefully, and perhaps send an email stating things went south.  If it receives a “backup was successful” message, it would send an email stating things were successful, and then reboot the system.

“Why wouldn’t you use one script to just run the backup, then figure out whether to reboot or not?”  I can hear you asking this to yourself, but let’s complicate the scenario a bit.  Let’s say one of these scripts is kicked off from the backup server itself.  Let’s say the other script is kicked off by the machine that needs to be rebooted.  Let’s say there is a firewall that only allows the backup server to open a connection to the client, but the client can’t open a connection to the backup server.  A touch file could be used, but typically that means polling to see if the file exists, with a specific timeout duration before giving up entirely.  With a pipe, the communication is instant, because the read from the pipe is a blocking call.  This also allows the backup server to make a call out to the monitoring service API to disable alerts on ping for the client during the reboot, if you wanted to get fancy with it.

In essence, FIFO files are a poor man’s message broker service.  The side that wants to be triggered sets up a blocking read on the pipe, then when you’re ready for the event to happen, you send the data down the pipe to trigger it.
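
To make that concrete, here is a minimal sketch of the listener half of that pattern.  The pipe path, script name, and email address are all made up, and it assumes the backup script reports success with the literal string “OK”:

#!/bin/sh
# reboot-listener.sh (hypothetical): block until the backup script reports in
PIPE=/var/run/backup.status
[ -p "$PIPE" ] || mkfifo "$PIPE"
read status < "$PIPE"    # blocks here until something is written to the pipe
if [ "$status" = "OK" ]; then
    echo "backup succeeded, rebooting" | mail -s "backup OK" admin@domain.tld
    shutdown -r now
else
    echo "backup failed, not rebooting" | mail -s "backup FAILED" admin@domain.tld
fi

The backup script’s side is just a write to the same path when it finishes, something like: echo "OK" > /var/run/backup.status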

FIFO structures can also stand in as files for programs that expect a file, as shown by the OpenSSL example previously.  At one point, a teammate looked at using a FIFO for an nmon project he was working on, polling the servers for performance data at regular intervals.  I forget why he needed the FIFO, but it was a limitation with nmon itself that required the setup.

BASH nested while loop issues

A week or two ago, a teammate had an issue with a nested while loop in a BASH script.  Under KSH it ran just fine, but when he ran the exact same script under BASH, it had “unexpected results.”  The script piped command output into a while loop (to feed the loop), and there were two nested loops, both fed in this fashion.

Some of you may have already guessed what the issue was, but I wanted to go into a little detail here, because it is important to understand why two very similar shells sometimes behave so very differently.  In this case, it has to do with how BASH deals with an execution chain.

When you go full-on Mario (chaining commands together with lots of pipes), BASH runs each piece of the pipeline in its own subprocess fork.  That forking is also how BASH provides a feature KSH doesn’t have: it grabs the exit code for EACH PIECE of the pipeline and stores them all in an array.  Since each piece of the pipe is a “fork” call, any variable set inside a loop that is fed by a pipe lives in that subshell, so its contents aren’t what we expect inside the second loop, and we get output that doesn’t seem right unless we understand that it is forking.  KSH runs the last piece of the pipeline in the current shell, so there’s no fork in the way, and the loops run just fine.  After I realized what was going on, I suggested changing the command to not use a pipe to feed the loop, and a workable solution was found that works on both BASH and KSH seamlessly.  I don’t recall exactly, but I think I had them change it from an “echo ‘something’ | while … do … done” to a “while … do … done < <( echo ‘something’ )” (process substitution) to fix it.
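
Here is a quick demonstration of the difference, as a sketch you can paste into both shells:

count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "count=$count"    # BASH prints 0: the loop ran in a subshell; KSH prints 3

count=0
while read -r line; do
    count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "count=$count"    # prints 3 in BASH: the loop now runs in the current shell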

The BASH special array is called PIPESTATUS, and it is useful for troubleshooting individual steps in a complicated pipeline, but it can cause confusion if you don’t know how the underlying forking affects the pipes in play.  In the case of the nested loop, losing each new value of the variable to a subshell was the problem.
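
To see PIPESTATUS in action, a trivial example:

false | true | false
echo "${PIPESTATUS[@]}"    # prints "1 0 1", one exit code per pipeline stage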

OpenBSD laptop follow up, OpenSSL SAN, and BASH process substitution

I got really busy this last week, so I didn’t show up on social media as much as I would have liked.  I also didn’t respond to the comment from last week’s post right away.  Since the question came up, I thought I’d mention it here, on top of my direct response about Google Hangouts on the OpenBSD laptop.

The short answer is, I don’t use Hangouts, so I didn’t test it before the question was raised.  The long answer is that Google has some restrictions about which browsers the video chat will work in, and the newest versions of Firefox aren’t on the list.  I believe I read somewhere that Google is working on correcting this, but until they do, video chat is a no-go for this setup.  On the other hand, the text chat works fine.

I also wanted to mention the program that I was going to test for temporarily disabling the touchpad.  The program is “syndaemon” and if it worked, I could just drop a line in my .xinitrc file that has “syndaemon -d” on it.  Unfortunately, it doesn’t work on this laptop.  I get an error about it being “Unable to find a synaptics device.”  I’ll have to dig further into how the pointer is recognized and see if there are any alternatives.

Now that that’s out of the way, I thought I’d dive into the discussion about using “subject alternative names” with a certificate signing request.

In the past, I’ve recommended tacking a “subjectAltName=DNS.1=host.domain.tld,DNS.2=althost.domain.tld” type of string onto the end of the -subj flag that contains the OU information for the request.  This apparently doesn’t generate a request that some authorities will recognize, so we’re forced to use the X.509 v3 extensions to pull it in.  Almost every recommendation out there says to create a temporary openssl.cnf file, append the SAN section to it, and then generate your request, pointing the command at that file.  There is a good reason for this.  The way to do it without a hard-coded temporary file is to take advantage of BASH’s “process substitution” ability.  KSH93 supposedly has this feature as well, but when I tested ksh93 on AIX, it didn’t work, so test your own ksh93 before assuming this will benefit you.  Otherwise, stick with bash.

The temporary file would normally be a copy of whatever the openssl.cnf default configuration file is, plus the appended [SAN] section so that the extensions can be requested.  In order to find the location of the default configuration file, we would run this command:

openssl version -d | awk '{print $NF}' | tr -d '"'

This gives us the directory where it lives.  We then tack on the “openssl.cnf” for the full path like so:

$( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf

So, if we wanted to create that temporary file, we might do this:

( cat "$( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf"; printf "\n[SAN]\nsubjectAltName=DNS:%s.domain.tld,DNS:alt%s.domain.tld\n" "${HOSTNAME}" "${HOSTNAME}" ) > openssl.temp.cnf

Then we would point the -extfile or -config flag at this temporary file.  However, since we’re being stubborn, we’ll use BASH’s process substitution to do this, instead.

openssl req -nodes -newkey rsa:4096 -keyout ${HOSTNAME}.domain.tld.key -out ${HOSTNAME}.domain.tld.csr -sha256 -subj "/C=US/ST=Arkansas/L=Conway/O=UnixSecLab/OU=TheLab/CN=${HOSTNAME}.domain.tld" -config <( cat "$( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf"; printf "\n[SAN]\nsubjectAltName=DNS:%s.domain.tld,DNS:alt%s.domain.tld\n" "${HOSTNAME}" "${HOSTNAME}" ) -reqexts SAN

Whew.  That’s a lot to take in.  The “<( cat … )” is BASH process substitution.  Instead of creating a variable that contains all of the output from the “cat” and “printf” commands, it sticks those into a file descriptor located at /dev/fd/## (where ## is the file descriptor number in use).  Think of this as kind of a temporary named pipe/FIFO.  Since the openssl command requires an actual file it can “open” when dealing with the -config or -extfile flags, we can’t pipe things in normally.  Our only options are to create an actual temporary file, or to create a named FIFO to talk to (which is overkill, so a temp file is better).  BASH lets us kind of sort of create that with process substitution, without having to clean up after ourselves by removing the FIFO file.
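
You can see exactly what the substituted command hands to the program with a one-liner like this (the descriptor number will vary):

echo <( : )    # prints something like /dev/fd/63, the path openssl actually opens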

Is this practical?  Probably not.  It is less effort to do the temporary file and clean it up after, and more portable, as well.

Am I stubborn?  Absolutely.  That’s what led to my whipping up the abomination above.  Would I recommend this to others?  Not really.  Again, just go with what’s practical.  There’s a reason the temporary file is what most of the online commentary on Subject Alternative Names (SAN) recommends.

OpenBSD as a “Desktop” (Laptop)

My daughter has finally finished transferring all of her files off to her new laptop, and I have graciously inherited the old one.  The laptop is a Lenovo Flex 3.  It has one of those screens that folds all the way around, and a touch screen so you can treat it like a “tablet” sometimes.  It has one RJ45 jack for a Realtek 8168 chipset ethernet device.  This can do up to 1000BaseT full-duplex, which is nice.

The built-in wireless is an unsupported chipset, so I’m not using wireless networking on it yet.  I’ll probably pull the adapter from the Hak5 field kit for use with this laptop.

There is no CD or DVD drive included.  This meant having to use a USB thumb drive for the installation.  This wasn’t a problem, just something to note.

The first thing I did was use the already made Linux Mint 18.1 thumb drive I had laying around from building my new work laptop so that I could boot the machine into a live system that I could take a disk image from.  I plugged in my 3TB Western Digital external drive, ran “dd” to dump the internal drive to an image file on the external drive, and then put the Western Digital back in its place on the shelf.
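
For reference, the imaging step was nothing fancier than a plain dd, something along these lines.  The device name and output path are illustrative only, so check dmesg or lsblk for your actual devices before running anything like this:

dd if=/dev/sda of=/mnt/wd/flex3-internal.img bs=4M status=progress    # raw copy of the internal drive to the external disk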

Next, I built a new USB disk with the OpenBSD 6.1 Release installation media image.  I had to go into the BIOS / UEFI settings and change the boot order for the USB drive to boot first, which I had already done to get Mint working in the previous step.  However, to get OpenBSD working, I also had to change the boot type to “Legacy Boot.”  I found documentation that says OpenBSD should work in UEFI mode, but it refused to install in that mode, so I’m noting it here that this setting had to be changed.  I also took advantage of the time to turn on the virtualization settings in the BIOS / UEFI, because I plan to play with the new vmm commands at some point.

Once I got OpenBSD safely installed using the default “use whole disk” and default partition scheme, I set up my package URL settings and pulled down the xfce package and its dependencies.  I only did this because I use the XFCE desktop environment on Mint, and I wanted a familiar X experience to start with.  I don’t normally run X on OpenBSD, and it comes with some nice lightweight window managers by default, but I want to ease into playing with those, so I installed the fluff.  I also installed Firefox and its dependencies, because I intend to do a lot of the work for both UnixSecLab and Jack of all Hobbies from this machine, moving forward.  To get “startx” to load XFCE, I created a “.xinitrc” file containing just the line “xfce4-session” in each of my normal users’ home directories, for now.  I am unhappy with the fact that as I type, if I brush the touchpad, it causes the mouse to steal focus and my typing either jumps around, or accidentally “clicks” on something it shouldn’t.  There is a utility built into OpenBSD that should help prevent that behavior, I just haven’t taken the time yet to set it up and tweak it.  It’s on my to-do list.
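
For anyone following along, the package setup amounted to something like the following sketch.  The mirror URL is just an example, and on 6.1 pkg_add can read the mirror from /etc/installurl (older setups used the PKG_PATH environment variable instead):

echo "https://ftp.openbsd.org/pub/OpenBSD" > /etc/installurl   # as root: tell pkg_add which mirror to use
pkg_add xfce firefox                                           # pull the packages and their dependencies
echo "xfce4-session" > ~/.xinitrc                              # as each user: have startx launch XFCE
startx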

So far, everything works swimmingly well.  I have three users created: one is my “main” with doas permissions (doas replaced sudo in the base system recently), plus one each for the two blogs / businesses I’m running.

Speaking of “doas,” I’m actually happy that I have the option to install sudo as a package, while still retaining “doas” as the primary privilege escalation mechanism.  It means that when I do research and articles on sudo configuration, I can do all kinds of crazy configurations that are “broken” without breaking my production privilege escalation configuration.

Firefox works for all of my needs.  I’m able to do my Canva images for the “quote graphics” I’ve been posting lately.  I’m able to watch YouTube videos in HTML5 mode without issues, other than the ads not playing.  I’m okay with that, except that I sometimes like to let an ad run for 30 seconds before skipping it, so that the channel gets some credit for it and can get paid.  I like to support the people I like to watch, and that’s one way to do it.  You have to go to your YouTube settings to turn on HTML5 mode, if it’s not working for you by default.  Just google “youtube html5” to find the link for it.  Speaking of Google, all of its tools work as expected.  The only thing I can’t do from the OpenBSD machine that I can do from my Windows laptop is play Guild Wars 2, I think.

I still have some of my older series content that needs finalization, such as the SSH Start to Finish series, which should be easier to get going again on this new setup.

All in all, I’m happy with the system, but I’m an OpenBSD fan boy.  It’s hard to get mad at a system that generally “just works.”

Book Review – Networking for Systems Administrators, Chapter 6

Since we aren’t doing “Monday this” and “Friday that” for a while, I thought I should leave off the usual title prefix.  I’m also continuing the chapter-by-chapter review today to ease back into the writing.  This won’t be every Monday, but I need to mix these in every now and then to keep the series from trailing off before the reviews are finished.

This chapter focuses on viewing network connections.  This is useful for troubleshooting, diagnostics, and performance data gathering.  The chapter goes into detail on displaying live ports (TCP, UDP, or both), filtering by state such as “established,” identifying the ports, and identifying the programs that own those ports.  The netstat command is discussed heavily, but lsof and sockstat make an appearance as well.

As mentioned by the author, there is no common command for displaying which programs own which ports.  The lsof command is ported to many platforms, but is not always an option.

As an example of how to deal with this on AIX (not specifically covered by the book), you need two commands.  First, run the netstat command with the -A flag to get the socket’s protocol control block address, then pass that address for the specific port to the “rmsock” command with “tcpcb” as the last parameter.  This will show you the program that owns the socket, even though you aren’t actually removing the socket at all.
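
A sketch of that two-step dance (the PCB address shown here is made up; use whatever netstat prints in the first column for your port):

netstat -Aan | grep LISTEN       # the first column is the protocol control block address
rmsock f1000e0000b6bb58 tcpcb    # reports the owning process instead of actually removing the socket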

Also, on OpenBSD you can use the fstat command, though this was not covered by the author.
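
For instance, to list open sockets along with their owning processes, something like this works (filtering is left to grep):

fstat | grep internet    # socket lines include the owning user, command, and PID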

Thanks for reading, and remember to check out the social media links from this site!

Returning next week.

My break has taken a bit longer than intended. I want to apologize for that, but rest assured, I’m coming back next week. This post is to give a heads up on the re-structuring of both UnixSecLab and Jack of all Hobbies.

Until further notice, the mailing list sign-ups (one per site) will only be a summary feed of the previous week’s blog posts for that site. I may bring back the extra content eventually, and I may continue to use the mailing list for the rare but occasional promotional advertisement for products or services I offer.

Also, until further notice, the minimum number of posts per week (starting NEXT week) will be ONE post on Monday (for UnixSecLab) and ONE post on Thursday (for Jack of all Hobbies).

I’m trying to run two different sites with two different focus topics. I work an 8am to 5pm job with a one-hour commute each way, which means I lose two hours out of my day just driving on the interstate. I also have nine beautiful children that need some of my attention, a loving wife that needs some of it, and so on. My time is more limited than I imagined when I first started these ventures, so I’m having to restructure. I may be able to get into a groove and produce more content in a week eventually, but for now I’m backing things down.

In lieu of the extra blog posts, I will be doing more on social media. Posting to Twitter (UnixSecLab), Twitter (Jack of all Hobbies), Facebook, LinkedIn, and so on takes less of my time than writing a full detailed blog post would, so I’ll be more active there.

You can expect the first content blog post for UnixSecLab under the new format starting June 5th. The first Jack of all Hobbies blog post under the new format will be available June 8th.

I also have some exciting plans for the not too distant future, so stay tuned.

I hope to catch you all here and on social media, and as always, leave comments, suggestions, or other verbiage in the comments for this post.

Weekend Wrap Up – No real technical content today

This last weekend was crazy busy.  We’ve had heavy showers and mild storms hitting our area off and on most of the week, but this weekend we got hit hard.  Several towns and communities surrounding us have been heavily flooded due to the torrential rains.  A friend of the family showed us a rain gauge that measured more than seven (7) inches of rain in one day.  Some of the flooding was so bad that one lane of one of the roads literally washed away.  While 7 inches isn’t anywhere near what Queensland, Australia got recently (almost a foot and a half or so), the power of the flow was significant, and our county has been declared a disaster area.

Saturday morning, before the heavy stuff hit, I gave a presentation on Permaculture to a local community group.  It went well, and if you are interested in following that side of my life on a regular basis, check out my non-tech hobby page: Jack of all Hobbies.

Sunday morning at midnight, I headed in to Little Rock.  This was right in the middle of the flooding mess, and my car stalled several times on the way in to work.  I was at work until almost noon, and then I came home and crashed (slept).

Due to the crazy experiences we’ve had with the weather, the work schedule, and the presentation I gave, I’m cancelling today’s normal content to just provide an update on life.  Crazy busy.

Thanks again for reading, and hopefully I can get back on track quickly.