BASH nested while loop issues

A week or two ago, a teammate had an issue with a nested while loop in a BASH script.  It ran just fine under KSH, but when he ran the exact same script under BASH, he got “unexpected results.”  The script piped output into a while loop to feed it, and both of the nested loops were fed this way.

Some of you may have already guessed what the issue was, but I wanted to go into a little detail here, because it is important to understand why two very similar shells behave so very differently sometimes.  In this case, it has to do with how BASH deals with an execution chain.

When you go full-on Mario (chaining commands together with lots of pipes,) BASH has a unique feature (as compared to KSH) where it stores the exit code for EACH PIECE of the pipeline in an array.  To do this, it runs each piece of the pipe in its own subprocess fork, which is how it is able to grab every individual exit status to store in the array.  That forking is what I mean by “hijacking” the pipeline chain, and when you have a loop that relies on receiving output from a pipe, oddities ensue.  Because the while loop at the end of the pipe is itself a “fork,” any variable it sets or changes disappears when that fork exits, so the contents of the variable aren’t what we expect inside the second loop, and the output looks wrong unless we understand that it is forking.  KSH doesn’t do this; it runs the last piece of the pipeline in the current shell, so nothing is lost and the loops run just fine.  After I realized what was going on, I suggested changing the command to not use a pipe to feed the loop, and we found a workable solution that runs on both BASH and KSH seamlessly.  I don’t recall exactly, but I believe I had them change it from an “echo ‘something’ | while … do … done” into a “while … do … done < <( echo ‘something’ )” style of loop to fix it.
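If you want to see the difference on your own machine, here is a minimal sketch (not the actual script from work) that demonstrates both the forking behavior and the redirect-style fix:

# In BASH, the while loop at the end of a pipe runs in a forked subshell,
# so the counter it increments is lost when the loop finishes.
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$(( count + 1 )); done
echo "piped loop counted: ${count}"        # prints 0 in BASH, 3 in KSH

# Feeding the loop from a redirect keeps it in the current shell.
count=0
while read -r line; do count=$(( count + 1 )); done < <( printf 'a\nb\nc\n' )
echo "redirected loop counted: ${count}"   # prints 3 in BASH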

The BASH special array is called PIPESTATUS, and it is useful for troubleshooting individual steps in a complicated pipeline, but it can cause issues if you don’t know how the behavior affects the pipes in play.  In the case of the nested loop, losing the values assigned to the variable inside the piped loop was the problem.
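A quick way to see the array in action:

false | true
echo "${PIPESTATUS[@]}"    # prints "1 0": false exited 1, true exited 0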

OpenBSD laptop follow up, OpenSSL SAN, and BASH process substitution

I got really busy this last week, so I didn’t show up on social media as much as I would have liked.  I also didn’t respond right away to the comment on last week’s post.  Since the question came up, I thought I’d mention it here, on top of my direct response, for anyone else wondering about Google Hangouts on the OpenBSD laptop.

The short answer is, I don’t use Hangouts, so I didn’t test it before the question was raised.  The long answer is that Google has restrictions on which browsers the video chat will work in, and the newest versions of Firefox aren’t on the list.  I believe I read somewhere that Google is working on correcting this, but until they do, video chat is a no go for this setup.  On the other hand, the text chat works fine.

I also wanted to mention the program I was going to test for temporarily disabling the touchpad.  The program is “syndaemon,” and if it had worked, I could have just dropped a line reading “syndaemon -d” into my .xinitrc file.  Unfortunately, it doesn’t work on this laptop.  I get an error saying it is “Unable to find a synaptics device.”  I’ll have to dig further into how the pointer is recognized and see if there are any alternatives.

Now that that’s out of the way, I thought I’d dive into the discussion about using “subject alternative names” with a certificate signing request.

In the past, I’ve recommended tacking a “subjectAltName=DNS.1=host.domain.tld,DNS.2=althost.domain.tld” type of string onto the end of the -subj flag that contains the OU information for the request.  This apparently doesn’t generate a certificate that some authorities will accept, so we’re forced to use the x509 extensions to pull the SAN entries in.  Almost every recommendation out there says to create a temporary openssl.cnf file, append the SAN section to it, and then generate your certificate while pointing at that temporary config file.  There is a good reason for this.  The way to do it without a hard coded temporary file is to take advantage of BASH’s “process substitution” ability.  KSH93 supposedly has this feature as well, but when I tested ksh93 on AIX, it didn’t work, so test your own ksh93 before assuming this will benefit you.  Otherwise, stick with bash.

The temporary file would normally be a copy of whatever the openssl.cnf default configuration file is, plus the appended [SAN] section so that the extensions can be requested.  In order to find the location of the default configuration file, we would run this command:

openssl version -d | awk '{print $NF}' | tr -d '"'
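For context, the raw output of that command looks something like this on a typical Linux box (the directory varies by platform), which is why the awk and tr are needed to peel out the bare path:

OPENSSLDIR: "/etc/ssl"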

This gives us the directory where it lives.  We then tack on the “openssl.cnf” for the full path like so:

$( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf

So, if we wanted to create that temporary file, we might do this:

( cat $( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf ; printf "\n[SAN]\nsubjectAltName=DNS:%s.domain.tld,DNS:alt%s.domain.tld" ${HOSTNAME} ${HOSTNAME} ) > openssl.temp.cnf

Then we would point the -extfile or -config flag at this temporary file.  However, since we’re being stubborn, we’ll use BASH’s process substitution to do this, instead.

openssl req -nodes -newkey rsa:4096 -keyout ${HOSTNAME}.domain.tld.key -out ${HOSTNAME}.domain.tld.csr -sha256 -subj "/C=US/ST=Arkansas/L=Conway/O=UnixSecLab/OU=TheLab/CN=${HOSTNAME}.domain.tld" -config <( cat $( openssl version -d | awk '{print $NF}' | tr -d '"' )/openssl.cnf ; printf "\n[SAN]\nsubjectAltName=DNS:%s.domain.tld,DNS:alt%s.domain.tld" ${HOSTNAME} ${HOSTNAME} ) -reqexts SAN -extensions SAN

Whew.  That’s a lot to take in.  The “<( cat … )” is BASH process substitution.  Instead of creating a variable that contains all of the output from the “cat” and “printf” commands, it sticks that output into a file descriptor located at /dev/fd/## (where ## is the file descriptor number in use.)  Think of it as kind of a temporary named pipe/FIFO.  Since the openssl command requires an actual file it can do an “open” on when dealing with the -config or -extfile flags, we can’t pipe things in normally.  Our only options are to create an actual temporary file, or to create a named FIFO to talk to (which is overkill, so the temp file is better.)  BASH lets us kind of sort of create that with process substitution, without having to clean up after ourselves by removing the FIFO file.
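If you’ve never played with process substitution before, here’s a quick illustration (file_a and file_b are just hypothetical stand-ins):

echo <( true )                            # prints the path handed to the command, e.g. /dev/fd/63
diff <( sort file_a ) <( sort file_b )    # classic use: compare two command outputs as if they were files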

Is this practical?  Probably not.  It is less effort to do the temporary file and clean it up after, and more portable, as well.

Am I stubborn?  Absolutely.  That’s what led to my whipping up the abomination above.  Would I recommend this to others?  Not really.  Again, just go with what’s practical.  There’s a reason the temporary file approach is what most of the online commentary on Subject Alternative Names (SAN) recommends.

OpenBSD as a “Desktop” (Laptop)

My daughter has finally finished transferring all of her files to her new laptop, and I have graciously inherited the old one.  The laptop is a Lenovo Flex 3.  It has one of those screens that folds all the way around, plus a touch screen, so you can sometimes treat it like a “tablet.”  It has one RJ45 jack wired to a Realtek 8168 chipset ethernet device that can do up to 1000BaseT full-duplex, which is nice.

The built in wireless is an unsupported chipset, so I’m not using wireless networking on it, yet.  I’ll probably pull the adapter from the Hak5 field kit for use with this laptop.

There is no CD or DVD drive included.  This meant having to use a USB thumb drive for the installation.  This wasn’t a problem, just something to note.

The first thing I did was use the Linux Mint 18.1 thumb drive I already had laying around from building my new work laptop, so that I could boot the machine into a live system and take a disk image.  I plugged in my 3TB Western Digital external drive, ran “dd” to dump the internal drive to an image file on the external drive, and then put the Western Digital back in its place on the shelf.
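For the curious, the dd invocation was roughly along these lines (the device and mount point names here are just examples; check yours with lsblk before copying anything):

sudo dd if=/dev/sda of=/mnt/wd/flex3-backup.img bs=4M status=progress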

Next, I built a new USB disk with the OpenBSD 6.1 Release installation media image.  I had to go into the BIOS / UEFI settings and change the boot order for the USB drive to boot first, which I had already done to get Mint working in the previous step.  However, to get OpenBSD working, I also had to change the boot type to “Legacy Boot.”  I found documentation that says OpenBSD should work in UEFI mode, but it refused to install in that mode, so I’m noting it here that this setting had to be changed.  I also took advantage of the time to turn on the virtualization settings in the BIOS / UEFI, because I plan to play with the new vmm commands at some point.

Once I got OpenBSD safely installed using the default “use whole disk” and default partition scheme, I set up my package URL settings and pulled down the xfce package and its dependencies.  I only did this because I use the XFCE desktop environment on Mint, and I wanted a familiar X experience to start.  I don’t normally run X on OpenBSD, and it comes with some nice light weight window managers by default, but I want to ease into playing with those, so I installed the fluff.  I also installed Firefox and its dependencies, because I intend to do a lot of the work for both UnixSecLab and Jack of all Hobbies from this machine, moving forward.  To get “startx” to load XFCE, I created a “.xinitrc” file in my normal users’ home directories that just contains the line “xfce4-session” for now.  I am unhappy with the fact that, as I type, brushing the touch pad causes the mouse to steal focus, so my typing either jumps around or accidentally “clicks” on something it shouldn’t.  There is a utility built into OpenBSD that should help prevent that behavior, I just haven’t taken the time yet to set it up and tweak it.  It’s on my to-do list.
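The package and X setup boiled down to roughly the following (the mirror URL is just an example, and the package names are from memory, so adjust to what your mirror actually offers):

echo "https://ftp.openbsd.org/pub/OpenBSD" | doas tee /etc/installurl
doas pkg_add xfce firefox
echo "xfce4-session" > ~/.xinitrc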

So far, everything works swimmingly well.  I have three users created: one is my “main” with doas permissions (doas replaced sudo recently,) as well as one each for the two blogs / businesses I’m running.

Speaking of “doas,” I’m actually happy that I have the option to install sudo as a package, while still retaining “doas” as the primary privilege escalation mechanism.  It means that when I do research and articles on sudo configuration, I can do all kinds of crazy configurations that are “broken” without breaking my production privilege escalation configuration.

Firefox works for all of my needs.  I’m able to do my Canva images for the “quote graphics” I’ve been posting, lately.  I’m able to watch YouTube videos in HTML5 mode without issues, other than the ads not playing.  I’m okay with that, except that I sometimes like to let the ad run for 30 seconds before skipping it, so that the channel gets some credit for it, and can get paid.  I like to support the people I like to watch, and that’s one way to do it.  You have to go to your YouTube settings to turn on HTML5 mode, if it’s not working for you by default.  Just google “youtube html5” to find the link for it.  Speaking of Google, all of its tools work fine as expected.  The only thing I can’t do from the OpenBSD machine that I can do from my Windows laptop is play Guild Wars 2, I think.

I still have some of my older series content that needs finalization, such as the SSH Start to Finish series, which should be easier to get going again on this new setup.

All in all, I’m happy with the system, but I’m an OpenBSD fan boy.  It’s hard to get mad at a system that generally “just works.”

Book Review – Networking for Systems Administrators chapter 6

Since we aren’t doing “Monday this” and “Friday that” for a while, I thought I should leave off the usual title prefix.  I’m also continuing the chapter by chapter review today to ease back into the writing.  This won’t be every Monday, but I need to mix these in every now and then to keep the reviews from trailing off before they are finished.

This chapter focuses on viewing network connections.  This is useful for troubleshooting, diagnostics, and performance data gathering.  The chapter goes into detail on displaying live ports, tcp/udp/both, filtering by state (such as “established” connections), identifying the ports, and identifying the programs that own those ports.  The netstat command is discussed heavily, but lsof and sockstat make an appearance, as well.
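For a flavor of the kinds of invocations the chapter covers (flags vary by platform, so treat these as examples rather than gospel):

netstat -an | grep LISTEN    # live listening ports
lsof -i -P -n                # ports plus the owning programs, where lsof is available
sockstat -l                  # FreeBSD: listening sockets with the owning program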

As mentioned by the author, there is no common command for displaying which programs own which ports.  The lsof command is ported to many platforms, but is not always an option.

As an example of how to deal with this on AIX (not specifically covered by the book,) you need two commands.  First, run the netstat command with the -A flag to get the socket’s protocol control block address, then pass the address for the specific port to the “rmsock” command with “tcpcb” as the last parameter.  This will show you the program that owns that socket, even though you aren’t actually removing the socket at all.
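A hypothetical run for something listening on port 22 would look like this (the PCB address below is made up; use whatever your netstat output shows):

netstat -Aan | grep LISTEN | grep '\.22'
# the first column is the PCB address, e.g. f1000e0000b6c3b8
rmsock f1000e0000b6c3b8 tcpcb
# rmsock reports the owning PID and program name; nothing is actually removed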

Also, on OpenBSD you can use the fstat command, though that one is not covered by the author.

Thanks for reading, and remember to check out the social media links from this site!

Returning next week.

My break has taken a bit longer than intended. I want to apologize for that, but rest assured, I’m coming back next week. This post is to give a heads up on the re-structuring of both UnixSecLab and Jack of all Hobbies.

Until further notice, the mailing list sign ups (one per site) will only be a summary feed of the previous week’s blog posts for that site. I may bring back the extra content, eventually, and I may continue to use the mailing list for the rare but occasional promotional advertisement for products or services I offer.

Also, until further notice, the minimum number of posts per week (starting NEXT week) will be ONE post on Monday (for UnixSecLab) and ONE post on Thursday (for Jack of all Hobbies.)

I’m trying to run two different sites with two different focus topics. I work an 8am to 5pm job with a one hour commute to and one hour commute from work each day. This means I lose two hours out of my day just driving on the interstate. I also have nine beautiful children that need some of my attention, a loving wife that needs some of it, and so on. My time is more limited than I imagined when I first started these ventures, so I’m having to restructure. I may be able to get into a groove and get more content in a week eventually, but for now I’m backing things down.

In lieu of the extra blog posts, I will be doing more on social media. Posting to Twitter(UnixSecLab), Twitter(Jack of all Hobbies), FaceBook, LinkedIn, and so on takes less of my time than writing a full detailed blog post would, so I’ll be more active there.

You can expect the first content blog post for UnixSecLab under the new format starting June 5th. The first Jack of all Hobbies blog post under the new format will be available June 8th.

I also have some exciting plans for the not too distant future, so stay tuned.

I hope to catch you all here and on social media, and as always, leave comments, suggestions, or other verbiage in the comments for this post.

Weekend Wrap Up – No real technical content today

This last weekend was crazy busy.  We’ve had heavy showers and mild storms hitting our area off and on most of the week, but this weekend we got hit hard.  Several towns and communities surrounding us have been heavily flooded due to the torrential rains.  A friend of the family showed us their rain gauge, which measured more than seven (7) inches of rain in one day.  Some of the flooding was so bad that one lane of one of the roads literally washed away.  While 7 inches isn’t anywhere near what Queensland, Australia got recently (they got almost a foot and a half or so,) the power of the flow was significant, and our county has been declared a disaster area.

Saturday morning, before the heavy stuff hit, I gave a presentation on Permaculture to a local community group.  It went well, and if you are interested in following that side of my life on a regular basis, check out my non-tech hobby page: Jack of all Hobbies.

Sunday morning at midnight, I headed to Little Rock.  This was right in the middle of the flooding mess.  My car stalled several times on my way in to work.  I was at work until almost noon, then came home and crashed (slept.)

Due to the crazy experiences we’ve had with the weather, the work schedule, and the presentation I gave, I’m cancelling today’s normal content to just provide an update on life.  Crazy busy.

Thanks again for reading, and hopefully I can get back on track quickly.

Fun-Day Friday – Book Review – Networking for Systems Administrators chapter 5

Last week, we actually did Chapters 3 and 4 together, but I only titled it Chapter 3.  Chapter 3 was the “IPv4” portion, and chapter 4 was the “IPv6” portion.  Today, we go over Chapter 5, for TCP/IP layer 4 Transport.

The Transport layer covers more than just TCP.  It includes TCP, UDP, and ICMP, as well as some less commonly known protocols such as SCTP (Stream Control Transmission Protocol,) ESP (Encapsulating Security Payload,) and AH (IPsec Authentication Header.)  Lucas mentions that there are dozens more, but these were the opening examples.

He goes into more depth about ICMP, TCP, and UDP, including roles, protocols, and troubleshooting.  After this, we get a bit of information on sockets, and how to configure the common services.

Finally, he covers TCP connection states and the three way handshake of SYN, SYN-ACK, and ACK that is required to reach an ESTABLISHED session.  He touches briefly on the protocols file that maps names to protocol numbers, and then wraps up the chapter.
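If you want to poke at that protocols file yourself, it lives in the same place on most Unix-likes:

grep -w -e tcp -e udp -e icmp /etc/protocols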

Next week, we’ll look at Chapter 6, which is “viewing the network.”

AIX notifications

In line with my lack of a “proper” Monday post, I’m skipping the “proper” Wednesday post to bring you an AIX specific tool I was asked to research for an internal project this week at work.

Many of you are likely familiar with Linux’s “inotify” facility, which can be used to trigger a response to a file change in real time.  This is great for filesystem events.  You might even be able to crowbar it into notifying on process death events by monitoring something out of /proc, if you’re brave enough to go poking around in there.
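For comparison, on a Linux box with the inotify-tools package installed, a simple file change monitor is a one-liner (the path is just an example):

inotifywait -m -e modify -e create -e delete /etc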

On AIX, there is no native “inotify.”  There is, however, a special file system called the “Autonomic Health Advisor File System (AHAFS.)”  This is available by default, and is easy to set up.

Our particular scenario was to monitor when a specific process dies.  We want a real time notification and a log of the event.  I spent a few hours reading up on how this works, then played with it until I was sure I understood the best way to use it, and this is what I came up with.

There is a sample perl script “aha.pl” in the /usr/samples/ahafs/bin directory.  This can be used out of the box for most things.  In order to monitor a process, we need to know the path it is in.  In my example for this blog post, (not the actual service I needed to monitor,) we’ll say we want to monitor the apache httpd service.  It will be located at /usr/local/apache/bin/httpd for this example.

In order to monitor the death of this process, we would need to mimic that filesystem structure underneath the appropriate “Event Factory” directory.  In this case, that would be /aha/cpu/processMon.monFactory/ which means our final structure would be this:

/aha/cpu/processMon.monFactory/usr/local/apache/bin/

Before we just go making the directory, though, we need to mount the ahafs filesystem to “/aha.”  This is as simple as:

sudo mkdir /aha
sudo mount -v ahafs /aha /aha

Once that’s done, we can build out the structure.  The “cpu/processMon.monFactory” was created by mounting, but we need the subdirectory structure.

sudo mkdir -p /aha/cpu/processMon.monFactory/usr/local/apache/bin

Now we can monitor at any time.  The monitor itself will sit in “wait” status until the process being monitored dies, then it will print some event information.  This means that we key off of this monitor process coming to completion before we do any notifications of our own.  My recommendation is to tee the output of the event notification with “-a” to append, so that you have a log you can review, then have it email your support email account.

/usr/samples/ahafs/bin/aha.pl -m /aha/cpu/processMon.monFactory/usr/local/apache/bin/httpd.mon "CHANGED=YES;INFO_LVL=3" 2>&1 | tee -a /var/log/apache.monitor.log | mailx -s "Apache died on $( hostname )" support@my.example.com

This should probably be placed within a script, and that script should be called after starting apache.  Of course, this is probably a terrible process to monitor, since httpd spawns off forked child processes all the time to handle requests, but it was an easy one to use as an example to get the point across on how to use this.

Also, it should be noted that for this specific scenario (PID dies, we want notification when it does,) if the process in question is one that doesn’t daemonize on its own, you can probably get away with just doing this:

echo "/usr/local/bin/my_process" | at now

When the process dies, the at job “dies” and you get an email notification from the at job termination itself.  You don’t get a local log appended, and you are relying solely on email for the notification, but it’s less work overall.

General Rant – Corporate Policies

We interrupt our regularly scheduled post on Sudo to bring you a General Rant.

Last week, I was oncall. We also received new laptops for our team. The current corporate policy states that we cannot have two laptops at the same time. The problem is that most members on our team actually run some flavor of Linux, which means we manage our own OS installation. I backed up all of my stuff on Friday when I came off of oncall, downloaded a new OS ISO to “burn” to a USB thumb drive, and dropped off the old laptop. I picked up my new one before leaving for the day. The new one doesn’t have a CD or DVD drive of any sort. This is why I had to use a USB drive.

Today (Saturday as I write this,) I began the process of installing the new OS. I interrupted the boot, went into the UEFI/BIOS to set the boot order, and… nothing. The USB drive doesn’t show as a boot option. It’s in a list at the bottom, but it’s not selectable as a boot device. I was going to wait until Monday to borrow the external DVD drive at work, but I was informed that we had bought one of those for my daughter at some point.  I borrowed hers, downloaded the ISO (again,) and burned it to disc.

The drive we have here at the house?  Same problem.  It shows up in the list, but not as a bootable media option like it should.  It could be driver related, but it shouldn’t be.  I’ll try again on Monday with the one we have at work, but I may have to hand this laptop back to Corporate as “faulty” if that one doesn’t work either.  It’s also possible that they applied some kind of corporate policy to lock that out, and I don’t know the hoodoo voodoo to make it work from that perspective.

In the mean time, I’ve got plenty to work on for my home business(es) and will focus on those since this attempt was a bust.

It would have made more sense if they had let us have both laptops (new and old) at the same time for say… a week.  Due to the nature of my job, I can’t leave one sitting in their little build space while I remote in to set things up, since that would give them physical access.  Then again, “I got trust issues.”

Next rant…

I would build this with OpenBSD, but there are some software restrictions.  First, our Corporate VPN solution requires a specific agent that isn’t available on OpenBSD.  There is a third party client port available in the ports and packages, but the VPN service itself is configured to reject that.  There’s also a chance that the consoles we have that require Java won’t work with the Java clients available on OpenBSD.  I know that the OpenJDK has had issues in the past for some of them, so we download Oracle’s version, and I’m not aware of an Oracle provided OpenBSD build of Java.  Then again, I haven’t looked in a while.  I could be wrong, and one might be available.  I would have a higher peace of mind if I could run OpenBSD, rather than some flavor of Linux, but Linux is better than Windows any day of the week.

General Rant ending transmission.