Persistence through job control – inittab

The inittab file is often one of the first things loaded during the initialization process.  This file contains a list of processes that need to be run very early in the boot process, may need a “watchdog” to restart them on crash, and so on.  It also often contains the settings for defining and securing TTYs.  What we are concerned with is the “respawn” feature, since we want a place to drop a persistent process on the box.

Before we proceed, note that more modern varieties of Linux don’t use inittab, but have similar functions.  Red Hat Enterprise Linux (RHEL) version 6 has “upstart,” which replaces the inittab with a directory containing individual files per process to be started.  RHEL version 7 has “systemd,” which uses “unit” files to control just about everything.  We’ll cover those in next week’s article.  The “inittab” is a SystemV-ism, and OpenBSD doesn’t use it.  Instead of an “inittab,” startup begins in the “rc” scripts, and the TTYs are defined and secured in “/etc/ttys” instead.  The reason for this is that inittab is tied to the concept of run levels, and OpenBSD doesn’t really have those.

On AIX, the inittab is managed by a set of commands similar to many other base AIX commands (mk-, ch-, rm-, and ls- prefixed.)  The mkitab, chitab, rmitab, and lsitab commands create, change (modify,) remove, and list inittab entries (respectively.)  Let’s break down an inittab entry, using AIX’s inittab as a baseline.

Each entry is colon “:” separated, similar to /etc/passwd.  Lines are commented with a semi-colon “;” at the beginning of the line.  The first column is the label for the entry.  The second column is the run level that triggers the running of this command.  The third column is the “action” to take on the command.  This is where we define “run this one time at boot” vs. “run this again if it dies.”  The last column is “the command(s) to run.”  The reason I said “command(s)” is that this can be a chain of commands piped together, not just a singular command.
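To make the field breakdown concrete, here is a quick sketch of pulling one entry apart with awk and cut (the entry itself is invented for illustration):

```shell
# A sample inittab entry -- invented for illustration.
entry='softaud:2:respawn:/usr/bin/audit start'

label=$(echo "$entry" | awk -F: '{print $1}')      # column 1: label
runlevel=$(echo "$entry" | awk -F: '{print $2}')   # column 2: run level
action=$(echo "$entry" | awk -F: '{print $3}')     # column 3: action
command=$(echo "$entry" | cut -d: -f4-)            # column 4: command(s);
                                                   # -f4- keeps any later colons
echo "label=$label runlevel=$runlevel action=$action command=$command"
```

Note the “-f4-” on the cut: the command field may itself contain colons (in a pipeline, for instance,) so everything from the fourth field onward belongs to it.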

To create an inittab entry (again, using AIX as an example,) we would do something similar to this:

mkitab "softaud:2:respawn:/usr/bin/audit start"

In the above example, we created an inittab entry labeled “softaud” that starts at runlevel 2, respawns on death, and calls the “/usr/bin/audit” command, passing it “start” as an argument.  Since “audit” lives at “/usr/sbin/audit” on AIX, this entry is suspect.  If we were to leave a persistent script named “audit” in /usr/bin and then add this inittab entry, our script would be respawned on death, and would likely be overlooked by the systems administrators reviewing the file.  If we wanted to inject this after the existing entry already labeled as “audit,” we could pass that mkitab command the “-i” flag followed by the label we want to append after.  Why “-i” for “append?”  You would have to ask IBM.

mkitab -i audit "softaud:2:respawn:/usr/bin/audit start"

The possible “actions” that can be passed to an inittab are listed in the man pages, but a few common ones are:
"once" - When the init command enters the run level specified for this record, start the process, do not wait for it to stop and when it does stop do not restart the process. If the system enters a new run level while the process is running, the process is not restarted.
"respawn" - If the process identified in this record does not exist, start the process. If the process currently exists, do nothing and continue scanning the /etc/inittab file.
"wait" - When the init command enters the run level specified for this record, start the process and wait for it to stop. While the init command is in the same run level, all subsequent reads of the /etc/inittab file ignore this object.

Some systems include commands to make changes to inittab (such as we see with AIX.)  Other systems require you to modify the file directly using your favorite editor (vi, for example.)

Whatever your system requires, after a change is made, it only takes effect if you tell the “init” process to re-read the inittab.  To do this, you use the “telinit” command and pass it the “q” option.

telinit q

Once the file has been re-read, init will happily respawn the persistent script that was injected.  A thorough audit of the inittab would involve going through the list of all commands in the file, checking that those processes exist at the locations given, doing a file type check, a checksum check, and when possible, a “contents” check to see that they are legitimate.
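A starting point for that audit might look like the loop below.  It only does the existence and file type checks; the checksum compare against a known-good baseline is left as a comment, and the inittab fragment here is made up so the sketch runs anywhere (point INITTAB at /etc/inittab on a real system):

```shell
# Demonstration: walk an inittab, pull the command path out of each
# entry, and flag anything that does not exist on disk.
INITTAB=$(mktemp)
cat > "$INITTAB" <<'EOF'
init:2:initdefault:
softaud:2:respawn:/usr/bin/audit start
EOF

out=$(grep -v '^[:#;]' "$INITTAB" | while IFS=: read -r label runlevel action cmd
do
    [ -z "$cmd" ] && continue        # entries like initdefault have no command
    prog=${cmd%% *}                  # first word of the command field
    if [ ! -e "$prog" ]; then
        echo "MISSING: $label -> $prog"
    else
        file "$prog"                 # file type check; compare a checksum
    fi                               # against a known-good baseline here
done)
echo "$out"
rm -f "$INITTAB"
```

This only inspects the first word of the command field; a chain of piped commands would need each element checked in turn.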

Persistence through job control – Introduction

The next few weeks will be a quick discussion of the different kinds of “job control” available, and how to potentially use them for persistence post compromise.  This is the kind of outside-the-box thinking that is needed when hunting for “persistence” issues if you believe your machine has been compromised.

Before we get to job control “proper,” though, I wanted to walk through the top level (from an OS perspective) options, and that means we need to look at what happens when you boot your server.  Typically there is a phase with some kind of power on self test, the BIOS or UEFI equivalent initializes hardware, and then, through magic smoke and mirrors, it finds and loads the boot loader.  The boot loader loads the kernel and starts the init process.

There are attacks at the various hardware/firmware levels.  We won’t look at those today.  There are attacks at the boot loader level.  We also won’t look at those today, though we may come back to this topic at a later date.  A rootkit can replace your kernel, and we’re not going to look at those today, either.  Instead, we’re going to start with “init” and work our way down from there.  The techniques and topics we’ll cover are things that are “less intrusive” (since they don’t replace or modify firmware, the kernel, or user land programs to hide activity.)

The init system has several components, and depending on the “style” of init, they are configured in different ways.  We’ll briefly cover these pieces today, then go into detail on how to focus on any single component later.

The first piece we’ll cover in more detail later is the ‘inittab’ component.  This is a file that controls respawning of critical processes if they die, among other things.

The next piece is the ‘rc’ system.  This includes SystemV style initialization scripts, systemd “units” (shudder,)  and similar.  There are many variations on this, but I’ll try to cover the most common SystemV, BSD, and systemd styles, and most systems will use some variation on these, so if you’re familiar with what I present, you should have little trouble picking up what’s going on with one that’s just similar to these, but not identical.

Finally, we’ll look at the ‘inetd’ or ‘xinetd’ systems, as well as the systemd equivalent.

After we get through the “init” system, we’ll continue the topics with actual job control and scheduling programs such as cron, at, and shell background jobs.

Book Review – Networking for Systems Administrators chapter 7

Since it’s been a few weeks since we did one of these, and I’d like to have this book review finished before the end of the year, it’s time for another chapter review of Networking for Systems Administrators.

Chapter 7 is fairly short, and focuses on “Network Testing Basics.”  It doesn’t cover tools so much as mindset.  From a Systems Administrator point of view, when troubleshooting network issues, our goal is to determine what is coming into or going out of the server.  Is the data we believe is leaving the server the data actually leaving the server?  Is the data coming in the data we expect?  There are plenty of tools to determine this at various levels, and in the end what we’re looking for is performance and correctness.  The data should match, and the performance should be within our expected parameters.  Anything outside of the norm should be investigated.

If the issue is at our end, it could be something as simple as a configuration issue with the application, or a bastion host firewall rule that shouldn’t have been turned on.  If it’s not at our end, it could be related to firewall, network access control lists (ACLs,) packet filters, or even proxy services.  Data can be blocked or mangled by some combination of the above, and once you can determine that it’s not the fault of your server or application, you can show the evidence to the network or firewall teams, and engage them for assistance in troubleshooting upstream from your machine.  Don’t blame the firewall first.  Check your own stuff first, gather evidence, then engage.
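To make “gather evidence” a little more concrete, a minimal first check from the server side might look like the sketch below.  The host and port are placeholders, and the connection test leans on BASH’s built-in “/dev/tcp” feature (a tcpdump capture, noted in the comment, would be the evidence to hand the network team):

```shell
# Placeholder endpoint -- substitute the service you are troubleshooting.
host=127.0.0.1
port=22

# Try to open a TCP connection.  /dev/tcp is a BASH feature, not a real
# file, so run this under bash; a failed open lands in the else branch.
if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    status="port reachable"
else
    status="port blocked or host down"
fi
echo "$status"

# For the evidence file, capture a few packets as root, e.g.:
#   tcpdump -c 20 -nn -w /tmp/evidence.pcap host "$host" and port "$port"
```

Either answer is useful: “reachable but wrong data” points at the application, while “blocked” points upstream, and the pcap backs up whichever claim you take to the firewall team.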

TMUX concepts

There are two tools I use religiously at work to deal with sending commands to multiple servers at the same time.  I use DSH for the quick and dirty one-liners, re-usable functions, and most things that don’t require a TTY/PTY allocation to complete.  For everything else, I use TMUX.

The tmux command works on the concept of sessions, windows, and panes.  A session represents a single detachable “connection” to the windows you want to work with.  A window is a single screen inside of a session.  A window can be a single “pane” that takes up the entire window, or it can be multiple “panes” that break the window up into multiple pieces, each with its own “shell” instance.

While “screen” has the ability to open a serial connection and behave as a console session, much like cu, tip, or minicom, tmux does not.  If you want a serial connection from a tmux pane, you’ll need to set up your pane, then call the serial connection from cu or tip (or similar.)  The developer(s) didn’t want to make it anything more than just an (awesome) multiplexer.  It does what it does very well, and that’s enough.

If you’ve ever worked with “screen” you know there’s a “command” prefix that needs to be passed in order to tell the multiplexer that you’re talking to it, and not just typing input to whatever pane you’re working from.  In “screen” that’s “ctrl-a” by default.  In tmux, it’s “ctrl-b” instead.

In order to split a screen horizontally (panes side by side,) you would do “ctrl-b %” for example.  In order to split it vertically (panes over each other,) you would do “ctrl-b "” instead (that’s ctrl-b followed by the double quote key.)  If you want to detach a session, you use “ctrl-b d” to get back to the shell you launched the session from initially.  Once it is detached, you need to know the session name.  By default, the first session created is session “0” (zero.)  To re-attach you pass this name to the “-t” flag like so:

tmux attach -t 0

If you need to see how many sessions exist (and their names,) you can use “tmux ls” which will list each session along with some basic session information (including how many windows are in it.)

If you want to make a new window within an attached session, you can use “ctrl-b c” to create the next available window.  Split this into any panes you need to use, and set up your work flow from there.

In order to synchronize across panes, you can use “ctrl-b :” and then “set synchronize-panes on” to tell it to send input to all panes simultaneously.  To turn that off, do the same, but change the “on” to an “off,” instead.

To move from one window to the next, you can use “ctrl-b n” and “ctrl-b p” (for next and previous.)  This lets you get to each window sequentially.

To move from one pane to the next, you can use “ctrl-b <arrow key>” to move input focus up, down, left, and right from the pane you are currently in.

Those are the bare essentials for dealing with tmux sessions, windows, and panes on a basic level.  More advanced features let you set hot keys (that may or may not require the prefix command,) change the prefix sequence, and so on.  And for scripting purposes, you can run “tmux” commands from the command line outside of an attached session, to control what’s going on inside that session.  This allows for some “expect” like behavior, if you’re inclined to dive deep and learn how to work with it.  I may dive deeper into the more advanced stuff, and provide samples of my “.tmux.conf” configuration file later.
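As a sketch of that outside-the-session control, here is a scripted setup and teardown.  The session name “demo” is arbitrary, and the script bails out quietly if tmux isn’t installed:

```shell
# Drive a detached tmux session entirely from outside it.
command -v tmux >/dev/null 2>&1 || { echo "tmux not installed"; exit 0; }

tmux new-session -d -s demo                   # detached session named "demo"
tmux split-window -t demo                     # add a second pane to window 0
tmux send-keys -t demo:0.0 'uptime' Enter     # "type" uptime into the first pane
tmux set-window-option -t demo:0 synchronize-panes on   # same toggle as ctrl-b :
tmux list-panes -t demo                       # confirm the layout we built
tmux kill-session -t demo                     # tear it all down
```

String enough of these together and you have a repeatable working environment: one script that builds the session, splits the panes, and starts your tools, ready for you to attach.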

Appropriate Technology – FIFO (Named Pipes)

A week or three back, I posted about how I (stubbornly) managed to do a “one-liner” for generating an OpenSSL certificate signing request that included a subject alternative name section using X.509 v3 extensions.  The OpenSSL command does not read the extensions from standard input for piping purposes, so it expects to open an actual file, instead.  The correct thing to do in that case is create a custom configuration file containing the extensions you want to use, and pass that as an argument.  My “one-liner” to avoid that took advantage of BASH’s “process substitution” feature.  This is really just a temporary FIFO structure that gets used “on the fly” by the shell, which means the output of any commands within the “<()” structure is fed to that temporary FIFO, and the path to the FIFO is what is actually returned to the program that needs it.  It tells the program how to open a file (really a named pipe) to pull the contents that were dynamically generated.
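You can see the mechanics for yourself (the extension line below is just a stand-in, not a full openssl config, and bash is invoked explicitly since process substitution is a BASH feature):

```shell
# Process substitution hands the consuming program a path, not the data.
# On Linux this prints something like /dev/fd/63:
bash -c 'echo <(printf "subjectAltName = DNS:example.com\n")'

# ...and the consumer opens that path like any ordinary file:
bash -c 'cat <(printf "subjectAltName = DNS:example.com\n")'
```

Substituting “cat” with a program that takes a filename argument (the way openssl takes a config file) is exactly the trick the one-liner relied on.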

This was an abuse of the technology to fit my needs.  There are, however, appropriate times and places to use FIFO/Named Pipe structures.  Here are a few to help make it clear why they are a “good thing” to know how to create and utilize.

Sometimes an administrator may have a desire to reboot a system after a major event, such as after a full system backup has completed successfully.  In this scenario, we have two scripts that would get called by cron (or some other scheduler.)  The first script to get called would be the “reboot” script that listens for an event that indicates the backup ran successfully.  The second script would be the backup script that calls the backup software, monitors for successful completion, and then notifies the “reboot” script of the status.  If the reboot script receives a “backup failed” message, it would end gracefully, and perhaps send an email stating things went south.  If it receives a “backup was successful” message, it would send an email stating things were successful, and then reboot the system.

“Why wouldn’t you use one script to just run the backup, then figure out whether to reboot or not?”  I can hear you asking this to yourself, but let’s complicate the scenario a bit.  Let’s say one of these scripts is kicked off from the backup server, itself.  Let’s say the other script is kicked off by the machine that needs to be rebooted.  Let’s say that there is a firewall that only allows the backup server to open a connection to the client, but the client can’t open a connection to the backup server.  A touch file could be used, but typically that means polling to see if the file exists, with a specific timeout duration before giving up entirely.  With a pipe, the communication is instant, because the read from the pipe is a blocking call.  This also allows the backup server to make a call out to the monitoring service API to disable alerts on ping for the client during the reboot, if you wanted to get fancy with it.

In essence, FIFO files are a poor man’s message broker service.  You set up a listening call on the pipe that wants to be triggered by the event of receiving data on the pipe, then when you’re ready for the event to happen, you send the data down the pipe to trigger the event.
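A stripped-down, single-machine sketch of that trigger pattern looks like this.  In reality the two ends would be separate scripts (on separate machines, in the scenario above,) and the “BACKUP_OK” message is invented for the example:

```shell
PIPE=$(mktemp -u)             # pick an unused path, then make it a FIFO
mkfifo "$PIPE"

# "Reboot" side: block until a status message arrives.  Backgrounded
# here only so one script can demonstrate both ends of the pipe.
( read -r status < "$PIPE"
  if [ "$status" = "BACKUP_OK" ]; then
      echo "would reboot now"            # stand-in for shutdown -r now
  else
      echo "backup failed, not rebooting"
  fi ) &

# "Backup" side: report success down the pipe; this unblocks the reader
# the instant the message is written.
echo "BACKUP_OK" > "$PIPE"

wait                          # let the listener finish before cleanup
rm -f "$PIPE"
```

Note that both the read and the write block until the other end shows up, which is exactly the “instant” behavior that makes this better than polling a touch file.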

FIFO structures can also stand in as files for programs that expect a file, as shown by the OpenSSL example, previously.  At one point, a teammate looked at using a FIFO for an nmon project he was working on, polling the servers for performance data at regular intervals.  I forget why he needed the FIFO, but it was a limitation with nmon itself that required the setup.