Fun-Day Friday – Children Are Sweet

So one of the things that defines who I am is my family. I started out life as an only child, but had cousins who sometimes felt more like brothers and sisters than cousins. When my mom re-married, I suddenly had lots of siblings, most of whom were already grown and gone with kids of their own, but I had one brother younger than me. Experiencing both “only child” and “brothers and sisters,” I knew I wanted to be a dad. I knew I wanted kids of my own. I also knew I wanted lots of them, and I was lucky enough to find a soul mate who feels the same way, and we have had seven beautiful children together. A friend in need who can not be there for her children for a while asked us recently to take on two more. This last weekend, we did just that, and they have been here for a full “business” week so far. We feel blessed and honored to take on this responsibility, and the kids are all doing pretty well. They already behave like siblings, with all of the ups and downs that entails, but they “click” more than they “collide.”

Children aren’t everyone’s cup of tea, but for me, there is nothing more important than family. I don’t have anything “fun” to share today, because I’m exhausted from the trip to pick them up and from the on-call phone from work being noisy this week, but I felt compelled to share a little piece of my life with you. I hope everyone finds happiness in all you do. Thanks for reading.

Hacker-Tool Hump-Day – john the ripper

Another quickie today. Before we get to the technical goods, I thought I’d share an update on the family situation. We have both “bonus” children home safe and sound, and everyone is settling in just fine. The hectic nature of the trip and the post-trip excitement have reduced the time I have to work on a good, detailed post this week, so I apologize in advance for the lack of substance I may be about to present. I will revisit this post, and possibly do a more in-depth review of this tool, to flesh out the details another day. On to the spotlight!

John the Ripper is a staple tool for cracking passwords. If you can get your hands on a set of hashed passwords, you can use this tool to make an attempt at cracking them. It does a fair job on any standard system, but some people build elaborate rigs to throw as much computing power as possible at the tool (or at a modified version of it for distributed cracking) to reduce the time it takes to crack some of the more complex passwords a user might set.

You can feed it a dictionary file, tell it to try different permutations of each word, and even brute force all characters in a set using various flags. Different people have different routines for running it, but my general workflow is to kick it off on one machine with a dictionary of common words, and on another machine with a brute force crack, and just let it hum. I don’t have an elaborate rig, and I’ve never tried to crack a password I wasn’t asked to crack for work purposes (outside of my own lab/gear, of course.) It can take a while to crack some passwords, but if you have the time and/or horsepower, you can crack most password hash schemes. It can handle simple hashes such as what you might generate with OpenSSL, as well as Windows hashes such as NTLM.
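
As a rough sketch of that workflow (the exact flags can vary between john builds, the wordlist path here is just a common location, and hashes.txt stands in for whatever file holds your collected hashes):

john --wordlist=/usr/share/wordlists/rockyou.txt --rules hashes.txt   #dictionary run with word-mangling rules
john --incremental hashes.txt   #brute force (incremental) mode
john --show hashes.txt   #display whatever has been cracked so far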

Oddly enough, during my SANS Sec504 class, the one hash scheme I had trouble with was MD5. I used another tool (Cain & Abel) on a Windows machine to crack that while john sat and choked on it. I think it was because of the version of john that was provided with the class materials.

Just a reminder that John is used for brute forcing / dictionary attacking an already retrieved set of password hashes. Other tools are often used to brute force passwords as actual login attempts against a machine. John is the better tool if you can get the hashes, because it doesn’t actually attempt to log into anything, and thus doesn’t lock out any accounts due to too many failures, or leave a bunch of failed login attempts in a log file somewhere.

And after all of that, I’m not showing any detailed examples for this command beyond the rough sketch above for today’s post. I think I will definitely do a follow up article for that, instead.

Until next time, go hug a family member. Or a friend. Your imaginary one, if no one else will do.

SSH – Start to Finish Architecture – The Connection Flow

Before we get into any more advanced stuff with configuring SSH, I thought we should take a look at what actually happens when a client connects to an OpenSSH server, and what the decision tree is for granting or not granting access.

From the sshd man pages:
When a user successfully logs in, sshd does the following:
1. If the login is on a tty, and no command has been specified, prints last login time and /etc/motd (unless prevented in the configuration file or by ~/.hushlogin; see the FILES section).
~/.hushlogin
This file is used to suppress printing the last login time and /etc/motd, if PrintLastLog and PrintMotd, respectively, are enabled. It does not suppress printing of the banner specified by Banner.
2. If the login is on a tty, records login time.
3. Checks /etc/nologin; if it exists, prints contents and quits (unless root).
4. Changes to run with normal user privileges.
5. Sets up basic environment.
6. Reads the file ~/.ssh/environment, if it exists, and users are allowed to change their environment. See the PermitUserEnvironment option in sshd_config(5).
~/.ssh/environment
This file is read into the environment at login (if it exists). It can only contain empty lines, comment lines (that start with ‘#’), and assignment lines of the form name=value. The file should be writable only by the user; it need not be readable by anyone else. Environment processing is disabled by default and is controlled via the PermitUserEnvironment option.
7. Changes to user’s home directory.
8. If ~/.ssh/rc exists, runs it; else if /etc/ssh/sshrc exists, runs it; otherwise runs xauth. The “rc” files are given the X11 authentication protocol and cookie in standard input. See SSHRC, below.
~/.ssh/rc
Contains initialization routines to be run before the user’s home directory becomes accessible. This file should be writable only by the user, and need not be readable by anyone else.
9. Runs user’s shell or command.

So from the above, we can see a few more ways to control our client connection and what it outputs when we connect. We looked at “LogLevel QUIET” in the ~/.ssh/config file last week, but we can also take advantage of the “.hushlogin” file to suppress some information.
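
Creating that file is as simple as:

touch ~/.hushlogin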

We also see that the login gets logged only if there is a TTY associated. This is important to remember for forensics purposes.

We can temporarily disable SSH logins to a system (other than for root) by creating an /etc/nologin file, and the contents of that file will be displayed when the connection attempt gets rejected. It’s dangerous to use this if you don’t have console access, so be careful with it.
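
For example (run as root), the message is just whatever you drop into the file, and deleting the file turns logins back on:

echo "Logins are disabled for maintenance. Try again later." > /etc/nologin
rm /etc/nologin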

The service drops privileges and sets up a basic environment, then adjusts it from the ~/.ssh/environment file if that file exists and users are allowed to change their environment. The default behavior is to disallow this, but it’s something to check when locking down your systems. Finally, it changes to the user’s home directory to finish the environment preparations.

Next it reads and automatically runs the ~/.ssh/rc file if it exists. This is also important to know for forensics and for locking down your system. This is an excellent spot to drop a persistent misbehaving script, so it’s worth looking for and reviewing.

Finally, it runs the shell or whatever command was requested. Seems pretty simple, right? Well, the man pages stop short at that point.

So from a defense perspective, we want to review more than just the ~/.ssh/{config,authorized_keys,authorized_keys2,known_hosts} files. We also want to look at any rc and environment files in that directory. This is especially true of the root user.
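
A quick way to sweep for those files (assuming home directories live under /home, plus root’s own) might look something like:

find /root /home \( -path '*/.ssh/rc' -o -path '*/.ssh/environment' \) -ls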

Next week we’ll look at the sshd_config file and cover how to check the running configuration against the written config file.

Fun-Day Friday – Sad News and Happy News

So today, I’m not going to go over any of my hobbies other than to mention that one of them is called “Permaculture.” It’s a design science driven by a specific set of ethics to improve the land and grow abundance (food, friends, and fun.) The ethics state that any decision made when designing a system should first consider whether the actions taken will harm the earth. If they do not, then consider whether they will harm people. If neither the earth nor the people are harmed, then finally take care to build in balance, limiting the growth of any one element of the system so that it does not consume more than it should. This last ethic is often misunderstood, and is frequently represented as “if you have abundance, you should freely share it with others, and not charge anything for your efforts.” In other words (using the misunderstood third ethic,) the three are often summed up as:
earth care
people care
fair share

The first two are pretty accurate. The last, not as much. It is good to share abundance, but there is nothing wrong with getting something in return for your efforts.

I digress. The sad news for today is that one of the founding fathers of permaculture (and the most widely recognized of them) has passed away. Bill Mollison died on September 24th. He gave a great gift to the world by sharing his experiences and ideas about how to better grow food, and it is a sad day knowing that he has passed on.

The happy news is that we are expanding our family via a foster care situation. A friend of the family has asked us to step in while she is away for a while, and we are happy to do so. We have seven children that are excited to meet their new foster-ish siblings, and this weekend should be exciting, since we are meeting them and bringing them home.

There’s nothing more important than family, and sometimes family is who you CHOOSE, not who you share blood ties with. My great wisdom to share for today is this: (in the words of one of my favorite authors, Michael W. Lucas) “Go do something in meatspace.”

Hacker-Tool Hump-Day – netcat

I was planning to do a continuation of the LAN Turtle exploration and experimentation today, but I’ve had several delays on getting my posts written. Friday’s post will explain a bit more on that. To keep things somewhat short and sweet, I decided to cover one of the Swiss Army knife tools in a hacker and/or a sysadmin’s pocket: netcat.

There are several versions of this program available, and they all have varying degrees of functionality, but for the most part you can expect that it will allow for at least a “one connection” listening socket and at least a “one connection” sending socket. In other words, you can open a listening socket to receive one stream of information, or open a connection to send one stream of information. Usually the connection is terminated when an EOF (end of file) is reached. Some versions will keep the listening socket (when acting as a server) open. Some versions need to be placed inside a while loop to re-open that listening socket for persistence. Some versions allow for binding the output to a specific command, while others require some juggling of file descriptors to achieve the same kind of goal. Whatever version you have, it’s a handy tool for both troubleshooting and swift involuntary copying of data (exfiltration.)

The command may be called “netcat,” “ncat,” or “nc,” depending on the version. To my knowledge, all versions have a “-l” flag for listening as a service, so the easiest example is to create a listening service on a port on one host, and then connect to that port from another host to send some information.
On host 1 (host1):
netcat -l 1234
On host 2 (host2):
cat somefile.txt | nc host1 1234

This will send the contents of “somefile.txt” to the standard out of the listening service on host1. You can redirect the output on that listening service to a file, if you like. Once the EOF is reached, the listening service will terminate.
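
For example, to capture what gets sent into a file on host1 rather than printing it to the screen:

netcat -l 1234 > somefile.txt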

If you have a version that has the “-k” flag, you can use this flag to keep the service open to receive more data even when the “client” has finished sending and terminates its connection.

netcat -k -l 1234

If you want to send or receive UDP packets instead of TCP, you can use the “-u” flag.
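
The same listener/sender pattern from above applies; just add the flag on both ends:
On host 1 (host1):
netcat -u -l 1234
On host 2 (host2):
echo "hello over UDP" | nc -u host1 1234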

If you want to bind it to a process (usually a shell) and the flag is available, you would use “-e” for this.

netcat -k -l -e /bin/bash 1234

You won’t get a full login session doing it this way, so there is no prompt or any other indicator that you have a shell once you connect. It takes some getting used to, but it’s not bad.

When the -e flag isn’t available, you can do something similar using redirects and a FIFO (also known as a named pipe.)

mkfifo /tmp/namedpipe
nc -k -l 1234 0</tmp/namedpipe | /bin/bash 1>/tmp/namedpipe

I will most likely go more in depth on this command later (especially when I get to the netcat module in the LAN Turtle,) but for now, this covers most of the basics a person might want.

I will also cover netcat’s beefier brother “socat” at some point down the road.

If you enjoyed this, leave a comment or hit me up on Twitter @stefanrjohnson

SSH – Start to Finish Architecture – Client Config Pt. 1

This week, we’re going to focus on the client side configuration file(s) and how to use them to make ssh a more enjoyable experience. There are a lot of options in the configuration files, so we’re going to break this up into different parts. We’ll cover some common settings people might want to play with for making session handling more efficient, and then, after we’ve covered some more advanced topics over the next few weeks, we’ll do a part two that goes over the configuration options that mesh with those topics. This means part two won’t be next week.

There are two default files that handle client side ssh configuration in OpenSSH. The global file is usually found at /etc/ssh/ssh_config, and contains all of the default settings for all users on the system. The user can also write configuration options that override the global config file by including the file “config” inside that user’s .ssh directory. When the ssh client is called, the order of priority when parsing options is covered thusly:

1. command-line options
2. user’s configuration file (~/.ssh/config)
3. system-wide configuration file (/etc/ssh/ssh_config)

Thus, whatever is given on the command line trumps everything else. The user’s config trumps the system-wide config. The system-wide config covers everything not explicitly overridden by the other two. This makes sense, but it’s important to know what the priority of parsing is when troubleshooting issues.

The focus for today will cover the ~/.ssh/config file that should contain customized settings.

One of the first options I set in my own config is the “LogLevel” setting. I prefer to suppress extraneous information (such as banners,) because I like to batch process reports from multiple servers, and I only want the information from the commands I run, not anything else. To suppress these messages, I use:

LogLevel QUIET

You can tune this to whatever level of noise you prefer, all the way up to DEBUG3, which is equivalent to “ssh -vvv” for “verbose debugging.” I don’t recommend using the user config for this, though. Just use the flags to override, and use the config to suppress as above.
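
For example, even with LogLevel QUIET set in the config, a one-off debugging run is just a flag away:

ssh -vvv Server_B
ssh -o LogLevel=DEBUG3 Server_B   #same effect via an explicit -o override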

The next set of options people often use are the “Match,” “User,” “Host,” and “HostName” settings.
The “User” setting sets the target username to log in as. This is like the User_A@Workstation_A to User_B@Server_B scenario we set up before. In this case, setting “User User_B” would mean you could type “ssh Server_B” and it would know to use “User_B” as the target, instead of the default of “User_A.”

The “Host” option starts a block of options that apply to a given hostname until the next “Host” or “Match” block. The name can be a pattern with wildcards, and can include negation via a preceding “!” if needed. “Host !*.web.com” would say “the following settings, up to but not including the next Match or Host block, apply to all systems that do NOT match *.web.com.” The previous example is a contrived one, but it gets the point across. To set global defaults, use a “Host *” block at the bottom to match everything; the earlier Match/Host blocks will override it as needed, since they will have already set values for anything that matches them.

The “Match” option is used to fine tune matching against various objects. It can match a username with “Match user” or an address with “Match address” and so on. You can do a “Match host,” but just using “Host” as a block option is probably better for almost all cases.

The “HostName” option lets you set aliases within your config. This would generally be set as an option after a “Host” match. You can use “%h” within the value to stand in for the hostname that was passed to ssh on the command line, so that “HostName” can prepend or append to it as needed. You can also set this to an IP address to override whatever DNS would have returned, for example.

Using what we’ve learned so far, let’s assume that User_A wants to log in as User_B in almost all cases. However, User_A is an AIX systems administrator, and thus might need to log into a VIO server for virtual IO setup, which means the target user is most likely “padmin” rather than User_A or User_B. Also, AIX is managed by a Hardware Management Console (HMC), which often uses the “hscroot” user rather than local accounts. Of course, every shop is different, but this is a standard practice, so we’ll go with it. The organization uses a standard naming scheme in which all VIO servers have a name that starts with “vio” followed by some unique identifier. The HMCs are similarly named “hmc” followed by some unique identifier. The VIO servers are also in a different domain than the standard one, called ‘internal.net.’ We also know that if the user logs in as root, a key will never be used, so we want to skip public key authentication and force a password for root, instead. Armed with this knowledge, we can make User_A’s life easier if we build out a config that looks like this:

LogLevel QUIET
Host vio*
   HostName %h.internal.net
   User padmin
Host hmc*
   User hscroot
Match user root
   PubkeyAuthentication no
   PasswordAuthentication yes
   PreferredAuthentications password
Match user User_B
   PreferredAuthentications publickey,password
Host *
   User User_B

The options we set for the “Match user root” block should be fairly self explanatory at this point. There are many more options than these, and we will cover them eventually, but this is a decent general overview of how to build out your config over time. Remember to put your global stuff at the bottom, since the file is read top down and the first value obtained for each option wins.

Fun-Day Friday – Ramblings

This is going to be short and sweet. I’ve been busy getting ready for a big event in our household, and thus I shorted myself on time to publish this. I apologize for that.

I’ll treat this as a week in review and go over some of the cool things that happened.

1) Michael W. Lucas has released his new “Mastery” book, PAM Mastery. PAM is one of those rare dark arts that most Unix SysAdmins prefer to meddle with as little as possible for fear of summoning a maentwrog or something. This book sheds some really bright light on how to tackle the appropriate incantations so you don’t do something that bites your head off in the end.

2) I’ve made some progress on troubleshooting one of the more recent modules released on the LAN Turtle. A new version of that module has already been released, so I get to start all over with testing it to see if it fixed any of the issues I was having. I’ll do that testing this weekend.

3) Guild Wars 2 released the second episode of the Living Story season 3. GW2 is one of a very small handful of computer games I still play these days. The content thus far is pretty good. I was hoping ArenaNet would put in a petrified wood node, but maybe it’ll come later.

4) My WiFi Pineappling book came in the mail. I’ll go through it, and do a review when I cover the Hak5 WiFi Pineapple NANO later down the road.

5) My wife bought me a present today. Halloween is my favorite holiday of the year, and she brought home a skull on a stand. I’ll take it to work and start decorating my cube for the holiday soon. It should go nicely with the chain of skulls I hang every year.

What are your favorite games, holidays, or other activities and events? Share in the comments!

Hacker-Tool Hump-Day – Hak5 LAN Turtle Part 1

I’m going to spend a few days covering the Hak5 LAN Turtle. I may or may not break these up with some other Hacker-Tool Hump-Day topics, but for today, we’re just going to cover the initial set up of a brand new Turtle.

The first time you plug it into a USB port, you’ll need to wait a moment for it to initialize and present an IP to the host machine. Once an IP has been obtained (in the 172.16.84.x range,) bring up your favorite SSH client (such as PuTTY) and connect to 172.16.84.1 as root. The initial default password is “sh3llz” and the first time you log in you should see a screen prompting you to set a new root password.
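
From a Linux or macOS workstation, that first connection is just the usual ssh one-liner (PuTTY users enter the same address and user in the GUI):

ssh root@172.16.84.1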

Enter the password of your choosing. It will prompt you to repeat the password. Do so, and it will either say it didn’t succeed and start the prompt sequence over, or it will say the password was changed successfully. Hit enter to select “Okay” and move on.

The first thing you should do after setting that initial root password is to check for updates and update the device. It’s a good idea to check for updates regularly, even if they aren’t released often. Select “Config” and hit enter. At the new menu, you can see that you can change the password you just set, change the WAN or the LAN MAC addresses, disable the Turtle Shell, and Check for updates. Use the arrow keys to move down to “Check for updates,” and hit enter. If you don’t have the ethernet side plugged into a switch that has internet access, this will fail. Don’t forget to plug that cable in.

If an update is available, it will download and verify the update files, update the LAN Turtle with a warning that it could take about 5 minutes, and a note that the Turtle will reboot after updates. It will then kick you out of the SSH session. Just wait for it to update, and keep an eye on the yellow flashy light.

Once it’s been updated, you can restart your session. The host keys will have changed, so you should get a warning about that if you aren’t suppressing those, and after you accept the new keys, you will need to log in with the default sh3llz password again. It will prompt you to change that password (again,) so do so at this time. If you go back through the “Config” and “Check for updates” menus again, you should see that there are no new updates at this time.

I don’t recommend disabling the Turtle Shell unless and until you have things set up the way you want, but if you do, you can always get back to the Turtle menu by calling “turtle” from the command line.

The next step is to back out to the main menu and pick your modules through the “Module configuration” menu option. The first time you go into this menu, the only option you can select is the “Module Manager.” Select this and choose “Configure.” The next menu should let you choose which modules you’d like to install (“Directory”) or which ones to delete (“Delete”) as well as updating all installed modules via “Update.” Just like the initial update of the Turtle itself, these actions require an internet capable connection to be plugged into the Ethernet side of the Turtle.

We will go over each of the modules over time, but for now you know where this is at and you can select any you would like to play with. The “Cron” module is not necessary for the “at” command, as that appears to be installed already by default.

After you’ve selected all of the modules you want (and you can select all of them… at the time of this writing, they all fit with some space to spare…) install them and then back out of the menu to the Main menu again. Now select “Exit” to drop to the command line.

The Module Manager from the Turtle menu shell is good for pre-packaged installation and configuration of stuff, but you can also install packages that may not be part of a module from the command line. The LAN Turtle is a MIPS based OpenWRT build, and thus any packages compiled for that platform -should- work for the Turtle, assuming the binary fits in the limited file system space provided. The device appears to have 40 megs total, 30 of which are assigned to /tmp as a tmpfs file system. This means 30 megs are volatile. This still shouldn’t be a problem if you select your packages carefully.

The package type for OpenWRT devices is a “.ipk” file, and you manage the packages from the command line using the “opkg” command. The “opkg list” command claims to list available packages, but seems to list installed ones instead. The “opkg list-installed” is supposed to list installed packages, but returns nothing. To fix this, you need to run an initial “opkg update” first, which will grab the repository lists and update the local opkg database with the installed information. The “opkg search” command works as advertised with or without that initial update, so you can find out what command belongs to what package. For example, “opkg search ‘*ps'” returns “busybox – ” which means ps is part of the busybox package. If you’re a fan of multiplexers, there is a package for both “screen” and “tmux.” The “tmux” binary is slightly smaller than the “screen” one and is actually my preferred multiplexer. Alas, there was no package for “dsh.” I may choose to attempt a build of this down the road, because this is an incredibly useful tool. If I do this, I’ll document the process here as well as document turning it into an official Turtle module that could be selected from the Module Manager in the Turtle menu shell.
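
As a quick sketch of that flow (the install step uses the standard opkg install subcommand, and tmux here is just the example package mentioned above):

opkg update          #grab the repository lists first
opkg list-installed  #now returns useful output
opkg search '*ps'    #find which package owns a command
opkg install tmux    #install a package from the feeds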

From a Systems Administration perspective, this device does a few things. It provides a handy USB ethernet adapter if all modules are turned off. It also provides a slimmed down linux environment reminiscent of a Raspberry Pi, BeagleBone Black, or similar, with a smaller form factor, but at the cost of losing some computing power. This is good for providing a quick environment where a VM would be overkill and you’re stuck with a Windows machine as your workstation. Just be aware that your Corporate Security team may take exception to you using such a device, even if no hostile modules are installed. Make sure you are transparent and work things out with them before plugging a “rogue” device into the network in a corporate environment.

Some of the more useful SysAdmin related tools in the Module Manager are “autossh,” “sshfs,” “clomac (if the network is locked down by MAC,)” and “cron” for obvious reasons. I’ll cover each module available over time, and explain the “Configure,” “Start/Stop,” and “Enable/Disable” menu options for each in their own articles. This way each gets the right amount of focus.

Thanks for reading, and if you like this kind of thing, check out the forums at Hak5 to see the latest discussion.

SSH – Start to Finish Architecture – Securing The Private Key

Our previous post showed how to generate the bare bones public/private key pair without using a passphrase. This is sometimes the desired configuration, but it is better to lock down the private key using a passphrase. When you generate the key pair, you can add a passphrase at the prompt that we just hit “enter” on last week, but you might want to change an existing passphrase or add a passphrase to a key that doesn’t already have one. The means for doing this is shown below:
ssh-keygen -f ~/.ssh/id_rsa -p

If the existing phrase is empty (like the one we generated last week,) this will prompt you for your new passphrase right away. If there is an existing passphrase, it will first prompt for that before prompting for the new one. Setting a passphrase on a private key is an important step in securing that key. If someone unauthorized to use that key manages to get a copy of it somehow, they won’t be able to use it until they figure out the passphrase. While it is possible to brute force crack a key, if you use a decently long phrase that isn’t something commonly spoken or written, the chances of cracking it go down. Also note that SSH key passphrases allow for spaces, so you can literally write nonsense sentences, spaces and all. There is more that can be done to reduce the risk of someone using a stolen private key to do harm, but it’s on the client side, and there are caveats. We’ll cover that next week.

Now that we have a passphrase protecting our private key, what has changed in how ssh works? For starters, if you don’t load your keys into an agent, every time you go to log into a server using this key, you will be prompted for the passphrase, just like you used to be prompted for a password. This makes convenience worse, not better. To use the agent, run ssh-add. If you’re using a standard key name such as id_rsa, id_dsa, or id_ecdsa, it will automatically find and load that key for you. For each key with a standard name it finds, it will prompt for the passphrase. You give it the phrase and it handles the rest. It acts on your behalf from then on, until it is told to unload a key or is stopped. When you go to log in, the SSH client will see that the agent is running, and when the server asks for proof of the key, the client will pass that request through to the agent, which provides the proof on your behalf, and thus you won’t be prompted. It’s like promptless SSH, but it requires the extra step of loading the agent first.

If you get an error message when you run ssh-add, there is a chance that ssh-agent isn’t already running. If that is the case, you can start ssh-agent first, take the output it gives, and export those variables. For example:

ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-w8iG9Aq6KWLR/agent.1070; export SSH_AUTH_SOCK;
SSH_AGENT_PID=1071; export SSH_AGENT_PID;
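
Rather than copying and pasting those lines by hand, you can let the shell evaluate the agent’s output directly (the -s flag forces Bourne-style output):

eval "$(ssh-agent -s)"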

If you have a key with a custom name such as id_rsa_2016, you can load these by passing their name, like so:
ssh-add /home/User_A/.ssh/id_rsa_2016

Using the agent is dangerous in a shared environment where other people have elevated privileges. Anyone with root can potentially pull the private key from memory while the agent is running on your behalf. You can unload the keys before locking your workstation if you’re paranoid enough, by using -D or -d as below:
ssh-add -D #Delete all identities
ssh-add -d /home/User_A/.ssh/id_rsa_2016 #Deletes just the id_rsa_2016 key from the agent list

You can also lock and unlock your agent using the “-x” and “-X” flags respectively, if you don’t want to completely unload for security’s sake. These will prompt you for a password to use for locking and unlocking the agent, if you choose to use them.

If you want to see which keys are loaded, you can list them with “ssh-add -l.” And if you need to be sure which public key matches the loaded private key, you can use “ssh-add -L.”

Finally, if you want to set a time limit on a key being loaded, you can use the -t flag to make it temporary. It requires a number (in seconds) for how long the key should remain loaded by the agent.
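
For example, to load the custom key from earlier for one hour (3600 seconds) before the agent automatically forgets it:

ssh-add -t 3600 /home/User_A/.ssh/id_rsa_2016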

The rest of the flags are for more advanced stuff I will be covering separately, so that’s all we’ll cover for today. If you’ve kept up thus far, you’re pretty much at the level of the average SSH user at this point. (And there’s so much more to be covered.) Next week, we’ll go over some client configuration options to make session management easier.

Fun-Day Friday – Not So Fun

A couple of weeks ago, a fellow HAM in the local Amateur Radio Club had a mishap with his Elecraft KX3 and PX3 rig setup. It was a nice day, weather wise, so he called an impromptu “Hamming In The Park.” This is where a bunch of us grab our portable capable rigs, head to one of the parks in town, set up, and operate in the (usually sunny) outdoors.

He let the smoke out of his KX3, and here’s the story:

http://ae5nw.net/index.php?option=com_content&view=article&id=48&Itemid=54&limitstart=25

I personally am a huge fan of Elecraft. I own the KX3 (radio,) the PX3 (waterfall display and control,) and the KXPA100 (amplifier.) I use the KX3 with the amp at the house to get up to 110W of power (comparable to most non-amp HF rigs on the market.) It’s a sweet rig, but cheap is not part of its description. I’ll definitely be taking his experience to heart and, going forward, making sure my setup doesn’t connect power until everything else is hooked up.