Hacker-Tool Hump-Day – LAN Turtle Modules – ddnsc

The Hak5 LAN Turtle has a module for configuring a dynamic DNS client. This module is the “ddnsc” module, and configuration requires four fields to be filled in.

The first is the “Service” field, which is the Dynamic DNS provider service you will use. This might be “no-ip.com” or “dyn.com” or something similar.

The second field is the “Domain” field, which is the domain name you have reserved through the service. Once upon a time, I had DynDNS as a service, and they had a “homeunix.org” domain available. Mine was configured for “abtg.homeunix.org” at the time. I no longer use this service, but that’s an example of what you would need to put into this field.

The third is the “Username” field, which is whatever user name you signed up with your dynamic DNS provider for authentication with their services.

The final field is the “Password” field, which is simply the password you use to authenticate your user against that domain.

Internally, the module uses “uci” commands, which come from the OpenWrt router project and provide a centralized configuration tool for the router’s functionality. It also runs an “opkg install ddns-scripts” to pull down the standard dynamic DNS scripts from the OpenWrt repositories.
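As a rough sketch of what that looks like under the hood, reproducing the module's effect by hand from the Turtle's shell would go something like this. Note that the uci section name (“myddns”) and all of the values are placeholders of my own, not necessarily what the module actually writes:

```shell
# Approximate manual equivalent of the ddnsc module, using the
# OpenWrt ddns-scripts package. All values below are placeholders.
opkg update
opkg install ddns-scripts

uci set ddns.myddns=service
uci set ddns.myddns.service_name='no-ip.com'    # the "Service" field
uci set ddns.myddns.domain='abtg.homeunix.org'  # the "Domain" field
uci set ddns.myddns.username='example_user'     # the "Username" field
uci set ddns.myddns.password='example_pass'     # the "Password" field
uci set ddns.myddns.enabled='1'
uci commit ddns
```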

The module is simple enough, but it provides a great way to get a persistent means of looking up the current external IP of whatever LAN the Turtle has been dropped on, if that network has a dynamic IP address on its WAN side. This way, you can reach your Turtle on whatever known open port you have set up for use.

This has been short and sweet, but it has gotten late on me, so that’s the best kind of post I can give for today. Thanks for reading, and leave comments!

SSH Start to Finish Architecture – Dealing with key distribution

This is going to be a brief post with not as much content, but it will explain where we’re heading from here.

One of the biggest advantages of SSH is the added encryption layer of security it provides. A second major benefit is the lack of a need for a password. Passwords can be replaced by pass phrases, which helps make it even more secure. However, there is a risk involved with the vanilla authorized_keys file type of system.

The public keys must be managed. What does that mean? The public key needs to be provisioned to the appropriate authorized_keys files manually for every user that takes advantage of this system. When a user needs access revoked, the authorized_keys files must also be analyzed to make sure there isn’t a key sitting out for an account where that user shouldn’t have that access any longer. When you de-provision a user in your production environment, their own authorized_keys file gets deleted with the home directory, if you use a standard “userdel -r” command. However, if they had keys to access a shared service account, such as www, apache, or similar, there needs to be an extra step involved to check every system where that key might exist.

This can be handled with something like Puppet, Chef, or one of the other common provisioning and configuration management tools, or it can be done with something as simple as a distributed ssh (dsh) check, but having multiple files is a provisioning nightmare. Assume a system is down when the user is de-provisioned. Without a configuration management / identity access management solution, your manual checks might miss a single key floating out in an authorized_keys file.
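As a concrete sketch of that manual check, a small helper like this can sweep a directory tree for any authorized_keys file still containing a departed user's key. The function name, the key comment, and the paths are illustrative only:

```shell
# List every file under a directory tree that still contains a given
# key comment (e.g. "alice@laptop"), such as stray authorized_keys
# entries left behind for a de-provisioned user.
find_revoked() {
  comment="$1"
  dir="$2"
  # -r: recurse into the tree, -l: print only matching file names
  grep -rl -- "$comment" "$dir" 2>/dev/null
}
```

You would then run this on each host (by hand, or fanned out via dsh) with something like find_revoked "alice@laptop" /home, and again over any shared service account home directories.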

So how do we handle this? There are multiple ways, and they don’t all involve configuration management tools. One way to deal with this is to use the “AuthorizedKeysCommand” option in sshd_config. It requires a second option (“AuthorizedKeysCommandUser”) in order to use it. These allow you to set a script called by a (hopefully) non-privileged user that can query some service to retrieve a key for a given user. This can be LDAP, SQL, a flat file on a remote machine… anything that returns the expected results. We’ll look at this more in depth next week.
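As a minimal sketch of the idea, assuming a flat-file key store (the script name, the KEYDIR location, and the file layout are my own placeholders, not a standard): sshd_config points at a helper, and the helper simply prints the requested user's public keys on stdout.

```shell
# sshd_config excerpt (paths and users are placeholders):
#   AuthorizedKeysCommand /usr/local/bin/fetch-keys %u
#   AuthorizedKeysCommandUser nobody
#
# The helper: print the public key(s) for the requested user from a
# per-user flat file. Prints nothing (non-zero exit) for odd names.
fetch_keys() {
  KEYDIR="${KEYDIR:-/etc/ssh/authorized_keys.d}"
  user="$1"
  case "$user" in
    *[!a-zA-Z0-9_.-]*|'') return 1 ;;  # reject empty/suspicious names
  esac
  cat "${KEYDIR}/${user}" 2>/dev/null
}
```

The same shape works for LDAP or SQL backends: swap the cat for an ldapsearch or database query that emits authorized_keys-formatted lines.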

Another option is to set up a Certificate Authority for OpenSSH and sign your users’ public keys. For provisioning, this does not require a configuration management / identity access management tool, but for de-provisioning, it might. However, you can potentially automate this de-provisioning on a time schedule to avoid the CM/IAM solution, and we’ll look at that kind of setup when we get into the Certificate Authority material in the weeks after next Monday.

Ideally, you might consider combining these methods in the end, but we’ll present them initially as separate philosophies and methods for managing the keys.

Thanks for reading

Fun-Day Friday – Holidays and horror movies

Most of you in the U.S. are probably recovering from a turkey or ham hangover by now. Yesterday was Thanksgiving, and there is always much to be thankful for. I’m thankful for my family. I’m thankful for our service members who couldn’t be with their own families. I’m thankful that we live in a free nation. I’m thankful that I have a job that lets me support my family with more than the bare essentials.

I was alone, yesterday. The rest of the family headed down to Texas for a family and friends gathering, but I had work obligations that prevented me from going with them. Skype will see heavy use until they return.

The only up-side to this was getting a chance to catch up on my backlog of horror shows on Netflix. I don’t get to watch them often, since there’s only a small sliver of time each evening when the kids aren’t close by, and the wife really doesn’t like them, so I can’t watch with her, either.

Silver linings, and all that. In the end, I can’t wait for them to get back. It’s better than six months at a time, though. Four years in the Navy taught us to appreciate the time we have together.

I hope everyone reading had a good time with family and/or friends. Stay safe this holiday season, and enjoy the things you might often take for granted. Other people don’t have them.

Thanks for reading!

Hacker-Tool Hump-Day – Kali Nethunter on Nexus 10 tablet

Last week, my Nexus 10 tablet arrived in the mail. I ordered this tablet with a few purposes in mind, but foremost was my intention to run Kali Nethunter on it. I chose this particular tablet because the forums seemed to indicate few issues with this device, the screen size is quite nice (for a tablet), and it doesn’t have cell phone capabilities. I specifically wanted a tablet that only worked with local wifi, and not cellular networks. I chose the 32 GB version, because there is no documented way to increase the local storage.

The package arrived, and my wife texted me that it looked a bit beat up. When she showed me a picture of the package, my first thought was the opening scene from Ace Ventura. The device itself was okay, thankfully, so I placed an order for a tablet case/cover with keyboard, and began prepping the tablet for its new function in life.

The first evening, I allowed the tablet to update as far as it could, then went into the System Updates and selected updates manually until it went from Android 4.2.2 to 5.1.1. After each major update, I powered the tablet off and turned it back on. This took most of the evening, so I installed Netflix on it, and my wife and I curled up to watch the last episode of Blacklist available on the streaming service before going to sleep.

The next day, I downloaded the tools to my linux laptop for dealing with unlocking and rooting the device. The table of supported devices at the Kali Nethunter page said that Android 5.1.1 would be sufficient, but that the device would need to be unlocked and rooted, so that was the goal.

I downloaded android-studio-ide-145.3360264-linux.zip from https://developer.android.com/studio/index.html and extracted the zip file. Then I changed directories to android-studio/bin and ran “./studio.sh” to set up the initial installation. Unfortunately, this wasn’t what I needed or wanted, but I didn’t know that at the time. Once I figured out why things weren’t looking the way I expected, I went back to the developer site and found out that you need to scroll way down the page to get the actual file needed. I went up a directory and removed the files from this initial bad download.

I downloaded android-sdk_r24.4.1-linux.tgz this time, and extracted the tarball. I changed directories to android-sdk-linux and ran tools/android. I went through the GUI, but it was having issues with updating, so I selected “Tools => Options” and checked the “Force https://… sources to be fetched using http://…” check box to deal with the “peer not authenticated” errors. I assumed these were caused by our corporate gateway that cracks open SSL for inspection, and turning this on did indeed work.

From there, I went to the “Packages” list and selected the “Android 5.1.1 (API 22)” packages. I selected all of these, told it to install the packages, accepted the license, and then hit “Install.” I exited the SDK GUI, but had to come back and repeat this step after connecting the tablet later on. This step only installed two items, and well more than two were needed before I was through.

On the tablet, I swiped down twice to click the settings gear, went to “About tablet,” and tapped the “Build number LMY49J” seven times to put the tablet into developer mode. It indicated as much with “You are now a developer.” I went back one screen, selected “Developer Options,” which was now available, and turned on “USB debugging.” At this point, I plugged the tablet into the computer using the USB cable supplied with the tablet.

I ran a “dmesg” on the laptop to confirm it was seen as a new device, then ran “platform-tools/adb devices” to verify it was seen by the developer tools suite.

At this point, I re-ran the GUI tool “tools/android” and selected the rest of the files that were missing for my tablet. Once they were all installed, I was able to do the “platform-tools/adb devices” command again.

To put the device into fastboot mode, I ran “platform-tools/adb reboot bootloader” and while the tablet was at the fastboot menu, I ran the following command from the laptop to unlock the device:

platform-tools/fastboot devices
sudo platform-tools/fastboot oem unlock

If the first command (fastboot devices) doesn’t show your device, even if adb devices does, something is wrong and more troubleshooting is needed. Otherwise, go ahead and do the fastboot oem unlock at this point.

My tablet rebooted and seemed to be stuck in a boot loop. The graphic with the four swirly colorful dots never seemed to stop, so I did some online research and found that this device is known for getting into a boot loop after the unlock. The solution is to go back to fastboot mode by holding down volume up, volume down, and the power button (all three) until it reboots. Once it is back in fastboot mode, use the volume buttons until the menu says “RECOVERY MODE.” Push the power button again, and let it “recover.”

Because it took me some time to figure out how to get it out of the boot loop, I spent the rest of that evening letting it recover from the online backup it had made, then headed to bed with my wife.

On day three, I set up all of the developer tools again, put the tablet back into “Developer mode,” and hooked it up to the computer. I downloaded nethunter-manta-lollipop-3.0.zip via https://www.offensive-security.com/kali-linux-nethunter-download/ to my laptop. I also downloaded TWRP (Team Win Recovery Project) from https://twrp.me/site/update/2016/04/05/twrp-3.0.2-0-released.html per the recommendations on the Nethunter Wiki. The file was twrp-3.0.2-0-manta.img in case you’re following along.

In order to install twrp, I changed directories to the android-sdk-linux/platform-tools directory from the day before, and ran:

adb reboot bootloader
sudo fastboot flash recovery twrp-3.0.2-0-manta.img

I installed via the menu options, then pushed the nethunter image across with:
adb push nethunter-manta-lollipop-3.0.zip /data/local/tmp/nethunter-manta-lollipop-3.0.zip

Back on the tablet, I navigated up, then back down, to the appropriate location: data => local => tmp.

I selected the ZIP file and told it to install, answered the questions presented, and rebooted.

Unfortunately, something went wrong with the installation using this method. Some more research, and I found that I could repair this by uninstalling the individual Nethunter applications and then installing the latest “APK” file instead. This meant I needed to get the latest .apk. I also needed to install SuperSU, which was an option that was unchecked by default in the ZIP install menu.

I went to Google Play to install SuperSU, then continued with the APK installation as below.

I opened Chrome and selected the following URL to get the APK needed:

I then installed that by selecting it through the downloads folder, and this time the installation went more smoothly. I’m not sure it installed perfectly yet, but it at least has the Nethunter app now.

I then had to set Nethunter to be allowed with super user privileges by SuperSU.

Unfortunately, it says there’s no busybox, and I’m not sure if that got missed by the APK, or if something else is going on. TWRP wants to install a BusyBox app, but it’s not the one built into the Nethunter suite. I may go ahead and install that one, but I’d rather have the one provided by Nethunter.

Due to how hectic this holiday season has been, I haven’t had an opportunity to really play with this yet, but I will work with it some more so that I can do a decent write-up of some of its features before I’m through. In the meantime, I’ll return to covering the Hak5 LAN Turtle modules next week.

Thanks for reading!

SSH Start to Finish Architecture – Dealing with laggy networks

One of the aspects of servers and clients that speak to each other over a network that sometimes needs handling is how to keep the connection open. Sometimes the network is flaky, for lack of a better term, and this means packets can get dropped. OpenSSH is no exception, and it has several options that can be set in the sshd_config, ssh_config, and local ~/.ssh/config files to help determine how to handle such cases.

Both the server and client share the “TCPKeepAlive” option. This option determines whether or not to send TCP keepalive messages. This helps terminate a connection if a client or server crashes, but can be problematic on a network that has considerable lag. If it’s not set, though, it can mean lots of ghost users in the connection pool for the server, so it’s recommended to turn this on unless you have a very bad network. The values are “yes” or “no,” depending on whether to turn it on or off.

There are also the “ClientAliveCountMax” (sshd_config) and “ServerAliveCountMax” (ssh_config and ~/.ssh/config) options. These set the number of alive messages that may be sent without the client or server receiving a response back from the other end. These messages are different from TCP keepalive messages. The alive messages are sent through the encrypted channel, and thus cannot be spoofed. TCP keepalive messages, on the other hand, can be spoofed, so the alive messages may be a better option in a more (potentially) hostile environment. This setting is a number and defaults to “3.” Remember that this is a COUNT, so it represents the number of unanswered messages that will be sent before a session is terminated due to lack of reply from the other end.

The other variables needed to make the above work are “ClientAliveInterval” (sshd_config) and “ServerAliveInterval” (ssh_config and ~/.ssh/config). These determine the number of seconds between messages. The default is “0,” which means nothing is sent, so you must give this a value greater than 0 to turn the option on. If this value is set to “5” on either end, and the alive count max is left at the default of “3,” then the connection would be terminated after 15 seconds of unanswered messages.

Outside of TCPKeepAlive and the *AliveCountMax and *AliveInterval settings, there is also the “LoginGraceTime” setting (sshd_config), which determines how long to wait for a user to successfully log in. To keep people from making the initial ssh handshake but then not logging in and tying up a socket, this can be set, and that user will be dropped after the set amount of time. The default value is 120 seconds, but if you don’t want this at all, you can change it to 0 to turn it off, just like the *AliveInterval options.
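Putting those settings together, an sshd_config excerpt might look like this (the specific values are illustrative, not recommendations):

```shell
# sshd_config excerpt: probe over the encrypted channel every 15
# seconds, give up after 3 unanswered probes (45 seconds total),
# keep TCP keepalives on, and allow 2 minutes to authenticate.
TCPKeepAlive yes
ClientAliveInterval 15
ClientAliveCountMax 3
LoginGraceTime 120
```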

That’s all we’re going to cover for today. We’re getting closer to the certificate authority stuff, which makes maintaining keys so much better.

Thanks for reading!

Fun-Day Friday – Happenings

I thought I’d fill today’s post with a bunch of updates on what’s been going on in my life, lately.

The two “bonus children” as we like to call them are pretty well settled into the household at this point. They’ve mostly learned our routines, and everyone gets along pretty well. This, of course, has led to a higher level of necessity for obtaining a vehicle with more seating. We were sad to let “The Tank” go, but the Ford Excursion has been traded in, and we now have a Ford E-350 15 passenger van to replace it. The van is in the shop getting running board steps installed. Everyone in our family is short. It’s more comical than practical when we try to climb in without those.

My quest for a Nintendo NES Classic is ongoing. We’ve called several stores to ask if there has been any new stock. Of course, GameStop lied and said Nintendo has stopped making them already, and there won’t be any further shipments. I considered tagging Nintendo and GameStop on Twitter about that, but clearly I will just have to look at the other three options in town. We’ve made an attempt at the Amazon and Walmart online sprints that have been available, but none have shown any success in obtaining the treasured compact box-o-nostalgia. I may never get one, but I’ll continue to try, at least.

I’ve recently purchased and received a Nexus 10 32G tablet. I’m in the process of setting it up for a Kali Nethunter installation. I’ll share the details in my Wednesday post. A case with keyboard and stand are on their way, and I’ll probably treat this like a not-quite laptop for some situations.

ArenaNet will release the next installment of Living Story Season 3 on Monday. I’m looking forward to trying it out. So far, I’ve been happy with this season of Living Story, and don’t foresee anything major that would change that.

My NaNoWriMo progress is pretty steady. I’m ahead of where I should be, and feel confident that I’ll hit the goal before November 30. I’m almost certain I won’t share this story with many people. It’s mediocre at best, but it’s keeping my creative writing juices flowing, and I might be able to start churning out a better story after I feel less pressure for a deadline. Maybe. I still need to edit NaNoWriMo 2015’s book, so maybe not.

The next door neighbors cut down a tree a while back, and some of the larger logs are still in their yard. If I don’t forget, I’m thinking about asking them if they mind if I cut a few log ends off of the wood that’s left. They make some of the best targets for throwing knives, and I’d like to get a decent target stood up in the back yard before the end of the year. I might need to hit up someone in the Amateur Radio club to borrow a chainsaw if they agree to let me do that.

Speaking of radio… if time permits, I need to do up a post on how our local DMR repeater works, programming the MD-380 handheld, and go over our local talk groups a bit. It might have to wait until after November, since my time is in a serious crunch this month with NaNoWriMo. We’ll see.

I got to help a friend with a scripting issue the other night. It was good catching up with him. Remember folks, it’s important to take a breather every now and then to catch up with old friends. Seriously. Go hug a friend right now, even. Even if it’s just an imaginary one.

If you’ve got anything on your mind you want to share, post in the comments. I’d love to read them.

Thanks for reading!

Hacker-Tool Hump-Day – Hak5 LAN Turtle module – ptunnel

The next module for the Hak5 LAN Turtle we’ll look at is the ptunnel module. This is for the “ping tunnel” program, which allows tunneling TCP traffic over ICMP. It’s not exactly fast, but it can potentially get you out where other tools won’t.

To use ptunnel, you want a “client” configuration, and a “proxy” configuration. The PROXY must be running somewhere outside of the firewall you’re having issues with, and you must be able to ping the host it is running on. To start the proxy just call “./ptunnel” without any flags. If you test this and it doesn’t work, you can try one or more of the following flags:

If you need to specify the device for packet capturing, use the “-c <device>” flag.
You might want to try it in unprivileged mode first, with the “-u” flag.
You can set an arbitrary password with the “-x <password>” flag, as well.
Finally, if you want some logging at the proxy side, you can use the “-f <logfile>” and “-v <level>” flags.

Once the proxy is running and waiting for connections, you can use the module on the LAN Turtle to connect to that.
A standard client connection would look like this:
ptunnel -p <proxy host> -lp <local port> -da <destination address> -dp <destination port>

For example, if we wanted to be able to ssh over ICMP to our proxy box (proxy_server) and our listening port for this is locally set to 443,
and the target to ssh to is the host “server_x” on port “22” because it’s the standard ssh port, we would run this for a client set up:
./ptunnel -p proxy_server -lp 443 -da server_x -dp 22

We would then run ssh like so:
ssh -p 443 localhost

The module has four configuration settings you need to fill in through the module configuration menu.
Those four settings all correspond to the four items we just discussed:
-p Proxy Server – called “PTunnel Host” in the menu
-lp Local Port
-da Destination Address – called “Dst Server” in the menu
-dp Destination Port

Note that there is no configuration for a password here. If you want to use a password on your proxy set up, you’ll need to manually configure that, or modify the turtle script to include it.
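For completeness, here is what the password option looks like when run by hand; both ends must agree on it (the host names and the password are placeholders):

```shell
# On the proxy host, outside the restrictive firewall:
sudo ./ptunnel -x hunter2

# On the client side (what the Turtle would run, plus the -x flag):
sudo ./ptunnel -p proxy_server -lp 443 -da server_x -dp 22 -x hunter2
```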

There is also no configuration through the Turtle menu to run the Turtle as the proxy host, only as a proxy client. You can, of course, always run the proxy yourself from the command line.

Again, this isn’t the fastest way to get your packets where they’re going, but it does work well when it works at all.

Also, if you do decide to run this, be responsible. Don’t break out of your corporate firewall if it’s against corporate policy… and it almost assuredly is.

Thanks for reading, and if you liked this or even didn’t like it, leave a comment below!

SSH Start to Finish Architecture – X11 Forwarding

I was reviewing previous posts, and realized I haven’t really covered this aspect of forwarding, yet. I would be remiss to leave it out.

X11 is a client/server protocol, which means you can run software on one machine and display its graphical output on another. It also has some inherent security risks, so a common way to mitigate some of those risks is to have SSH forward the X11 client connection to your local X11 server when you start a client on a remote system.

It seems backwards for some people, but you run the server on your workstation, and you run the remote graphical command as a client that calls back to your server. The server generates the graphics on behalf of the client. If you’re running a workstation with Linux, a BSD derivative, or something like one of the OpenSolaris forks, you are likely already running X11 for your desktop needs. We will make the assumption that you are for this process.

In order to do X11 forwarding, the remote server needs to be configured to allow it. The settings that matter are: “X11Forwarding yes” to turn on the forwarding, “X11DisplayOffset 10” (default) to determine the offset for the display that will be used, “X11UseLocalhost yes” (default) to tell sshd to bind the forwarding server to the loopback address, and “XAuthLocation /usr/X11R6/bin/xauth” (default) if you need to provide a path to the xauth program because it’s not in the default location on your system.

It may be that the only setting you need to adjust is “X11Forwarding” from “no” to “yes” on your target system.
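Spelled out as an sshd_config excerpt, that minimal case looks like this (the last three lines are typically the defaults, shown for completeness):

```shell
# sshd_config excerpt for X11 forwarding:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
XAuthLocation /usr/X11R6/bin/xauth
```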

Once you’ve done this, you can make your connection to the target system by passing the -X or -Y flag to the ssh client. The -X flag treats the forwarding as untrusted (ForwardX11Trusted no), which sets higher restrictions on how the X11 forwarding can be used and puts a 20 minute expiration timer on the xauth token. The -Y flag marks the forwarding as trusted and does not set these restrictions. It’s up to you to decide which flag you want to use.

After you connect, you can check that your environment is set up appropriately. You should have a “.Xauthority” file in your home directory, and you should have an environment variable already set called ${DISPLAY} that should probably show “localhost:10.0” when you echo it out.
ls -ld .Xauthority
echo ${DISPLAY}

After you’ve confirmed these, you can test your forwarding with something simple, such as xeyes, or xclock, if either of those are installed on the target machine. If they are not, go ahead and try whatever X11 program you intended to run. You should see the program show up on your desktop once you do.

Finally, if you have need of running an X11 program as a different user, you can use the xauth command to merge your .Xauthority tokens with the other user’s environment and then switch to that user to run your command. You will need to extract your xauth token for your DISPLAY, and merge it for the other user’s environment. The standard way to do this is with “xauth extract” piped to “xauth merge” as shown in the full session example below.

ssh -Y User_A@Server_B
ls -ld .Xauthority
echo ${DISPLAY}
xauth extract - ${DISPLAY} | sudo -i -u User_B xauth merge -
#OR: xauth extract - ${DISPLAY} | sudo su - User_B -c "xauth merge -"
sudo -i -u User_B #(or sudo su - User_B)
echo ${DISPLAY} #(May need to manually set this to what you were given upon login)

The client configuration has settings to always or never enable this for you. These should probably be set in Match blocks for just the servers you need to run X programs on regularly, and not set at all otherwise.

ForwardX11 yes/no
ForwardX11Trusted yes/no

The related client setting, “ForwardX11Timeout,” controls how long untrusted (-X) forwarding remains valid. Its time format is a number followed by a modifier unit: “S” or “s” for seconds, “M” or “m” for minutes, and so on, all the way up to weeks. No unit indicates seconds by default.

When you are done, you can manually delete your token with “xauth remove ${DISPLAY}” if you so desire.

Hopefully this helped shed some light on how to get X11 Forwarding working from a basic to complex scenario. This is one of the most commonly asked questions I’ve had in the past, and I’m sorry it wasn’t covered sooner.

If you have any questions on this, leave a comment. Thanks for reading!

Fun-Day Friday – Nintendo Entertainment System Classic

So today is the day that Nintendo releases the NES Classic. It’s a smaller console designed to look much like the original Nintendo Entertainment System. It has HDMI output, and 30 classic games built in. The majority of the titles were games I enjoyed as a child, so this is on my want list. Amazon will have these available starting at 4:00 p.m. CST. They are turning off 1-Click ordering for this release to give everyone a fair chance at it. They will have limited stock. I went to the local soul sucking Wal-Mart to buy one in person, but there were only 6 units available. I was (un)lucky number 7. Since I have to work, I won’t be able to park my rear at any of the other facilities that would have them “at opening time,” so I’m stuck waiting for my chance at it on Amazon.

As for NaNoWriMo, I’m at a total of 18,062 words as of November 10. This is ahead of the curve. I’ve managed to hit the minimum “average” per day every day. My peak number was 2,127 words on November 8. I’m still on the fence about whether to share this story or not, when it’s done. I guess I’ll know at the end of November.

Is there something that stirs your own nostalgia? Things you enjoyed from your childhood that you would like to relive for a while? Share in the comments.

Thanks for reading!

Hacker-Tool Hump-Day – Hak5 LAN Turtle module – sshfs

There is a very nice program that allows you to use an ssh connection to “mount” a remote directory as if it were local. This program is called “sshfs,” and it makes things easier when you need to copy a lot of files between systems but don’t want to deal with the scp or sftp commands. The Hak5 LAN Turtle has a module for this, and its primary purpose (most likely) is to assist in offloading large files to a remote location. You could use sshfs to remote-mount a file system on another machine you control, then configure your exfiltration tools to dump their payloads to that directory. In essence, the payload would never truly be written to the Turtle.

To use this, you need to go to the modules and choose “configure” as you would any module we’ve covered, thus far.

The configuration takes a “Host,” a “Port,” a “User,” and a “Path” as options to be filled in. Since a “Password” is not among the options, you will need to do some preliminary work as well: ssh keys need to be generated and configured for the target user at that host.

The “Path” option can be left empty if you just want to use the target user’s home directory for the path.

Once everything is configured, use “Start” to make sure it works, and “Enable” if you want it to persist, as normal.
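For reference, the manual equivalent from a shell looks something like this (the host, user, and paths are placeholders, and the key must already be in place as described above):

```shell
# Mount a remote drop directory over ssh, use it, then unmount.
mkdir -p /sshfs/drop
sshfs -p 22 user@drop_host:/srv/drop /sshfs/drop

cp /tmp/capture.pcap /sshfs/drop/  # data lands on drop_host, not the Turtle

fusermount -u /sshfs/drop          # unmount when finished
```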

This module is quite nice considering the very limited space on the LAN Turtle, so play with it. I think you’ll enjoy the benefits.

Thanks for reading!