WARNING! THIS POST HAS BEEN MARKED AS OUTDATED!
While the article may still contain useful information, more current articles may be available elsewhere on the Internet. Please pay close attention to the version numbers of the software this article refers to; if you don't understand what you are doing, you could break your system. If you would like to see this article updated, please contact the site administrator using the Contact page. Thanks!
Updated (11/21/2007): I’ve added an updated version of this How-to on the community supported Ubuntu documentation site. The new document can be found at: https://help.ubuntu.com/community/SinglePacketAuthorization.
Single Packet Authorization (SPA) using “fwknop” is probably one of the coolest recent innovations in server and network access control technology. Just what is SPA, you ask? SPA is a method of limiting access to server and network resources by cryptographically authenticating users before any TCP/IP stack access is allowed.
In its simplest form, your Linux server can have an inbound firewall rule that by default drops all access to any of its listening services. Nmap scans will completely fail to detect any open ports, and zero-day attacks will have no effect on vulnerable services, since the firewall is blocking access to the applications.
The server, however, has a nifty trick up its sleeve. An authorized user sends a single encrypted UDP packet that is passively sniffed and analyzed, using pcap, by the fwknopd service running on the server. If the packet is successfully authenticated, fwknopd dynamically creates an iptables firewall rule granting the source IP address of the authorized client access to the service for a defined period of time (the default is 30 seconds). Pretty frickin’ cool, eh?
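To make the mechanics concrete, here's a rough sketch of the iptables policy involved. These rules are my own illustration, not taken from fwknop's configuration; the exact rules fwknopd generates may differ, and the client address is made up:

```shell
# Default-drop posture: nothing gets in unless explicitly allowed
# (must be run as root; shown for illustration only).
iptables -P INPUT DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# After a valid SPA packet, fwknopd inserts a narrow, temporary rule
# along these lines, granting only the authenticated client's source
# IP access to SSH (192.0.2.10 is a made-up client address):
iptables -I INPUT -s 192.0.2.10 -p tcp --dport 22 -j ACCEPT
# ...and removes the rule again once the timeout (30 seconds by
# default) expires.
```

The key point is that the ACCEPT rule is scoped to one source address and one port, and exists only for a short window.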
Okay, so here’s how to get it working in Ubuntu 7.04.
An OpenSSH server can be used as a SOCKS compliant proxy, allowing one to tunnel virtually any type of traffic via the SSH protocol. This is very useful when surfing the web on untrusted networks such as hotel internet services and wireless hotspots. You just never know who’s snooping in on your data.
All you need is external access to a trusted OpenSSH server, perhaps the one you have at home, work, etc. If you’re using your laptop to surf the internet at your local coffee shop, you’ll simply need to establish a connection to that external SSH server using the appropriate client options, and configure your web browser’s proxy settings to connect to a locally defined TCP port.
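As a quick sketch of what that looks like in practice (the host name and port below are placeholders, not from any particular setup):

```shell
# Open a dynamic (SOCKS) forward on local port 1080.
# -D sets up the SOCKS proxy; -N means "no remote command",
# just forward traffic. "user@sshserver" is a placeholder.
ssh -N -D 1080 user@sshserver

# Then point your browser at the tunnel, e.g. in Firefox:
# Preferences -> Connection Settings -> Manual proxy configuration,
# SOCKS Host: localhost, Port: 1080.
```

All of the browser's traffic then travels encrypted over SSH to the trusted server before heading out to the internet.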
If you’re not going to use tapes, CDs, DVDs, or some other form of attached media for storing your backups, you’re more than likely going to use some form of remote network storage repository. There are many ways to ship your *nix backups across a network to a remote file system. Using SSH (and its related tools) is among the most popular methods for this delivery process, as it can be relatively fast, free, secure, and very flexible.
In the following examples, I’ll show you three ways to ship an archived folder to a remote SSH server.
Method 1: Secure Copy
Using ‘scp’ (secure copy), one can take any existing file and deliver it to an SSH server. This means that you can create a backup, store it temporarily on your “local” file system, and copy the file across the network.
In this example, one backs up a folder in their home directory called “myfiles” using tar and gzip compression, and then copies the resulting archive using scp to a folder called /archives on a remote SSH server.
$ tar -czvpf myfiles.tar.gz ~/myfiles
$ scp myfiles.tar.gz user@sshserver:/archives/
$ rm myfiles.tar.gz
Cool stuff, but the downside is two-fold:
(1) If your backup is larger than the available space on your local file system, this method obviously won’t work;
(2) If your backup is large, the entire process takes a little longer than you might find convenient, since you have to first create the backup, and then copy it across the network.
A better solution would be to start sending the backup during the file creation process, which leads us to the next two methods.
Method 2: Concatenate to SSH
SSH can read from STDIN and print results to STDOUT, which means one can concatenate any type of “input” to a remote SSH server. For example, you could redirect the output of ‘tar’ using the following syntax:
$ tar czpvf - ~/myfiles | ssh user@sshserver "cat > /archives/myfiles.tar.gz"
As you can see, with a single command, you can both create and deliver the backup at the same time. The backup process does not take up any space on the local file system. Wicked cool!
There is however yet another way to accomplish this task as shown in the next section.
Method 3: Write to an SSH File System (SSHFS)
For those of you not familiar with SSHFS, it is a file system client based on SFTP and FUSE. It allows you to mount any remote SSH server to a local empty directory, just as you would with other devices like CD-ROMs, floppies, USB sticks, etc. What’s also great about this client is that it requires no server-side modification. It’s resource friendly, and sending data is just as fast as any other SSH file transfer.
In Ubuntu 7.04, the fuse kernel module and utilities are installed by default, and sshfs is available in the repositories.
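If sshfs isn't already present, installing it is a one-liner. (The group step reflects my understanding that Ubuntu releases of this era required your user to be in the “fuse” group to mount FUSE file systems; treat it as an assumption and skip it if mounting already works.)

```shell
# Install sshfs from the Ubuntu repositories.
sudo apt-get install sshfs

# On older Ubuntu releases, add your user to the "fuse" group so you
# can mount FUSE file systems; log out and back in for it to apply.
sudo adduser $USER fuse
```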
Once you have sshfs installed and working, the following example mounts the remote “/archives” directory to the local “~/temp-mount” folder, and then places the backup directly in the mounted file system. The file is transported across the network during the write process.
$ mkdir ~/temp-mount
$ sshfs user@sshserver:/archives ~/temp-mount
$ tar -czvpf ~/temp-mount/myfiles.tar.gz ~/myfiles
To unmount the directory,
$ fusermount -u ~/temp-mount
As you can see, using SSH for the delivery of your backups can make your life a whole lot easier. A suggested practice would be to use DSA/RSA public key authentication for making SSH connections. This way, you don’t have to rely on passwords every time the SSH client is used, which makes sense when applying any of the above examples to an automated process such as cron or at.
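Setting up key-based authentication only takes a minute; here's a sketch (the server name is a placeholder, and ssh-copy-id assumes the OpenSSH client tools are installed):

```shell
# Generate an RSA key pair. For unattended cron jobs you'd leave the
# passphrase empty (or use ssh-agent to hold an encrypted key).
ssh-keygen -t rsa

# Append your public key to the remote server's authorized_keys.
# "user@sshserver" is a placeholder for your own account and host.
ssh-copy-id user@sshserver

# Scheduled jobs can then connect without a password, e.g. a nightly
# crontab entry using the concatenate-to-SSH method:
# 0 2 * * * tar czpf - ~/myfiles | ssh user@sshserver "cat > /archives/myfiles.tar.gz"
```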
If you ever need to work with a large file and wish you could split it into smaller pieces, you’ll be pleased to know that it’s extremely easy to do in Linux. You can use the “split” utility that comes standard with most *nix variations. Let’s take a look at a couple of easy examples.
To create a test file to work with, the following will create one that’s exactly 100 megabytes. Note, I am using ‘dd’ with /dev/urandom to demonstrate that the results of the split and reassembly are completely accurate. This will be accomplished via md5 hash comparisons at the end of this process.
$ dd if=/dev/urandom of=testfile bs=1k count=102400
102400+0 records in
102400+0 records out
104857600 bytes (105 MB) copied, 23.2982 seconds, 4.5 MB/s
$ ls -lh testfile
-rw-r--r-- 1 gmendoza gmendoza 100M 2007-06-03 22:45 testfile
To split the file into five 20 MB files, use the split command as shown below. Note that the last argument is the prefix for the output files; here I am using “splitfiles”, and the -d option produces numeric suffixes.
$ split -b 20971520 -d testfile splitfiles
Verify by listing all files that begin with “splitfiles”. Below, you see the new files with the appropriate sequence numbers as a result of the split command.
$ ls -l splitfiles*
-rw-r--r-- 1 gmendoza gmendoza 20971520 2007-06-03 22:47 splitfiles00
-rw-r--r-- 1 gmendoza gmendoza 20971520 2007-06-03 22:47 splitfiles01
-rw-r--r-- 1 gmendoza gmendoza 20971520 2007-06-03 22:47 splitfiles02
-rw-r--r-- 1 gmendoza gmendoza 20971520 2007-06-03 22:47 splitfiles03
-rw-r--r-- 1 gmendoza gmendoza 20971520 2007-06-03 22:47 splitfiles04
To reassemble the smaller files back to their original state, concatenate them together using a simple redirect.
$ cat splitfiles* > newtestfile
… and list again to show your handiwork …
$ ls -lh newtestfile
-rw-r--r-- 1 gmendoza gmendoza 100M 2007-06-03 22:52 newtestfile
As proof that both the original and newly reassembled files are exactly the same, check the results of a cryptographic md5 hash:
$ md5sum testfile newtestfile
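The whole round trip can also be scripted and verified end-to-end. Here's a small self-contained sketch using a 1 MB scratch file so it runs quickly; the file names are my own, not from the listings above:

```shell
# Create a 1 MB scratch file of random data.
dd if=/dev/urandom of=testfile.small bs=1k count=1024 2>/dev/null

# Split it into four 256 KB pieces with numeric suffixes
# (splitsmall.00, splitsmall.01, ...).
split -b 262144 -d testfile.small splitsmall.

# Reassemble; the shell expands the glob in the correct numeric order.
cat splitsmall.* > rebuilt.small

# Verify: the two md5 hashes should match, and cmp should be silent.
md5sum testfile.small rebuilt.small
cmp testfile.small rebuilt.small && echo "files are identical"
```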
If you’re an avid user of Ubuntu or other Debian-based Linux distributions, then you’re probably very familiar with using APT and its related command-line utilities. You might, however, find it useful to create some command-line aliases that shorten the time it takes to type out these repetitive tasks.
"sudo apt-get update" can be shortened to "agu".
"sudo apt-get install" can be shortened to "agi".
"sudo apt-get dist-upgrade" can be shortened to "agd".
A very simple way to create a set of command line aliases would be to add them to the ~/.bashrc file located in your user’s home directory. Here’s an example of some of my favorite APT aliases.
# Favorite Aliases
alias agu='sudo apt-get update'
alias agi='sudo apt-get install'
alias agd='sudo apt-get dist-upgrade'
alias agr='sudo apt-get remove'
alias ags='sudo aptitude search'
alias agsh='sudo apt-cache show'
alias afs='sudo apt-file search'
alias afsh='sudo apt-file show'
alias afu='sudo apt-file update'
To apply the changes immediately to your bash profile without having to log out, simply run the following command:
$ source ~/.bashrc
Now, if you want to install the “vim-full” package, simply issue the following command:
$ agi vim-full
Remember, because “sudo” has been added to your alias, you don’t have to type it every time. It will prompt you for your password the first time, and won’t ask again for the duration of the defined timeout period. Cool?
“apt-file” is a very useful package you should install. The alias is defined above, but the package is not installed by default. It allows you to search for file names in all packages from all your defined repositories. For example, let’s say you’ve tried to run an application and it claims that you’re missing the library “libstdc++.so.5.0.7”. The following example tells you which package contains a file with that name, which you can then install:
$ afs libstdc++.so.5.0.7
Although these examples have been geared towards Debian and Ubuntu, you can obviously use aliases on any Unix-like operating system. The technique of applying them just varies depending on the shell environment you are using. Have fun!