Mosh

Ever since 1995, SSH has been a stable workhorse for anyone dealing with Linux and Unix hosts, and even Windows. There is good reason for this. SSH has a number of powerful features. You can, for instance, set up a quick and dirty VPN of sorts by forwarding your traffic over a secure tunnel to another machine. SSH can also transfer files, which is enormously useful in itself, and programs like rsync build on this functionality to make syncing directories nearly magical. All this while maintaining the security of default AES ciphers over what just might be the most secure network protocol in wide use.

For all its great strengths, SSH does have a few weaknesses, and computer scientists have solved some of them by building on the protocol. The HPN-SSH patches addressed buffer and SMP bottlenecks that held SSH back from its true potential in terms of transferring files. More recently, Mosh has come along to solve issues of interactive performance and roaming. The vagaries of modern life mean that we often have to make do with administering machines over wireless or otherwise unstable connections. With old-school SSH, any interference on the line means re-establishing your connection. This is a particular annoyance on my laptop, for example: suspending and resuming interrupts the network connection, necessitating a new handshake to replace my broken pipe. If I connect with Mosh, the client simply re-establishes the session on its own, firing off UDP datagrams that it doesn't have to track with the per-packet acknowledgments TCP would traditionally necessitate; it simply waits for the latest state to repaint the terminal. Another neat feature is its increased interactivity. Managing my European machines from the Western US used to involve noticeable lag. While we can't change the speed of light, Mosh does a great job of making my terminal sessions much more responsive to input. Interrupt sequences are also non-blocking. This means that if you make a mistake and, for instance, cat out a huge file, you can Ctrl-C your way out of it at any time. Without Mosh, you may be stuck waiting for that scrolling output to finish. Mosh was developed by some awesome M.I.T. researchers. You can watch them talk about Mosh below:

If you find yourself re-connecting your SSH sessions all too often, you might want to check it out.
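Getting started is nearly identical to SSH. The commands below are a sketch; the hostnames, ports, and usernames are placeholders, not real machines:

```shell
# Mosh rides on SSH for authentication, then switches to its own UDP channel.
# Hostname, username, and ports below are hypothetical examples.
mosh admin@server.example.com

# Pin the server-side UDP port if a firewall only permits a known range
mosh -p 60001 admin@server.example.com

# Pass options through to the underlying ssh (e.g. a nonstandard SSH port)
mosh --ssh="ssh -p 2222" admin@server.example.com
```

Note that the server needs the `mosh-server` binary installed, and UDP ports (60000–61000 by default) must be open inbound.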

NAS

There is a saying that goes: “There are two types of people: those who have yet to lose data, and those who back up.” Proper backup can be done a number of ways. Options include internal hard disk, external hard disk, internal network, external network, and tape. The external network option has become more popular over the years as WAN pipes get larger and storage gets cheaper. Excellent companies like Box.net and Backblaze are happy to sell you subscriptions for storing your data with them. If you have a reasonably small data set, these services are probably the way to go. Your files will be offsite, which protects you from catastrophic failures such as equipment loss or theft and physical damage from the likes of fires and tornadoes. But for people like me with a bit more data to store…


I can’t really maintain a 26-terabyte dataset off-site. Backblaze would technically allow it, as they advertise unlimited storage, but I’m pretty sure Comcast wouldn’t like it if I began uploading and didn’t stop for nearly a year. My only real solution is local network storage. I’ve been using network storage for more than ten years now. I started with 320GB disks on Debian Linux with mdadm. Then it was 750GB disks organized the same way. Then another rebuild with ten 1.5TB disks using Ubuntu Server. I still have that 10-disk, 13.6TB array going. I have migrated it to BTRFS in order to take advantage of its extensive next-gen feature set. In particular, I would like to avoid the URE (unrecoverable read error) issue that can surface when rebuilding an array. I lost a whole array of data to this problem in 2011, and plan to never let it happen again. Since the next-gen filesystems store checksums for both file data and metadata, the integrity of the array is much stronger: errors can be detected and corrected without downtime. The majority of my data lives on a 12-disk ZFS on FreeBSD array. The spindles are Seagate Barracudas at 3TB apiece. To date I have had one drive that was DOA and another that failed after two years of continuous operation. The wiring is a bit monstrous, but with so much hardware packed into a standard chassis there is no way around it.
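The detect-and-correct claim is easy to exercise by hand: a scrub walks every block and verifies it against its checksum, repairing from redundancy where it can. A sketch of the relevant commands, with “tank” and the mount point as placeholder names:

```shell
# ZFS: walk the pool and verify every block against its checksum
# ("tank" is a hypothetical pool name)
zpool scrub tank
zpool status -v tank      # shows scrub progress and any repaired errors

# BTRFS equivalent for the older array (hypothetical mount point)
btrfs scrub start /mnt/array
btrfs scrub status /mnt/array
```

Running a scrub on a schedule (monthly is a common choice) is what turns silent corruption into a log entry instead of a rebuild-time surprise.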


I’ll post a follow up going into more detail about the filesystem, the hardware I have used, and the performance aspects of the system.

Experimenting with 10Gbit

In preparation for a storage series that I plan on posting soon, I was scouring the Internet for ways of improving network transfer without selling kidneys. One very cool option is to implement a point-to-point InfiniBand architecture between servers. Since I have a pair of storage servers that would benefit from >1Gbit/s speeds, this sounded ideal. I got the idea from a poster called “Flain.” Using a pair of Mellanox cards and some short CX4 cabling, it is possible to get 10Gbit between machines for around $100.

Some things to keep in mind should you be interested. First, this is for crazy people: home users have almost no use for anything beyond 1Gbit networking. Second, few systems are even capable of handling data beyond 1Gbit speeds. GigE has a theoretical maximum throughput of 125MB/s, which is roughly what modern 7,200 RPM HDDs can push for sustained transfer. High-end SSD users, on the other hand, have hardware more than capable of maxing out GigE. Since my use case involves shuffling data between two very fast disk arrays, I have to conclude that I NEED 10Gbit 🙂
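The back-of-envelope numbers are simple to check: divide line rate in megabits by 8 bits per byte. This ignores framing and protocol overhead, so real-world figures come in lower:

```shell
# Theoretical line rate in MB/s = megabits per second / 8
# (ignores Ethernet/IP/TCP overhead)
echo "GigE:   $((1000 / 8)) MB/s"
echo "10Gbit: $((10000 / 8)) MB/s"
```

To see what a link actually delivers, a tool like iperf works well: run `iperf -s` on one host and `iperf -c <address>` on the other, and compare against the numbers above.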

Parent Proofing the Browser

While home for the holidays, many in our profession will be called upon to answer any number of profane technical questions. Once per annum, I do my best to clean up the aging PC my parents use for eBay, shopping, and word processing. I’ve been insistent in the past that software should only be installed if it came from an angel descended from heaven; all else requires a requisition and approval process. Even so, as certain as death and taxes, Malwarebytes chews through the hard disk and spits out a decent number of malware detections. It’s almost a certainty that the browser was the infection vector for all of these. After backing up files and restoring to a premade image, I had to sit down and explore my options.

In my quest to make the browser safer, I came up with two options. One is simple and devious. The other is more complex, systematic, and sublime.

Option 1:

Step 1. Change the browser icon for Chrome to the IE icon.

The path is “C:\Program Files\Internet Explorer\iexplore.exe”

Step 2. Within Chrome, install Adblock Plus and HTTPS Everywhere.

Option 2:

Step 1. Install Privoxy.

Step 2. Open the Internet Options dialog.

Step 3. Choose LAN settings and change the proxy address to localhost and the port to 8118.

Feel free to check “Bypass proxy server for local addresses.”
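Once the proxy is set, it is worth confirming that traffic actually flows through it. Privoxy serves a built-in status page at http://p.p that is only reachable through the proxy, which makes for a quick smoke test (assuming curl is available on the machine):

```shell
# Fetch Privoxy's internal status page *through* the proxy;
# if the browser/proxy chain is broken, this request fails.
curl -x http://127.0.0.1:8118 http://p.p

# Confirm Privoxy is listening on its default port (Windows syntax)
netstat -an | findstr "8118"
```

Alternatively, just browse to http://p.p in the configured browser; a Privoxy status page means the proxy is in the path.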

Both of these options should prevent the browser from interacting with unnecessary advertising and more questionable content that could lead to infection.