Our Colo Adventure Pt II – The Software

So you’ve got an absolute monster of a server: what do you run on it? There are some really great options. If you are like my co-worker and prefer clicky-clicky management and a familiar interface, there is always Windows Server. If you are hardcore and want the most stable, most performant option, FreeBSD is a strong contender. Linux is a great all-around performer and has very strong software support for whatever task you are trying to accomplish. These days, though, you aren’t really all that limited with any of them, since we can leverage virtualization.

In order to keep the natives from getting restless, Windows Server was our first choice. We could leverage the switch-independent NIC teaming that is new to Windows Server 2012, plus a native hypervisor for separating all of our services and controlling network access through the vSwitch. We both use Hyper-V at work and support numerous customer deployments on the platform. It’s not a bad system, but the management utilities are slightly lacking, especially for advanced features such as clustering. Since we have only a single server, though, those issues would have had minimal impact for us.

Windows Server installed just fine and getting things up and running was easy enough. But after placing a load on the machine, we discovered several problems. The first was an absolute showstopper: NIC aggregation simply would not work in a stable manner with our dual Intel NICs. Nothing we tried in terms of driver versions or settings solved the issue. The network would survive anywhere from 30 seconds to an hour of sustained load before becoming entirely unresponsive, and the system needed a reboot to become functional again.

While this was a dealbreaker, and Windows was dead to us at that point, we were curious about another feature new to Server 2012. Storage Spaces sounds like a magically awesome technology for pooling drives and managing storage: you use the GUI to make virtual disks on top of whatever physical disks you want, then carve out partitions on top of that storage as normal and go to town…when it works. For our purposes the only configuration that made sense was dual parity. We couldn’t justify throwing away half our disk space on a mirrored configuration, and with dual parity we would only lose two disks’ worth of space to redundancy. I had read that parity spaces perform badly, but I was shocked by just how poorly ours performed. The writes were the most concerning:

[Figure: write benchmark for the dual-parity storage space]

80MB/s writes, under the most ideal conditions, across 24 7,200 RPM spindles is wildly unacceptable. The 4K write and re-write numbers were so bad you’d almost get better performance from a 3.5″ floppy drive. I was shocked!

[Figure: read benchmark for the dual-parity storage space]

The read performance was sawtoothing all over the place, so I took Windows Server out behind the woodshed and put it out of its misery.

The next choice was Ubuntu Server LTS. It has a nice long five-year support window, and it’s just a sane Linux implementation. We would have gone with CentOS, but I needed a newer kernel because I wanted to run ZFS. I could write an entire post on why ZFS is the most amazing thing since sliced bread, and I might even do that sometime, but staying on point: ZFS was light-years ahead of Storage Spaces. We split the disks into three separate RAID-Z1 vdevs and combined them into a single pool, roughly as sketched below. Benchmarking this pool yields 1.2+ GB (yes, gigabytes) per second for sustained uncached reads, and writes hover around the 800MB/s range. The performance is just phenomenal. On top of that, ZFS uses our 48GB of RAM for read caching (the ARC), which makes things even faster. I couldn’t be happier with our disk performance.
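For the curious, creating a pool like ours looks roughly like this. This is a minimal sketch: the pool name and device letters are placeholders, and in practice you’d want the stable /dev/disk/by-id names rather than sdX letters.

```
# Three 8-disk RAID-Z1 vdevs striped into one pool (24 disks total).
# Device names here are placeholders, not our actual layout.
zpool create tank \
  raidz1 sdb sdc sdd sde sdf sdg sdh sdi \
  raidz1 sdj sdk sdl sdm sdn sdo sdp sdq \
  raidz1 sdr sds sdt sdu sdv sdw sdx sdy

# Verify the layout and health of the pool.
zpool status tank
```

ZFS stripes writes across the three vdevs, which is where the aggregate throughput comes from, and the ARC read cache grows into whatever RAM is free without any configuration.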

UFW makes a really simple frontend to iptables for managing our traffic and locking down ports.
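A few rules in the spirit of our setup; the ports shown here are illustrative, not our exact rule set:

```
# Deny inbound by default, then poke holes for the services we expose.
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp    # SSH
ufw allow 80/tcp    # HTTP
ufw enable
```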

We also started using KVM to keep things sane and separated, which leads us into Part III.
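As a teaser, spinning up a guest with virt-install looks something like this. Everything here is hypothetical: the name, sizes, ISO path, and bridge are placeholders, not our actual configuration.

```
# Hypothetical KVM guest; adjust names, sizes, and paths to taste.
virt-install \
  --name web01 \
  --ram 4096 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/web01.qcow2,size=40 \
  --cdrom /var/lib/libvirt/images/ubuntu-server.iso \
  --network bridge=br0 \
  --graphics vnc
```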

kyle