Proxmox – Ready for the Big Leagues?

In the world of virtualization, a decently long pedigree and strong support are king. There is an old saying that nobody ever got fired for buying Cisco. The hypervisor equivalent would probably be either Hyper-V or VMware. So is there room for a more homegrown, Linux-based virtualization platform?

I tried to flesh out some details on Proxmox in my homelab to get a general idea of performance, stability, and support. I grabbed the latest spin of the Debian-based virtualization environment and went to work. With the cast-off machines I have available for testing, I didn’t expect big-boy server performance, but I do know what kind of performance I could get with ESXi on these same machines. The install was painless, and it took about 30 minutes to get my four machines booted and configured.

Here is where I ran into my first problem. There are two different wiki pages in the official documentation for setting up clustering, each with a different procedure. Some forum digging later, I learned that the pvecm command was introduced to deprecate the previous method. Also undocumented is the fact that you must copy the root user’s ssh key from each machine to every other machine before you begin tying the cluster together. My first attempt without doing this resulted in an insufficient quorum (three machines are required): the third machine was unable to join because the cluster leader was waiting on a quorum and never processed the added key. I had to reinstall and start over, so be sure to ssh-copy-id before you begin.
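
For anyone following along, the order of operations that finally worked for me looks roughly like this; the node names and addresses are placeholders, not my actual lab hosts:

# copy the root ssh key from every node to every other node first
ssh-copy-id root@node2
ssh-copy-id root@node3
ssh-copy-id root@node4

# on the first node, create the cluster
pvecm create homelab

# on each remaining node, join using the first node's address
pvecm add 192.168.1.10

# verify quorum once at least three nodes have joined
pvecm status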

After I got the cluster created, I was really excited to try out the built-in cluster storage management suite, Ceph. Ceph is a really elegant approach to storage: each node (server) contributes whatever storage you attach to it, and you take those block devices and add them to a pool. The storage drives are called OSDs. Transactions in flight occur on the OSDs or on separate block devices you designate as journals; it’s best to use something with good latency characteristics, such as an SSD, for a journal. But I’m getting ahead of myself because, yet again, there are two separate documents outlining two separate procedures for setting up the Ceph cluster. After more digging, the second one turned out to be more up to date. Unfortunately, I was unable to use the Ceph pool that I created because an undocumented keyring issue got in the way. What you need to do is copy the key created during cluster setup into a new directory and name the file after what you want to call your storage pool:

mkdir -p /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-sshd_01.keyring
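
Once the keyring is named to match, the storage definition in /etc/pve/storage.cfg can reference the pool. Something along these lines is what it should end up looking like; the monitor addresses are placeholders, and the storage ID on the first line has to match the keyring filename:

rbd: ceph-sshd_01
    monhost 192.168.1.10;192.168.1.11;192.168.1.12
    pool rbd
    content images
    username admin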

This was the third issue I ran into just trying to configure Proxmox, before I had even begun using it to host anything. I soldiered on, only to discover that I was unable to upload any reasonably sized .iso to mount for OS installation. The option exists in the UI, but the web server is incorrectly configured by default and fails silently, repeatedly, when attempting to handle any installation .iso. After researching the issue, it appears that a lead developer on the project doesn’t know how to configure Apache. Apache can be configured to accept uploads of any size you want. Case in point: my ownCloud setup.

[Screenshot: owncloud-upload]
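
The relevant bits of that ownCloud vhost boil down to a handful of directives. The values below are illustrative rather than copied verbatim from my server, and they assume mod_php:

<Directory /var/www/owncloud>
    # 0 = no request body size limit at the Apache layer
    LimitRequestBody 0
    # the PHP limits are usually the real ceiling for big uploads
    php_value upload_max_filesize 16G
    php_value post_max_size 16G
    php_value max_input_time 3600
    php_value max_execution_time 3600
</Directory>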

After using scp to copy my installation media to the appropriate directory, I was finally able to install some virtual machines. Performance seemed good, especially in terms of CPU and memory utilization, right about where I expected a lean hypervisor to land. The storage, however, was great at reads but had deal-breaking issues with writes: a benchmark within a Linux VM put sustained writes at 8.5 MB/s. For hosts connected by gigabit Ethernet, that sort of performance is inexcusable. ZFS is available as a storage backend for Proxmox, and I recommend using it, though you lose some of the versatility and scalability that come with a clustered network filesystem.
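
If you want to reproduce that kind of number, a simple sequential write test with the page cache bypassed is enough; this is a generic example rather than my exact invocation:

# 1 GB sequential write inside the guest, bypassing the page cache
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
rm /tmp/ddtest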

Despite all the documentation issues, setup foibles, and general usability problems, I persevered. The final nail in the coffin was live migration: it is not supported for LXC containers at all, and I saw more downtime during live migrations than I would with Hyper-V or a vCenter cluster.
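
For reference, migrations are kicked off per guest from the command line (the IDs and node name here are placeholders): KVM guests can move while running, but containers have to go offline first.

# online migration of a KVM guest
qm migrate 101 node2 --online

# LXC containers can only be migrated offline
pct migrate 201 node2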

I’m really glad that a Linux-based virtualization platform is being developed, and it is coming along nicely, but as of December 2015 I’d have to conclude that VMware is still king by a country mile.

kyle
