Since the most powerful machine I own is my storage server, I thought it prudent to leverage its capabilities by running alternate operating systems to provide the services that I use. FreeBSD is awesome in that it is a coherent system coded by a passionate community and refined by decades of production service. Since I have 96TB of raw disk hanging off the machine, it was imperative that I use what is objectively the best filesystem currently in existence: ZFS. While ZFS runs on Linux now and is mostly nice and stable, I wanted the most tested and most performant platform for it. Short of utilizing the slightly crazy and far less supported OpenIndiana, FreeBSD is the best choice. All of the storage functionality and exporting of shares is fairly straightforward under FreeBSD. What wasn't quite so well documented was the process of getting virtual machines to run. There are some options:
The first option for virtualization on FreeBSD is to not use FreeBSD. VMware will let you run their hypervisor and put FreeBSD on top as a guest. You can then pass the disks through directly, as long as your environment has VT-d capability. I've heard of examples of this working just fine. I've also heard of it causing problems that were difficult to diagnose. It's really up in the air depending on what hardware you are using. In the end I decided against this approach, mostly because I am a freedom-loving American and I like my software to be as free as I am.
Your second option actually uses FreeBSD as the hypervisor. It's spelled bhyve but pronounced "beehive." It's eventually going to be to BSD what KVM is to Linux. Eventually. It's still under development and just wasn't quite ready when I was deploying my server. This will someday be the best option, but for now there is VirtualBox.
VirtualBox is a type 2 hypervisor that runs on just about everything. And while I would never even consider using it for any sort of serious business, in my experience it has been plenty good enough for a homelab setting. So how the heck do we get VMs spun up on a headless FreeBSD host?
Install VirtualBox using the fancy new package management system, pkg:
pkg install virtualbox-ose
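Depending on how the package was built, the kernel modules may come from a separate package. If the kldload commands below complain about missing modules, it is probably worth grabbing the kmod package as well:
pkg install virtualbox-ose-kmod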
Before we begin creating the VM, we need to set up FreeBSD to load the necessary kernel modules. There are three. Load them:
kldload vboxdrv
kldload vboxnetflt
kldload vboxnetadp
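You can confirm that all three actually loaded with kldstat:
kldstat | grep vbox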
Make sure they load every time the machine starts. The vboxdrv module gets loaded by the boot loader, so its entry goes in /boot/loader.conf:
vboxdrv_load="YES"
The networking pieces are handled by rc scripts, so those entries go in /etc/rc.conf (vboxheadless_enable enables the rc script that can autostart VMs at boot):
vboxnet_enable="YES"
vboxheadless_enable="YES"
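If you would rather not edit /etc/rc.conf by hand, sysrc (in the base system on any reasonably recent FreeBSD) will append the rc.conf entries for you; the loader.conf line still has to go in manually:
sysrc vboxnet_enable=YES
sysrc vboxheadless_enable=YES
echo 'vboxdrv_load="YES"' >> /boot/loader.conf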
Now we create a VM and name it. This also registers the VM with VirtualBox. You can find your particular ostype by running VBoxManage list ostypes.
VBoxManage createvm --name w8 --ostype Windows81 --register
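To verify that the registration took, list the known VMs:
VBoxManage list vms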
Then we need to define some attributes of the VM: the memory size, hardware type, and networking setup. If you want the VM's NIC to show up as its own discrete NIC on the network, bridged mode is what you are looking for. My NIC is named "igb1"; you can find the one you need to use with ifconfig.
VBoxManage modifyvm w8 --memory 8192 --ioapic on --cpus 8 --chipset ich9 --nic1 bridged --nictype1 82540EM --bridgeadapter1 igb1
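You can sanity-check the result with showvminfo, which dumps the whole configuration; grepping for "NIC 1" is just a convenience filter to show the bridged adapter:
VBoxManage showvminfo w8 | grep "NIC 1"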
Define the storage; the --size argument is in megabytes.
VBoxManage createhd --filename /path/to/store/HDDcontainer/w8.vdi --size 50000
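VirtualBox keeps a registry of disk images, so you can confirm the new .vdi was created and registered:
VBoxManage list hdds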
Create a storage controller to attach it to.
VBoxManage storagectl w8 --name "SATA Controller" --add sata --controller IntelAhci --portcount 4
Attach it.
VBoxManage storageattach "w8" --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium /path/to/store/HDDcontainer/w8.vdi
We also need an emulated optical drive to mount our ISO so that we can install the OS into our VM.
VBoxManage storagectl w8 --name "IDE Controller" --add ide --controller PIIX4
Attach our .iso.
VBoxManage storageattach w8 --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium /path/to/iso/file.iso
Start the VM. The option --vrde on starts a VNC session on the IP of the host NIC, on port 3389 by default. You will need a VNC viewer to see the console output and install from the mounted media. I used TightVNC, which worked just fine.
VBoxHeadless --startvm w8 --vrde on
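Note that VBoxHeadless ties up your terminal. Over SSH I find it more convenient to background it and, later, drive the VM from another shell with the standard controlvm subcommands: acpipowerbutton asks the guest to shut down cleanly, while poweroff pulls the virtual plug.
nohup VBoxHeadless --startvm w8 --vrde on > /dev/null 2>&1 &
VBoxManage controlvm w8 acpipowerbutton
VBoxManage controlvm w8 poweroff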
After you are done installing, set the VM to boot from the "HDD" by default to prevent boot device confusion. If you fail to do this, it will just boot from the install media again and again.
VBoxManage modifyvm w8 --boot1 disk
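You can also eject the install media entirely once the OS is on disk; --medium emptydrive leaves the virtual drive in place but empty:
VBoxManage storageattach w8 --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium emptydrive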
It’s best to remove the VNC console after everything is set up the way you want it. Just enable your remote access option of choice from within the VM OS for management.
VBoxHeadless --startvm w8 --vrde off
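Since we enabled the vboxheadless rc script back in /etc/rc.conf, the VM can also be started automatically at boot. On my system the script reads a vboxheadless_machines variable; check /usr/local/etc/rc.d/vboxheadless for the exact knobs on yours:
vboxheadless_machines="w8"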
One last thing to keep in mind is the NIC type that you assign to the VM. You are going to get the best performance by utilizing the virtio driver maintained by Red Hat. The package for the driver is available here. If you are installing another BSD or Linux VM, the driver is already there. If you are installing a Windows guest, I found it best to start with the Intel NIC from above, download the driver .iso, and then change --nictype1 to virtio. After booting, use the VNC console to assign the driver to your new NIC. This allows for the best performance and, from what I have experienced, improved system stability.
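The switch itself is a one-liner; run it while the VM is powered off:
VBoxManage modifyvm w8 --nictype1 virtio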