Monday, February 29, 2016

Changing the storage location for virt-manager

As you probably know, libvirt defaults to storing all your virtual machines on the root file system using the directory /var/lib/libvirt/images. While that might not be a problem if you have a large primary drive, it's not always optimal. In my case, my primary drive is a small re-purposed laptop drive, so obviously that's just not going to work for me. My boot drive is just that, a boot drive.

So, without resorting to anything fancy like symlinking to another storage volume, here is a better way to go about changing this:


  1. As root, start virt-manager.
  2. Click Edit > Connection Details.
  3. The "Connection Details" dialog box opens; click the Storage tab.
  4. In the lower left hand corner of the dialog box, click the green plus sign to add a pool.
  5. The "Add Storage Pool" dialog box opens.
  6. Specify a name for the pool (call it anything you want, it's just a label).
  7. For the pool type, select dir: Filesystem Directory and click the Forward button.
  8. For the target path, browse to the directory you wish to use.
  9. After selecting your folder, click the Finish button.
  10. You should now have 2 storage pools (default and the new one you just created).

Notes:

1) virt-manager will automatically create the pool directory if it does not already exist.
2) virt-manager requires the pool directory to be owned by root.
3) That's why I had you run virt-manager as root.
4) You don't normally need to run virt-manager as root (and probably shouldn't).
5) You can do all of this using virsh instead of virt-manager. Just look at the Red Hat docs (a rough virsh sketch follows below).
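
For reference, here is a rough virsh equivalent (just a sketch; the pool name "vmstorage" and the path /data/vmimages are placeholders, so substitute your own):

sudo virsh pool-define-as vmstorage dir --target /data/vmimages
sudo virsh pool-build vmstorage        # creates the target directory if it does not exist
sudo virsh pool-start vmstorage
sudo virsh pool-autostart vmstorage    # start the pool automatically at boot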


Now.....after having done all this, one might expect to be able to use the new pool by selecting it during the creation of a new virtual machine. Well....that's not going to happen.

As it turns out, virt-manager does not behave as one might expect. There doesn't appear to be any configurable way to ensure new virtual machines are automatically created in the new pool. Nor can the default pool itself be modified to use a new location (unless you symlink it or something).

In fact, even if you stop and remove the default pool, disk images still won't be created in the new pool. Instead they'll be created right smack in the middle of your own home directory. Not only that, after the next system reboot, the default pool comes right back, and virt-manager will once again automatically use it as the default pool (no questions asked). Pretty strange behavior if you ask me.

To use your new storage pool, you have to do something completely different and rather unexpected.

Prior to creating your virtual machine, manually create a storage volume in your new pool:
  1. Start virt-manager.
  2. Click Edit > Connection Details, and then click the Storage tab.
  3. In the left hand pane, select your new storage pool.
  4. In the lower right hand pane, click the New Volume button.
  5. The "New Storage Volume" dialog box appears.
  6. Type a name for the new image file and select the format type.
  7. Now specify the max volume size and the amount of disk space to be initially allocated.
  8. Click the Finish button when done.
Then specify that volume when you create your new virtual machine.
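
If you prefer the command line, virsh can create the volume too. A minimal sketch (the pool name, volume name, and size here are placeholders):

sudo virsh vol-create-as vmstorage myvm.qcow2 40G --format qcow2

You would then browse to that existing volume when the new VM wizard asks where to put the disk.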

It goes without saying that if you fail to select the new volume when creating your virtual machine, a volume will automatically be created in the default pool instead.

Perhaps if you removed all pools and then created a new pool and gave it the name "default", then virt-manager might begin to use it as the default pool. I am only speculating as I have not yet tried it.

I am curious whether the odd behavior of the virt-manager GUI is inherent to KVM itself, or simply the result of poor interface design. At any rate, its behavior is not always very intuitive.


Tip: I've noticed virt-manager likes to do everything as root; it even wants to take root ownership of your ISO files (another peculiarity of virt-manager).

When creating a new virtual machine, you can work around this issue by simply making a copy of your ISO file and dropping it into /tmp. Once the file is in /tmp, who cares what virt-manager does to it, as the entire /tmp directory is swept clean upon every reboot.
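
For example (the ISO file name here is just a placeholder):

cp ~/Downloads/linuxmint-17.3-kde-64bit.iso /tmp/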

Hope that helps.


Michael

ZFS - coming to Ubuntu 16.04 LTS (Xenial Xerus)

Truth be told, ZFS is already here for Ubuntu (at least if you're running the 64-bit version, which you probably are). That's right, you can install it from a PPA right now on a number of different releases. This also means you can run ZFS on Linux Mint as well.

Just browse on over to the ZFS on Linux website and under 'packages' click the link for Ubuntu. That'll take you to Canonical's Native ZFS for Linux website. From there you can add the PPA.

Or you can simply do the following (I did this on Linux Mint 17.3, which is based on Ubuntu 14.04 LTS):

sudo add-apt-repository ppa:zfs-native/stable

followed by

sudo apt-get update

Now before you can install the ZFS related software, you'll first need some prerequisites (compilers, etc.), or you won't be able to build the kernel modules. We'll install those before anything else.

sudo apt-get install build-essential

Now we can install the ZFS software.

sudo apt-get install spl-dkms zfs-dkms ubuntu-zfs zfsutils zfs-doc mountall

A word of warning here. There is a specific order to getting everything installed. The SPL module needs to be built before the ZFS module, so spl-dkms is listed first.

Once everything is installed, you need to load the kernel module using the modprobe command (or you can reboot the computer, but this isn't Windows, so you're under no obligation to do that).

sudo modprobe zfs
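
To confirm the module actually loaded, a quick check should show it:

lsmod | grep zfs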

If you run into trouble getting things built, look here:

http://askubuntu.com/questions/431879/error-when-installing-ubuntu-zfs

Also, don't do what I did and forget to install mountall. Otherwise your ZFS dataset mounts won't persist beyond your next reboot.


Here is my story.....

First I did an lsblk to see my block devices. This showed me /dev/sdb and /dev/sdc were the 2 drives I wanted to mirror. Then I created my pool as a RAID mirror using the following syntax:

zpool create <pool_name> mirror <device 1> <device 2> 

In my case, that translated into this:

zpool create storage mirror /dev/sdb /dev/sdc

After creating my ZFS storage pool with 2 spare 500GB WD Blue SATA drives as a nice little RAID 1 mirror, I continued on by enabling LZ4 compression for the entire pool:

zfs set compression=lz4 storage
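
You can confirm the property took effect with:

zfs get compression storage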

I followed that up by creating 2 ZFS datasets (one for ISO files and another for virtual machines).

zfs create storage/iso
zfs create storage/vm

I then checked my work with a zpool status and a zfs list and everything looked fine.
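
For reference, those checks were simply:

zpool status storage
zfs list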

Next I transferred over about 80 GB of data from my laptop (I first tried to get Samba working, but eventually gave up on it and simply copied everything over using scp).

It's probably worth mentioning a little snag I ran into while using scp. At first I got an "access denied" error, which makes sense. Since I had done a sudo su prior to creating the pool and datasets, it was to be expected that both datasets would end up owned by root. I resolved that by doing a chown on the datasets. After changing ownership, I was able to scp the data as a regular user without any further difficulties.
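
The fix looked something like this (assuming the default mountpoints under /storage, and substituting your own user name for "mike"):

sudo chown -R mike:mike /storage/iso /storage/vm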

Later after my first reboot, I was like "Holy Crap", what just happened to my ZFS datasets? Nothing looks to be mounted!!!!

To make matters worse, zfs list showed everything just fine, including the 80 GB of data I had previously transferred over. But nothing was there except for the pool folder itself......and it was empty. Ugh!!!!!

By then it was midnight and having to get up early the next morning I decided to call it a night. 

The next day after Googling a bit, I found a ZFS command that showed me the problem: 

zfs get all

In all its glory I saw every single ZFS property for all of my datasets, including the one that mattered most right about then:

mounted no 

So I Googled some more and discovered I had overlooked installing mountall. Long story short, don't forget that one, it's important. Oh, and by the way, ZFS doesn't normally use /etc/vfstab (OpenIndiana) or /etc/fstab (Linux), so you won't typically find your mounts there.....although you could (if pressed) add some auto-mounting values to /etc/default/zfs.
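
If you find yourself in the same boat, installing mountall and then remounting by hand should bring everything back. Roughly:

sudo apt-get install mountall
sudo zfs mount -a
zfs get mounted storage/iso storage/vm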

That's it for now.....

Michael

What's in your toybox?

Thought I might do a little inventory of my current PC collection.....

At the moment I have 3 operational mid-tower PCs and a laptop (ThinkPad W500). I also have a hybrid mid-tower running a SPARC motherboard (from a Sun Blade 1500). I tossed the SPARC system together several years back when I was working daily with Solaris 8 at work.

Given I have no use for the SPARC box anymore, I'll likely be removing this motherboard and selling it on eBay. I have a spare Core 2 Duo motherboard with (I think) 8GB of memory and an AMD HD 6000 series graphics card, which I'll likely install into this system so it can be re-allocated as a home theater PC (perhaps running SteamOS or something similar).

As for the details of the operational stable:

My Lenovo ThinkPad W500 laptop is running Linux Mint 17.2 KDE. It has 8GB of memory, a 128GB primary SSD, and for expanded storage I swapped out the DVD drive for a hard drive caddy containing a 500GB WD Scorpio Black. I'm thinking about pulling this drive and replacing it with a 250GB SSD or something similar. At least then the laptop would have no moving parts (aside from the cooling fan).

My primary workstation is a nice refurbished HP Z210 workstation running Windows 10. It has a server-class motherboard, a quad core Xeon with hyperthreading, 8 GB of ECC memory, and an AMD R7 265 graphics card. It's basically a poor man's i7 gamer box. The primary drive is a 128GB SSD, and I have 2 old 150GB WD Raptors running in a RAID stripe.

Ya, I know there is no redundancy, and if I lose a single drive everything is gone, but this storage array only holds downloaded game content. So there is nothing on it which cannot be reproduced. When I configured the array, I chose speed over integrity. This is a very fast and nearly silent gaming rig. I also use it for occasional virtualization (VMware Player and VirtualBox).

Next up is my Linux box. It's running an old Gigabyte EP45-UD3R motherboard with a slightly overclocked Core 2 Quad CPU, 12GB of memory, and an old Nvidia graphics card. To reduce energy usage, I will likely dial this box back to stock settings (it's overclocked because it used to be my gaming rig). The primary storage drive is a 7200 RPM 160GB laptop drive with Linux Mint 17.3 KDE installed. For additional storage I have added two 500GB WD Blue drives, which are configured for storage integrity as a ZFS RAID mirror. This system (along with my laptop) will primarily be used for OpenIndiana (and illumos) systems documentation development (a.k.a. content creation). I will also be using it as a virtualization system (VMware, VirtualBox, and KVM).

Finally, there is the system I have built up as my primary storage server. It's an old Core 2 Duo motherboard with only a couple gigs of memory. Primary storage is a 300 GB hard drive (which is overkill really...but what else am I going to do with it?). Secondary storage consists of 2 brand new 1TB WD RE 7200 RPM nearline SAS drives. The drives are running off an old LSI hardware RAID controller configured as a simple host bus adapter (IT mode, I guess?). The sole purpose of the card is to support the operation of the SAS drives, as data integrity will be the sole responsibility of ZFS.

This system will be running OpenIndiana Hipster with a text-mode console (no GUI). Hipster makes for a great home NAS system, as ZFS is native to OpenIndiana (due to its OpenSolaris roots). No, it doesn't have ECC memory.....but I'm not going to let that stop me.

That's it for now.....

Michael

Hello World

Hello World,

My name is Michael Kruger and I created this blog to chronicle my journey through the wonderful world of open source software. In particular I volunteered in late December of 2015 to help with writing systems documentation for the OpenIndiana project. 

OpenIndiana is derived from OpenSolaris and is essentially a fully community-based continuation of the OpenSolaris project, which itself was the testbed (think Fedora) for what later became Oracle Solaris 11.

OpenIndiana shares most of the technologies found in Oracle Solaris 11, but is completely open source, as it uses the illumos kernel rather than Sun/Oracle's proprietary OS/Net consolidation. In a nutshell, illumos is OS/Net with all the proprietary bits removed and rewritten to be entirely open source.

And now for a little bit about me. While not a developer, I have done a dash of Bash and a sprinkle of PowerShell scripting here and there. I also have some modest technical writing experience, and some exposure to the software development process in general (and software configuration management in particular).

To round it all off, I have toyed around a little bit with version control systems (Subversion), and in support of the OpenIndiana project, am now teaching myself how to use Git, GitHub, etc., along with remembering everything I have forgotten about how to use Vim. Oh yes, there is also text markup. I am teaching myself how to write in AsciiDoc (well, actually the Ruby implementation of AsciiDoc, which is called Asciidoctor).

I also have roughly 15 years of experience supporting Windows desktops and servers (mostly desktops). A couple of those years included working with VMware vSphere, Red Hat Enterprise Linux, and even some UNIX (HP-UX and Solaris).

Most of what I expect to do as a contributing member of the OpenIndiana project is help with writing tutorials, producing a nice polished (BSD-like) OpenIndiana handbook, and assisting with the ground-up development of an all-new documentation toolchain (ideally something text-markup based and completely automated by means of continuous integration).

The current OpenSolaris docs (books) were all written in SolBook (a subset of DocBook) and have a rather nasty toolchain. My hope is to someday see all these books converted to AsciiDoc (plain text markup), as DocBook is a real bear to work with.

Michael