Just browse on over to the ZFS on Linux website and under 'packages' click the link for Ubuntu. That'll take you to Canonical's Native ZFS for Linux website. From there you can add the PPA.
Or you can simply do the following (I did this on Linux Mint 17.3, which is based on Ubuntu 14.04 LTS):
sudo add-apt-repository ppa:zfs-native/stable
followed by
sudo apt-get update
sudo apt-get install build-essential
Now we can install the ZFS software.
sudo apt-get install spl-dkms zfs-dkms ubuntu-zfs zfsutils zfs-doc mountall
A word of warning here. There is a specific order to getting everything installed. The SPL module needs to be built before the ZFS module, so spl-dkms is listed first.
Once everything is installed, you need to load the kernel module using the modprobe command (or you can reboot the computer, but this isn't Windows, so you're under no obligation to do that).
sudo modprobe zfs
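If you want to make sure the module actually loaded before going any further, a quick check like this should show it:
lsmod | grep zfs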
If you run into trouble getting things built, look here:
http://askubuntu.com/questions/431879/error-when-installing-ubuntu-zfs
Also, don't do what I did and forget to install mountall. Otherwise your ZFS dataset mounts won't persist beyond your next reboot.
Here is my story.....
First I did an lsblk to see my block devices. This showed me /dev/sdb and /dev/sdc were the 2 drives I wanted to mirror. Then I created my pool as a RAID mirror using the following syntax:
zpool create <pool_name> mirror <device 1> <device 2>
In my case this translated into this:
zpool create storage mirror /dev/sdb /dev/sdc
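A small aside: plain /dev/sdb and /dev/sdc worked fine for my purposes, but those sdX names can shuffle around between boots, so if you want something more stable you can hand zpool create the persistent device names instead. You can see what those look like with:
ls -l /dev/disk/by-id/
and then use the ata-* (or wwn-*) entries in place of /dev/sdb and /dev/sdc.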
After creating my ZFS storage pool with 2 spare 500 GB WD Blue SATA drives as a nice little RAID 1 mirror, I then continued on by enabling LZ4 compression for the entire pool:
zfs set compression=lz4 storage
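If you want to confirm the property actually took, asking for it back should show lz4 as a local value on the pool:
zfs get compression storage
Since ZFS properties are inherited, any datasets created under the pool afterwards pick up the same compression setting, which is why I set it at the pool level rather than per dataset.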
I followed that up by creating 2 ZFS datasets (one for ISO files and another for virtual machines).
zfs create storage/iso
zfs create storage/vm
I then checked my work with a zpool status and a zfs list and everything looked fine.
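For reference, those checks were nothing more than the stock commands, scoped to the pool:
zpool status storage
zfs list -r storage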
Next I transferred over about 80 GB of data from my laptop (I first tried to get Samba working, but eventually gave up on it and simply copied everything over using scp).
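The copy itself was nothing fancy, just a recursive scp pushed at the box, roughly along these lines (the hostname and source path here are made-up placeholders, not my actual ones):
scp -r ~/isos/* user@zfsbox:/storage/iso/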
It's probably worth mentioning a little snag I ran into while using scp. At first I got an "access denied" error, which makes sense: since I had done a sudo su prior to creating the pool and datasets, it was to be expected that both datasets would end up owned by root. I resolved that by doing a chown on the datasets. After changing ownership, I was able to scp the data as a regular user without any further difficulty.
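For what it's worth, the fix looked roughly like this (substitute your own user and group; the /storage/iso and /storage/vm paths are just the default mountpoints ZFS gives a dataset, i.e. /<pool>/<dataset>):
sudo chown -R youruser:youruser /storage/iso /storage/vm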
Later after my first reboot, I was like "Holy Crap", what just happened to my ZFS datasets? Nothing looks to be mounted!!!!
To make matters worse, zfs list showed everything just fine, including the 80 GB of data I had previously transferred over. But nothing was there except for the pool folder itself......and it was empty. Ugh!!!!!
By then it was midnight and having to get up early the next morning I decided to call it a night.
The next day after Googling a bit, I found a ZFS command that showed me the problem:
zfs get all
In all its glory I saw every single ZFS property for all of my datasets, including the one that mattered most right about then:
mounted no
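If you'd rather not wade through every property next time, you can ask for just the interesting ones on the datasets in question:
zfs get mounted,mountpoint storage/iso storage/vm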
So I Googled some more and discovered I had overlooked installing mountall. Long story short: don't forget that one, it's important. Oh, and by the way, ZFS doesn't normally use /etc/vfstab (OpenIndiana) or /etc/fstab (Linux), so you won't typically find your mounts there.....although you could (if pressed) add some auto-mounting values to /etc/default/zfs.
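One more thing while I'm at it: if you ever find yourself staring at that unmounted state again, you don't need a reboot to recover; ZFS will happily remount everything it knows about in one go:
sudo zfs mount -a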
That's it for now.....
Michael