- Root ZFS pool (rpool), which runs on a single disk and hosts the operating system.
- Secondary ZFS pool (storage), which is used for file storage. This second pool is quickly running out of space.
The storage pool currently uses two 1TB 7200RPM SAS drives configured as a mirrored vdev. The capacity of this mirror will be enlarged from 1TB to 2TB by replacing each 1TB SAS disk, one at a time, with a 2TB SATA disk. ZFS supports doing this while the pool remains online and available.
There are a couple of ways to do this. One option is to degrade the pool by pulling and replacing one disk at a time, but that won't be necessary here: the primary disk is attached directly to the motherboard, and the host bus adapter supports four SAS or SATA disks, so two of its ports are free. The server will simply be powered down to attach the two additional drives, and the original 1TB disks will only be removed once the replacement procedure is fully complete.
The command zpool list -v provides a nice summary of the existing pools and their available space. As you can see below, the second pool is at 93% capacity. Time to do something about that.
# Output of zpool list -v
NAME           SIZE  ALLOC   FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
rpool          298G  20.9G   277G         -    1%    7%  1.00x  ONLINE  -
  c4t0d0       298G  20.9G   277G         -    1%    7%
storage        928G   864G  64.3G         -   55%   93%  1.00x  ONLINE  -
  mirror       928G   864G  64.3G         -   55%   93%
    c2t16d0       -      -      -         -     -     -
    c2t17d0       -      -      -         -     -     -
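zpool list reports usage at the pool level. If you're curious about where the space is actually going, zfs list can break it down by dataset (shown here for the storage pool; your dataset layout will differ):
# Show space usage per dataset under the storage pool
zfs list -r storage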
The format command is useful for a number of things. In this case, it can be used to determine which disks are available for use. Once the list of disks has been displayed, simply press CTRL+C to exit the program.
# Output of format command:
AVAILABLE DISK SELECTIONS:
       0. c2t16d0 <WD-WD1001FYYG-01SL3-VR07-931.51GB>
          /pci@0,0/pci8086,2e21@1/pci1000,3090@0/sd@10,0
       1. c2t17d0 <WD-WD1001FYYG-01SL3-VR07-931.51GB>
          /pci@0,0/pci8086,2e21@1/pci1000,3090@0/sd@11,0
       2. c2t18d0 <Hitachi-HUA723020ALA641-MK7OA840 cyl 60798 alt 2 hd 255 sec 252>
          /pci@0,0/pci8086,2e21@1/pci1000,3090@0/sd@12,0
       3. c2t19d0 <ATA-HitachiHUA72302-A840 cyl 60798 alt 2 hd 255 sec 252>
          /pci@0,0/pci8086,2e21@1/pci1000,3090@0/sd@13,0
       4. c4t0d0 <SAMSUNG-HD321KJ-CP100-12-298.09GB>
          /pci@0,0/pci8086,28@1f,2/disk@0,0
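As an aside, on Solaris-family systems iostat -En is a quicker, non-interactive way to get a similar disk inventory; it prints vendor, model, and size details for each device:
# Non-interactive alternative: list disks with vendor, model, and size
iostat -En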
From the output above, the 2 new disks are identified as c2t18d0 and c2t19d0. These will be used to replace disks c2t16d0 and c2t17d0.
To ensure the pool will make full use of the extra space once both disks have been swapped, turn on pool auto-expansion and then verify that it is actually enabled.
# Enable pool auto-expansion
zpool set autoexpand=on storage
# Verify it's really enabled
zpool get all | grep autoexpand
rpool    autoexpand  off  default
storage  autoexpand  on   local
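As a side note, zpool get also accepts a specific property name and pool, which avoids the grep:
# Or query just the one property directly
zpool get autoexpand storage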
The zpool replace command is used to replace the first disk. In this command, disk c2t16d0 will be replaced with disk c2t18d0. Both disks remain attached while the data is resilvered onto the new one; when the process completes, disk c2t16d0 is automatically detached from the pool.
# Replace the first disk
zpool replace storage c2t16d0 c2t18d0
Now to check on the progress of the resilvering. This will likely take a couple of hours to complete.
# Check the status
zpool status -v storage
  pool: storage
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec 16 18:03:54 2017
        30.9G scanned out of 864G at 104M/s, 2h16m to go
        30.9G resilvered, 3.58% done
config:

        NAME             STATE     READ WRITE CKSUM
        storage          ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            replacing-0  ONLINE       0     0     0
              c2t16d0    ONLINE       0     0     0
              c2t18d0    ONLINE       0     0     0  (resilvering)
            c2t17d0      ONLINE       0     0     0

errors: No known data errors
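While waiting, zpool iostat is handy for watching the actual disk traffic generated by the resilver; with an interval argument it keeps printing updated per-device statistics until interrupted:
# Watch per-device I/O statistics, refreshing every 5 seconds
zpool iostat -v storage 5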
When resilvering completes, the zpool replace command will be used once more, this time to replace disk c2t17d0 with disk c2t19d0.
# Replace the second disk
zpool replace storage c2t17d0 c2t19d0
The second resilver also takes some time to complete, but the zpool status output again provides a completion estimate, so use it to monitor the progress.
# Check the status
zpool status -v storage
  pool: storage
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec 16 21:04:38 2017
        206G scanned out of 864G at 107M/s, 1h45m to go
        206G resilvered, 23.82% done
config:

        NAME             STATE     READ WRITE CKSUM
        storage          ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            c2t18d0      ONLINE       0     0     0
            replacing-1  ONLINE       0     0     0
              c2t17d0    ONLINE       0     0     0
              c2t19d0    ONLINE       0     0     0  (resilvering)

errors: No known data errors
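As an aside, newer OpenZFS releases also provide a zpool wait subcommand that simply blocks until an in-progress activity such as a resilver finishes, which is convenient in scripts. It postdates this system's Solaris-era tooling, so treat it as platform-dependent:
# On newer OpenZFS only: block until the resilver completes
zpool wait -t resilver storage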
Once the second resilver finishes, the pool status shows a clean mirror built entirely from the new 2TB disks.
# Check the status after the resilver completes
zpool status -v storage
  pool: storage
 state: ONLINE
  scan: resilvered 864G in 2h33m with 0 errors on Sat Dec 16 23:38:02 2017
config:

        NAME         STATE     READ WRITE CKSUM
        storage      ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c2t18d0  ONLINE       0     0     0
            c2t19d0  ONLINE       0     0     0

errors: No known data errors
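With both replacements finished, an optional but sensible final check is to scrub the pool so ZFS reads back and verifies every block now living on the new disks:
# Optional: verify all data on the new disks
zpool scrub storage
zpool status storage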
And the output of zpool list -v should now show the increased size of the ZFS pool.
# zpool list -v
NAME           SIZE  ALLOC   FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
rpool          298G  20.9G   277G         -    1%    7%  1.00x  ONLINE  -
  c4t0d0       298G  20.9G   277G         -    1%    7%
storage       1.81T   864G   992G         -   27%   46%  1.00x  ONLINE  -
  mirror      1.81T   864G   992G         -   27%   46%
    c2t18d0       -      -      -         -     -     -
    c2t19d0       -      -      -         -     -     -
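One last note: the expansion happened automatically here because autoexpand was enabled before the replacements. If it had been left off, the extra capacity could still be claimed after the fact with zpool online -e, which tells ZFS to expand a device to use all of its available space:
# Claim the extra capacity manually if autoexpand was off
zpool online -e storage c2t18d0 c2t19d0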