Tuesday, September 25, 2012

Splitting a ZFS pool

Splitting a ZFS mirror is very useful when you need a quick copy of a pool at runtime without duplicating the entire file system. My current pool configuration looks like this:

# zpool status orapool
  pool: orapool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        orapool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
# zpool list
NAME      SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
orapool  1.97G   130K  1.97G     0%  ONLINE  -


I have one pool with two mirrors and four devices. Each device has a capacity of ~1GB. On this pool I have created one file system:

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
orapool                     117K  1.94G    21K  /orapool
orapool/u02                  21K  1.94G    21K  /u02
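
For reference, a pool with this layout could be created with commands like these (a sketch based on the device names shown above, not taken from the original setup):

# zpool create orapool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# zfs create -o mountpoint=/u02 orapool/u02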


To split the pool, run the zpool split command:

# zpool split orapool testpool c1t1d0 c1t3d0
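
Tip: on releases where zpool split supports the -n flag, you can preview the configuration the new pool would get with a dry run before actually splitting anything:

# zpool split -n orapool testpool c1t1d0 c1t3d0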

Before you can do anything with the new testpool, you need to import it first:

# zpool import testpool
cannot mount 'testpool/u02': mountpoint or dataset is busy
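
As an aside, if you are not sure of the pool name, running zpool import without any arguments lists all pools that are available for import, including the freshly split one:

# zpool import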


When the pool is imported, ZFS tries to mount all of its file systems automatically, which does not work here because /u02 is already in use by orapool/u02. You need to set a new mountpoint first; then you can mount testpool/u02:

# zfs set mountpoint=/testu02 testpool/u02
# mkdir /testu02
# zfs mount testpool/u02
# df -h
Filesystem             size   used  avail capacity  Mounted on
...
orapool                1.9G    21K   1.9G     1%    /orapool
orapool/u02            1.9G    21K   1.9G     1%    /u02
testpool               1.9G    21K   1.9G     1%    /testpool
testpool/u02           1.9G    21K   1.9G     1%    /testu02
...
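
Instead of changing the mountpoint, you could also have imported the split pool under an alternate root, which mounts all of its file systems below that directory without touching their mountpoint properties:

# zpool import -R /test testpool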


Another cool feature is to import the split pool directly by defining an alternate root:

# zpool split -R /test orapool testpool c1t1d0 c1t3d0

In this case the pool is automatically imported with the alternate root /test, and all of its file systems are mounted under it, e.g. /test/u02:

# df -h
Filesystem             size   used  avail capacity  Mounted on
...
orapool                1.9G    21K   1.9G     1%    /orapool
orapool/u02            1.9G    21K   1.9G     1%    /u02
testpool               1.9G    21K   1.9G     1%    /test/testpool
testpool/u02           1.9G    21K   1.9G     1%    /test/u02
...
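
Since testpool is now a completely independent pool, you could also export it and, if the disks are shared, import it on another host to work with the copy there:

# zpool export testpool

On the other host, a plain zpool import testpool (optionally with -R) brings the copy online.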


Under /test/u02 you can do everything you need without affecting the original file system /u02. When your work is done and you want to recreate the original mirrors, destroy the testpool first:

# zpool destroy testpool
# rmdir /test/u02
# rmdir /test


Finally, reattach the detached disks:

# zpool attach orapool c1t0d0 c1t1d0
# zpool attach orapool c1t2d0 c1t3d0
# zpool status orapool
  pool: orapool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Aug 29 21:37:12 2012
config:

        NAME        STATE     READ WRITE CKSUM
        orapool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0  84K resilvered

errors: No known data errors


Your original pool configuration is back.
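
If you do this regularly, the whole cycle can be scripted. The sketch below uses the pool, device and mount names from this post and leaves the actual work on the copy as a placeholder; adjust it to your environment before use:

#!/bin/sh
# Sketch: split off one half of each mirror, use the copy, then rebuild the mirrors.
POOL=orapool
COPY=testpool
ALTROOT=/test

# Split the second disk of each mirror into a new pool and import it under $ALTROOT
zpool split -R $ALTROOT $POOL $COPY c1t1d0 c1t3d0 || exit 1

# ... work with the copy under $ALTROOT/u02 here (backup, test restore, ...) ...

# Throw the copy away and reattach the disks to their original mirrors
zpool destroy $COPY
zpool attach $POOL c1t0d0 c1t1d0
zpool attach $POOL c1t2d0 c1t3d0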
