
A Guide to Install and Use ZFS on CentOS 7

ZFS, short for Zettabyte File System, is an advanced and highly scalable filesystem. It was originally developed by Sun Microsystems and is now part of the OpenZFS project. With so many filesystems available on Linux, it is natural to ask what is special about ZFS. Unlike most other filesystems, ZFS is not just a filesystem but a logical volume manager as well. Some of the features that make it popular are:

  • Data integrity - data consistency and integrity are ensured through copy-on-write and checksum techniques
  • Pooling of storage space - available storage drives can be put together into a single pool called a zpool
  • Software RAID - setting up a raidz array is as simple as issuing a single command
  • Inbuilt volume manager - ZFS acts as a volume manager as well
  • Snapshots, clones, compression - some of the advanced features that ZFS provides

ZFS is a 128-bit filesystem, so its theoretical capacity limits are enormous (a single pool can address up to 2^128 bytes). In this guide, we will learn how to install and set up ZFS, and how to use some important ZFS commands, on a CentOS 7 server.

NOTE: The installation steps are specific to a CentOS server, while the ZFS commands are common to any Linux system.

Terminology

Before we move on, let us understand some of the terms that are commonly used in ZFS.

Pool

A logical grouping of storage drives. It is the basic building block of ZFS, and it is from the pool that storage space is allocated to datasets.

Datasets

The components of a ZFS pool, namely filesystems, clones, snapshots and volumes, are referred to as datasets.

Mirror

A virtual device that stores identical copies of data on two or more disks. If one disk fails, the same data is still available on the other disks of that mirror.

Resilvering

The process of copying data from one disk to another when a device is being restored, for example after a failed disk has been replaced.

Scrub

A scrub checks the consistency of data in ZFS, much like fsck does in other filesystems.

Installing ZFS

In order to install ZFS on CentOS, we first need to set up the EPEL repository for supporting packages, and then the ZFS repository to install the required ZFS packages.

Note: Please prefix sudo to all the commands if you are not the root user. 

yum localinstall --nogpgcheck http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el7.noarch.rpm

Now install the kernel development and zfs packages. The kernel development packages are needed because ZFS is built as a kernel module and inserted into the running kernel.

yum install kernel-devel zfs

Verify that the zfs module has been inserted into the kernel using the 'lsmod' command and, if not, insert it manually using the 'modprobe' command.

[root@li1467-130 ~]# lsmod |grep zfs

[root@li1467-130 ~]# modprobe zfs

[root@li1467-130 ~]# lsmod |grep zfs
zfs 2790271 0
zunicode 331170 1 zfs
zavl 15236 1 zfs
zcommon 55411 1 zfs
znvpair 89086 2 zfs,zcommon
spl 92029 3 zfs,zcommon,znvpair
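To have the zfs module loaded automatically on every boot, one common approach (an assumption about your setup, not a step from the original walkthrough) is to list it in a modules-load.d configuration file:

echo zfs > /etc/modules-load.d/zfs.conf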

Let us check if we are able to use the zfs commands:

[root@li1467-130 ~]# zfs list
no datasets available

Administration

ZFS has two main utilities, zpool and zfs. While zpool deals with the creation and maintenance of pools using disks, the zfs utility is responsible for the creation and maintenance of datasets.

zpool utility

Creating and destroying pools

First verify the disks available for you to create a storage pool.

[root@li1467-130 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8,  0  Mar 16 08:12 /dev/sda
brw-rw---- 1 root disk 8, 16 Mar 16 08:12 /dev/sdb
brw-rw---- 1 root disk 8, 32 Mar 16 08:12 /dev/sdc
brw-rw---- 1 root disk 8, 48 Mar 16 08:12 /dev/sdd
brw-rw---- 1 root disk 8, 64 Mar 16 08:12 /dev/sde
brw-rw---- 1 root disk 8, 80 Mar 16 08:12 /dev/sdf

Create a pool from a set of drives.

zpool create <options> <pool name> <drive 1> <drive 2> ... <drive n>

[root@li1467-130 ~]# zpool create -f zfspool sdc sdd sde sdf

The 'zpool status' command displays the status of the available pools:

[root@li1467-130 ~]# zpool status
  pool: zfspool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0
          sdf       ONLINE       0     0     0

errors: No known data errors

Verify if the pool creation was successful.

[root@li1467-130 ~]# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sda         19G  1.4G    17G    8%  /
devtmpfs        488M     0   488M    0%  /dev
tmpfs           497M     0   497M    0%  /dev/shm
tmpfs           497M   50M   447M   11%  /run
tmpfs           497M     0   497M    0%  /sys/fs/cgroup
tmpfs           100M     0   100M    0%  /run/user/0
zfspool         3.7G     0   3.7G    0%  /zfspool

As you can see, zpool has created a pool named 'zfspool' with a size of 3.7 GB and has also mounted it at /zfspool.

To destroy a pool, use the 'zpool destroy' command

zpool destroy <pool name>

[root@li1467-130 ~]# zpool destroy zfspool
[root@li1467-130 ~]# zpool status
no pools available

Let us now try creating a simple mirror pool.

zpool create <option> <pool name> mirror <drive 1> <drive 2>... <drive n>

We can also create multiple mirrors at the same time by repeating the mirror keyword followed by the drives.

[root@li1467-130 ~]# zpool create -f mpool mirror sdc sdd mirror sde sdf
[root@li1467-130 ~]# zpool status
  pool: mpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0

errors: No known data errors

In the above example, we have created a pool with two mirror vdevs, each consisting of two disks.

Similarly, we can create a raidz pool.

[root@li1467-130 ~]# zpool create -f rpool raidz sdc sdd sde sdf
[root@li1467-130 ~]# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0

errors: No known data errors
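raidz (raidz1) uses single parity, so the pool survives the loss of one disk. Where more redundancy is needed and enough disks are available, raidz2 (double parity) or raidz3 (triple parity) can be used instead. A sketch with the same four drives, assuming they are not already in use, would be:

zpool create -f rpool2 raidz2 sdc sdd sde sdf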

Managing devices in ZFS pools

Once a pool is created, it is possible to add or remove hot spares and cache devices, attach or detach devices from mirrored pools, and replace devices. However, non-redundant and raidz top-level devices cannot be removed from a pool. We will see how to perform some of these operations in this section.

I will first create a pool called 'testpool' consisting of two devices, sdc and sdd. Another device, sde, will then be added to it.

[root@li1467-130 ~]# zpool create -f testpool sdc sdd

[root@li1467-130 ~]# zpool add testpool sde
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0

errors: No known data errors

As mentioned earlier, I cannot remove this newly added device because this is neither a redundant (mirrored) nor a raidz pool.

[root@li1467-130 ~]# zpool remove testpool sde
cannot remove sde: only inactive hot spares, cache, top-level, or log devices can be removed

But I can add a spare disk to this pool and remove it.

[root@li1467-130 ~]# zpool add testpool spare sdf
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0
        spares
          sdf       AVAIL

errors: No known data errors
[root@li1467-130 ~]# zpool remove testpool sdf
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0

errors: No known data errors

Similarly, we can use the attach command to attach a disk to a mirrored or non-mirrored pool, and the detach command to detach a disk from a mirrored pool.

zpool attach <options> <pool name> <device> <new device>

zpool detach <pool name> <device>
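For example, attaching a second disk to a single-disk pool turns that device into a mirror, and the disk can later be detached again. A sketch, assuming a hypothetical pool named 'demo' built on sdc and a free disk sdd:

zpool attach demo sdc sdd
zpool detach demo sdd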

When a device fails or gets corrupted, we can replace it using the 'replace' command.

zpool replace <options> <pool name> <device> <new device>

We will test this by forcefully corrupting a device in a mirrored configuration.

[root@li1467-130 ~]# zpool create -f testpool mirror sdd sde

This creates a mirrored pool consisting of disks sdd and sde. Now, let us deliberately corrupt the sdd drive by writing zeros to it.

[root@li1467-130 ~]# dd if=/dev/zero of=/dev/sdd
dd: writing to ‘/dev/sdd’: No space left on device
2048001+0 records in
2048000+0 records out
1048576000 bytes (1.0 GB) copied, 22.4804 s, 46.6 MB/s

We will use the 'scrub' command to detect this corruption.

[root@li1467-130 ~]# zpool scrub testpool
[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Mar 18 09:59:40 2016
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdd     UNAVAIL      0     0     0  corrupted data
            sde     ONLINE       0     0     0

errors: No known data errors

We will now replace sdd with sdc.

[root@li1467-130 ~]# zpool replace testpool sdd sdc; zpool status
  pool: testpool
 state: ONLINE
  scan: resilvered 83.5K in 0h0m with 0 errors on Fri Mar 18 10:05:17 2016
config:

        NAME             STATE     READ WRITE CKSUM
        testpool         ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            replacing-0  UNAVAIL      0     0     0
              sdd        UNAVAIL      0     0     0  corrupted data
              sdc        ONLINE       0     0     0
            sde          ONLINE       0     0     0

errors: No known data errors

[root@li1467-130 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: resilvered 74.5K in 0h0m with 0 errors on Fri Mar 18 10:00:36 2016
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors

Migration of pools

We can migrate storage pools between different hosts using the export and import commands. For this, the disks used in the pool should be accessible from both systems.

[root@li1467-130 ~]# zpool export testpool
[root@li1467-130 ~]# zpool status
no pools available

The command 'zpool import' lists all the pools that are available for importing. Execute this command from the system where you want to import the pool.

[root@li1467-131 ~]# zpool import
   pool: testpool
     id: 3823664125009563520
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        testpool    ONLINE
          sdc       ONLINE
          sdd       ONLINE
          sde       ONLINE

Now import the required pool

[root@li1467-131 ~]# zpool import testpool
[root@li1467-131 ~]# zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          sdc       ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0

errors: No known data errors
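Note that a pool can also be imported using the numeric identifier shown in the 'zpool import' listing, which is useful when two exported pools share the same name. With the id from the output above, the equivalent command would have been:

zpool import 3823664125009563520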

iostat

One can check the I/O statistics of the pool devices using the 'zpool iostat' command.

[root@li1467-130 ~]# zpool iostat -v testpool
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool    1.80M  2.86G     22     27   470K   417K
  sdc        598K   975M      8      9   200K   139K
  sdd        636K   975M      7      9   135K   139K
  sde        610K   975M      6      9   135K   139K
----------  -----  -----  -----  -----  -----  -----
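'zpool iostat' also accepts an interval in seconds, in which case the statistics are printed repeatedly until interrupted. For example, the following (not run here) would refresh the numbers every 5 seconds:

zpool iostat -v testpool 5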

zfs utility

We will now move on to the zfs utility. Here we will take a look at how to create and destroy datasets, and how to use filesystem compression, quotas and snapshots.

Creating and destroying filesystems

A ZFS filesystem can be created using the 'zfs create' command:

zfs create <filesystem>

 

[root@li1467-130 ~]# zfs create testpool/students
[root@li1467-130 ~]# zfs create testpool/professors
[root@li1467-130 ~]# df -h
Filesystem           Size  Used  Avail  Use%  Mounted on
/dev/sda              19G  1.4G    17G    8%  /
devtmpfs             488M     0   488M    0%  /dev
tmpfs                497M     0   497M    0%  /dev/shm
tmpfs                497M   50M   447M   11%  /run
tmpfs                497M     0   497M    0%  /sys/fs/cgroup
testpool             2.8G     0   2.8G    0%  /testpool
tmpfs                100M     0   100M    0%  /run/user/0
testpool/students    2.8G     0   2.8G    0%  /testpool/students
testpool/professors  2.8G     0   2.8G    0%  /testpool/professors

From the above output, observe that even though no mount point was specified at the time of filesystem creation, the new filesystems were mounted under the pool's mount point, following the same path hierarchy as the pool.

'zfs create' also accepts the -o flag, with which we can set properties such as mountpoint, compression, quota and exec at creation time.
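For instance, a dataset could be created with a custom mount point and compression already enabled. A sketch with a hypothetical dataset name and mount point:

zfs create -o mountpoint=/mnt/reports -o compression=lz4 testpool/reports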

One can list the available filesystems using 'zfs list':

[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    31K  1024M  20.5K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students

We can destroy a filesystem using the destroy option:

zfs destroy <filesystem>
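For example, a dataset that is no longer needed (here a hypothetical one, not one of the datasets used in the rest of this guide) could be removed with:

zfs destroy testpool/scratch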

Compression

We will now look at how compression works in ZFS. Before we start using compression, we need to enable it by setting the compression property:

zfs set compression=<value> <filesystem|volume|snapshot>

Once this is done, compression and decompression happen transparently on the filesystem, on the fly.

In our example, I will enable compression on the students filesystem using the lz4 compression algorithm.

[root@li1467-130 ~]# zfs set compression=lz4 testpool/students
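We can confirm that the property has been applied with 'zfs get' (output not shown here):

zfs get compression testpool/students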

I will now copy a file of size 15M into this filesystem and check the size once it is copied.

[root@li1467-130 /]# cd /var/log
[root@li1467-130 log]# du -h secure
15M secure

[root@li1467-130 ~]# cp /var/log/secure /testpool/students/

[root@li1467-130 students]# df -h .
Filesystem         Size  Used  Avail  Use%  Mounted on
testpool/students  100M  1.7M    99M    2%  /testpool/students

Notice that the space used in the filesystem is only 1.7M, even though the file size was 15M. We can check the compression ratio as well:

[root@li1467-130 ~]# zfs get compressratio testpool
NAME      PROPERTY       VALUE  SOURCE
testpool  compressratio  9.03x  -

Quotas and reservations

Let me explain quotas with a real-life example. Suppose a university needs to limit the disk space used by the filesystems for professors and students. Let us assume that we need to allocate 100MB to students and 1GB to professors. We can make use of 'quotas' in ZFS to fulfill this requirement. A quota ensures that the amount of disk space used by a filesystem does not exceed the set limit. A reservation goes further by actually allocating the space, guaranteeing that the required amount of disk space is available to the filesystem.

zfs set quota=<value> <filesystem|volume|snapshot>

zfs set reservation=<value> <filesystem|volume|snapshot>

 

[root@li1467-130 ~]# zfs set quota=100M testpool/students
[root@li1467-130 ~]# zfs set reservation=100M testpool/students
[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    19K  2.67G    19K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students

[root@li1467-130 ~]# zfs set quota=1G testpool/professors
[root@li1467-130 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
testpool              100M  2.67G    19K  /testpool
testpool/professors    19K  1024M    19K  /testpool/professors
testpool/students    1.57M  98.4M  1.57M  /testpool/students

In the above example, we have allocated 100MB to students and 1GB to professors. Observe the 'AVAIL' column in 'zfs list'. Initially both filesystems showed 2.67GB available, and after setting the quotas the values changed accordingly.
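The current limits can also be read back directly as dataset properties (a quick check; output not shown):

zfs get quota,reservation testpool/students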

Snapshots

Snapshots are read-only copies of a ZFS filesystem at a given point in time. When created they consume virtually no extra space in the pool; space is used only as the live data diverges from the snapshot. We can either roll the filesystem back to that state at a later stage, or extract a single file or a set of files as required.

I will now create some directories and a file under '/testpool/professors' from our previous example and then take a snapshot of this filesystem.

[root@li1467-130 ~]# cd /testpool/professors/

[root@li1467-130 professors]# mkdir maths physics chemistry

[root@li1467-130 professors]# cat > qpaper.txt
Question paper for the year 2016-17
[root@li1467-130 professors]# ls -la
total 4
drwxr-xr-x 5 root root  6 Mar 19 10:34 .
drwxr-xr-x 4 root root  4 Mar 19 09:59 ..
drwxr-xr-x 2 root root  2 Mar 19 10:33 chemistry
drwxr-xr-x 2 root root  2 Mar 19 10:32 maths
drwxr-xr-x 2 root root  2 Mar 19 10:32 physics
-rw-r--r-- 1 root root 36 Mar 19 10:35 qpaper.txt

To take a snapshot, use the following syntax:

zfs snapshot <filesystem|volume>@<snapshot name>

 

[root@li1467-130 professors]# zfs snapshot testpool/professors@03-2016
[root@li1467-130 professors]# zfs list -t snapshot
NAME                         USED  AVAIL  REFER  MOUNTPOINT
testpool/professors@03-2016     0      -  20.5K  -

I will now delete the file that was created and then restore it from the snapshot.

[root@li1467-130 professors]# rm -rf qpaper.txt
[root@li1467-130 professors]# ls
chemistry maths physics
[root@li1467-130 professors]# cd .zfs
[root@li1467-130 .zfs]# cd snapshot/03-2016/
[root@li1467-130 03-2016]# ls
chemistry maths physics qpaper.txt

[root@li1467-130 03-2016]# cp -a qpaper.txt /testpool/professors/
[root@li1467-130 03-2016]# cd /testpool/professors/
[root@li1467-130 professors]# ls
chemistry maths physics qpaper.txt

The deleted file is back in its place.
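Alternatively, instead of copying individual files out of the snapshot, the whole filesystem could be rolled back to the snapshot state. A sketch; note that this discards every change made after the snapshot was taken:

zfs rollback testpool/professors@03-2016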

We can list all the available snapshots using zfs list:

[root@li1467-130 ~]# zfs list -t snapshot
NAME                          USED  AVAIL  REFER  MOUNTPOINT
testpool/professors@03-2016  10.5K      -  20.5K  -

Finally, let's destroy the snapshot using the zfs destroy command:

zfs destroy <filesystem|volume>@<snapshot name>

 

[root@li1467-130 ~]# zfs destroy testpool/professors@03-2016

[root@li1467-130 ~]# zfs list -t snapshot
no datasets available

Conclusion

In this article, you have learned how to install ZFS on CentOS 7 and how to use some basic but important commands from the zpool and zfs utilities. This is not a comprehensive list; ZFS has many more capabilities, and you can explore them further on its official page.
