Last time I described managing RAID pools with ZFS. This time, I'm going to describe what volume management is - and what it's like on Linux - as a way to compare a traditional model to that of ZFS (coming in part 2!).

Historically, with most operating systems, the notion of a "volume" was tied to the partition layout of a physical disk or RAID device. For example, an administrator could create 4 partitions on a single drive, and allocate different amounts of storage to each partition based on its potential use. He would then typically create a filesystem on each partition and determine a logical representation for that filesystem (in UNIX or Linux, this would be a location in the filesystem hierarchy - in DOS this would be an additional "drive letter").

If he wanted to have RAID configured, he could either use RAID on the entire drive (common with hardware RAID) and partition the resulting device (which commonly appears to the OS as a single, unpartitioned drive), or he could partition the drives and then configure RAID across the partitions (common with software RAID).
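With Linux's md software RAID, for example, either approach amounts to a single mdadm invocation. This is only a sketch - the device names and the choice of a two-disk mirror are assumptions for illustration:

  # RAID-1 across two whole drives (the resulting /dev/md0 is then partitioned)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

  # ...or RAID-1 across partitions that were created on each drive first
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1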

Partitioning is clearly useful, but it has some severe limitations. Once created, a partition scheme is generally difficult to modify. Growing and shrinking partitions (and their associated filesystems) is usually cumbersome, as is adding additional partitions to an existing layout. You could typically never have a filesystem that was larger than a single physical disk or RAID group. Basically, if you didn't accurately predict your needs initially, you could have a lot of work to do later.

Enter the volume manager. Most modern operating systems include tools that allow the administrator to add an additional abstraction layer on top of the partitioning scheme. Linux's LVM, for example, takes one or more block devices (in LVM terms these are "physical volumes" which generally equate to RAID devices, partitions, or entire drives) and allows the administrator to create "volume groups" which contain them. These volume groups can then be subdivided into "logical volumes" in whatever configuration an administrator desires, and on those logical volumes the administrator can create filesystems. To the administrator, these resulting logical volumes are functionally equivalent to partitions.
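If you want to see those three layers on a running system, LVM provides a reporting command for each one (the names and sizes they print will of course depend on your configuration):

  pvs   # physical volumes: the underlying block devices
  vgs   # volume groups: pools built from those physical volumes
  lvs   # logical volumes: the partition-like slices carved from each group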

Volume managers, however, transcend many of the limitations of partitioning - they allow multiple devices to logically appear as a single large device, thus allowing filesystems to expand across physical boundaries; they allow for greater flexibility in resizing existing volumes on the fly; they allow additional devices to be added to an existing configuration on the fly; and they provide a mechanism by which other "useful things" can happen (such as snapshotting, encryption, or compression). There are still limitations, though - for example, LVM does not actually control the filesystem itself, only the logical volume on which it is placed. So, if you resize a logical volume, you still have to resize the filesystem that resides upon it, which may require unmounting the filesystem (or, for the root filesystem, even a reboot), depending on the filesystem.
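As a quick illustration of one of those "useful things", taking an LVM snapshot is a single command. The volume group and volume names (vg0, data) and the snapshot size here are hypothetical:

  # reserve 1G of copy-on-write space for a point-in-time snapshot of vg0/data
  lvcreate --snapshot --size 1G --name data-snap /dev/vg0/data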

Now, if you want to combine all of this in Linux (to achieve RAID/LVM backed storage), you would essentially do the following:

1) Install physical disks
2) Partition these disks (fdisk)
3) Add these partitions to md RAID groups (mdadm)
4) Add those md RAID groups to volume groups using LVM tools (pvcreate, vgcreate)
5) Subdivide those volume groups into logical volumes using LVM tools (lvcreate)
6) Create a filesystem on each of those logical volumes (mkfs)
7) Map these filesystems to locations on the filesystem hierarchy (modify /etc/fstab and run mount)
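To make that concrete, here is roughly what the sequence might look like for two new disks mirrored with md and handed to LVM. Treat this as a sketch rather than a recipe - the device names (/dev/sdb, /dev/sdc), RAID level, sizes, names (vg0, data), and mount point are all assumptions:

  # 2) partition each disk (interactively; one partition per disk here)
  fdisk /dev/sdb
  fdisk /dev/sdc

  # 3) build a RAID-1 array from the new partitions
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

  # 4) make the array a physical volume and put it in a volume group
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0

  # 5) carve a logical volume out of the volume group
  lvcreate --size 100G --name data vg0

  # 6) create a filesystem on the logical volume
  mkfs -t ext4 /dev/vg0/data

  # 7) add it to /etc/fstab and mount it
  echo '/dev/vg0/data /srv/data ext4 defaults 0 2' >> /etc/fstab
  mkdir -p /srv/data
  mount /srv/data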

Should he need to increase the size of a filesystem later, he must:

1) Modify the size of the logical volume (lvextend)
2) Modify the size of the filesystem (resize2fs)
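Continuing with the hypothetical vg0/data volume from above, and assuming an ext4 filesystem, that looks something like:

  # 1) grow the logical volume by 50G
  lvextend --size +50G /dev/vg0/data

  # 2) grow the filesystem to fill the new space (online for ext4)
  resize2fs /dev/vg0/data

(Recent LVM releases can combine the two steps with lvextend --resizefs, but conceptually it is still two operations on two separate layers.)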

To add a new filesystem, he must:

1) Create a new logical volume (lvcreate)
2) Create the new filesystem (mkfs)
3) Add the appropriate mount point (/etc/fstab and run mount)
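Again using the hypothetical vg0 volume group:

  # 1) create the new logical volume
  lvcreate --size 20G --name logs vg0

  # 2) create the new filesystem
  mkfs -t ext4 /dev/vg0/logs

  # 3) add the mount point and mount it
  echo '/dev/vg0/logs /srv/logs ext4 defaults 0 2' >> /etc/fstab
  mkdir -p /srv/logs
  mount /srv/logs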

Does this stuff work? Sure, and it works pretty well. However, from my perspective, it's all a bit cumbersome. But isn't there a better way to administer storage?

Of course there is - and that way is ZFS. Next time, I'll tell you why.

Comments

Sam @ Sun Feb 19 10:07:24 -0500 2012

Nice article but am waiting for Part 2. Please do write.

Syed @ Wed Feb 22 17:14:33 -0500 2012

Thanks for the write up....and looking forward to Part 2
