LVM, or Logical Volume Management, is a storage device management technology that lets users aggregate and abstract the physical layout of component storage devices for easier and more flexible administration. Built on the device mapper framework in the Linux kernel, the current iteration, LVM2, can be used to gather existing storage devices into groups and allocate logical units from the combined space as needed.
In this guide, we will introduce how to use LVM to manage storage devices. We will show you how to display information about volumes and potential targets, how to create and destroy various types of volumes, and how to modify existing volumes through resizing or conversion. We will use Ubuntu 18.04 server to demonstrate these operations.
To follow along, you will need access to an Ubuntu 18.04 server with a non-root user configured with sudo permissions for administrative tasks. If you don't have a server, you can purchase one, though I personally recommend experimenting first in the free Tencent Cloud Developer Lab before buying. You should also be familiar with LVM components and concepts and have tested a basic LVM configuration.
When you are ready, log in to your server as your sudo user.
It is important to be able to easily obtain information about the various LVM components in the system. Fortunately, the LVM tool suite provides a large number of tools for displaying information about each layer in the LVM stack.
To display all available block storage devices that LVM may manage, use the lvmdiskscan command:
sudo lvmdiskscan
/dev/sda  [     200.00 GiB]
/dev/sdb  [     100.00 GiB]
2 disks
2 partitions
0 LVM physical volume whole disks
0 LVM physical volumes
We can see the devices that may be used as LVM physical volumes.
This is often the first step in adding a new storage device to LVM: writing an LVM header to a storage device marks it as available for use as an LVM component. Devices with these headers are called physical volumes.
You can pass the -l option to lvmdiskscan to display only the physical volumes among the devices on the system:
sudo lvmdiskscan -l
WARNING: only considering LVM devices
/dev/sda [200.00 GiB] LVM physical volume
/dev/sdb [100.00 GiB] LVM physical volume
2 LVM physical volume whole disks
0 LVM physical volumes
The pvscan command is similar: it searches all available devices for LVM physical volumes. The output format differs slightly and includes a small amount of additional information:
sudo pvscan
PV /dev/sda   VG LVMVolGroup   lvm2 [200.00 GiB / 0    free]
PV /dev/sdb   VG LVMVolGroup   lvm2 [100.00 GiB / 10.00 GiB free]
Total: 2 [299.99 GiB] / in use: 2 [299.99 GiB] / in no VG: 0 [0   ]
If you need more detail, the pvs and pvdisplay commands are better choices.
The pvs command is highly configurable and can display information in many different formats. Because its output can be strictly controlled, it is frequently used for scripting or automation. Its basic output is a useful at-a-glance summary, similar to the earlier commands:
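Because the columns can be chosen with -o and the decoration stripped with --noheadings, pvs output is easy to consume from scripts. A minimal sketch: the sample output below is embedded as a string (hypothetical values in the shape of this guide's system) so the parsing runs without an actual LVM setup:

```shell
# Sample output in the shape of:
#   sudo pvs --noheadings --units g -o pv_name,pv_size,pv_free
# (hypothetical values, embedded so this runs without LVM)
pvs_output='  /dev/sda  200.00g   0.00g
  /dev/sdb  100.00g  10.00g'

# Sum the free-space column, stripping the trailing "g" unit suffix.
printf '%s\n' "$pvs_output" |
  awk '{gsub(/g$/, "", $3); free += $3} END {printf "total free: %.2fg\n", free}'
```

For the sample data above, this prints total free: 10.00g.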
sudo pvs
PV         VG          Fmt  Attr PSize   PFree
/dev/sda   LVMVolGroup lvm2 a--  200.00g      0
/dev/sdb   LVMVolGroup lvm2 a--  100.00g 10.00g
For more detailed, human-readable output, the pvdisplay command is usually a better choice:
sudo pvdisplay
--- Physical volume ---
PV Name /dev/sda
VG Name LVMVolGroup
PV Size 200.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 51199
Free PE 0
Allocated PE 51199
PV UUID kRUOyU-0ib4-ujPh-kAJP-eeQv-ztRL-4EkaDQ
--- Physical volume ---
PV Name /dev/sdb
VG Name LVMVolGroup
PV Size 100.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 25599
Free PE 2560
Allocated PE 23039
PV UUID udcuRJ-jCDC-26nD-ro9u-QQNd-D6VL-GEIlD7
As you can see, the pvdisplay command is often the simplest way to get detailed information about a physical volume.
To discover the logical extents that have been mapped to each volume, pass the -m option to pvdisplay:
sudo pvdisplay -m
--- Physical volume ---
PV Name /dev/sda
VG Name LVMVolGroup
PV Size 200.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 51199
Free PE 38395
Allocated PE 12804
PV UUID kRUOyU-0ib4-ujPh-kAJP-eeQv-ztRL-4EkaDQ
--- Physical Segments ---
Physical extent 0 to 0:
Logical volume /dev/LVMVolGroup/db_rmeta_0
Logical extents 0 to 0
Physical extent 1 to 5120:
Logical volume /dev/LVMVolGroup/db_rimage_0
Logical extents 0 to 5119
...
This is useful when trying to determine which data is stored on which physical disk for management.
LVM also has a large number of tools to display information about volume groups.
The vgscan command scans the system for available volume groups. It also rebuilds the cache file when necessary, which makes it a good command to run when importing volume groups into a new system:
sudo vgscan
Reading all physical volumes. This may take a while...
Found volume group "LVMVolGroup" using metadata type lvm2
The command does not output much information, but it should find every available volume group on the system. To display more information, use the vgs and vgdisplay commands.
Like its physical volume counterpart, the vgs command is versatile and can display a large amount of information in a variety of formats. Because its output is easy to manipulate, it is frequently used for scripting or automation. For example, two useful output modifications are to display the physical devices and the logical volume paths:
sudo vgs -o +devices,lv_path
VG          #PV #LV #SN Attr   VSize   VFree  Devices        Path
LVMVolGroup   2   4   0 wz--n- 299.99g 10.00g /dev/sda(0)    /dev/LVMVolGroup/projects
LVMVolGroup   2   4   0 wz--n- 299.99g 10.00g /dev/sda(2560) /dev/LVMVolGroup/www
LVMVolGroup   2   4   0 wz--n- 299.99g 10.00g /dev/sda(3840) /dev/LVMVolGroup/db
LVMVolGroup   2   4   0 wz--n- 299.99g 10.00g /dev/sda(8960) /dev/LVMVolGroup/workspace
LVMVolGroup   2   4   0 wz--n- 299.99g 10.00g /dev/sdb(0)    /dev/LVMVolGroup/workspace
For more detailed, human-readable output, the vgdisplay command is usually the best choice. Adding the -v flag also provides information about the physical volumes the volume group is built from and the logical volumes created from it:
sudo vgdisplay -v
Using volume group(s) on command line.
--- Volume group ---
VG Name LVMVolGroup
...
--- Logical volume ---
LV Path /dev/LVMVolGroup/projects
...
--- Logical volume ---
LV Path /dev/LVMVolGroup/www
...
--- Logical volume ---
LV Path /dev/LVMVolGroup/db
...
--- Logical volume ---
LV Path /dev/LVMVolGroup/workspace
...
--- Physical volumes ---
PV Name /dev/sda
...
PV Name /dev/sdb
...
The vgdisplay command is useful because it ties together information about many different elements of the LVM stack.
To display information about logical volumes, LVM has a set of related tools.
As with the other LVM components, the lvscan command scans the system and outputs minimal information about the logical volumes it finds:
sudo lvscan
ACTIVE   '/dev/LVMVolGroup/projects' [10.00 GiB] inherit
ACTIVE   '/dev/LVMVolGroup/www' [5.00 GiB] inherit
ACTIVE   '/dev/LVMVolGroup/db' [20.00 GiB] inherit
ACTIVE   '/dev/LVMVolGroup/workspace' [254.99 GiB] inherit
For more complete information, the lvs command is flexible, powerful, and easy to use in scripts:
sudo lvs
LV        VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
db        LVMVolGroup -wi-ao----  20.00g
projects  LVMVolGroup -wi-ao----  10.00g
workspace LVMVolGroup -wi-ao---- 254.99g
www       LVMVolGroup -wi-ao----   5.00g
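The Attr column above packs a lot of state into ten characters. Per the lvs man page, character 1 is the volume type, 2 the permissions, 5 the state, and 6 whether the device is open. A small, runnable sketch that picks those positions out of a sample string:

```shell
# Decode a few positions of the Attr string shown by lvs (e.g. "-wi-ao----").
# Per the lvs man page: char 1 = volume type, 2 = permissions,
# 5 = state, 6 = device open.
attr='-wi-ao----'

vol_type=$(printf '%s' "$attr" | cut -c1)   # '-' plain/linear, 'r' raid, 's' snapshot, 'm' mirrored
perm=$(printf '%s' "$attr" | cut -c2)       # 'w' writeable, 'r' read-only
state=$(printf '%s' "$attr" | cut -c5)      # 'a' active
open=$(printf '%s' "$attr" | cut -c6)       # 'o' open (e.g. mounted)

echo "type=$vol_type perm=$perm state=$state open=$open"
```

For the sample string this prints type=- perm=w state=a open=o, i.e. a plain, writeable, active, open volume.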
To see the number of stripes and the type of each logical volume, use the --segments option:
sudo lvs --segments
LV           VG          Attr       #Str Type    SSize
db           LVMVolGroup rwi-a-r---    2 raid1   20.00g
mirrored_vol LVMVolGroup rwi-a-r---    3 raid1   10.00g
test         LVMVolGroup rwi-a-r---    3 raid5   10.00g
test2        LVMVolGroup -wi-a-----    2 striped 10.00g
test3        LVMVolGroup rwi-a-r---    2 raid1   10.00g
The most readable output is produced by the lvdisplay command. When the -m flag is added, the tool also displays information about how the logical volume is broken down and distributed:
sudo lvdisplay -m
--- Logical volume ---
LV Path /dev/LVMVolGroup/projects
LV Name projects
VG Name LVMVolGroup
LV UUID IN4GZm-ePJU-zAAn-DRO3-1f2w-qSN8-ahisNK
LV Write Access read/write
LV Creation host, time lvmtest, 2016-09-09 21:00:03 +0000
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
--- Segments ---
Logical extents 0 to 2559:
Type linear
Physical volume /dev/sda
Physical extents 0 to 2559
...
From the bottom of the output, you can see that in this example the /dev/LVMVolGroup/projects logical volume is contained entirely within the /dev/sda physical volume. This information is useful if you need to remove that underlying device and want to move the data to specific locations first.
This section will discuss how to create and expand physical volumes, volume groups and logical volumes.
To use storage devices with LVM, they must first be marked as physical volumes. This signals that LVM may use the device within a volume group.
First, use the lvmdiskscan command to find all of the block devices that LVM can see and use:
sudo lvmdiskscan
/dev/sda  [     200.00 GiB]
/dev/sdb  [     100.00 GiB]
2 disks
2 partitions
0 LVM physical volume whole disks
0 LVM physical volumes
Here, we can see the devices that are suitable for conversion into LVM physical volumes.
Warning: Make sure to double-check that the devices you intend to use with LVM do not hold any important data. Using a device with LVM will overwrite its current contents. If you already have important data on your server, back it up before continuing.
To mark a storage device as an LVM physical volume, use pvcreate. You can pass in multiple devices at once:
sudo pvcreate /dev/sda /dev/sdb
This should write LVM headers on all target devices to mark them as LVM physical volumes.
To create a new volume group from LVM physical volumes, use the vgcreate command. You must provide a volume group name, followed by at least one LVM physical volume:
sudo vgcreate volume_group_name /dev/sda
This example will create a volume group using a single initial physical volume. If you want, you can pass in multiple physical volumes when creating:
sudo vgcreate volume_group_name /dev/sda /dev/sdb /dev/sdc
Generally, only one volume group is required per server. All storage managed by LVM can be added to the pool, and then logical volumes can be allocated from it.
One reason you might want multiple volume groups is if you need different extent sizes for different volumes. Normally you do not need to set the extent size (the default of 4M is adequate for most uses), but if necessary, you can set it when creating a volume group by passing the -s option:
sudo vgcreate -s 8M volume_group_name /dev/sda
This will create a new volume group with an 8M extent size.
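As a rough sanity check on extent sizes, the extent count is approximately the PV size divided by the extent size (LVM reserves a little space for metadata, which is why pvdisplay reported a Total PE of 51199 rather than 51200 earlier). The arithmetic, as a runnable sketch:

```shell
# Extent-count arithmetic only -- not an LVM command.
pv_mib=$((200 * 1024))                  # a 200 GiB physical volume, in MiB

echo "4 MiB extents: $((pv_mib / 4))"   # ~51200 (51199 after metadata overhead)
echo "8 MiB extents: $((pv_mib / 8))"   # ~25600
```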
To extend a volume group by adding additional physical volumes, use the vgextend command. It takes a volume group followed by the physical volumes to add. You can pass in multiple devices at once if you wish:
sudo vgextend volume_group_name /dev/sdb
The physical volume will be added to the volume group to expand the usable capacity of the storage pool.
To create a logical volume from a volume group's storage pool, use the lvcreate command. Specify the size of the logical volume with the -L option, specify a name with the -n option, and pass in the volume group to allocate the space from.
For example, to create a 10G logical volume named test from the LVMVolGroup volume group, type:
sudo lvcreate -L 10G -n test LVMVolGroup
If the volume group has enough free space to accommodate the volume capacity, a new logical volume will be created.
If you want to create a volume from the remaining free space in the volume group, use the lvcreate command with the -n naming option and pass in the volume group as before, but substitute the -l 100%FREE option for the size. This option uses the remaining extents in the volume group to form the logical volume:
sudo lvcreate -l 100%FREE -n test2 LVMVolGroup
This should exhaust the remaining space in the volume group.
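Note that -L takes a size while -l takes an extent count or a percentage such as 100%FREE or 50%VG. With the default 4 MiB extent size, the 10G volume created above corresponds to 2560 extents, matching the "Current LE 2560" figure in the lvdisplay output earlier. A small sketch of the conversion:

```shell
# Convert a size in GiB to a count of 4 MiB extents (arithmetic only).
size_gib=10
pe_mib=4
echo "extents: $((size_gib * 1024 / pe_mib))"   # 2560

# So, assuming the default extent size, these would be equivalent
# (illustrative only; requires a real LVM setup):
#   sudo lvcreate -L 10G  -n test LVMVolGroup
#   sudo lvcreate -l 2560 -n test LVMVolGroup
```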
You can also use some advanced options to create logical volumes. Some options you might want to consider are:
linear: The default type. The underlying physical devices (if more than one) are simply appended to each other, one after another.
stripe: Similar to RAID 0, the striped topology divides data into chunks and spreads them across the underlying physical volumes in a round-robin fashion. This can improve performance but carries greater vulnerability to data loss, since losing a single device can destroy the whole volume. It requires the -i option described below and at least two physical volumes.
raid1: Creates a mirrored RAID 1 volume. By default the mirror holds two copies of the data, but more can be specified with the -m option described below. Requires at least two physical volumes.
raid5: Create a RAID 5 volume. At least three physical volumes are required.
raid6: Create a RAID 6 volume. At least four physical volumes are required.
-m: Specifies the number of additional copies of the data to keep. A value of "1" maintains one extra copy, for two sets of data in total.
-i: Specifies the number of stripes to maintain. This is required for the striped type and can modify the default behavior of some of the other RAID options.
-s: Specifies that the operation should create a snapshot of an existing logical volume instead of a new, independent logical volume.
We will provide some examples of these options to demonstrate how they are commonly used.
To create a striped volume, you must specify at least two stripes. This topology requires at least two physical volumes with available capacity:
sudo lvcreate --type striped -i 2 -L 10G -n striped_vol LVMVolGroup
To create a mirrored volume, use the raid1 type. If you want more than two sets of data, use the -m option. This example uses -m 2 to create three sets of data in total (LVM counts this as one original data set plus two mirrors). You will need at least three physical volumes for this to succeed:
sudo lvcreate --type raid1 -m 2 -L 20G -n mirrored_vol LVMVolGroup
To create a snapshot of a volume, you must provide the original logical volume to snapshot rather than the volume group. Snapshots take up little space initially, but grow as changes are made to the logical volume they track. The size given here is the maximum size of the snapshot (snapshots that grow beyond this size become broken and unusable; however, snapshots approaching their capacity can be expanded):
sudo lvcreate -s -L 10G -n snap_test LVMVolGroup/test
Note: To revert the logical volume to the point in time of the snapshot, use the lvconvert --merge command:
sudo lvconvert --merge LVMVolGroup/snap_test
This restores the origin volume to the state it was in when the snapshot was taken.
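Apart from merging, a common use for a snapshot is to mount it read-only and copy files out of it before discarding it. A sketch, assuming the snap_test volume from above and a hypothetical /mnt/snap_test mount point; this requires a real LVM setup, so run it on your own server:

```shell
# Mount the snapshot read-only at a hypothetical mount point to inspect it.
sudo mkdir -p /mnt/snap_test
sudo mount -o ro /dev/LVMVolGroup/snap_test /mnt/snap_test

# ...copy out whatever you need, then clean up:
sudo umount /mnt/snap_test
sudo lvremove LVMVolGroup/snap_test
```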
As you can see, there are many options that can significantly change how logical volumes operate.
One of the main advantages of LVM is the flexibility it provides when configuring logical volumes. You can easily adjust the number or size of volumes without stopping the system.
To increase the size of an existing logical volume, use the lvresize command. Use the -L flag to specify the new size. You can also use relative sizes by prefixing the size with "+"; in that case, LVM increases the logical volume by the specified amount. To automatically resize the filesystem in use on the logical volume as well, pass in the --resizefs flag.
To correctly provide the name of the logical volume to be extended, you need to provide the volume group, followed by a slash, and then the logical volume:
sudo lvresize -L +5G --resizefs LVMVolGroup/test
In this example, both the test logical volume in the LVMVolGroup volume group and its filesystem will grow by 5G.
If you prefer to handle the filesystem expansion manually, remove the --resizefs option and use the filesystem's native expansion utility afterwards. For an Ext4 filesystem, for example, you could type:
sudo lvresize -L +5G LVMVolGroup/test
sudo resize2fs /dev/LVMVolGroup/test
This will leave you with the same result.
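As an aside, lvextend performs the same growth operation but, unlike lvresize, can only ever enlarge a volume, which some administrators prefer as a safety habit; its -r flag is shorthand for --resizefs. An equivalent form of the step above (requires a real LVM setup):

```shell
# lvextend only grows volumes, so a typo cannot accidentally shrink one.
# -r resizes the filesystem along with the logical volume.
sudo lvextend -r -L +5G LVMVolGroup/test
```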
The procedures for reducing usable capacity by shrinking or removing components are more involved, because reductions carry a risk of data loss.
Before shrinking a logical volume, you should back up your data: since shrinking reduces available capacity, mistakes can destroy data.
When you are ready, check the currently used space:
df -h
Filesystem                    Size  Used Avail Use% Mounted on
...
/dev/mapper/LVMVolGroup-test  4.8G  521M  4.1G  12% /mnt/test
In this example, a little over 521M is currently in use. Use this figure to help estimate how far you can safely shrink the volume.
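One conservative way to pick a target is the currently used space plus generous headroom. The 50% buffer below is our own rule of thumb, not an LVM requirement; it uses the 521M figure from the df output above:

```shell
# Compute a conservative lower bound for the new size (arithmetic only).
used_mib=521                      # from the df output above
buffer_mib=$((used_mib / 2))      # keep roughly 50% headroom
echo "shrink no smaller than: $((used_mib + buffer_mib)) MiB"
```

Rounding up further, as with the 3G target used below, leaves even more margin.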
Next, unmount the filesystem. Unlike expansion, filesystem shrinking must be performed while the filesystem is unmounted:
cd ~
sudo umount /dev/LVMVolGroup/test
After unmounting, check the filesystem to make sure everything is in order. Pass in the filesystem type with the -t option. We will use -f to force the check even if the filesystem appears healthy:
sudo fsck -t ext4 -f /dev/LVMVolGroup/test
After checking the filesystem, you can reduce its size using the filesystem's native tools. For an Ext4 filesystem, that is the resize2fs command. Pass in the final size of the filesystem:
Warning: The safest option here is to choose a final size that is larger than your current usage. Give yourself some buffer space to avoid data loss and make sure the backup is in place.
sudo resize2fs -p /dev/LVMVolGroup/test 3G
Once the operation is complete, resize the logical volume to match by passing the same size to the lvresize command with the -L flag:
sudo lvresize -L 3G LVMVolGroup/test
You will receive a warning about the possibility of data loss. If you are ready, type y to continue.
After reducing the logical volume, check the file system again:
sudo fsck -t ext4 -f /dev/LVMVolGroup/test
If everything is normal, you can remount the file system with the usual mount command:
sudo mount /dev/LVMVolGroup/test /mnt/test
Your logical volume should now be reduced to the appropriate size.
If a logical volume is no longer needed, you can delete it with the lvremove command.
First, unmount the currently mounted logical volume:
cd ~
sudo umount /dev/LVMVolGroup/test
Then, type the following command to delete the logical volume:
sudo lvremove LVMVolGroup/test
You will be asked to confirm the procedure. If you are sure you want to delete the logical volume, type y.
To delete an entire volume group, including all of the logical volumes within it, use the vgremove command.
Before deleting a volume group, you should generally remove its logical volumes with the procedure above. At the very least, make sure that any logical volumes the group contains are unmounted:
sudo umount /dev/LVMVolGroup/www
sudo umount /dev/LVMVolGroup/projects
sudo umount /dev/LVMVolGroup/db
After that, you can delete the entire volume group by passing its name to the vgremove command:
sudo vgremove LVMVolGroup
You will be prompted to confirm whether you want to delete the volume group. If you still have any logical volumes, you will be provided with a separate confirmation prompt before deleting.
If you want to delete a physical volume from LVM management, the required process depends on whether LVM is currently using the device.
If the physical volume is in use, you must first move the physical extents on the device elsewhere. This requires the volume group to have enough room on its other physical volumes to absorb the extents. If you are using more complex logical volume types, you may need additional physical volumes to accommodate the topology, even when enough free space is available.
If there are enough physical volumes in the volume group to handle physical extents, move them out of the physical volume to be deleted by typing:
sudo pvmove /dev/sda
This process may take a while, depending on the size of the volume and the amount of data to be transferred.
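If you would rather control where the extents land than let LVM pick, pvmove also accepts an explicit destination device. A sketch, assuming /dev/sdb has enough room (requires a real LVM setup):

```shell
# Move extents off /dev/sda specifically onto /dev/sdb.
sudo pvmove /dev/sda /dev/sdb
```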
After the extents have been relocated to other volumes, you can remove the physical volume from the volume group by typing:
sudo vgreduce LVMVolGroup /dev/sda
This removes the vacated physical volume from the volume group. Once that is done, you can remove the physical volume marker from the storage device by typing:
sudo pvremove /dev/sda
You should now be able to use the deleted storage device for other purposes, or remove it from the system completely.
So far, you should have an understanding of how to use LVM to manage storage devices on Ubuntu 18.04. You should know how to obtain information about the status of existing LVM components, how to use LVM to compose a storage system, and how to modify volumes to meet your needs. You can test these concepts in a safe environment to better understand how they fit together.
For more Ubuntu tutorials, please visit [Tencent Cloud + Community](https://cloud.tencent.com/developer?from=10680) to learn more.
Reference: "How To Use LVM To Manage Storage Devices on Ubuntu 18.04"