Stop the RAID array
[root@rhce ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
Clear the superblock information from the RAID member disks
[root@rhce ~]# mdadm --zero-superblock /dev/sd{a..d}
If the superblock information is not cleared, mdadm warns that each disk already appears to belong to a RAID array when the array is recreated:
[root@rhce ~]# mdadm -C /dev/md0 -l 10 -n 4 /dev/sd{a..d}
mdadm: /dev/sda appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Aug 25 15:54:10 2023
mdadm: /dev/sdb appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Aug 25 15:54:10 2023
mdadm: /dev/sdc appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Aug 25 15:54:10 2023
mdadm: /dev/sdd appears to be part of a raid array:
level=raid10 devices=4 ctime=Fri Aug 25 15:54:10 2023
(3) Simulating a disk failure in a RAID array
Taking RAID5 as an example: when a member disk fails, the hot spare automatically takes its place and the data is rebuilt onto it.
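The transcript below assumes a RAID5 array /dev/md5 with three active members and one hot spare already exists. Its creation is not shown; based on the mdadm -D output further down, it would have been created roughly like this (a sketch, not part of the original transcript):
[root@rhce ~]# mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/sd{a..d}   # 3 active members plus 1 hot spare
[root@rhce ~]# mkfs.ext4 /dev/md5                               # the lost+found directory seen later implies an ext filesystem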
[root@rhce ~]# mkdir /md5
[root@rhce ~]# mount /dev/md5 /md5/
[root@rhce ~]# df -H /dev/md5
Filesystem Size Used Avail Use% Mounted on
/dev/md5 42G 25k 40G 1% /md5
[root@rhce ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Aug 25 15:58:37 2023
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Aug 25 16:00:23 2023
State : active
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : rhce:5 (local to host rhce)
UUID : f0d45dfb:cb5696d9:1d0077e2:d785cba2
Events : 23
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
4 8 32 2 active sync /dev/sdc
3 8 48 - spare /dev/sdd
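To have the array assembled automatically after a reboot, its definition is usually recorded in /etc/mdadm.conf. That step is not part of the transcript; a minimal sketch:
[root@rhce ~]# mdadm -Ds >> /etc/mdadm.conf   # -Ds is short for --detail --scan and prints the ARRAY line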
Write data to the RAID5 array
[root@rhce ~]# mount /dev/sr0 /cdrow/
mount: /cdrow: WARNING: source write-protected, mounted read-only.
[root@rhce ~]# cp -rf /cdrow/ /md5/ // copy everything on the installation media into the RAID5 array
[root@rhce ~]# ls /md5/
cdrow lost+found
[root@rhce ~]# du -h /md5/
16K /md5/lost+found
108M /md5/cdrow/images/pxeboot
857M /md5/cdrow/images
1.2G /md5/cdrow/BaseOS/Packages
2.4M /md5/cdrow/BaseOS/repodata
1.2G /md5/cdrow/BaseOS
7.0G /md5/cdrow/AppStream/Packages
7.7M /md5/cdrow/AppStream/repodata
7.0G /md5/cdrow/AppStream
109M /md5/cdrow/isolinux
2.3M /md5/cdrow/EFI/BOOT/fonts
6.5M /md5/cdrow/EFI/BOOT
6.5M /md5/cdrow/EFI
9.1G /md5/cdrow
9.1G /md5/
Simulate a failure of /dev/sdb by marking it faulty:
[root@rhce ~]# mdadm /dev/md5 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md5
[root@rhce ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri Aug 25 15:58:37 2023
Raid Level : raid5
Array Size : 41908224 (39.97 GiB 42.91 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Aug 25 16:04:57 2023
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Rebuild Status : 10% complete
Name : rhce:5 (local to host rhce)
UUID : f0d45dfb:cb5696d9:1d0077e2:d785cba2
Events : 27
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
3 8 48 1 spare rebuilding /dev/sdd
4 8 32 2 active sync /dev/sdc
1 8 16 - faulty /dev/sdb
[root@rhce ~]# du -h /md5/
16K /md5/lost+found
108M /md5/cdrow/images/pxeboot
857M /md5/cdrow/images
1.2G /md5/cdrow/BaseOS/Packages
2.4M /md5/cdrow/BaseOS/repodata
1.2G /md5/cdrow/BaseOS
7.0G /md5/cdrow/AppStream/Packages
7.7M /md5/cdrow/AppStream/repodata
7.0G /md5/cdrow/AppStream
109M /md5/cdrow/isolinux
2.3M /md5/cdrow/EFI/BOOT/fonts
6.5M /md5/cdrow/EFI/BOOT
6.5M /md5/cdrow/EFI
9.1G /md5/cdrow
9.1G /md5/
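The data is still fully readable while the spare rebuilds. Once the rebuild finishes, the failed disk would normally be removed from the array. The transcript then switches to LVM on the same disks, so before the pvdisplay output below the array must have been torn down and the disks initialized as physical volumes. A sketch of those intermediate steps (not shown in the original):
[root@rhce ~]# mdadm /dev/md5 -r /dev/sdb             # remove the failed member once rebuilding is done
[root@rhce ~]# umount /md5
[root@rhce ~]# mdadm -S /dev/md5                      # stop the array
[root@rhce ~]# mdadm --zero-superblock /dev/sd{a..d}  # wipe the md metadata
[root@rhce ~]# pvcreate /dev/sda /dev/sdb             # mark the disks as LVM physical volumes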
[root@rhce ~]# pvdisplay
--- Physical volume ---
PV Name /dev/nvme0n1p2
VG Name rhel
PV Size <19.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4863
Free PE 0
Allocated PE 4863
PV UUID HPRENt-Y8n0-cr0N-1u8h-3oGG-RXgY-9oBMZf
"/dev/sda" is a new physical volume of "20.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sda
VG Name
PV Size 20.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID zJ0SHT-rdQO-NzL4-ySD3-G95F-LKKu-B0lEAg
"/dev/sdb" is a new physical volume of "20.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 20.00 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID BrBVT9-semZ-0i6u-wzgH-jdVd-chGj-k4XBZI
[root@rhce ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/nvme0n1p2 rhel lvm2 a-- <19.00g 0
/dev/sda lvm2 --- 20.00g 20.00g
/dev/sdb lvm2 --- 20.00g 20.00g
Create the volume group
Use the vgcreate command to combine one or more physical volumes into a volume group.
[root@rhce ~]# vgcreate vg0 /dev/sda /dev/sdb
Volume group "vg0" successfully created
This creates a volume group named vg0 whose size is the sum of the two PVs /dev/sda and /dev/sdb (measured in PEs).
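The steps between creating the volume group and the resize2fs output below are not part of the transcript. Judging from the sizes that appear later (500 extents = 1.95 GiB, an ext4 filesystem mounted on /date1), they would have looked roughly like this sketch (the initial LV size is an assumption):
[root@rhce ~]# lvcreate -L 1G -n lv02 vg0        # initial size assumed
[root@rhce ~]# mkfs.ext4 /dev/vg0/lv02
[root@rhce ~]# mkdir /date1
[root@rhce ~]# mount /dev/vg0/lv02 /date1
[root@rhce ~]# lvextend -L 2000M /dev/vg0/lv02   # grow the LV to 500 extents (1.95 GiB)
resize2fs (next) then grows the mounted ext4 filesystem online to fill the enlarged volume (512000 4k blocks).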
[root@rhce ~]# resize2fs /dev/vg0/lv02
resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/vg0/lv02 to 512000 (4k) blocks.
The filesystem on /dev/vg0/lv02 is now 512000 (4k) blocks long.
Unmount the filesystem
[root@rhce ~]# umount /date1
Check the filesystem for errors
[root@rhce ~]# e2fsck -f /dev/vg0/lv02
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/vg0/lv02: 11/128000 files (0.0% non-contiguous), 12890/512000 blocks
Set the new (smaller) filesystem size
[root@rhce ~]# resize2fs /dev/vg0/lv02 500M
resize2fs 1.46.5 (30-Dec-2021)
Resizing the filesystem on /dev/vg0/lv02 to 128000 (4k) blocks.
The filesystem on /dev/vg0/lv02 is now 128000 (4k) blocks long.
Shrink the logical volume
Use the lvresize command to set the new size.
[root@rhce ~]# lvresize -L 500M /dev/vg0/lv02
File system ext4 found on vg0/lv02.
File system size (500.00 MiB) is equal to the requested size (500.00 MiB).
File system reduce is not needed, skipping.
Size of logical volume vg0/lv02 changed from 1.95 GiB (500 extents) to 500.00 MiB (125 extents).
Logical volume vg0/lv02 successfully resized.
Verify that the shrink succeeded
[root@rhce ~]# mount /dev/vg0/lv02 /date1
[root@rhce ~]# df -h /dev/vg0/lv02
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg0-lv02 475M 24K 440M 1% /date1
Shrink the volume group
Use the vgreduce command to remove a physical volume from the volume group (here /dev/sdc is assumed to have been added to vg0 with vgextend in a step not shown).
[root@rhce ~]# vgreduce vg0 /dev/sdc
Removed "/dev/sdc" from volume group "vg0"
Delete the logical volumes
Use the lvremove command to delete a logical volume.
[root@rhce ~]# lvremove /dev/vg0/lv01
Do you really want to remove active logical volume vg0/lv01? [y/n]: y
Logical volume "lv01" successfully removed.
[root@rhce ~]# lvremove /dev/vg0/lv02
Do you really want to remove active logical volume vg0/lv02? [y/n]: y
Logical volume "lv02" successfully removed.
Delete the volume group
Use the vgremove command to delete the volume group.
[root@rhce ~]# vgremove vg0
Volume group "vg0" successfully removed
Delete the physical volumes
Use the pvremove command to remove the PVs.
[root@rhce ~]# pvremove /dev/sd{a..c}
Labels on physical volume "/dev/sda" successfully wiped.
Labels on physical volume "/dev/sdb" successfully wiped.
Labels on physical volume "/dev/sdc" successfully wiped.
RAID and Logical Volumes
1. RAID Management
(1) RAID Introduction
RAID (Redundant Array of Independent Disks). In the early days most systems stored and read data on a single disk, which meant poor I/O performance and limited capacity, and a single disk is also a single point of failure. To address this, multiple disks are combined and used together to increase I/O performance and capacity while adding a degree of redundancy; this is what RAID technology provides. Several commonly used RAID levels are introduced below.
RAID0: data is striped across all member disks; best performance and full usable capacity, but no redundancy, so losing any disk loses the data.
RAID1: mirroring; every member disk holds a complete copy of the data, so the array survives a disk failure, but usable capacity equals a single disk.
RAID3: striping with a dedicated parity disk.
RAID5: striping with parity distributed across all disks; needs at least 3 disks and tolerates the failure of one.
RAID6: striping with two independent parity blocks; needs at least 4 disks and tolerates the failure of two.
RAID10: a stripe of mirrored pairs (RAID1+0); needs at least 4 disks, usable capacity is half, and it tolerates one failure per mirror pair.
Hardware RAID and software RAID
Hardware RAID: RAID implemented by a dedicated RAID card or controller.
Software RAID: no dedicated RAID card; RAID is implemented by the operating system and the CPU.
(2) RAID Configuration
RAID arrays are managed with the mdadm command.
RAID0 configuration
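The RAID0 example itself is not included in this excerpt; a minimal sketch of what it would look like with mdadm (device names are examples only):
[root@rhce ~]# mdadm -C /dev/md0 -l 0 -n 2 /dev/sd{a,b}   # stripe two disks, no redundancy
[root@rhce ~]# mkfs.ext4 /dev/md0
[root@rhce ~]# mount /dev/md0 /mnt
[root@rhce ~]# mdadm -D /dev/md0                          # verify the level and member state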
(2) Logical Volume Configuration
The following examples use whole disks; you can also partition the disks first and create logical volumes on the partitions.
Creating logical volumes
The basic steps for creating a logical volume are as follows (a sketch of the remaining steps appears after this outline):
Create physical volumes
Use the pvcreate command to mark physical devices as physical volumes; use pvdisplay or pvs to view physical volume information.
Create a volume group
Use the vgcreate command to combine one or more physical volumes into a volume group; use vgdisplay or vgs to view volume group information.
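The outline stops here; the remaining steps (creating the logical volume, viewing it, making a filesystem and mounting it) follow the same pattern. A minimal sketch with example names:
[root@rhce ~]# lvcreate -L 1G -n lv01 vg0   # create the logical volume; view it with lvdisplay or lvs
[root@rhce ~]# mkfs.ext4 /dev/vg0/lv01
[root@rhce ~]# mount /dev/vg0/lv01 /mnt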