Creating RAID devices on Linux with the mdadm command


As before, prepare three disks of 20 GB each.
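A quick sanity check before building anything (not part of the original run) could be:

# Confirm the three candidate disks exist and carry no partitions or old signatures
lsblk /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4
wipefs /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4    # without -a this only lists signatures, it removes nothing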

Create RAID 1

[root@oracletest ~]# mdadm --create /dev/md/raid1 --level=1 --raid-devices=2 /dev/nvme0n2 /dev/nvme0n3
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/raid1 started.

View the RAID details

[root@oracletest ~]# mdadm --detail /dev/md/raid1 
/dev/md/raid1:
           Version : 1.2
     Creation Time : Wed Jul 30 12:05:54 2025
        Raid Level : raid1
        Array Size : 20954112 (19.98 GiB 21.46 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jul 30 12:06:27 2025
             State : clean, resyncing
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 35% complete

              Name : oracletest:raid1  (local to host oracletest)
              UUID : 2e06ea2e:56f77441:70f095b5:a0fef6e0
            Events : 5

    Number   Major   Minor   RaidDevice State
       0     259        3        0      active sync   /dev/nvme0n2
       1     259        4        1      active sync   /dev/nvme0n3
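The mirror is still resyncing at this point (35% complete above). A lighter-weight way to follow the progress is /proc/mdstat, for example (not from the original run):

# One-off snapshot of all md arrays and any resync/rebuild progress
cat /proc/mdstat
# Or refresh every 2 seconds until the resync finishes
watch -n 2 cat /proc/mdstat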

Remove RAID 1

[root@oracletest ~]# mdadm -S /dev/md/raid1
mdadm: stopped /dev/md/raid1
[root@oracletest ~]# mdadm --zero-superblock /dev/nvme0n2 /dev/nvme0n3
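-S (--stop) disassembles the array, and --zero-superblock wipes the md metadata from each member so the disks can be reused. To confirm the metadata is really gone, --examine can be run on the members; on a wiped disk it should report that no md superblock was found (example command, not from the original run):

# Each freshly wiped member should show no md superblock
mdadm --examine /dev/nvme0n2 /dev/nvme0n3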

Create RAID 0

[root@oracletest ~]# mdadm --create /dev/md/raid0 --level=0 --raid-devices=3 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/raid0 started.

View the RAID details

[root@oracletest ~]# mdadm --detail /dev/md/raid0
/dev/md/raid0:
           Version : 1.2
     Creation Time : Wed Jul 30 12:11:04 2025
        Raid Level : raid0
        Array Size : 62862336 (59.95 GiB 64.37 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Jul 30 12:11:04 2025
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 512K

Consistency Policy : none

              Name : oracletest:raid0  (local to host oracletest)
              UUID : e49d31f0:e891eb71:5c281fca:ffde3bc7
            Events : 0

    Number   Major   Minor   RaidDevice State
       0     259        3        0      active sync   /dev/nvme0n2
       1     259        4        1      active sync   /dev/nvme0n3
       2     259        5        2      active sync   /dev/nvme0n4
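RAID 0 stripes data across all three members using the default 512K chunk shown above. The chunk size can only be chosen at creation time; a sketch with an explicit value (256K is just an illustration, not from the original run):

# Create the stripe set with a 256K chunk instead of the 512K default
mdadm --create /dev/md/raid0 --level=0 --chunk=256 --raid-devices=3 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4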

Remove RAID 0

[root@oracletest ~]# mdadm -S /dev/md/raid0
[root@oracletest ~]# mdadm --zero-superblock /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 

Create RAID 5

[root@oracletest ~]# mdadm --create /dev/md/raid5 --level=5 --raid-devices=3 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/raid5 started.
[root@oracletest ~]# mdadm --detail /dev/md/raid5
/dev/md/raid5:
           Version : 1.2
     Creation Time : Wed Jul 30 12:16:06 2025
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Jul 30 12:16:13 2025
             State : clean, degraded, recovering
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 6% complete

              Name : oracletest:raid5  (local to host oracletest)
              UUID : 8b158f48:03d6f076:202d876f:5e8a9a3e
            Events : 2

    Number   Major   Minor   RaidDevice State
       0     259        3        0      active sync   /dev/nvme0n2
       1     259        4        1      active sync   /dev/nvme0n3
       3     259        5        2      spare rebuilding   /dev/nvme0n4
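Immediately after creation the RAID 5 array is reported as clean, degraded, recovering: the third member starts out as a spare and parity is rebuilt onto it in the background, so the array only has full redundancy once the rebuild reaches 100%. On a test array you can later exercise that redundancy by failing, removing and re-adding a member (example only, not from the original run):

# Fail one member, remove it, add it back, then watch the rebuild
mdadm /dev/md/raid5 --fail /dev/nvme0n4
mdadm /dev/md/raid5 --remove /dev/nvme0n4
mdadm /dev/md/raid5 --add /dev/nvme0n4
cat /proc/mdstat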

Mount the RAID volume

[root@oracletest ~]# mkfs.xfs /dev/md/raid5 
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md/raid5          isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@oracletest ~]# mount /dev/md/raid5 /mnt/
[root@oracletest ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             7.7G     0  7.7G    0% /dev
tmpfs                7.7G     0  7.7G    0% /dev/shm
tmpfs                7.7G  8.8M  7.7G    1% /run
tmpfs                7.7G     0  7.7G    0% /sys/fs/cgroup
/dev/mapper/ol-root   92G  4.7G   87G    6% /
/dev/nvme0n1p1      1014M  284M  731M   28% /boot
tmpfs                1.6G     0  1.6G    0% /run/user/0
/dev/md127            40G  319M   40G    1% /mnt
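Note that df lists the array as /dev/md127: a named array under /dev/md/ is a symlink to the kernel's mdN node, md127 here. The mapping can be confirmed with:

# /dev/md/raid5 is a symlink to the real kernel device node
ls -l /dev/md/raid5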

Partition and mount

[root@oracletest ~]# parted /dev/md/raid5 
GNU Parted 3.2
Using /dev/md/raid5
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
Warning: The existing disk label on /dev/md/raid5 will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md/raid5: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

(parted) mkpart
Partition name?  []? 1
File system type?  [ext2]? xfs
Start? 1
End? 42G
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md/raid5: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  42.0GB  42.0GB  xfs          1
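Back at the shell, the new partition shows up as a pN node on the kernel device (md127p1 here) with a matching /dev/md/raid5p1 symlink. If it does not appear right away, re-reading the partition table usually helps (example commands, not from the original run):

# List the array and its new partition
lsblk /dev/md127
# Ask the kernel to re-read the partition table if the p1 node is missing
partprobe /dev/md127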
[root@oracletest ~]# mkfs.xfs /dev/md/raid5p1 
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/md/raid5p1        isize=512    agcount=16, agsize=640896 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=10253568, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=5008, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@oracletest ~]# mount /dev/md/raid5p1 /mnt

Write it to /etc/fstab

[root@oracletest ~]# cat /etc/fstab 

/dev/mapper/ol-root     /                       xfs     defaults        0 0
UUID=eb870bbd-bb99-43eb-b05a-a2dc051a4338 /boot                   xfs     defaults        0 0
/dev/mapper/ol-swap     none                    swap    defaults        0 0
/dev/md/raid5p1 /data   xfs     defaults        0 0
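One step the run above does not show: for the array to assemble under the same name at boot, record it in mdadm.conf and rebuild the initramfs, and make sure the mount point exists. A sketch for this Oracle Linux style system (file locations may differ on other distributions):

# Record the array so it assembles as /dev/md/raid5 at boot
mdadm --detail --scan >> /etc/mdadm.conf
# Rebuild the initramfs so early boot picks up the new config
dracut -f
# Create the mount point referenced in fstab and test the entry
mkdir -p /data
mount -a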