From 2b2562f7f7acccd14cd19479ecf2ef9f10cdff62 Mon Sep 17 00:00:00 2001
From: Chase Qi
Date: Mon, 21 Nov 2016 10:33:30 +0800
Subject: manual: added Linux software RAID test

Change-Id: I6250e84a65a0d8023fd6e3b38d4c63c94bd93414
Signed-off-by: Chase Qi
---
 manual/generic/linux/software-raid0.yaml | 54 +++++++++++++++++++++++++++
 manual/generic/linux/software-raid1.yaml | 63 ++++++++++++++++++++++++++++++++
 manual/generic/linux/software-raid5.yaml | 61 +++++++++++++++++++++++++++++++
 3 files changed, 178 insertions(+)
 create mode 100644 manual/generic/linux/software-raid0.yaml
 create mode 100644 manual/generic/linux/software-raid1.yaml
 create mode 100644 manual/generic/linux/software-raid5.yaml

diff --git a/manual/generic/linux/software-raid0.yaml b/manual/generic/linux/software-raid0.yaml
new file mode 100644
index 0000000..44005f8
--- /dev/null
+++ b/manual/generic/linux/software-raid0.yaml
@@ -0,0 +1,54 @@
+metadata:
+    name: software-raid0
+    format: "Manual Test Definition 1.0"
+    description: "Use Linux utility mdadm to create and delete software RAID0.
+                  RAID0 consists of striping, without mirroring or parity."
+    maintainer:
+        - chase.qi@linaro.org
+    os:
+        - debian
+        - ubuntu
+        - centos
+        - fedora
+    scope:
+        - functional
+    devices:
+        - d02
+        - d03
+        - d05
+        - overdrive
+        - moonshot
+        - thunderX
+    environment:
+        - manual-test
+
+run:
+    steps:
+        - Install OS on the SUT (system under test) and make sure it boots.
+        - Power off the SUT and install two extra hard drives (referred to as
+          sd[b-c] here). The two drives should be the same model, or at least
+          have the same capacity.
+        - Boot to the OS and make sure the mdadm utility is installed.
+        - Create a 'Linux RAID auto' partition on each of the two hard drives
+          by running the following steps.
+        - 1) "fdisk /dev/sdx"
+        - 2) Delete all existing partitions with fdisk command "d"
+        - 3) Create a Linux raid auto partition with fdisk commands
+             "n -> p -> 1 -> enter -> enter -> t -> fd -> w"
+        - Run the following steps to test RAID0.
+        - 1) "mdadm -C /dev/md0 -l raid0 -n 2 /dev/sd[b-c]1"
+        - 2) 'cat /proc/mdstat' to see if md0 is created and running.
+        - 3) "mkfs.ext4 /dev/md0"
+        - 4) Run dd performance test on md0
+             "automated/linux/dd-wr-speed.sh -p /dev/md0 -t ext4"
+        - 5) Inspect the above test result. Compared with the test result on a
+             single disk, you should see a performance boost.
+        - Remove md0 by running the following steps.
+        - 1) "umount /dev/md0"
+        - 2) "mdadm --stop /dev/md0"
+        - 3) "mdadm --remove /dev/md0"
+        - 4) "mdadm --zero-superblock /dev/sdb1 /dev/sdc1"
+
+    expected:
+        - RAID0 array creation and deletion are successful.
+        - Read/write performance on the RAID0 array is faster than on a
+          single disk.

diff --git a/manual/generic/linux/software-raid1.yaml b/manual/generic/linux/software-raid1.yaml
new file mode 100644
index 0000000..34755a6
--- /dev/null
+++ b/manual/generic/linux/software-raid1.yaml
@@ -0,0 +1,63 @@
+metadata:
+    name: software-raid1
+    format: "Manual Test Definition 1.0"
+    description: "Use Linux utility mdadm to create, rebuild and delete
+                  software RAID1. RAID1 consists of data mirroring, without
+                  parity or striping."
+    maintainer:
+        - chase.qi@linaro.org
+    os:
+        - debian
+        - ubuntu
+        - centos
+        - fedora
+    scope:
+        - functional
+    devices:
+        - d02
+        - d03
+        - d05
+        - overdrive
+        - moonshot
+        - thunderX
+    environment:
+        - manual-test
+
+run:
+    steps:
+        - Install OS on the SUT (system under test) and make sure it boots.
+        - Power off the SUT and install three extra hard drives (referred to
+          as sd[b-d] here). The three drives should be the same model, or at
+          least have the same capacity.
+        - Boot to the OS and make sure the mdadm utility is installed.
+        - Create a 'Linux RAID auto' partition on each of the three hard
+          drives by running the following steps.
+        - 1) "fdisk /dev/sdx"
+        - 2) Delete all existing partitions with fdisk command "d"
+        - 3) Create a Linux raid auto partition with fdisk commands
+             "n -> p -> 1 -> enter -> enter -> t -> fd -> w"
+        - Run the following steps to test RAID1.
+        - 1) "mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sd[b-c]1"
+        - 2) Monitor '/proc/mdstat' to check if md0 is created and running.
+        - 3) "mkfs.ext4 /dev/md0"
+        - 4) "mount /dev/md0 /mnt"
+        - 5) "echo 'RAID1 test' > /mnt/raid1-test.txt"
+        - 6) Intentionally mark a partition as faulty with command
+             "mdadm --manage --set-faulty /dev/md0 /dev/sdc1"
+        - 7) Execute "mdadm --detail /dev/md0" and check if the RAID array is
+             shown as 'degraded' and sdc1 is shown as 'faulty spare'.
+        - 8) Execute "mdadm --manage /dev/md0 -r /dev/sdc1" to remove sdc1.
+        - 9) Verify that '/mnt/raid1-test.txt' is not damaged.
+        - 10) Execute "mdadm --manage /dev/md0 -a /dev/sdd1" to add sdd1.
+        - 11) Monitor the output of "mdadm --detail /dev/md0" and make sure
+              that md0 'rebuilding' can be finished.
+        - Remove md0 by running the following steps.
+        - 1) "umount /dev/md0"
+        - 2) "mdadm --stop /dev/md0"
+        - 3) "mdadm --remove /dev/md0"
+        - 4) "mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1"
+
+    expected:
+        - RAID1 array creation, deletion and rebuilding are successful.
+        - When one disk of the RAID1 array is damaged or removed, there is no
+          data loss.
+        - When the faulty disk is replaced with a new one, the RAID1 array
+          can be rebuilt.

diff --git a/manual/generic/linux/software-raid5.yaml b/manual/generic/linux/software-raid5.yaml
new file mode 100644
index 0000000..8e323d3
--- /dev/null
+++ b/manual/generic/linux/software-raid5.yaml
@@ -0,0 +1,61 @@
+metadata:
+    name: software-raid5
+    format: "Manual Test Definition 1.0"
+    description: "Use Linux utility mdadm to create, rebuild and delete
+                  software RAID5. RAID5 consists of block-level striping with
+                  distributed parity."
+    maintainer:
+        - chase.qi@linaro.org
+    os:
+        - debian
+        - ubuntu
+        - centos
+        - fedora
+    scope:
+        - functional
+    devices:
+        - d02
+        - d03
+        - d05
+        - overdrive
+        - moonshot
+        - thunderX
+    environment:
+        - manual-test
+
+run:
+    steps:
+        - Install OS on the SUT (system under test) and make sure it boots.
+        - Power off the SUT and install three extra hard drives (referred to
+          as sd[b-d] here). The three drives should be the same model, or at
+          least have the same capacity.
+        - Boot to the OS and make sure the mdadm utility is installed.
+        - Create a 'Linux RAID auto' partition on each of the three hard
+          drives by running the following steps.
+        - 1) "fdisk /dev/sdx"
+        - 2) Delete all existing partitions with fdisk command "d"
+        - 3) Create a Linux raid auto partition with fdisk commands
+             "n -> p -> 1 -> enter -> enter -> t -> fd -> w"
+        - Run the following steps to test RAID5.
+        - 1) Remove the existing md0. Refer to the md0 removal steps above.
+        - 2) "mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1"
+        - 3) Monitor '/proc/mdstat' to check if md0 is created and running.
+        - 4) "mkfs.ext4 /dev/md0"
+        - 5) "mount /dev/md0 /mnt"
+        - 6) "echo 'RAID5 test' > /mnt/raid5-test.txt"
+        - 7) "mdadm --manage --set-faulty /dev/md0 /dev/sdd1"
+        - 8) "mdadm --manage /dev/md0 -r /dev/sdd1"
+        - 9) Verify that '/mnt/raid5-test.txt' is not damaged.
+        - 10) "mdadm --manage /dev/md0 -a /dev/sdd1"
+        - 11) Monitor the output of "mdadm --detail /dev/md0" and make sure
+              that md0 'rebuilding' can be finished.
+        - Remove md0 by running the following steps.
+        - 1) "umount /dev/md0"
+        - 2) "mdadm --stop /dev/md0"
+        - 3) "mdadm --remove /dev/md0"
+        - 4) "mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sdd1"
+
+    expected:
+        - RAID5 array creation, deletion and rebuilding are successful.
+        - When one disk of the RAID5 array is damaged or removed, there is no
+          data loss.
+        - When the faulty disk is replaced with a new one, the RAID5 array
+          can be rebuilt.
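
Note for testers: several steps in these definitions ask you to "monitor /proc/mdstat" to decide whether an array is created, running, or still rebuilding. Below is a minimal shell sketch of that check. The embedded mdstat snapshot is illustrative only (device names, block counts and recovery figures are made up) so the sketch runs without root or real disks; on the SUT you would read /proc/mdstat directly instead.

```shell
#!/bin/sh
# Sketch: decide whether md0 is active and whether it is still rebuilding,
# from the text of /proc/mdstat. The snapshot below is a fabricated example;
# on a real SUT, replace it with: mdstat=$(cat /proc/mdstat)
mdstat='Personalities : [raid1]
md0 : active raid1 sdd1[2] sdb1[0]
      1047552 blocks super 1.2 [2/1] [U_]
      [=>...................]  recovery =  9.5% (100224/1047552) finish=0.1min speed=100224K/sec'

# "md0 : active" means the array was assembled and is running.
if printf '%s\n' "$mdstat" | grep -q '^md0 : active'; then
    echo "md0: active"
fi

# A "recovery" (or "resync") progress line means the rebuild is not finished;
# its absence means the array is in sync.
if printf '%s\n' "$mdstat" | grep -Eq 'recovery|resync'; then
    echo "md0: rebuilding"
else
    echo "md0: in sync"
fi
```

With the sample snapshot this prints "md0: active" then "md0: rebuilding"; once the recovery line disappears from /proc/mdstat, the rebuild required by step 11 is done.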