Hello, this is SunGuru. In this post, we'll look at configuring RAID 5 and RAID 6 with the mdadm command on CentOS Linux. RAID 5 is a RAID level commonly used in production.
If you build a server with a controller that supports RAID 0, 1, 5, 6, and 10, the typical layout is RAID 1 for the OS disks and RAID 5 for the data disks, plus one hot spare. Later, when the data-service I/O load grows, you bring in a disk storage array, configure RAID 5 or RAID 6 on it, attach it to the server, and then use the volume manager to create volumes and file systems on it.
Then you migrate the data from the server to the disk storage so that data-service I/O happens on the storage side. This clearly improves data-service performance; naturally, performance scales with the money you invest. ^^
Anyway, this post covers the following topics.
☞ Configuring RAID 5
☞ Adding a hot spare to RAID 5
☞ Inducing and verifying a disk failure in RAID 5
☞ Replacing a component disk in RAID 5
☞ Expanding RAID 5 capacity
☞ Deleting a RAID 5 device
☞ Configuring RAID 6
☞ Adding a hot spare to RAID 6
☞ Inducing and verifying a disk failure in RAID 6
☞ Replacing a component disk in RAID 6
☞ Deleting a RAID 6 device
■ Configuring RAID 5 (Stripe with Parity)
The command format used to configure RAID 5 is as follows.
mdadm --create RAID_DEVICE --level=5 --raid-devices=N COMPONENT_DEVICE...
1). Example
The example below configures RAID 5 from four hard disk devices (sdd, sde, sdf, sdg). The point to watch closely is that when four disks of 10GB each are combined into RAID 5, the usable capacity for data is 30GB: one disk's worth of space goes to parity, leaving (4 - 1) x 10GB = 30GB.
[root@sunguru ~]# mdadm --detail --scan
[root@sunguru ~]#
[root@sunguru ~]# lsblk | grep 10G
sdd      8:48   0  10G  0 disk
sdf      8:80   0  10G  0 disk
sde      8:64   0  10G  0 disk
sdh      8:112  0  10G  0 disk
sdg      8:96   0  10G  0 disk
sdi      8:128  0  10G  0 disk
[root@sunguru ~]#
[root@sunguru ~]# mdadm --create /dev/md0 --level=5 --raid-devices=4 \
> /dev/sdd /dev/sde /dev/sdf /dev/sdg
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@sunguru ~]#
[root@sunguru ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 13:07:37 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 13:08:57 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos01:0  (local to host centos01)
           UUID : a9eadc71:1eb64294:9e40c7bc:3ccc59b5
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
[root@sunguru ~]#
[root@sunguru ~]# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=centos01:0 UUID=a9eadc71:1eb64294:9e40c7bc:3ccc59b5
   devices=/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg
[root@sunguru ~]#
[root@sunguru ~]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md0    9:0   0  30G  0 raid5
[root@sunguru ~]#
[root@sunguru ~]# mkfs -t ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=384 blocks
1966080 inodes, 7858176 blocks
392908 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
240 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@sunguru ~]#
[root@sunguru ~]# blkid /dev/md0
/dev/md0: UUID="2b1871ac-6cec-4cd7-8e1e-633585bfbba7" TYPE="ext4"
[root@sunguru ~]#
[root@sunguru ~]# mount /dev/md0 /datafile01
[root@sunguru ~]#
[root@sunguru ~]# df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         30G   44M   28G   1% /datafile01   <= shows 28G available because some blocks are reserved (min free)
[root@sunguru ~]#
[root@sunguru ~]# vi /etc/fstab
/dev/md0   /datafile01   ext4   defaults   1 2   <= add this line
[root@sunguru ~]#
[root@sunguru ~]# cd /datafile01
[root@sunguru datafile01]#
[root@sunguru datafile01]# dd bs=1024 if=/dev/zero of=file1 count=409600
409600+0 records in
409600+0 records out
419430400 bytes (419 MB) copied, 3.26294 s, 129 MB/s
[root@sunguru datafile01]#
[root@sunguru datafile01]# ls -l
total 409616
-rw-r--r--. 1 root root 419430400 2016-07-05 13:18 file1
drwx------. 2 root root     16384 2016-07-05 13:14 lost+found
[root@sunguru datafile01]#
[root@sunguru datafile01]# df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         30G  444M   28G   2% /datafile01
[root@sunguru datafile01]#
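For reference, each member disk now carries an md superblock of its own, and you can inspect it per component with the --examine option. A minimal sketch (any member device will do):

# Show the md superblock recorded on one member disk
mdadm --examine /dev/sdd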
■ Adding a Hot Spare to RAID 5
[root@sunguru ~]# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=centos01:0 UUID=a9eadc71:1eb64294:9e40c7bc:3ccc59b5
   devices=/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg
[root@sunguru ~]#
[root@sunguru ~]# mdadm /dev/md0 --add /dev/sdh
mdadm: added /dev/sdh
[root@sunguru ~]#
[root@sunguru ~]# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 spares=1 name=centos01:0 UUID=a9eadc71:1eb64294:9e40c7bc:3ccc59b5
   devices=/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh
[root@sunguru ~]#
[root@sunguru ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 13:07:37 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 15:00:53 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos01:0  (local to host centos01)
           UUID : a9eadc71:1eb64294:9e40c7bc:3ccc59b5
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg

       5       8      112        -      spare         /dev/sdh
[root@sunguru ~]#
[root@sunguru ~]# mdadm --detail --brief /dev/md0
ARRAY /dev/md0 metadata=1.2 spares=1 name=centos01:0 UUID=a9eadc71:1eb64294:9e40c7bc:3ccc59b5
[root@sunguru ~]# mdadm --detail --brief /dev/md0 >> /etc/mdadm.conf
[root@sunguru ~]#
[root@sunguru ~]# more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=centos01:0 UUID=a9eadc71:1eb64294:9e40c7bc:3ccc59b5
[root@sunguru ~]#
■ Inducing and Verifying a Disk Failure in RAID 5
To induce a disk failure in the CentOS Linux guest, delete Hard Disk 4 (node number 0:3) via the VM menu > Settings. Then, in a separate terminal, monitor the /dev/md0 RAID device with the mdadm command as shown below.
[root@sunguru ~]# mdadm --monitor /dev/md0
Jul  5 15:28:13: RebuildStarted on /dev/md0 unknown device
Jul  5 15:28:13: Fail on /dev/md0 /dev/sdd
Jul  5 15:30:08: RebuildFinished on /dev/md0 unknown device
Jul  5 15:30:08: SpareActive on /dev/md0 /dev/sdh
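Instead of keeping a terminal open, the monitor can also run in the background and send mail when events occur. A minimal sketch, assuming mail delivery to root works on the host:

# Run the monitor as a daemon, polling every 60 seconds and
# mailing events (Fail, SpareActive, ...) to root
mdadm --monitor --daemonise --delay=60 --mail=root /dev/md0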
When I/O hits the file system built on the RAID device, the removed disk is detected and the array is judged to have a configuration problem. The hot spare disk then takes the place of the failed disk, and data resynchronization begins. Once the synchronization finishes, the array operates as a normal configuration again. In other words, data service continues even when a single disk fails.
In the output below, the sdd disk (node number 0:3) is marked faulty, and sdh, which had been the hot spare, is shown as spare rebuilding while it synchronizes with the remaining member disks.
[root@sunguru ~]# cd /datafile01
[root@sunguru datafile01]# ls
file1  lost+found
[root@sunguru datafile01]# touch file2
[root@sunguru datafile01]#
[root@sunguru datafile01]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 13:07:37 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 15:29:03 2016
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 45% complete

           Name : centos01:0  (local to host centos01)
           UUID : a9eadc71:1eb64294:9e40c7bc:3ccc59b5
         Events : 37

    Number   Major   Minor   RaidDevice State
       5       8      112        0      spare rebuilding   /dev/sdh
       1       8       64        1      active sync        /dev/sde
       2       8       80        2      active sync        /dev/sdf
       4       8       96        3      active sync        /dev/sdg

       0       8       48        -      faulty             /dev/sdd
[root@sunguru datafile01]#
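The rebuild progress can also be followed through /proc/mdstat. A quick sketch:

# One-shot view of the resync/rebuild state of all md devices
cat /proc/mdstat
# Or refresh the view every second until the rebuild finishes
watch -n 1 cat /proc/mdstat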
The mdadm command shows whether the data is still resynchronizing; once the synchronization finishes, the devices run as active sync again, exactly as before the failure, as shown below.
[root@sunguru datafile01]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 13:07:37 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 15:30:08 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos01:0  (local to host centos01)
           UUID : a9eadc71:1eb64294:9e40c7bc:3ccc59b5
         Events : 47

    Number   Major   Minor   RaidDevice State
       5       8      112        0      active sync   /dev/sdh
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg

       0       8       48        -      faulty        /dev/sdd
[root@sunguru datafile01]#
■ Replacing a Component Disk in RAID 5
[root@sunguru datafile01]# mdadm --detail /dev/md0 | tail -11
           Name : centos01:0  (local to host centos01)
           UUID : a9eadc71:1eb64294:9e40c7bc:3ccc59b5
         Events : 47

    Number   Major   Minor   RaidDevice State
       5       8      112        0      active sync   /dev/sdh
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
       0       8       48        -      faulty        /dev/sdd
[root@sunguru datafile01]#
[root@sunguru datafile01]# mdadm --manage /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
[root@sunguru datafile01]#
[root@sunguru datafile01]# mdadm --detail /dev/md0 | tail -11
           Name : centos01:0  (local to host centos01)
           UUID : a9eadc71:1eb64294:9e40c7bc:3ccc59b5
         Events : 47

    Number   Major   Minor   RaidDevice State
       5       8      112        0      active sync   /dev/sdh
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
       0       8       48        -      faulty        /dev/sdd
[root@sunguru datafile01]#
[root@sunguru datafile01]# mdadm --manage /dev/md0 --remove /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md0
[root@sunguru datafile01]#
[root@sunguru datafile01]# mdadm --detail /dev/md0 | tail -11
     Chunk Size : 512K

           Name : centos01:0  (local to host centos01)
           UUID : a9eadc71:1eb64294:9e40c7bc:3ccc59b5
         Events : 48

    Number   Major   Minor   RaidDevice State
       5       8      112        0      active sync   /dev/sdh
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
[root@sunguru datafile01]#
[root@sunguru datafile01]# mdadm --manage /dev/md0 --add /dev/sdi
mdadm: added /dev/sdi
[root@sunguru datafile01]# mdadm --detail /dev/md0 | tail -11
           Name : centos01:0  (local to host centos01)
           UUID : a9eadc71:1eb64294:9e40c7bc:3ccc59b5
         Events : 49

    Number   Major   Minor   RaidDevice State
       5       8      112        0      active sync   /dev/sdh
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
       6       8      128        -      spare         /dev/sdi
[root@sunguru datafile01]#
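For reference, the fail and remove steps can also be handed to mdadm in a single invocation; a minimal sketch of the same replacement:

# Mark the disk faulty and hot-remove it in one command, then add the replacement
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
mdadm /dev/md0 --add /dev/sdi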
■ Expanding RAID 5 Capacity
RAID 5 capacity can be increased by adding disks. Since this reshapes the array layout, the array needs working space for the reorganization. The example uses the /dev/md0 RAID device built from four 10GB disks. Below is the /dev/md0 RAID device we will test with.
[root@sunguru ~]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md0    9:0   0  30G  0 raid5 /datafile01
[root@sunguru ~]#
[root@sunguru ~]# blkid /dev/md0
/dev/md0: UUID="df5437e0-bcc1-45f4-a7e3-8a8350e833ba" TYPE="ext4"
[root@sunguru ~]#
[root@sunguru ~]# df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         30G   44M   28G   1% /datafile01
[root@sunguru ~]#
[root@sunguru ~]# grep /datafile01 /etc/fstab
/dev/md0   /datafile01   ext4   defaults   1 2
[root@sunguru ~]#
[root@sunguru ~]# grep md0 /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=centos01:0 UUID=5d13c7c9:01b7d35f:1a52092a:1e4de3a7
[root@sunguru ~]#
[root@sunguru ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 16:36:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 16:38:14 2016
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos01:0  (local to host centos01)
           UUID : 5d13c7c9:01b7d35f:1a52092a:1e4de3a7
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
[root@sunguru ~]#
Add the disk to be used for the expansion to /dev/md0.
[root@sunguru ~]# mdadm --manage /dev/md0 --add /dev/sdh
mdadm: added /dev/sdh
[root@sunguru ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 16:36:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 16:43:41 2016
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos01:0  (local to host centos01)
           UUID : 5d13c7c9:01b7d35f:1a52092a:1e4de3a7
         Events : 23

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg

       5       8      112        -      spare         /dev/sdh
[root@sunguru ~]#
This is the same procedure as adding a hot spare. Next, grow the array onto the added disk with the following command.
[root@sunguru ~]# mdadm --grow --raid-devices=5 /dev/md0
[root@sunguru ~]#
[root@sunguru ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 16:36:31 2016
     Raid Level : raid5
     Array Size : 31432704 (29.98 GiB 32.19 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 16:46:00 2016
          State : clean, reshaping
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 2% complete
  Delta Devices : 1, (4->5)

           Name : centos01:0  (local to host centos01)
           UUID : 5d13c7c9:01b7d35f:1a52092a:1e4de3a7
         Events : 46

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
       5       8      112        4      active sync   /dev/sdh
[root@sunguru ~]#
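Note that, depending on the mdadm version and layout, a reshape that cannot stage its critical section inside the array may require a backup file on a device outside the array. A hedged sketch (the file path here is an assumption):

# Stage the critical reshape section in a backup file outside the array
mdadm --grow --raid-devices=5 --backup-file=/root/md0-grow.bak /dev/md0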
When the reshape completes, the Reshape Status and Delta Devices lines disappear and the array settles into its normal layout, as shown below.
[root@sunguru ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 16:36:31 2016
     Raid Level : raid5
     Array Size : 41910272 (39.97 GiB 42.92 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 16:51:16 2016
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos01:0  (local to host centos01)
           UUID : 5d13c7c9:01b7d35f:1a52092a:1e4de3a7
         Events : 81

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
       5       8      112        4      active sync   /dev/sdh
[root@sunguru ~]#
[root@sunguru ~]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md0    9:0   0  40G  0 raid5 /datafile01
[root@sunguru ~]#
Finally, configure a hot spare as shown below, and the capacity expansion of the RAID device is complete.
[root@sunguru ~]# mdadm --manage /dev/md0 --add /dev/sdi
mdadm: added /dev/sdi
[root@sunguru ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Jul  5 16:36:31 2016
     Raid Level : raid5
     Array Size : 41910272 (39.97 GiB 42.92 GB)
  Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
   Raid Devices : 5
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Jul  5 16:55:59 2016
          State : clean
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : centos01:0  (local to host centos01)
           UUID : 5d13c7c9:01b7d35f:1a52092a:1e4de3a7
         Events : 82

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       4       8       96        3      active sync   /dev/sdg
       5       8      112        4      active sync   /dev/sdh

       6       8      128        -      spare         /dev/sdi
[root@sunguru ~]#
[root@sunguru ~]# more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=centos01:0 UUID=5d13c7c9:01b7d35f:1a52092a:1e4de3a7
[root@sunguru ~]#
[root@sunguru ~]# mdadm --detail --brief /dev/md0 > /etc/mdadm.conf
[root@sunguru ~]#
[root@sunguru ~]# more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=centos01:0 UUID=5d13c7c9:01b7d35f:1a52092a:1e4de3a7
[root@sunguru ~]#
The file system capacity is not adjusted automatically, so grow it with the resize2fs command as shown below.
[root@sunguru ~]# df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         30G   44M   28G   1% /datafile01
[root@sunguru ~]#
[root@sunguru ~]# resize2fs /dev/md0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/md0 is mounted on /datafile01; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/md0 to 10477568 (4k) blocks.
The filesystem on /dev/md0 is now 10477568 blocks long.

[root@sunguru ~]# df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         40G   48M   38G   1% /datafile01
[root@sunguru ~]#
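For reference, if the file system were unmounted, resize2fs would insist on a forced check first. A minimal sketch of the offline variant:

# Offline resize: resize2fs requires a clean forced fsck beforehand
umount /datafile01
e2fsck -f /dev/md0
resize2fs /dev/md0
mount /dev/md0 /datafile01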
■ Deleting a RAID 5 Device
Deleting a RAID 5 device works the same way as deleting a RAID 1 device. If /etc/mdadm.conf holds the information for only this one RAID device, you can empty the file with /dev/null as shown below. If it contains entries for several RAID devices, however, it is better to remove just the relevant line with the vi editor.
[root@sunguru datafile01]# lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
md0    9:0   0  30G  0 raid5 /datafile01
[root@sunguru datafile01]#
[root@sunguru datafile01]# df -h /dev/md0
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         30G  444M   28G   2% /datafile01
[root@sunguru datafile01]#
[root@sunguru datafile01]# grep /dev/md0 /etc/fstab
/dev/md0   /datafile01   ext4   defaults   1 2
[root@sunguru datafile01]#
[root@sunguru datafile01]# grep /dev/md0 /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=centos01:0 UUID=a9eadc71:1eb64294:9e40c7bc:3ccc59b5
[root@sunguru datafile01]#
[root@sunguru datafile01]# cd /
[root@sunguru /]#
[root@sunguru /]# umount /datafile01
[root@sunguru /]#
[root@sunguru /]# vi /etc/fstab
/dev/md0   /datafile01   ext4   defaults   1 2   <== delete this line
[root@sunguru /]#
[root@sunguru /]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@sunguru /]# mdadm --remove /dev/md0
mdadm: error opening /dev/md0: No such file or directory
[root@sunguru /]#
[root@sunguru /]# mdadm --zero-superblock /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mdadm: Unrecognised md component device - /dev/sdd
[root@sunguru /]#
[root@sunguru /]# mdadm --zero-superblock /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mdadm: Unrecognised md component device - /dev/sde
mdadm: Unrecognised md component device - /dev/sdf
mdadm: Unrecognised md component device - /dev/sdg
mdadm: Unrecognised md component device - /dev/sdh
mdadm: Unrecognised md component device - /dev/sdi
[root@sunguru /]#
[root@sunguru /]# more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=centos01:0 UUID=a9eadc71:1eb64294:9e40c7bc:3ccc59b5
[root@sunguru /]#
[root@sunguru /]# cat /dev/null > /etc/mdadm.conf
[root@sunguru /]#
[root@sunguru /]# more /etc/mdadm.conf
[root@sunguru /]#
[root@sunguru /]# mdadm --detail --scan
[root@sunguru /]#
Use the mdadm --detail --scan command to verify that the information about the RAID device has been removed.
RAID 6 is configured in almost the same way as RAID 5, so I won't walk through the hands-on steps for it. Just remember these points: you set --level=6, and because RAID 6 uses dual parity, usable capacity shrinks by two disks' worth. For example, if you build RAID 6 from five 10GB disks, two disks' capacity goes to parity, so the capacity available for data is (5 - 2) x 10GB = 30GB.
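For reference, a minimal sketch of what the RAID 6 creation would look like, reusing the disk names from the earlier examples (an assumption; adjust them to your system):

# Five 10GB disks in RAID 6: usable capacity = (5 - 2) x 10GB = 30GB
mdadm --create /dev/md0 --level=6 --raid-devices=5 \
      /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh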
Before applying any RAID configuration or capacity expansion in production, you absolutely must test it thoroughly and review its safety first. Absolutely!!
In the next post, we'll look at configuring LVM, the Linux volume manager.