

Server RAID 5 Disk Expansion: A Practical, Detailed Guide to Expanding RAID on Physical Servers

Published: 2025/3/15

A server RAID card, also called an array controller, combines multiple physical disks into logical volumes. It is a hardware component, usually fitted with a cache and a battery: the cache improves RAID performance, and the battery keeps unflushed data from being lost on power failure.

A RAID card is usually configured in one of two ways: by entering the card's graphical configuration interface while the server boots, or through the server's remote management card.

Routine RAID operations, such as creating a RAID, deleting a RAID, adding hot spares, and clearing foreign configuration, can be done from the graphical or web interfaces, but RAID expansion cannot. Expansion has to be done from inside the operating system, and this article covers it in detail.

Scope and limitations of this guide:

Applies only to Dell servers with MegaRAID controllers, running CentOS 6 or later.

Adding disks and expanding the capacity

A physical server usually has several drive bays; 8-, 12-, and 16-bay chassis are the most common. Ideally, servers are sized and fully populated with disks at purchase time, before the service goes live. In practice the planning is rarely that good, which leads to the situation in the title: a running server is out of disk capacity but still has empty drive bays. The service cannot be stopped and the data has nowhere to move, so what now?

Adding disks and expanding online, with no downtime at all, would be ideal. Can it be done?

As things stand, no.

Is there a fallback: a brief service stop in exchange for the expansion?

There is. Read on.

The plan: on a physical server with free drive bays, install new disks, add the new capacity to an existing RAID, and then extend the system partition. (In the walkthrough below, three disks are installed; two of them go into the RAID.)

Note: the procedure requires rebooting the server; in the LVM case below, two reboots are needed (one after the RAID rebuild, one after repartitioning).

Method:

Use the MegaCli utility inside the operating system to operate on the disks and modify the RAID card configuration directly.

Steps:

1. Install the new disks in the server, then press Ctrl+R during boot to enter the RAID card configuration interface, as in Figure 1-1. Two RAID volumes are configured: a 111.250 GB RAID 1 and a 4.364 TB RAID 10.

Figure 1-1: RAID card configuration interface

2. Check whether the new disks carry foreign configuration. As shown in Figure 1-2, the Foreign View tab lists two normal Disk Group entries plus one entry, Disk Group: 2 Raid10 (Foreign). This entry means the RAID metadata stored on the attached disks does not match the metadata stored on the RAID card, so the two must be reconciled. There are two ways to do so, Import and Clear, as shown in Figure 1-3.

Figure 1-2: Foreign View

Figure 1-3: Reconciling RAID metadata with Import or Clear

3. Import copies the RAID metadata from the disks onto the RAID card; Clear erases the RAID metadata on the disks. When moving disks into a replacement machine, the metadata should be Imported. Here the newly added disks are to be merged into an existing RAID group, so their foreign metadata must be Cleared.

4. After clearing, the Foreign View tab disappears from the RAID configuration page (Figure 1-4), and three unconfigured physical disks appear. These are the newly added disks.

Figure 1-4: State after clearing the foreign configuration

5. The RAID card's configuration interface cannot add disks to an existing RAID, so the remaining steps have to be done inside the operating system with the MegaCli tool. From the screen in Figure 1-4, press Esc, save and exit, and reboot the server into the OS.

6. Install the MegaCli tool in the OS

[root@kvmhost ~]# yum install MegaCli //from a yum repository, if one provides the package

[root@kvmhost ~]# rpm -ivh MegaCli-8.04.07-1.noarch.rpm Lib_Utils-1.00-09.noarch.rpm //or install from the downloaded RPM packages

[root@kvmhost ~]# ln -s /opt/MegaRAID/MegaCli/MegaCli64 /bin/MegaCli64

Note:

Steps 1-5 above cleared the foreign configuration through the BIOS interface; MegaCli can do the same from inside the OS:

[root@kvmhost ~]# MegaCli64 -PDlist -aALL | grep "Foreign State" //list each disk's foreign state; anything other than None means foreign metadata is present

Foreign State: None

Foreign State: None

Foreign State: Foreign

Foreign State: Foreign

Foreign State: Foreign //the three new disks all carry foreign metadata

[root@kvmhost ~]# MegaCli64 -CfgForeign -Scan -a0 //scan the controller's disks for foreign configurations

There are 3 foreign configuration(s) on controller 0.

Exit Code: 0x00

[root@kvmhost ~]# MegaCli64 -CfgForeign -Clear -a0 //clear the foreign configurations

Foreign configuration 0 is cleared on controller 0.

Foreign configuration 1 is cleared on controller 0.

Foreign configuration 2 is cleared on controller 0.

Exit Code: 0x00

[root@kvmhost ~]# MegaCli64 -PDlist -aALL | grep "Foreign State"

Foreign State: None

Foreign State: None
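The check-then-clear step can be scripted so that Clear is only attempted when foreign metadata is actually present. A minimal sketch; the MegaCli64 invocations match the ones above, but a captured sample stands in for live output so the parsing can run without a controller, and the clear command is only echoed:

```shell
# Count disks whose "Foreign State" is not None. On a live system the
# sample below would come from:  MegaCli64 -PDlist -aALL
pdlist_sample='Foreign State: None
Foreign State: None
Foreign State: Foreign
Foreign State: Foreign
Foreign State: Foreign'

foreign_count=$(printf '%s\n' "$pdlist_sample" | grep -c "Foreign State: Foreign")
echo "disks with foreign config: $foreign_count"

# Clear only when something foreign was found (echoed here, not executed):
if [ "$foreign_count" -gt 0 ]; then
    echo "would run: MegaCli64 -CfgForeign -Clear -a0"
fi
```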

7. Check the number of RAID controllers and physical disks on the server

[root@kvmhost ~]# MegaCli64 -adpCount //one RAID controller in this server

Controller Count: 1.

[root@kvmhost ~]# MegaCli64 -PDGetNum -a0 //13 physical disks on controller 0

Number of Physical Drives on Adapter 0: 13

8. Check the number of virtual drives (logical volumes)

[root@kvmhost ~]# MegaCli64 -LDGetNum -a0 //two virtual drives are configured

Number of Virtual Drives Configured on Adapter 0: 2

Exit Code: 0x02

9. View the details of both virtual drives

[root@kvmhost ~]# MegaCli64 -LDInfo -L0 -a0 //the first virtual drive

Adapter 0 -- Virtual Drive Information:

Virtual Drive: 0 (Target Id: 0)

Name :ssd

RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0 //RAID 1

Size : 111.25 GB //111.25 GB capacity

Is VD emulated : Yes

Mirror Data : 111.25 GB

State : Optimal

Strip Size : 64 KB

Number Of Drives : 2 //two member disks

Span Depth : 1

Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU //the virtual drive's cache policy

Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU

Default Access Policy: Read/Write

Current Access Policy: Read/Write

Disk Cache Policy : Disk's Default

Encryption Type : None

Default Power Savings Policy: Controller Defined

Current Power Savings Policy: None

Can spin up in 1 minute: No

LD has drives that support T10 power conditions: No

LD's IO profile supports MAX power savings with cached writes: No

Bad Blocks Exist: No

Is VD Cached: No

Exit Code: 0x00

[root@kvmhost ~]# MegaCli64 -LDInfo -L1 -a0 //the second virtual drive

Adapter 0 -- Virtual Drive Information:

Virtual Drive: 1 (Target Id: 1)

Name :hdd //the plain (non-SSD) volume

RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0 //RAID 10

Size : 4.364 TB

Is VD emulated : No

Mirror Data : 4.364 TB

State : Optimal

Strip Size : 64 KB

Number Of Drives : 8 //eight member disks

Span Depth : 1

Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU

Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU

Default Access Policy: Read/Write

Current Access Policy: Read/Write

Disk Cache Policy : Disk's Default

Encryption Type : None

Default Power Savings Policy: Controller Defined

Current Power Savings Policy: None

Can spin up in 1 minute: Yes

LD has drives that support T10 power conditions: Yes

LD's IO profile supports MAX power savings with cached writes: No

Bad Blocks Exist: No

Is VD Cached: No

Exit Code: 0x00

10. See which physical slots each virtual drive uses

[root@kvmhost ~]# MegaCli64 -LdPdInfo -a0 | grep -E "RAID Level|Number of PDs|Slot Number"

RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0

Span: 0 - Number of PDs: 2

Slot Number: 0

Slot Number: 1

RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0

Span: 0 - Number of PDs: 8

Slot Number: 2

Slot Number: 3

Slot Number: 4

Slot Number: 5

Slot Number: 6

Slot Number: 7

Slot Number: 8

Slot Number: 9

11. List every disk on the RAID card with its slot and state, compare against the slots already used by the virtual drives above, and identify the slots holding the new, unused disks

[root@kvmhost ~]# MegaCli64 -PDlist -aALL -NoLog | grep -E "Enclosure Device ID|Slot|Firmware state"

Enclosure Device ID: 32 //the enclosure ID, needed later when addressing a disk

Slot Number: 0 //the disk's slot number

Firmware state: Online, Spun Up //Online means the disk is in use

Enclosure Device ID: 32

Slot Number: 9

Firmware state: Online, Spun Up

Enclosure Device ID: 32

Slot Number: 10 //slot 10, the first of the new disks

Firmware state: Unconfigured(good), Spun Up //Unconfigured(good) means the disk is not yet in use

Enclosure Device ID: 32

Slot Number: 11

Firmware state: Unconfigured(good), Spun Up

Enclosure Device ID: 32

Slot Number: 12

Firmware state: Unconfigured(good), Spun Up
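Picking the free slots out of that listing by eye gets error-prone on large chassis, so it can be scripted: pair each Slot Number with the Firmware state line that follows it. A small sketch; a captured fragment of the output above stands in for the live PDlist command:

```shell
# Extract slot numbers of Unconfigured(good) disks from
# "MegaCli64 -PDlist -aALL" style output (sample captured above).
sample='Enclosure Device ID: 32
Slot Number: 9
Firmware state: Online, Spun Up
Enclosure Device ID: 32
Slot Number: 10
Firmware state: Unconfigured(good), Spun Up
Enclosure Device ID: 32
Slot Number: 11
Firmware state: Unconfigured(good), Spun Up
Enclosure Device ID: 32
Slot Number: 12
Firmware state: Unconfigured(good), Spun Up'

free_slots=$(printf '%s\n' "$sample" | awk '
    /^Slot Number:/                 { slot = $3 }      # remember the last slot seen
    /^Firmware state: Unconfigured/ { print slot }     # report it if unconfigured
' | xargs)
echo "free slots: $free_slots"
```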

12. Add the disks in slots 10 and 11 to the second virtual drive (the hdd volume)

[root@kvmhost ~]# MegaCli64 -LDRecon -Start -r1 -Add -Physdrv[32:10,32:11] -L1 -a0

Start Reconstruction of Virtual Drive Success. //the reconstruction started successfully

Exit Code: 0x00
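Reconstruction on multi-terabyte volumes takes hours, so it is worth polling. MegaCli commonly reports progress via `MegaCli64 -LDRecon -ShowProg -L1 -a0` (or `-ProgDsply` for a continuously updating display); the exact flag names may differ by MegaCli version, so treat them as an assumption. The sketch below parses a captured progress line of the format shown in the next step:

```shell
# Parse a "Reconstruction : Completed N%" line as printed by
#   MegaCli64 -LDRecon -ShowProg -L1 -a0   (flag names assumed, see above)
# A captured line stands in for live output here.
prog_line='Reconstruction : Completed 37%, Taken 95 min.'

# Pull out the percentage with sed.
pct=$(printf '%s\n' "$prog_line" | sed -n 's/.*Completed \([0-9]*\)%.*/\1/p')

if [ "$pct" -ge 100 ]; then
    status="done - reboot to expose the new capacity"
else
    status="still rebuilding (${pct}%)"
fi
echo "$status"
```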

13. Verification, part one: check the second virtual drive's current state

[root@kvmhost ~]# MegaCli64 -LDInfo -L1 -a0

Adapter 0 -- Virtual Drive Information:

Virtual Drive: 1 (Target Id: 1)

Name :hdd

RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0

Size : 4.364 TB //the size does not grow immediately; it updates once reconstruction completes

Is VD emulated : No

Mirror Data : 4.364 TB

State : Optimal

Strip Size : 64 KB

Number Of Drives : 8

Span Depth : 1

Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU

Current Cache Policy: WriteThrough, ReadAheadNone, Cached, No Write Cache if Bad BBU

Default Access Policy: Read/Write

Current Access Policy: Read/Write

Disk Cache Policy : Disk's Default

Ongoing Progresses:

Reconstruction : Completed 0%, Taken 1 min. //the virtual drive is rebuilding; once it reaches 100%, reboot the server so the enlarged volume is exposed

Encryption Type : None

Default Power Savings Policy: Controller Defined

Current Power Savings Policy: None

Can spin up in 1 minute: Yes

LD has drives that support T10 power conditions: Yes

LD's IO profile supports MAX power savings with cached writes: No

Bad Blocks Exist: No

Is VD Cached: No

14. Verification, part two: check the disk states

[root@kvmhost ~]# MegaCli64 -PDlist -aALL -NoLog | grep -E "Firmware state"

Firmware state: Online, Spun Up

.... (many identical lines omitted)

Firmware state: Online, Spun Up //the newly added disks are now Online, i.e. healthy and in use

Firmware state: Unconfigured(good), Spun Up //two disks were used; the one left over is still unconfigured

15. Once reconstruction has finished, check the virtual drive size again

[root@kvmhost-10-30-11-32 ~]# MegaCli64 -LDInfo -L1 -a0

Adapter 0 -- Virtual Drive Information:

Virtual Drive: 1 (Target Id: 1)

Name :hhd

RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0

Size : 5.455 TB //capacity has grown to 5.455 TB

Is VD emulated : No

Mirror Data : 5.455 TB

State : Optimal

Strip Size : 64 KB

Number Of Drives : 10

Span Depth : 1

Default Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU

Current Cache Policy: WriteBack, ReadAhead, Direct, No Write Cache if Bad BBU

Default Access Policy: Read/Write

Current Access Policy: Read/Write

Disk Cache Policy : Disk's Default

Encryption Type : None

Default Power Savings Policy: Controller Defined

Current Power Savings Policy: None

Can spin up in 1 minute: Yes

LD has drives that support T10 power conditions: Yes

LD's IO profile supports MAX power savings with cached writes: No

Bad Blocks Exist: No

Is VD Cached: No

Number of Dedicated Hot Spares: 1 //the leftover third disk, slot 12, has been assigned as a dedicated hot spare

0 : EnclId - 32 SlotId - 12

Exit Code: 0x00

[root@kvmhost datapool]# fdisk -l

//fdisk still reports sdb as 4798.6 GB rather than 5.455 TB, so the server must be rebooted

... (several irrelevant lines omitted)

Disk /dev/sdb: 4798.6 GB, 4798552211456 bytes

255 heads, 63 sectors/track, 583390 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Device Boot Start End Blocks Id System

/dev/sdb1 1 267350 2147483647+ ee GPT

... (several irrelevant lines omitted)

16. Reboot the server and check the disk size

[root@kvmhost ~]# fdisk -l

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 5998.2 GB, 5998190264320 bytes //sdb has been enlarged

255 heads, 63 sectors/track, 729238 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Device Boot Start End Blocks Id System

/dev/sdb1 1 267350 2147483647+ ee GPT

17. Check the current partition layout. There are two cases.

a) The system uses LVM

[root@kvmhost ~]# df -Th //check the mounts; /datapool sits on an LVM logical volume

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/kvmvg-root

ext4 94G 1.6G 88G 2% /

tmpfs tmpfs 127G 0 127G 0% /dev/shm

/dev/sda1 ext3 248M 51M 185M 22% /boot

/dev/mapper/datavg-datapool

xfs 4.4T 3.9G 4.4T 1% /datapool

[root@kvmhost ~]# lvs //list the logical volumes

LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert

datapool datavg -wi-ao---- 4.36t

root kvmvg -wi-ao---- 94.97g

swap kvmvg -wi-ao---- 16.00g

[root@kvmhost ~]# vgs //list the volume groups

VG #PV #LV #SN Attr VSize VFree

datavg 1 1 0 wz--n- 4.36t 0

kvmvg 1 2 0 wz--n- 110.97g 0

[root@kvmhost ~]# pvs //list the physical volumes

PV VG Fmt Attr PSize PFree

/dev/sda2 kvmvg lvm2 a--u 110.97g 0

/dev/sdb1 datavg lvm2 a--u 4.36t 0

The extra space must be turned into a new partition with parted, the partition made into a PV, the PV added to the VG, and the LV then extended.

[root@kvmhost ~]# parted /dev/sdb

GNU Parted 2.1

Using /dev/sdb

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print //show the partition table

Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 2343043072 blocks) or continue with the current setting?

Fix/Ignore? Fix //parted notices the unused space behind the GPT and asks whether to fix it; type Fix

Model: DELL PERC H730P Mini (scsi)

Disk /dev/sdb: 5998GB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Number Start End Size File system Name Flags

1 1049kB 4799GB 4799GB lvm

(parted) mkpart primary 4799GB -1 //create a primary partition from 4799GB to the end; -1 means all remaining space

Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (Device or resource busy). As a result, it may not reflect all of your changes until after reboot. //the kernel cannot re-read the table, so a reboot is needed before the new partition is usable

(parted) print //show the partition table again

Model: DELL PERC H730P Mini (scsi)

Disk /dev/sdb: 5998GB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Number Start End Size File system Name Flags

1 1049kB 4799GB 4799GB lvm

2 4799GB 5998GB 1200GB primary

After the reboot:

[root@kvmhost ~]# pvcreate /dev/sdb2 //create a PV on the new partition

[root@kvmhost ~]# pvs

PV VG Fmt Attr PSize PFree

/dev/sda2 kvmvg lvm2 a--u 110.97g 0

/dev/sdb1 datavg lvm2 a--u 4.36t 0

/dev/sdb2 lvm2 ---- 1.09t 1.09t

[root@kvmhost ~]# vgextend datavg /dev/sdb2 //add the new PV to the volume group

[root@kvmhost ~]# vgs

VG #PV #LV #SN Attr VSize VFree

datavg 2 1 0 wz--n- 5.46t 1.09t

kvmvg 1 2 0 wz--n- 110.97g 0

[root@kvmhost ~]# df -Th //check the current mounts

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/kvmvg-root

ext4 94G 2.0G 87G 3% /

tmpfs tmpfs 127G 0 127G 0% /dev/shm

/dev/sda1 ext3 248M 51M 185M 22% /boot

/dev/mapper/datavg-datapool

xfs 4.4T 132G 4.3T 3% /datapool

[root@kvmhost ~]# lvextend -l +100%FREE /dev/mapper/datavg-datapool //extend the LV over all free space

[root@kvmhost ~]# xfs_growfs /dev/mapper/datavg-datapool //grow the XFS filesystem to fill the LV

meta-data=/dev/mapper/datavg-datapool isize=256 agcount=5, agsize=268435455 blks

= sectsz=512 attr=2, projid32bit=0

data = bsize=4096 blocks=1171513344, imaxpct=5

= sunit=0 swidth=0 blks

naming =version 2 bsize=4096 ascii-ci=0

log =internal bsize=4096 blocks=521728, version=2

= sectsz=512 sunit=0 blks, lazy-count=1

realtime =none extsz=4096 blocks=0, rtextents=0

data blocks changed from 1171513344 to 1464385536
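As a quick sanity check, the block counts xfs_growfs reports can be converted to bytes using the bsize=4096 field from the same output:

```shell
# Convert the xfs_growfs block counts (4096-byte blocks) into whole TiB.
old_blocks=1171513344
new_blocks=1464385536
bsize=4096

tib=$((1024 * 1024 * 1024 * 1024))
old_tib=$(( old_blocks * bsize / tib ))   # 4.36 TiB before rounding down
new_tib=$(( new_blocks * bsize / tib ))   # 5.45 TiB before rounding down
echo "grew from ~${old_tib} TiB to ~${new_tib} TiB"
```

This agrees with the 4.364 TB before and 5.455 TB after reported by MegaCli above.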

[root@kvmhost ~]# df -Th //verify that the mount has grown

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/kvmvg-root

ext4 94G 2.0G 87G 3% /

tmpfs tmpfs 127G 0 127G 0% /dev/shm

/dev/sda1 ext3 248M 51M 185M 22% /boot

/dev/mapper/datavg-datapool

xfs 5.5T 132G 5.4T 3% /datapool //extension succeeded
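For reference, the LVM-side commands from case a) collected into one script. Every command here is destructive, so this sketch only echoes what it would run (swap the body of run() for "$@" to actually execute); the device, VG, and LV names match this example and must be adjusted for other systems:

```shell
# Dry-run of the LVM expansion sequence shown above.
DISK=/dev/sdb
PART=/dev/sdb2
VG=datavg
LV=/dev/mapper/datavg-datapool

run() { echo "would run: $*"; }   # replace the body with "$@" to really execute

run parted "$DISK" mkpart primary 4799GB -1   # partition the added space
# (as in the walkthrough, a reboot may be needed here before the
#  kernel sees the new partition)
run pvcreate "$PART"                          # make the partition a PV
run vgextend "$VG" "$PART"                    # add the PV to the VG
run lvextend -l +100%FREE "$LV"               # grow the LV over all free space
run xfs_growfs "$LV"                          # grow the XFS filesystem online
```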

b) The system formatted and mounted sdb directly, without LVM

[root@kvmhost ~]# df -Th

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/kvmvg-root

ext4 94G 1.6G 88G 2% /

tmpfs tmpfs 190G 0 190G 0% /dev/shm

/dev/sda1 ext3 248M 51M 185M 22% /boot

/dev/sdb xfs 4.4T 2.4G 4.4T 1% /datapool

[root@kvmhost ~]# fdisk -l

< irrelevant output omitted >

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 5998.2 GB, 5998190264320 bytes

255 heads, 63 sectors/track, 729238 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

< irrelevant output omitted >

[root@kvmhost ~]# xfs_growfs /dev/sdb //grow the xfs filesystem

[root@kvmhost ~]# df -Th

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/kvmvg-root

ext4 94G 1.6G 88G 2% /

tmpfs tmpfs 190G 0 190G 0% /dev/shm

/dev/sda1 ext3 248M 51M 185M 22% /boot

/dev/sdb xfs 5.5T 2.4G 5.5T 1% /datapool

Verification: the capacity has grown to 5.5 TB.

18. Both cases allow the partition to be extended, though as the steps show, the non-LVM setup is simpler.

About the author

Yang Junjun (楊俊俊), operations architect at 新鈦云服

Ten years of operations experience; formerly a senior cloud engineer at Shanda Games and system operations lead at 前隆科技. Author of 《深度實踐KVM》 and 《Linux運維最佳實踐》. Proficient in KVM, VMware, Docker, and related virtualization technologies, with extensive hands-on experience in infrastructure, virtualization, and automated operations, including leading cloud migrations of more than ten thousand servers.

Copyright notice: this article is an original compilation by 新鈦云服. Reproduction without permission is prohibited.
