

CEPH Block Storage Management


I. Checking Ceph and cluster parameters

1. Check the Ceph installation status:

Command: ceph -s or ceph status (both produce the same output)

Example:

root@node1:/home/ceph/ceph-cluster# ceph -s

    cluster 2f54214e-b6aa-44a4-910c-52442795b037
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1
     osdmap e56: 5 osds: 5 up, 5 in
      pgmap v289: 192 pgs, 3 pools, 70988 kB data, 26 objects
            376 MB used, 134 GB / 134 GB avail
                 192 active+clean

2. Watch the cluster health status:

Command: ceph -w

Example:

root@node1:/home/ceph/ceph-cluster# ceph -w

    cluster 2f54214e-b6aa-44a4-910c-52442795b037
     health HEALTH_OK
     monmap e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1
     osdmap e56: 5 osds: 5 up, 5 in
      pgmap v289: 192 pgs, 3 pools, 70988 kB data, 26 objects
            376 MB used, 134 GB / 134 GB avail
                 192 active+clean

2016-09-08 09:40:18.084097 mon.0 [INF] pgmap v323: 192 pgs: 192 active+clean; 8 bytes data, 182 MB used, 134 GB / 134 GB avail

3. Check the Ceph monitor quorum status

Command: ceph quorum_status --format json-pretty

Example:

root@node1:/home/ceph/ceph-cluster# ceph quorum_status --format json-pretty

{ "election_epoch": 1,

???????"quorum": [

?????????0],

???????"quorum_names": [

???????"node1"],

??????"quorum_leader_name": "node1",

??????"monmap": { "epoch": 1,

??????"fsid": "2f54214e-b6aa-44a4-910c-52442795b037",

??????"modified": "0.000000",

??????"created": "0.000000",

??????"mons": [

???????????{ "rank": 0,

????????????? "name":"node1",

????????????? "addr":"192.168.2.13:6789\/0"}]}}

4. Dump the Ceph monitor map

Command: ceph mon dump

Example:

root@node1:/home/ceph/ceph-cluster# ceph mon dump

dumped monmap epoch 1

epoch 1

fsid 2f54214e-b6aa-44a4-910c-52442795b037

last_changed 0.000000

created 0.000000

0: 192.168.2.13:6789/0 mon.node1

5. Check cluster usage

Command: ceph df

Example:

root@node1:/home/ceph/ceph-cluster# ceph df

GLOBAL:

    SIZE     AVAIL     RAW USED     %RAW USED
    134G      134G         376M          0.27

POOLS:
    NAME         ID     USED       %USED     MAX AVAIL     OBJECTS
    data         0           0         0        45882M           0
    metadata     1           0         0        45882M           0
    rbd          2      70988k      0.05        45882M          26

6. Check Ceph monitor, OSD, and PG (placement group) status

Commands: ceph mon stat, ceph osd stat, ceph pg stat

Example:

root@node1:/home/ceph/ceph-cluster# ceph mon stat

e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1

root@node1:/home/ceph/ceph-cluster# ceph osd stat
     osdmap e56: 5 osds: 5 up, 5 in

root@node1:/home/ceph/ceph-cluster# ceph pg stat
v289: 192 pgs: 192 active+clean; 70988 kB data, 376 MB used, 134 GB / 134 GB avail

7. List PGs

Command: ceph pg dump

Example:

root@node1:/home/ceph/ceph-cluster# ceph pg dump

dumped all in format plain

version 289

stamp 2016-09-08 08:44:35.249418

last_osdmap_epoch 56

last_pg_scan 1

full_ratio 0.95

nearfull_ratio 0.85

……

8. List Ceph storage pools

Command: ceph osd lspools

Example:

root@node1:/home/ceph/ceph-cluster# ceph osd lspools

0 data,1 metadata,2 rbd,

9. Check the OSD CRUSH map

Command: ceph osd tree

Example:

root@node1:/home/ceph/ceph-cluster# ceph osd tree

# id    weight  type name       up/down reweight
-1      0.15    root default
-2      0.06            host node2
0       0.03                    osd.0   up      1
3       0.03                    osd.3   up      1
-3      0.06            host node3
1       0.03                    osd.1   up      1
4       0.03                    osd.4   up      1
-4      0.03            host node1
2       0.03                    osd.2   up      1

10. List the cluster's authentication keys:

Command: ceph auth list

Example:

root@node1:/home/ceph/ceph-cluster# ceph auth list

installed auth entries:

osd.0
        key: AQCM089X8OHnIhAAnOnRZMuyHVcXa6cnbU2kCw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQCU089X0KSCIRAAZ3sAKh+Fb1EYV/ROkBd5mA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQAb1c9XWIuxEBAA3PredgloaENDaCIppxYTbw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.3
        key: AQBF1c9XuBOpMBAAx8ELjaH0b1qwqKNwM17flA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.4
        key: AQBc1c9X4LXCEBAAcq7UVTayMo/e5LBykmZZKg==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQAd089XMI14FRAAdcm/woybc8fEA6dH38AS6g==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQAd089X+GahIhAAgC+1MH1v0enAGzKZKUfblg==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQAd089X8B5wHBAAnrM0MQK3to1iBitDzk+LYA==
        caps: [mon] allow profile bootstrap-osd

II. Advanced block storage management

1. Create a block device

Command: rbd create {image-name} --size {megabytes} --pool {pool-name} --image-format 2

Note: --image-format 2 creates a format 2 image; if it is omitted, the default is format 1. Snapshot protection is only supported on format 2 images. Format 1 is deprecated and format 2 is normally used; the example below takes the default just for demonstration.

Example:

root@node1:/home/ceph/ceph-cluster# rbd create zhangbo --size 2048 --pool rbd
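A format 2 image (needed later for snapshot protection and cloning) could be created the same way by adding --image-format 2; the image name zhangbo2 below is only illustrative:

root@node1:/home/ceph/ceph-cluster# rbd create zhangbo2 --size 2048 --pool rbd --image-format 2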

2. List block devices

Command: rbd ls {pool-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd ls rbd

zhangbo

3. Retrieve block device information

Command: rbd --image {image-name} info

         rbd info {image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd --image zhangbo info
rbd image 'zhangbo':
        size 2048 MB in 512 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.5e56.2ae8944a
        format: 1

root@node1:/home/ceph/ceph-cluster# rbd info zhangbo
rbd image 'zhangbo':
        size 2048 MB in 512 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.5e56.2ae8944a
        format: 1

4. Resize a block device

Command: rbd resize --image {image-name} --size {megabytes}

Example:

root@node1:/home/ceph/ceph-cluster# rbd resize --image zhangbo --size 4096
Resizing image: 100% complete...done.

root@node1:/home/ceph/ceph-cluster# rbd info zhangbo
rbd image 'zhangbo':
        size 4096 MB in 1024 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.5e56.2ae8944a
        format: 1

5. Delete a block device

Command: rbd rm {image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd rm zhangbo

Removing image: 100% complete...done.

root@node1:/home/ceph/ceph-cluster# rbd ls

6. Map a block device:

Command: rbd map {image-name} --pool {pool-name} --id {user-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd map zhangbo --pool rbd --id admin

7. Show mapped block devices

Command: rbd showmapped

Example:

root@node1:/home/ceph/ceph-cluster# rbd showmapped

id pool image   snap device
0  rbd  zhangbo -    /dev/rbd0

8. Unmap a block device:

Command: rbd unmap /dev/rbd/{pool-name}/{image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd unmap /dev/rbd/rbd/zhangbo

root@node1:/home/ceph/ceph-cluster# rbd showmapped

9. Format the device:

Command: mkfs.ext4 /dev/rbd0

Example:

root@node1:/home/ceph/ceph-cluster# mkfs.ext4 /dev/rbd0

mke2fs 1.42.9 (4-Feb-2014)

Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
262144 inodes, 1048576 blocks
52428 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

10. Mount

Command: mount /dev/rbd0 /mnt/{directory-name}

Example:

root@node1:/home/ceph/ceph-cluster# mount /dev/rbd0 /mnt/ceph-zhangbo/

root@node1:/home/ceph/ceph-cluster# df -h

Filesystem      Size  Used Avail Use% Mounted on
udev            989M  4.0K  989M   1% /dev
tmpfs           201M  1.1M  200M   1% /run
/dev/sda5        19G  4.0G   14G  23% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none           1001M   76K 1001M   1% /run/shm
none            100M   32K  100M   1% /run/user
/dev/sda1       9.3G   60M  8.8G   1% /boot
/dev/sda6        19G   67M   18G   1% /home
/dev/sdc1        27G  169M   27G   1% /var/lib/ceph/osd/ceph-2
/dev/rbd0       3.9G  8.0M  3.6G   1% /mnt/ceph-zhangbo

11. Configure automatic mounting at boot (Ceph maps and mounts the RBD block device automatically when the system starts)

vim /etc/ceph/rbdmap

{poolname}/{imagename} id=client,keyring=/etc/ceph/ceph.client.keyring

rbd/zhangbo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

vim /etc/fstab

/dev/rbd/rbd/zhangbo /mnt/ceph-zhangbo ext4 defaults,noatime,_netdev 0 0

(The filesystem type field must match how the device was formatted; ext4 in this example.)
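As a hedged check (assuming the rbdmap init script shipped with ceph-common on this release), the configuration can be exercised without a reboot:

root@node1:~# service rbdmap start          # maps every image listed in /etc/ceph/rbdmap
root@node1:~# rbd showmapped                # the image should appear again as /dev/rbd0
root@node1:~# mount -a                      # mounts the _netdev entry from /etc/fstab
root@node1:~# df -h /mnt/ceph-zhangbo       # confirms the filesystem is mounted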

12. Expand a block device

Command: rbd resize rbd/zhangbo --size 4096

         rbd resize --image zhangbo --size 4096

Online filesystem expansion is supported: resize2fs /dev/rbd0 (for the ext4 filesystem created above; an XFS filesystem would use xfs_growfs instead)
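A minimal sketch of the whole online-expansion sequence, assuming the image is still mapped to /dev/rbd0 and mounted at /mnt/ceph-zhangbo (the 8192 MB target size is only illustrative):

root@node1:~# rbd resize rbd/zhangbo --size 8192    # grow the RBD image
root@node1:~# resize2fs /dev/rbd0                   # grow the ext4 filesystem to fill the device, online
root@node1:~# df -h /mnt/ceph-zhangbo               # the mounted filesystem now reports the new size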

13. Complete workflow for using a block device (a consolidated sketch follows this list):

1. rbd create zhangbo --size 2048 --pool rbd

2. rbd map zhangbo --pool rbd --id admin

3. mkfs.ext4 /dev/rbd0

4. mount /dev/rbd0 /mnt/ceph-zhangbo/

5. Configure automatic mounting at boot

6. Expand the filesystem online:

rbd resize rbd/zhangbo --size 2048

resize2fs /dev/rbd0

7. umount /mnt/ceph-zhangbo

8. rbd unmap /dev/rbd/rbd/zhangbo

9. Remove the auto-mount entries added earlier

10. rbd rm zhangbo
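The same lifecycle as a single hedged shell sketch; the pool, image name, and mount point are the ones used throughout this article, and the 4096 MB resize target is only illustrative:

rbd create zhangbo --size 2048 --pool rbd      # 1. create the image
rbd map zhangbo --pool rbd --id admin          # 2. map it (appears here as /dev/rbd0)
mkfs.ext4 /dev/rbd0                            # 3. format
mkdir -p /mnt/ceph-zhangbo
mount /dev/rbd0 /mnt/ceph-zhangbo/             # 4. mount
# 5. add the rbdmap and fstab entries from item 11 for auto-mount at boot
rbd resize rbd/zhangbo --size 4096             # 6. grow the image...
resize2fs /dev/rbd0                            #    ...and the filesystem, online
umount /mnt/ceph-zhangbo                       # 7. unmount
rbd unmap /dev/rbd/rbd/zhangbo                 # 8. unmap
# 9. remove the rbdmap and fstab entries added in step 5
rbd rm zhangbo                                 # 10. delete the image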

III. Snapshots and clones

1. Create a snapshot:

Command: rbd --pool {pool-name} snap create --snap {snap-name} {image-name}

         rbd snap create {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap create rbd/zhangbo@zhangbo_snap

root@node1:~# rbd snap ls rbd/zhangbo
SNAPID NAME         SIZE
     2 zhangbo_snap 1024 MB

2. Roll back to a snapshot

Command: rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}

         rbd snap rollback {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap rollback rbd/zhangbo@zhangbo_snap

Rolling back to snapshot: 100% complete...done.

3. Purge snapshots (delete all snapshots of the image)

Command: rbd --pool {pool-name} snap purge {image-name}

         rbd snap purge {pool-name}/{image-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo
SNAPID NAME          SIZE
     2 zhangbo_snap  1024 MB
     3 zhangbo_snap1 1024 MB
     4 zhangbo_snap2 1024 MB
     5 zhangbo_snap3 1024 MB

root@node1:~# rbd snap purge rbd/zhangbo
Removing all snapshots: 100% complete...done.

root@node1:~# rbd snap ls rbd/zhangbo

root@node1:~#

4. Delete a snapshot (delete a specific snapshot)

Command: rbd snap rm {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo
SNAPID NAME          SIZE
    10 zhangbo_snap1 1024 MB
    11 zhangbo_snap2 1024 MB
    12 zhangbo_snap3 1024 MB

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap ls rbd/zhangbo
SNAPID NAME          SIZE
    10 zhangbo_snap1 1024 MB
    12 zhangbo_snap3 1024 MB

5. List snapshots:

Command: rbd --pool {pool-name} snap ls {image-name}

         rbd snap ls {pool-name}/{image-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo
SNAPID NAME          SIZE
    16 zhangbo_snap1 1024 MB
    17 zhangbo_snap2 1024 MB
    18 zhangbo_snap3 1024 MB

6. Protect a snapshot:

Command: rbd --pool {pool-name} snap protect --image {image-name} --snap {snapshot-name}

         rbd snap protect {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd snap protect rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2
rbd: snapshot 'zhangbo_snap2' is protected from removal.
2016-09-08 14:05:03.874498 7f35bddad7c0 -1 librbd: removing snapshot from header failed: (16) Device or resource busy

7. Unprotect a snapshot

Command: rbd --pool {pool-name} snap unprotect --image {image-name} --snap {snapshot-name}

         rbd snap unprotect {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd snap unprotect rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap ls rbd/zhangbo
SNAPID NAME          SIZE
    22 zhangbo_snap1 1024 MB
    24 zhangbo_snap3 1024 MB

8. Clone a snapshot (the snapshot must be protected before it can be cloned)

Note: snapshots are read-only, while a clone is a writable image based on a snapshot.

Command: rbd clone {pool-name}/{parent-image}@{snap-name} {pool-name}/{child-image-name}

Example:

root@node1:~# rbd clone rbd/zhangbo@zhangbo_snap2 rbd/zhangbo-snap-clone

root@node1:~# rbd ls

zhangbo

zhangbo-snap-clone

9. Create a layered snapshot and clone (see the example sequence after the commands)

Commands: rbd create zhangbo --size 1024 --image-format 2

          rbd snap create {pool-name}/{image-name}@{snap-name}

          rbd snap protect {pool-name}/{image-name}@{snapshot-name}

          rbd clone {pool-name}/{parent-image}@{snap-name} {pool-name}/{child-image-name}
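A hedged end-to-end example of the layering workflow, using the hypothetical names zhangbo2, zhangbo2_snap, and zhangbo2-clone (the parent must be a format 2 image for protection and cloning to work):

root@node1:~# rbd create zhangbo2 --size 1024 --image-format 2     # format 2 parent image
root@node1:~# rbd snap create rbd/zhangbo2@zhangbo2_snap           # snapshot the parent
root@node1:~# rbd snap protect rbd/zhangbo2@zhangbo2_snap          # protection is required before cloning
root@node1:~# rbd clone rbd/zhangbo2@zhangbo2_snap rbd/zhangbo2-clone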

10. List a snapshot's clones:

Command: rbd --pool {pool-name} children --image {image-name} --snap {snap-name}

         rbd children {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd children rbd/zhangbo@zhangbo_snap2

rbd/zhangbo-snap-clone


Reposted from: https://blog.51cto.com/11433696/1850780
