

Building a GFS Share with RHCS (Part 2)


In the previous part we built the GFS share through the graphical interface; this part walks through the same setup from the command line.

1.1. Basic System Configuration

All five nodes use the same configuration.

  • Configure the /etc/hosts file

    # vi /etc/hosts

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

    192.168.1.130 t-lg-kvm-001

    192.168.1.132 t-lg-kvm-002

    192.168.1.134 t-lg-kvm-003

    192.168.1.138 t-lg-kvm-005

    192.168.1.140 t-lg-kvm-006
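
    A quick loop can confirm that name resolution and connectivity work from each node (a minimal sketch; the host list mirrors the /etc/hosts entries above):

    for h in t-lg-kvm-001 t-lg-kvm-002 t-lg-kvm-003 t-lg-kvm-005 t-lg-kvm-006; do
        ping -c 1 -W 2 "$h" >/dev/null && echo "$h ok" || echo "$h FAILED"
    done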

  • Network settings

    Disable NetworkManager:

    # service NetworkManager stop

    # chkconfig NetworkManager off

  • Disable SELinux

    Set SELINUX=disabled in /etc/selinux/config:

    # cat /etc/selinux/config


    # This file controls the state of SELinux on the system.

    # SELINUX= can take one of these three values:

    #     enforcing - SELinux security policy is enforced.

    #     permissive - SELinux prints warnings instead of enforcing.

    #     disabled - No SELinux policy is loaded.

    SELINUX=disabled

    # SELINUXTYPE= can take one of these two values:

    #     targeted - Targeted processes are protected,

    #     mls - Multi Level Security protection.

    SELINUXTYPE=targeted

    Make the change take effect immediately:

    # setenforce 0
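
    The runtime state can then be confirmed with getenforce; it should report Permissive after setenforce 0, and Disabled after the next reboot once the config file takes effect:

    # getenforce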

  • Configure time synchronization

    Time synchronization is already configured on all five nodes.

1.2. Configure yum

    The GFS2-related packages ship on the CentOS installation media, so set the media up as a repository as follows:

    1. Mount the ISO files on 192.168.1.130:

    # mount -o loop /opt/CentOS-6.5-x86_64-bin-DVD1.iso /var/www/html/DVD1

    # mount -o loop /opt/CentOS-6.5-x86_64-bin-DVD2.iso /var/www/html/DVD2

    2. On 192.168.1.130, edit /etc/yum.repos.d/CentOS-Media.repo:

    # vi /etc/yum.repos.d/CentOS-Media.repo

    [c6-media]

    name=CentOS-$releasever - Media

    baseurl=file:///var/www/html/DVD1

            file:///var/www/html/DVD2

    gpgcheck=0

    enabled=1

    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

    3. Start the httpd service on 192.168.1.130 so the other compute nodes can reach the repository:

    # service httpd start
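
    To confirm the repository is reachable over HTTP from another node, one quick check (assuming curl is installed and that the DVD tree carries repodata at its root, as CentOS media normally does) is:

    # curl -s http://192.168.1.130/DVD1/repodata/repomd.xml | head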

    4. Configure yum on the other four compute nodes:

    # vi /etc/yum.repos.d/CentOS-Media.repo

    [c6-media]

    name=CentOS-$releasever - Media

    baseurl=http://192.168.1.130/DVD1

            http://192.168.1.130/DVD2

    gpgcheck=0

    enabled=1

    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
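
    After saving the repo file, verify on each node that the repository resolves; c6-media should appear in the enabled list:

    # yum clean all
    # yum repolist enabled | grep c6-media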

1.3. Install the GFS2-Related Software

1.3.1. Install the software packages

    Run the following commands on each of the five compute nodes to install the GFS2 software.

    Install cman and rgmanager:

    # yum install -y rgmanager cman

    Install clvm:

    # yum install -y lvm2-cluster

    Install gfs2:

    # yum install -y gfs*
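
    To verify the packages landed on a node, query the rpm database (gfs2-utils is the main package the gfs* wildcard matches on CentOS 6):

    # rpm -q cman rgmanager lvm2-cluster gfs2-utils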

1.3.2. Configure the firewall

    Run the following commands on each of the five compute nodes to open the cluster ports (UDP 5404 and 5405 are used by corosync for cluster communication, TCP 21064 by the DLM):

    # iptables -A INPUT -p udp -m udp --dport 5404 -j ACCEPT

    # iptables -A INPUT -p udp -m udp --dport 5405 -j ACCEPT

    # iptables -A INPUT -p tcp -m tcp --dport 21064 -j ACCEPT

    # service iptables save

    Once the above is done, rebooting the compute nodes is recommended; otherwise the cman service may fail to start.
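
    After the reboot, a quick check confirms the saved rules are back in place:

    # iptables -L INPUT -n | egrep '5404|5405|21064'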

1.4. Configure the cman and rgmanager Cluster

    The cluster only needs to be configured on one compute node; the configuration is then synchronized to the others. For example, on 192.168.1.130:

    1. Create the cluster.

    On 192.168.1.130, run:

    root@t-lg-kvm-001:/# ccs_tool create kvmcluster

    2. Add the cluster nodes.

    There are six compute nodes in total, but one is temporarily out of use because of a NIC problem, so only five nodes are configured here. Add them to the cluster on 192.168.1.130:

    root@t-lg-kvm-001:/# ccs_tool addnode -n 1 t-lg-kvm-001
    root@t-lg-kvm-001:/# ccs_tool addnode -n 2 t-lg-kvm-002
    root@t-lg-kvm-001:/# ccs_tool addnode -n 3 t-lg-kvm-003
    root@t-lg-kvm-001:/# ccs_tool addnode -n 4 t-lg-kvm-005
    root@t-lg-kvm-001:/# ccs_tool addnode -n 5 t-lg-kvm-006

    View the cluster:

    root@t-lg-kvm-001:/root# ccs_tool lsnode

    Cluster name: kvmcluster, config_version: 24

    Nodename                        Votes Nodeid Fencetype
    t-lg-kvm-001                        1      1
    t-lg-kvm-002                        1      2
    t-lg-kvm-003                        1      3
    t-lg-kvm-005                        1      4
    t-lg-kvm-006                        1      5

    3. Synchronize the configuration file from 192.168.1.130 to the other nodes:

    root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.132:/etc/cluster/
    root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.134:/etc/cluster/
    root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.138:/etc/cluster/
    root@t-lg-kvm-001:/# scp /etc/cluster/cluster.conf 192.168.1.140:/etc/cluster/
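
    Equivalently, the four copies can be written as one loop run from 192.168.1.130:

    for ip in 192.168.1.132 192.168.1.134 192.168.1.138 192.168.1.140; do
        scp /etc/cluster/cluster.conf "$ip":/etc/cluster/
    done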

    4. Start the cman service on every node.

    On all five compute nodes, run:

    # service cman start
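
    Once cman is running everywhere, membership and quorum can be checked from any node with cman_tool, which ships with the cman package:

    # cman_tool status
    # cman_tool nodes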

    The cluster configuration is now complete; next, configure clvm.

1.5. Configure CLVM

  • Enable clustered LVM

    Run the following command on every node in the cluster to enable clustered LVM:

    # lvmconf --enable-cluster

    Verify that clustered LVM is enabled:

    # cat /etc/lvm/lvm.conf | grep "locking_type = 3"

    locking_type = 3

    If the command returns locking_type = 3, clustered LVM is enabled.

  • Start the clvmd service

    Start the clvmd service on every node:

    # service clvmd start

  • Create the LVM volumes on a cluster node

    This step only needs to run on one node, e.g. on 192.168.1.130.

    Inspect the shared storage:

    # fdisk -l

    Disk /dev/sda: 599.0 GB, 598999040000 bytes
    255 heads, 63 sectors/track, 72824 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000de0e7

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          66      524288   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2              66       72825   584434688   8e  Linux LVM

    Disk /dev/mapper/vg01-lv01: 53.7 GB, 53687091200 bytes
    255 heads, 63 sectors/track, 6527 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/vg01-lv_swap: 537.7 GB, 537676218368 bytes
    255 heads, 63 sectors/track, 65368 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/sdb: 1073.7 GB, 1073741824000 bytes
    255 heads, 63 sectors/track, 130541 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    (identical blocks follow for /dev/sdc, /dev/sdd, /dev/sde, /dev/sdf and /dev/sdg)

    Disk /dev/mapper/vg01-lv_bmc: 5368 MB, 5368709120 bytes
    255 heads, 63 sectors/track, 652 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    There are six LUNs in total, 1 TB each.

    Create the clustered physical volumes:

    root@t-lg-kvm-001:/root# pvcreate /dev/sdb
    root@t-lg-kvm-001:/root# pvcreate /dev/sdc
    root@t-lg-kvm-001:/root# pvcreate /dev/sdd
    root@t-lg-kvm-001:/root# pvcreate /dev/sde
    root@t-lg-kvm-001:/root# pvcreate /dev/sdf
    root@t-lg-kvm-001:/root# pvcreate /dev/sdg

    Create the clustered volume group:

    root@t-lg-kvm-001:/root# vgcreate kvmvg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
      Clustered volume group "kvmvg" successfully created

    root@t-lg-kvm-001:/root# vgs
      VG    #PV #LV #SN Attr   VSize   VFree
      kvmvg   6   0   0 wz--nc   5.86t 5.86t
      vg01    1   3   0 wz--n- 557.36g 1.61g

    Create the clustered logical volume:

    root@t-lg-kvm-001:/root# lvcreate -L 5998G -n kvmlv kvmvg
      Logical volume "kvmlv" created

    root@t-lg-kvm-001:/root# lvs
      LV      VG    Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
      kvmlv   kvmvg -wi-a-----   5.86t
      lv01    vg01  -wi-ao----  50.00g
      lv_bmc  vg01  -wi-ao----   5.00g
      lv_swap vg01  -wi-ao---- 500.75g

    The clustered logical volume is now created. A volume created on one node is visible on all the others, so log in to the other nodes and confirm with lvs that the volume appears; a scripted version of this check follows below.
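
    If passwordless ssh between the nodes is available (an assumption; otherwise simply log in to each node by hand), the check can be scripted from 192.168.1.130:

    for ip in 192.168.1.132 192.168.1.134 192.168.1.138 192.168.1.140; do
        echo "== $ip =="
        ssh "$ip" lvs kvmvg
    done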

1.6. Configure GFS2

    1. Format the logical volume with the cluster filesystem.

    This only needs to run on one machine, e.g. on 192.168.1.130:

    root@t-lg-kvm-001:/root# mkfs.gfs2 -j 7 -p lock_dlm -t kvmcluster:sharedstorage /dev/kvmvg/kvmlv

    This will destroy any data on /dev/kvmvg/kvmlv.
    It appears to contain: symbolic link to `../dm-3'

    Are you sure you want to proceed? [y/n] y

    Device:                    /dev/kvmvg/kvmlv
    Blocksize:                 4096
    Device Size                5998.00 GB (1572339712 blocks)
    Filesystem Size:           5998.00 GB (1572339710 blocks)
    Journals:                  7
    Resource Groups:           7998
    Locking Protocol:          "lock_dlm"
    Lock Table:                "kvmcluster:sharedstorage"
    UUID:                      39f35f4a-e42a-164f-9438-967679e48f9f
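
    Note that -j 7 creates seven journals although only five nodes will mount the filesystem: each node that mounts a GFS2 filesystem needs its own journal, so the two spare journals leave headroom for adding nodes. If more are ever needed, journals can be added while the filesystem is mounted (a sketch; the count of 2 is only an example):

    # gfs2_jadd -j 2 /openstack/instances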

    2. Mount the cluster filesystem at /openstack/instances.

    The mount command must be run on every node in the cluster:

    # mount -t gfs2 /dev/kvmvg/kvmlv /openstack/instances/

    Check the mount:

    # df -h

    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/vg01-lv01     50G   12G   35G  26% /
    tmpfs                    379G   29M  379G   1% /dev/shm
    /dev/mapper/vg01-lv_bmc  5.0G  138M  4.6G   3% /bmc
    /dev/sda1                504M   47M  433M  10% /boot
    /dev/mapper/kvmvg-kvmlv  5.9T  906M  5.9T   1% /openstack/instances

    Make the mount persistent across reboots:

    # echo "/dev/kvmvg/kvmlv /openstack/instances gfs2 defaults 0 0" >> /etc/fstab

    Start the rgmanager service:

    # service rgmanager start

    Enable the services at boot:

    # chkconfig clvmd on
    # chkconfig cman on
    # chkconfig rgmanager on
    # chkconfig gfs2 on
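
    A quick way to confirm all four services are registered for boot on a node:

    # chkconfig --list | egrep 'cman|clvmd|rgmanager|gfs2'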

    3. Set permissions on the mount directory.

    Because the mounted directory stores OpenStack virtual machines, its ownership must be set to nova:nova.

    On any node in the cluster, run:

    # chown -R nova:nova /openstack/instances/

    Check on each node that the permissions are correct:

    # ls -lh /openstack/

    total 4.0K

    drwxr-xr-x 7 nova nova 3.8K May 26 14:12 instances


    Reposted from: https://blog.51cto.com/3402313/1656136
