

Greenplum 6 Installation Guide (CentOS 7.X)

Published: 2023/12/14

I. Basic Concepts

Greenplum is a relational database designed for data warehouse workloads. Thanks to its well-designed architecture, it has clear advantages in data storage, high concurrency, high availability, linear scalability, response time, ease of use, and cost-effectiveness. Greenplum is a distributed database based on PostgreSQL with a shared-nothing architecture: each node controls its own host, operating system, memory, and storage, and nothing is shared between nodes.
Essentially, Greenplum is a relational database cluster: a logical database composed of several independent database services. Unlike Oracle RAC, this kind of cluster uses an MPP (Massively Parallel Processing) architecture. In contrast to MySQL, Oracle, and other relational databases, Greenplum is best understood as a distributed relational database.
For more information about Greenplum, see https://greenplum.org/

II. Installation Preparation

1.下載離線安裝包

https://github.com/greenplum-db/gpdb/releases/tag/6.1.0

2. Upload the package to the server, under /home/softs (or a directory of your choice)

3. Disable the firewall (all machines)

- iptables (CentOS 6.x):
  stop now: service iptables stop
  disable permanently: chkconfig iptables off
- firewalld (CentOS 7.x):
  stop now: systemctl stop firewalld
  disable permanently: systemctl disable firewalld

4. Disable SELinux (all machines)

[root@mdw ~]# vi /etc/selinux/config
Make sure it contains SELINUX=disabled.

5. Configure /etc/hosts (all machines)

This prepares for communication between the GP nodes. Change each host's hostname. A common naming convention is project_gp_role:
Master: dis_gp_mdw
Standby Master: dis_gp_smdw
Segment hosts: dis_gp_sdw1, dis_gp_sdw2, and so on
If the standby is also hosted on a segment host, name it dis_gp_sdw3_smdw.

[root@mdw ~]# vi /etc/hosts

Add each machine's IP and hostname, and make sure /etc/hosts on every machine contains:
192.168.xxx.xxx gp-mdw
192.168.xxx.xxx gp-sdw1
192.168.xxx.xxx gp-sdw2
192.168.xxx.xxx gp-sdw3-mdw2

6. Change the hostnames

CentOS 7.x: vi /etc/hostname
CentOS 6.x: vi /etc/sysconfig/network
Reboot the machine after the change.

7. Configure sysctl.conf (all machines)

vi /etc/sysctl.conf

kernel.shmall = 197951838  # echo $(expr $(getconf _PHYS_PAGES) / 2)
kernel.shmmax = 810810728448  # echo $(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 75  # vm.overcommit_ratio = (RAM - 0.026 * gp_vmem_rq) / RAM
                          # gp_vmem_rq = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
net.ipv4.ip_local_port_range = 10000 65535
kernel.sem = 500 2048000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.swappiness = 10
vm.zone_reclaim_mode = 0
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
# vm.min_free_kbytes = 487119  # awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo

# For hosts with more than 64GB of RAM, add these four lines:
vm.dirty_background_ratio = 0
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736  # 1.5GB
vm.dirty_bytes = 4294967296  # 4GB

# For hosts with 64GB of RAM or less, remove dirty_background_bytes and dirty_bytes and add these two lines instead:
vm.dirty_background_ratio = 3
vm.dirty_ratio = 10

Notes on selected parameters:

- vm.min_free_kbytes: may be set on systems with more than 64GB of RAM, though it is rarely configured. It ensures that PF_MEMALLOC allocations for network and storage drivers succeed, which matters most on machines with large memory; the default is usually too low on ordinary systems. The recommended value is about 3% of physical memory and can be computed with awk:
  awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo >> /etc/sysctl.conf
  Do not set vm.min_free_kbytes above 5% of system memory; doing so can cause out-of-memory conditions.
- file-max: the maximum number of file handles a process can have open at once; this directly limits the maximum number of concurrent connections.
- tcp_tw_reuse: set to 1 to allow sockets in the TIME-WAIT state to be reused for new TCP connections. This is useful on servers, which tend to accumulate many TIME-WAIT connections.
- tcp_keepalive_time: how often TCP sends keepalive messages when keepalive is enabled. The default is 7200 seconds, meaning the kernel only probes a TCP connection after it has been idle for 2 hours; a smaller value cleans up dead connections faster.
- tcp_fin_timeout: the maximum time a socket stays in the FIN-WAIT-2 state after the server actively closes the connection.
- tcp_max_tw_buckets: the maximum number of TIME_WAIT sockets the system allows; beyond this number, TIME_WAIT sockets are immediately cleared and a warning is printed. The default is 180000; too many TIME_WAIT sockets slow a web server down.
- tcp_max_syn_backlog: the maximum length of the queue of SYN requests during the TCP three-way handshake, 1024 by default. Raising it means that when the server (e.g. Nginx) is too busy to accept new connections promptly, Linux does not drop connection requests from clients.
- ip_local_port_range: the range of local ports used for UDP and TCP connections.
- net.ipv4.tcp_rmem: the minimum, default, and maximum size of the TCP receive buffer (used for the TCP receive sliding window).
- net.ipv4.tcp_wmem: the minimum, default, and maximum size of the TCP send buffer (used for the TCP send sliding window).
- netdev_max_backlog: when the NIC receives packets faster than the kernel can process them, a queue holds the backlog; this parameter sets that queue's maximum length.
- rmem_default / wmem_default: the default sizes of the kernel socket receive and send buffers.
- rmem_max / wmem_max: the maximum sizes of the kernel socket receive and send buffers.
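The shmall/shmmax values shown above are host-specific; the inline comments give the formulas. A minimal sketch (Linux-only) that prints the recommended values for the current host:

```shell
# Recommended Greenplum shared-memory settings, per the formulas above:
# shmall = half the physical pages, shmmax = half the physical RAM in bytes,
# min_free_kbytes = 3% of total RAM.
SHMALL=$(expr $(getconf _PHYS_PAGES) / 2)
SHMMAX=$(expr $(getconf _PHYS_PAGES) / 2 \* $(getconf PAGE_SIZE))
echo "kernel.shmall = $SHMALL"
echo "kernel.shmmax = $SHMMAX"
awk 'BEGIN {OFMT = "%.0f";} /MemTotal/ {print "vm.min_free_kbytes =", $2 * .03;}' /proc/meminfo
```

Append the output to /etc/sysctl.conf and run sysctl -p to apply.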

8. Configure resource limits (all machines)

Add the following parameters to /etc/security/limits.conf:

vi /etc/security/limits.conf
* soft nofile 524288
* hard nofile 524288
* soft nproc 131072
* hard nproc 131072
The "*" applies to all users; nproc is the maximum number of processes; nofile is the maximum number of open files.
RHEL / CentOS 6.x: also set nproc in /etc/security/limits.d/90-nproc.conf:
[root@mdw ~]# vi /etc/security/limits.d/90-nproc.conf
Make sure it contains: * soft nproc 131072
RHEL / CentOS 7.x: also set nproc in /etc/security/limits.d/20-nproc.conf:
[root@mdw ~]# vi /etc/security/limits.d/20-nproc.conf
Make sure it contains: * soft nproc 131072
ulimit -u shows the maximum number of processes per user (max user processes); verify that it returns 131072.
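After editing the limits files, log out and back in, then verify; a quick check sketch (the 131072 and 524288 values assume the settings above have been applied to the current session):

```shell
# Print the per-user limits in effect for the current shell session.
echo "max user processes (nproc): $(ulimit -u)"
echo "open files (nofile):        $(ulimit -n)"
```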

9. Check the character set

[root@mdw greenplum-db]# echo $LANG
en_US.UTF-8
If it is zh_CN.UTF-8, change it:
CentOS 6.x: /etc/sysconfig/i18n
CentOS 7.x: /etc/locale.conf

10. SSH connection threshold

The Greenplum management utilities gpexpand, gpinitsystem, and gpaddmirrors use SSH connections to perform their tasks. In a larger Greenplum cluster, the number of SSH connections these programs open can exceed the host's maximum threshold for unauthenticated connections. When that happens, you get the error: ssh_exchange_identification: Connection closed by remote host. To avoid this, update the MaxStartups and MaxSessions parameters in /etc/ssh/sshd_config (or /etc/sshd_config):
vi /etc/ssh/sshd_config
MaxStartups 300:30:1000
Restart sshd for the change to take effect:
service sshd restart

11. Synchronize the cluster clocks (NTP) (for reference; NTP is already configured in this environment)

To keep time consistent across the cluster, first edit /etc/ntp.conf on the master server and point it at your data center's NTP server. If there is none, set the master server's clock correctly first, then edit /etc/ntp.conf on the other nodes so they follow the master's time.
[root@mdw ~]# vi /etc/ntp.conf
On the master: delete the default server 1/2/3/4 entries at the top of the server list and replace them with server xxx (ask your IT staff for the company NTP server's IP; if there is none, use server 1.cn.pool.ntp.org).
On the segments:
server mdw prefer   # prefer the master node
server smdw         # then the standby node; if there is no standby, use the data center's NTP server
[root@mdw ~]# service ntpd restart   # restart ntp after the change

12. Create the gpadmin user (all machines)

Create the gpadmin user on every node to manage and run the GP cluster:
[root@mdw ~]# groupadd gpadmin
[root@mdw ~]# useradd gpadmin -g gpadmin -s /bin/bash
[root@mdw ~]# passwd gpadmin
Password: gpadmin

III. Cluster Installation and Deployment

1. Install dependencies (all machines, as root)

[root@mdw ~]# yum install -y zip unzip openssh-clients ed ntp net-tools perl perl-devel perl-ExtUtils* mlocate lrzsz parted apr apr-util bzip2 krb5-devel libevent libyaml rsync

2. Run the installer (as root)

Run the installer; it installs to /usr/local/ by default:
[root@mdw ~]# rpm -ivh greenplum-db-6.12.1-rhel7-x86_64.rpm
After installation, /usr/local contains greenplum-db-6.12.1 and a symlink to it named greenplum-db. Because of permission issues, it is recommended to move the installation to /home/gpadmin:
1. Enter the parent directory of the installation:
   cd /usr/local
2. Move the installation directory to /home/gpadmin:
   mv greenplum-db-6.12.1 /home/gpadmin
3. Remove the old symlink:
   /bin/rm -r greenplum-db
4. Create a new symlink under /home/gpadmin:
   ln -s /home/gpadmin/greenplum-db-6.12.1 /home/gpadmin/greenplum-db
5. Edit greenplum_path.sh (important; this may be unnecessary for greenplum-db-6.12.1):
   vi /home/gpadmin/greenplum-db/greenplum_path.sh
   Change GPHOME=/usr/local/greenplum-db-6.12.1
   to GPHOME=/home/gpadmin/greenplum-db
6. Give ownership of the files to gpadmin:
   cd /home
   chown -R gpadmin:gpadmin /home/gpadmin

3. Set up cluster trust and passwordless login (as root)

Generate a key pair. Starting with GP 6.x, gpssh-exkeys no longer generates keys automatically, so generate one manually:
cd /home/gpadmin/greenplum-db
[root@mdw greenplum-db]# ssh-keygen -t rsa
Press Enter at every prompt to accept the defaults.

4. Copy this machine's public key to each node's authorized_keys file

[root@mdw greenplum-db]# ssh-copy-id gp-sdw1
[root@mdw greenplum-db]# ssh-copy-id gp-sdw2
[root@mdw greenplum-db]# ssh-copy-id gp-sdw3-mdw2

5. Use gpssh-exkeys to enable n-to-n passwordless login

vi all_host   # add every hostname to the file
gp-mdw
gp-sdw1
gp-sdw2
gp-sdw3-mdw2
[root@mdw greenplum-db]# source /home/gpadmin/greenplum-db/greenplum_path.sh
[root@mdw greenplum-db]# gpssh-exkeys -f all_host

6. Sync the master configuration to every host

Set up passwordless login for the gpadmin user:
[root@mdw greenplum-db-6.2.1]# su - gpadmin
[gpadmin@mdw ~]$ source /home/gpadmin/greenplum-db/greenplum_path.sh
[gpadmin@mdw ~]$ ssh-keygen -t rsa
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw1
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw2
[gpadmin@mdw greenplum-db]$ ssh-copy-id gp-sdw3-mdw2
[gpadmin@mdw greenplum-db]$ mkdir gpconfigs
[gpadmin@mdw greenplum-db]$ cd gpconfigs
[gpadmin@mdw greenplum-db]$ vi all_hosts   # add every hostname
gp-mdw
gp-sdw1
gp-sdw2
gp-sdw3-mdw2
[gpadmin@mdw ~]$ gpssh-exkeys -f /home/gpadmin/gpconfigs/all_hosts
[gpadmin@mdw greenplum-db]$ vi /home/gpadmin/gpconfigs/seg_hosts   # add every data-node hostname
gp-sdw1
gp-sdw2
gp-sdw3-mdw2

7. Set the Greenplum environment variables for the gpadmin user (as gpadmin)

Add the GP installation directory and environment to the user's environment:
vi .bashrc
source /home/gpadmin/greenplum-db/greenplum_path.sh

8. Copy the system settings to the other nodes (skip if every machine was already configured earlier)

Example:
[gpadmin@mdw gpconfigs]$ exit
[root@mdw ~]# source /home/gpadmin/greenplum-db/greenplum_path.sh
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/hosts root@=:/etc/hosts
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/security/limits.conf root@=:/etc/security/limits.conf
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/sysctl.conf root@=:/etc/sysctl.conf
[root@mdw ~]# gpscp -f /home/gpadmin/gpconfigs/seg_hosts /etc/security/limits.d/20-nproc.conf root@=:/etc/security/limits.d/20-nproc.conf
[root@mdw ~]# gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'sysctl -p'
[root@mdw ~]# gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'reboot'

9. Install on the cluster nodes

Emulating the gpseginstall script (run as gpadmin). GP 6.x no longer ships the gpseginstall command; the following reproduces that command's main steps to deploy the segments.
[root@gp-mdw gpadmin]# su - gpadmin
[gpadmin@gp-mdw ~]$ cd /home/gpadmin
[gpadmin@gp-mdw ~]$ tar -cf gp6.tar greenplum-db-6.12.1/
[gpadmin@gp-mdw ~]$ vi /home/gpadmin/gpconfigs/gpseginstall_hosts
Add:
gp-sdw1
gp-sdw2
gp-sdw3-smdw

10. Distribute the tarball to the segments

[gpadmin@gp-mdw ~]$ gpscp -f /home/gpadmin/gpconfigs/gpseginstall_hosts gp6.tar gpadmin@=:/home/gpadmin

11. Use gpssh to connect to each segment and run the commands

[gpadmin@mdw gpconfigs]$ gpssh -f /home/gpadmin/gpconfigs/gpseginstall_hosts
tar -xf gp6.tar
ln -s greenplum-db-6.12.1 greenplum-db
exit
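The tar-based copy in steps 9 to 11 can be sanity-checked locally; a small sketch of the pack/unpack round trip, using throwaway scratch directories instead of the real installation:

```shell
# Pack a directory, extract it elsewhere, and recreate the version symlink,
# mirroring what steps 9-11 do with greenplum-db-6.12.1 and gp6.tar.
SRC=$(mktemp -d) && DST=$(mktemp -d)
mkdir -p "$SRC/greenplum-db-6.12.1"
echo "GPHOME placeholder" > "$SRC/greenplum-db-6.12.1/greenplum_path.sh"
tar -C "$SRC" -cf "$SRC/gp6.tar" greenplum-db-6.12.1/   # pack on the master
tar -C "$DST" -xf "$SRC/gp6.tar"                        # what each segment runs
ln -s "$DST/greenplum-db-6.12.1" "$DST/greenplum-db"    # recreate the symlink
ls "$DST/greenplum-db/"
```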

12. Distribute the environment file to the other nodes

[gpadmin@mdw gpconfigs]$ exit
[root@mdw greenplum-db-6.2.1]# su - gpadmin
[gpadmin@mdw ~]$ cd gpconfigs
[gpadmin@mdw gpconfigs]$ vi seg_hosts   # add every segment hostname to the file
gp-sdw1
gp-sdw2
gp-sdw3-smdw
[gpadmin@mdw gpconfigs]$ gpscp -f /home/gpadmin/gpconfigs/seg_hosts /home/gpadmin/.bashrc gpadmin@=:/home/gpadmin/.bashrc

13. Create the cluster data directories (as root)

1. Create the master data directory:
mkdir -p /data/master
chown -R gpadmin:gpadmin /data
source /home/gpadmin/greenplum-db/greenplum_path.sh
If there is a standby node, also run the following two commands (adjust the hostname gp-sdw3-mdw2 as needed):
gpssh -h gp-sdw3-mdw2 -e 'mkdir -p /data/master'
gpssh -h gp-sdw3-mdw2 -e 'chown -R gpadmin:gpadmin /data'
2. Create the segment data directories:
source /home/gpadmin/greenplum-db/greenplum_path.sh
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/p1'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/p2'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/m1'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'mkdir -p /data/m2'
gpssh -f /home/gpadmin/gpconfigs/seg_hosts -e 'chown -R gpadmin:gpadmin /data'
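The gpssh calls above create one directory per call; locally, the same segment layout can be sketched with a loop (using a temporary directory in place of /data, since the real path requires root):

```shell
# Create the two primary (p1, p2) and two mirror (m1, m2) data directories.
BASE=$(mktemp -d)   # stand-in for /data on a segment host
for d in p1 p2 m1 m2; do
    mkdir -p "$BASE/$d"
done
ls "$BASE"
```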

IV. Cluster Initialization

1. Write the initialization configuration file (as gpadmin)

Copy the configuration file template:
su - gpadmin
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config

2. Adjust the parameters as needed

vi /home/gpadmin/gpconfigs/gpinitsystem_config
Note: to specify PORT_BASE, review the port range set in the net.ipv4.ip_local_port_range parameter in /etc/sysctl.conf.
The main parameters to change:
# primary data directories
declare -a DATA_DIRECTORY=(/data/p1 /data/p2)
# master hostname
MASTER_HOSTNAME=gp-mdw
# master data directory
MASTER_DIRECTORY=/data/master
# remove the leading # from the mirror port to enable mirrors
MIRROR_PORT_BASE=7000
# remove the leading # from the mirror data directories to enable mirrors
declare -a MIRROR_DATA_DIRECTORY=(/data/m1 /data/m2)

3. Initialize the cluster (as gpadmin)

Run the script:
gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config --locale=C -h /home/gpadmin/gpconfigs/gpseginstall_hosts --mirror-mode=spread
Note: spread is the spread distribution policy, which is only allowed when the number of hosts is greater than the number of segment instances per host. If --mirror-mode is not specified, the default group policy is used, which places all of a host's mirrors on a single other host; with more than one segment instance per host, spread ensures that when a host goes down, its mirrors are not all concentrated on one machine, which would otherwise become a performance bottleneck.
Log from a successful installation:
. . . .
server shutting down
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[INFO]:-Terminating processes for segment /data/master/gpseg-1
20201220:12:06:59:017597 gpstop:gp-mdw:gpadmin-[ERROR]:-Failed to kill processes for segment /data/master/gpseg-1: ([Errno 3] No such process)
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /data/master/gpseg-1
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.1.0 build commit:6788ca8c13b2bd6e8976ccffea07313cbab30560'
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
20201220:12:07:00:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Setting new master era
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Master Started...
20201220:12:07:01:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Shutting down master
20201220:12:07:02:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
..
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Process results...
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-   Successful segment starts                                            = 6
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Successfully started 6 of 6 segment instances
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-----------------------------------------------------
20201220:12:07:04:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Starting Master instance gp-mdw directory /data/master/gpseg-1
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Command pg_ctl reports Master gp-mdw instance active
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-No standby master configured.  skipping...
20201220:12:07:05:017622 gpstart:gp-mdw:gpadmin-[INFO]:-Database successfully started
20201220:12:07:05:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20201220:12:07:06:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20201220:12:07:06:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
......
20201220:12:07:07:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
.................................................................
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Parallel process exit status
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as completed           = 6
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as killed              = 0
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Total processes marked as failed              = 0
20201220:12:08:13:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Log file scan check passed
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Greenplum Database instance successfully created
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To complete the environment configuration, please
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   or, use -d /data/master/gpseg-1 option for the Greenplum scripts
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-   Example gpstate -d /data/master/gpseg-1
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20201220.log
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Review options for gpinitstandby
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-The Master /data/master/gpseg-1/pg_hba.conf post gpinitsystem
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-new array must be explicitly added to this file
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20201220:12:08:14:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-located in the /home/gpadmin/greenplum-db/docs directory
20201220:12:08:15:012260 gpinitsystem:gp-mdw:gpadmin-[INFO]:-------------------------------------------------------

4. Rolling back a failed installation

If the installation fails partway through, the log prompts you to run bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_* to roll back; run that script. For example:

20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[FATAL]:-Unknown host gpzq-sh-mb: ping: unknown host gpzq-sh-mb unknown host Script Exiting!
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[WARN]:-Run command bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20191218_203938 to remove these changes
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[INFO]:-Start Function BACKOUT_COMMAND
20191218:20:39:53:011405 gpinitsystem:mdw:gpadmin-[INFO]:-End Function BACKOUT_COMMAND

[gpadmin@mdw gpAdminLogs]$ ls
backout_gpinitsystem_gpadmin_20191218_203938  gpinitsystem_20191218.log
[gpadmin@mdw gpAdminLogs]$ bash backout_gpinitsystem_gpadmin_20191218_203938
Stopping Master instance
waiting for server to shut down.... done
server stopped
Removing Master log file
Removing Master lock files
Removing Master data directory files

If the rollback does not clean everything up, run the following and then reinstall:
pg_ctl -D /data/master/gpseg-1 stop
rm -f /tmp/.s.PGSQL.5432 /tmp/.s.PGSQL.5432.lock
On the master node:
rm -rf /data/master/gpseg*
On every data node:
rm -rf /data/p1/gpseg*
rm -rf /data/p2/gpseg*

5. Set environment variables after a successful installation (as gpadmin)

Edit the gpadmin user's environment variables and add (important):
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
Beyond that, the following are commonly added as well (optional):
export PGPORT=5432        # adjust to your environment
export PGUSER=gpadmin     # adjust to your environment
export PGDATABASE=gpdw    # adjust to your environment
Since source /home/gpadmin/greenplum-db/greenplum_path.sh was already added earlier, the change here is:
vi .bashrc
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1

6. Post-installation configuration

Log in with psql and set a password (as gpadmin):
psql -h hostname -p port -d database -U user -W password
-h is followed by the master or segment hostname
-p is followed by the master or segment port
-d is followed by the database name
These parameters can be set in the user's environment variables; on Linux the gpadmin user can log in without a password:
psql -h 127.0.0.1 -p 5432 -d database -U gpadmin
Example of logging in with psql and setting the gpadmin password:
psql -d postgres
alter user gpadmin encrypted password 'gpadmin';
su gpadmin
psql -p 5432
Change the database password:
alter role gpadmin with password '123456';
Quit: \q
List the databases: \l
                              List of databases
   Name    |  Owner  | Encoding | Collate | Ctype |  Access privileges
-----------+---------+----------+---------+-------+---------------------
 postgres  | gpadmin | UTF8     | C       | C     |
 template0 | gpadmin | UTF8     | C       | C     | =c/gpadmin          +
           |         |          |         |       | gpadmin=CTc/gpadmin
 template1 | gpadmin | UTF8     | C       | C     | =c/gpadmin          +
           |         |          |         |       | gpadmin=CTc/gpadmin
(3 rows)

7. Connect to GP from a client

A brief overview:
Client authentication is controlled by a configuration file (usually named pg_hba.conf) stored in the database cluster's data directory. HBA stands for "host-based authentication". When initdb initializes the data directory, it installs a default pg_hba.conf. The authentication configuration file can also be placed elsewhere.

Configure pg_hba.conf:
vi /data/master/gpseg-1/pg_hba.conf
Add the line:
host all all 0.0.0.0/0 md5

8. Initialize the standby node

[gpadmin@gp-mdw ~]$ gpinitstandby -s gp-sdw3-smdw
20210311:19:25:38:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20210311:19:25:38:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Checking for data directory /data/master/gpseg-1 on gp-sdw3-smdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:------------------------------------------------------
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:------------------------------------------------------
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master hostname               = gp-mdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master data directory         = /data/master/gpseg-1
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = gp-sdw3-smdw
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /data/master/gpseg-1
20210311:19:25:39:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
Do you want to continue with standby master initialization? Yy|Nn (default=N):
> y
20210311:19:25:42:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-The packages on gp-sdw3-smdw are consistent.
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20210311:19:25:43:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20210311:19:25:45:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20210311:19:25:49:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Starting standby master
20210311:19:25:49:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Checking if standby master is running on host: gp-sdw3-smdw  in directory: /data/master/gpseg-1
20210311:19:25:53:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20210311:19:25:54:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20210311:19:25:54:015347 gpinitstandby:gp-mdw:gpadmin-[INFO]:-Successfully created standby master on gp-sdw3-smdw
