

Notes on Deploying Spark Cluster HA Based on Zookeeper (repost)

Published: 2025/7/14

Original post: Spark Cluster HA Deployment Notes Based on Zookeeper

1. Environment
(1) OS: RHEL 6.2, 64-bit
(2) Two nodes: spark1 (192.168.232.147) and spark2 (192.168.232.152)
(3) A Hadoop 2.2 cluster is already installed on both nodes
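Every step below addresses the nodes by hostname, so spark1 and spark2 must resolve on both machines. A minimal sketch of the name mappings, using the IPs listed above (written to a local file here so it can be inspected before being appended to /etc/hosts as root):

```shell
# Hostname mappings used throughout this walkthrough (IPs from section 1).
# Append to /etc/hosts on BOTH nodes, e.g.:  cat hosts-fragment >> /etc/hosts
cat > hosts-fragment <<'EOF'
192.168.232.147 spark1
192.168.232.152 spark2
EOF
```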
2. Installing Zookeeper
(1) Download Zookeeper: http://apache.claz.org/zookeeper ... keeper-3.4.5.tar.gz
(2) Extract it into /root/install/
(3) Create two directories, one for data and one for logs

(4) Configure: go into the conf directory, rename zoo_sample.cfg to zoo.cfg (this step is required; Zookeeper will not read zoo_sample.cfg), and add the following:

```
dataDir=/root/install/zookeeper-3.4.5/data
dataLogDir=/root/install/zookeeper-3.4.5/logs
server.1=spark1:2888:3888
server.2=spark2:2888:3888
```
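The four lines above are added on top of the settings already shipped in zoo_sample.cfg. As a sketch, the complete zoo.cfg could be generated in one step (tickTime/initLimit/syncLimit/clientPort below are the stock values from zoo_sample.cfg; the file is written to ./zoo.cfg here rather than the conf directory):

```shell
# Sketch: generate a complete zoo.cfg in one step. The article places this
# at /root/install/zookeeper-3.4.5/conf/zoo.cfg.
cat > zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/root/install/zookeeper-3.4.5/data
dataLogDir=/root/install/zookeeper-3.4.5/logs
# server.N=host:peerPort:electionPort -- N must match that node's myid file
server.1=spark1:2888:3888
server.2=spark2:2888:3888
EOF
```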

(5) In /root/install/zookeeper-3.4.5/data, create a file named myid containing the value 1

```
cd /root/install/zookeeper-3.4.5/data
echo 1 > myid   # note the space: "echo 1>myid" is a stdout redirect and writes an empty line
```

(6) Copy the entire /root/install/zookeeper-3.4.5 directory to the other node

```
scp -r /root/install/zookeeper-3.4.5 root@spark2:/root/install/
```

(7) Log in to spark2 and change the value in its myid file to 2

```
cd /root/install/zookeeper-3.4.5/data
echo 2 > myid
```
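A quick local demo of why the space in `echo 1 > myid` matters (runs in /tmp, nothing cluster-specific):

```shell
# Demo in a scratch directory: "1>" without a space is a redirect, not an argument.
mkdir -p /tmp/myid-demo && cd /tmp/myid-demo

echo 1>myid      # looks right, but "1>" redirects stdout: myid gets only a blank line
wc -c < myid     # 1 byte: just the newline

echo 1 > myid    # correct: myid now contains "1"
cat myid
```

Zookeeper would fail to join the quorum with an empty myid, so it is worth checking the file content after writing it.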

(8) Start Zookeeper on both spark1 and spark2

```
cd /root/install/zookeeper-3.4.5
bin/zkServer.sh start
```

(9) Check that the process is actually running

```
[root@spark2 zookeeper-3.4.5]# bin/zkServer.sh start
JMX enabled by default
Using config: /root/install/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@spark2 zookeeper-3.4.5]# jps
2490 Jps
2479 QuorumPeerMain
```
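`jps` only confirms the process exists; `bin/zkServer.sh status` additionally reports whether the node is the quorum leader or a follower. Since that needs a live ensemble, here is a hedged sketch of extracting the role, exercised on sample text modeled on zkServer.sh's output format:

```shell
# Sketch: pull the quorum role out of `zkServer.sh status` output.
# On a live node: bin/zkServer.sh status 2>/dev/null | zk_mode
zk_mode() { sed -n 's/^Mode: //p'; }

# Illustrative sample only (format modeled on zkServer.sh status):
sample='JMX enabled by default
Using config: /root/install/zookeeper-3.4.5/bin/../conf/zoo.cfg
Mode: follower'

printf '%s\n' "$sample" | zk_mode   # -> follower
```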

3. Configuring Spark HA
(1) Go to Spark's conf directory and edit spark-env.sh as follows:

```
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=spark1:2181,spark2:2181 -Dspark.deploy.zookeeper.dir=/spark"
export JAVA_HOME=/root/install/jdk1.7.0_21
#export SPARK_MASTER_IP=spark1
#export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
```
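The three `-D` properties are what turn on HA, so it is worth being explicit about each one. A restated sketch of the same file with comments (written to ./spark-env.sh here; all values are exactly those above):

```shell
# Sketch: the same spark-env.sh, annotated.
cat > spark-env.sh <<'EOF'
# recoveryMode=ZOOKEEPER -> masters elect a leader through ZK instead of
#                           running as a single fixed master
# zookeeper.url          -> the ZK ensemble (clientPort 2181)
# zookeeper.dir          -> znode under which Spark stores recovery state
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=spark1:2181,spark2:2181 -Dspark.deploy.zookeeper.dir=/spark"
export JAVA_HOME=/root/install/jdk1.7.0_21
# SPARK_MASTER_IP/PORT stay commented out: with ZK election there is no
# single fixed master address.
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
EOF
```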

(2) Distribute this configuration file to every node

```
scp spark-env.sh root@spark2:/root/install/spark-1.0/conf/
```

(3) Start the Spark cluster

```
[root@spark1 spark-1.0]# sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-spark1.out
spark1: starting org.apache.spark.deploy.worker.Worker, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-spark1.out
spark2: starting org.apache.spark.deploy.worker.Worker, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-spark2.out
```

(4) On spark2 (192.168.232.152), run start-master.sh as well, so that when spark1 (192.168.232.147) goes down, spark2 takes over as master

```
[root@spark2 spark-1.0]# sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /root/install/spark-1.0/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-spark2.out
```

(5) Check which processes are running on spark1 and spark2

```
[root@spark1 spark-1.0]# jps
5797 Worker
5676 Master
6287 Jps
2602 QuorumPeerMain
[root@spark2 spark-1.0]# jps
2479 QuorumPeerMain
5750 Jps
5534 Worker
5635 Master
```

4. Testing that HA works
(1) First look at both nodes: spark1 is currently running the active master and spark2 is standing by

(2) Stop the master service on spark1

```
[root@spark1 spark-1.0]# sbin/stop-master.sh
stopping org.apache.spark.deploy.master.Master
[root@spark1 spark-1.0]# jps
5797 Worker
6373 Jps
2602 QuorumPeerMain
```

(3) Visit the master's web UI on port 8080 in a browser to see whether it is still alive; the original post's screenshot showed that the master was down

(4) Then check spark2's web UI in the browser; the original post's screenshot showed that spark2 had taken over as master
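The check in steps (3) and (4) can also be scripted instead of eyeballed: the standalone master's web UI includes a "Status: ALIVE" (or "Status: STANDBY" / "Status: RECOVERING") line. A hedged sketch, exercised on an illustrative HTML fragment since it assumes that page layout:

```shell
# Sketch: extract the master state from the port-8080 web UI.
# On a live cluster: curl -s http://spark2:8080 | master_state
master_state() { sed 's/<[^>]*>//g' | sed -n 's/.*Status: *\([A-Z]*\).*/\1/p' | head -n 1; }

# Illustrative fragment only -- the real page is a full HTML document:
printf '<li><strong>Status:</strong> ALIVE</li>\n' | master_state   # -> ALIVE
```

After stopping the master on spark1, running this against spark2 should report ALIVE once the failover completes.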
