How to Deploy Dolphin Scheduler 1.3.1 on CDH5


This article documents the full process of integrating Dolphin Scheduler 1.3.1 with a CDH 5.16.2 cluster. Pay special attention to the MySQL connection string!

1. Purpose of This Document

  • Document in detail how to deploy Dolphin Scheduler 1.3.1 on CDH5
  • Deploy Dolphin Scheduler in distributed mode

2. Deployment Environment and Dependencies

To match the Hive version shipped with CDH5, DS has to be built from source for this deployment. A pre-built CDH5 package is linked at the end of the article.

Cluster environment

  • CDH 5.16.2
    • HDFS and YARN both run as single nodes (no HA)
  • DS website
    • https://dolphinscheduler.apache.org/en-us/

DS dependencies

  • MySQL: stores the Dolphin Scheduler metadata. PostgreSQL works as well; MySQL is used here because the CDH cluster already runs on it.
  • ZooKeeper: reuse the CDH cluster's ZooKeeper (a quick connectivity check follows this list).
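
Before installing anything, it can save time to confirm that both dependencies are reachable from the machines that will run DS. A minimal sketch, assuming the mysql client and nc are available and using this cluster's hostnames (adjust to yours):

# MySQL on the CDH node (cm.eights.com here) answers a trivial query
mysql -h cm.eights.com -P 3306 -u root -p -e "SELECT VERSION();"
# A healthy ZooKeeper answers the four-letter-word check with "imok"
echo ruok | nc master.eights.com 2181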

3. Dolphin Scheduler 1.3.1 Cluster Plan

Role placement across the three hosts (matching the masters/workers/alertServer/apiServers values in install_config.conf below):

DS service     master.eights.com    dn1.eights.com    dn2.eights.com
api                                 ✓
master         ✓                    ✓
worker/log     ✓                                      ✓
alert                               ✓

4. Building from Source

Prerequisites

  • Maven
  • JDK
  • NVM (for the Node-based frontend build; a quick toolchain check follows this list)
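
A quick way to confirm the toolchain is in place before starting the build (a sketch; the versions in the comments are what this build typically uses):

mvn -version                 # Maven 3.x
java -version                # JDK 1.8
nvm --version && node -v     # Node, managed through NVM, for the frontend module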

Clone the code

git clone https://github.com/apache/incubator-dolphinscheduler.git

Create a CDH5 branch

git checkout 1.3.1-release
git checkout -b ds-1.3.1-cdh5.16.2

Modify the POM

In the root pom.xml, update the Hadoop version, the Hive version, and the project version, and set every module's version to 1.3.1-cdh5.16.2:

<hadoop.version>2.6.0</hadoop.version>
<hive.jdbc.version>1.1.0</hive.jdbc.version>
<version>1.3.1-cdh5.16.2</version>
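
Rather than editing every module's <version> element by hand, the Maven versions plugin can bump them all in one pass (optional; a sketch run from the project root):

# Update the parent and all child module versions in one command
mvn versions:set -DnewVersion=1.3.1-cdh5.16.2 -DgenerateBackupPoms=false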

Remove the scope restriction from the mysql-connector-java dependency so that the MySQL JDBC driver is bundled into the distribution.

Run the build:

mvn -U clean package -Prelease -Dmaven.test.skip=true

When the build finishes, the dolphinscheduler-dist module produces apache-dolphinscheduler-incubating-1.3.1-cdh5.16.2-dolphinscheduler-bin.tar.gz. In 1.3.1 the frontend and backend are packaged together; there is no longer a separate package for each.
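
To confirm the package was produced (assuming the standard Maven layout, where the artifact ends up under dolphinscheduler-dist/target):

ls -lh dolphinscheduler-dist/target/apache-dolphinscheduler-incubating-*-dolphinscheduler-bin.tar.gz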

5. Deployment

Preparation

  • Create the deployment user and configure passwordless SSH

Create the deployment user on every machine in the deployment. DS executes jobs as sudo -u [linux-user], so the account needs passwordless sudo; dscheduler is used as the deployment user here.

# Add the deployment user
useradd dscheduler
# Set its password
echo "dscheduler" | passwd --stdin dscheduler
# Grant passwordless sudo
echo 'dscheduler ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers
# Switch to the deployment user and distribute its SSH key
su dscheduler
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub dscheduler@[hostname]
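
A quick loop to confirm that passwordless SSH and sudo really work from the node that will run install.sh (a sketch using this cluster's hostnames; run it as dscheduler):

for h in master.eights.com dn1.eights.com dn2.eights.com; do
  ssh -o BatchMode=yes dscheduler@"$h" 'hostname && sudo -n true && echo sudo-ok'
done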

Create the DS metadata database in MySQL

CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'dscheduler'@'%' IDENTIFIED BY 'dscheduler';
GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dscheduler'@'%' IDENTIFIED BY 'dscheduler';
FLUSH PRIVILEGES;

Unpack the package and fix permissions

Upload the package to the /opt directory on the cluster and extract it:

# Extract the package
tar -zxvf apache-dolphinscheduler-incubating-1.3.1-cdh5.16.2-dolphinscheduler-bin.tar.gz -C /opt/
# Rename the directory
cd /opt
mv apache-dolphinscheduler-incubating-1.3.1-cdh5.16.2-dolphinscheduler-bin ds-1.3.1-cdh5.16.2
# Fix permissions and ownership
chmod -R 755 ds-1.3.1-cdh5.16.2
chown -R dscheduler:dscheduler ds-1.3.1-cdh5.16.2

Initialize MySQL

Edit the datasource configuration

Pay particular attention to the database connection configuration here, especially the parameters on the JDBC URL.

vi /opt/ds-1.3.1-cdh5.16.2/conf/datasource.properties

spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://cm.eights.com:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
spring.datasource.username=dscheduler
spring.datasource.password=dscheduler

Run the database initialization script from the DS installation directory:

./script/create-dolphinscheduler.sh
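
To verify that the schema was created and that the dscheduler account can reach it, a quick check (a sketch using the connection details configured above):

mysql -h cm.eights.com -P 3306 -u dscheduler -pdscheduler \
      -e "USE dolphinscheduler; SHOW TABLES;" | head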

Configure the environment variables DS needs. Do not skip this step: DS sources dolphinscheduler_env.sh before executing any task.

vi /opt/ds-1.3.1-cdh5.16.2/conf/env/dolphinscheduler_env.sh

# If the test cluster has no DataX or Flink, ignore the related entries
export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
export HADOOP_CONF_DIR=/opt/cloudera/parcels/CDH/lib/hadoop/etc/hadoop
export SPARK_HOME1=/opt/cloudera/parcels/CDH/lib/spark
export SPARK_HOME2=/opt/cloudera/parcels/SPARK2/lib/spark2
export PYTHON_HOME=/usr/local/anaconda3/bin/python
export JAVA_HOME=/usr/java/jdk1.8.0_131
export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
export FLINK_HOME=/opt/soft/flink
export DATAX_HOME=/opt/soft/datax/bin/datax.py
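
A quick sanity check that the paths in dolphinscheduler_env.sh actually resolve on each node (a sketch):

source /opt/ds-1.3.1-cdh5.16.2/conf/env/dolphinscheduler_env.sh
"$JAVA_HOME/bin/java" -version
"$HADOOP_HOME/bin/hadoop" version
"$HIVE_HOME/bin/hive" --version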

Write the DS deployment configuration

Before 1.3.0, the one-click deployment configuration lived in install.sh. From 1.3.0 on, install.sh is only a deployment script; the deployment configuration sits in conf/config/install_config.conf. install_config.conf drops many optional parameters, so any further tuning has to be done in the per-module configuration files under conf.

The deployment configuration used for this cluster is given below.

# NOTICE: if a value below contains special characters from `.*[]^${}\+?|()@#&`, escape them, for example `[` as `\[`
# postgresql or mysql
dbtype="mysql"

# db config
# db address and port
dbhost="cm.eights.com:3306"

# db username
username="dscheduler"

# database name
dbname="dolphinscheduler"

# db password
# NOTICE: if it contains special characters, escape them with \, for example `[` as `\[`
password="dscheduler"

# zk cluster
zkQuorum="master.eights.com:2181,dn1.eights.com:2181,cm.eights.com:2181"

# Note: the target installation path for dolphinscheduler; do not set it to the current path (pwd)
# Directory DS is installed into on every deployment machine; role logs and task logs both live under it
installPath="/opt/ds-1.3.1-agent"

# deployment user
# Note: the deployment user needs sudo privileges and permission to operate on HDFS; if HDFS is enabled, the root directory must be created beforehand
deployUser="dscheduler"

# The mail server used here is an internal one; configuring an external mail service will be covered separately
# alert config
# mail server host
mailServerHost="xxxx"

# mail server port
# note: different protocols and encryption methods use different ports; when SSL/TLS is enabled, make sure the port is correct
mailServerPort="25"

# sender
mailSender="xxxx"

# user
mailUser="xxxx"

# sender password
# note: mailPassword is the mail service authorization code, not the mailbox login password
mailPassword="xxxx"

# TLS mail protocol support
starttlsEnable="false"

# SSL mail protocol support
# only one of TLS and SSL may be true
sslEnable="false"

# note: sslTrust is the same as mailServerHost
sslTrust="xxxxxx"

# resource storage type: HDFS, S3, NONE
resourceStorageType="HDFS"

# With single-node HDFS and YARN the addresses can be configured directly
# if resourceStorageType is HDFS, defaultFS is the namenode address; for HA, put core-site.xml and hdfs-site.xml into the conf directory
# if S3, write the S3 address, for example: s3a://dolphinscheduler
# Note: for S3, be sure to create the root directory /dolphinscheduler
defaultFS="hdfs://master.eights.com:8020"

# if resourceStorageType is S3, the following three settings are required; otherwise ignore them
s3Endpoint="http://192.168.xx.xx:9010"
s3AccessKey="xxxxxxxxxx"
s3SecretKey="xxxxxxxxxx"

# Note: even with a single-node YARN it is best to fill in yarnHaIps as well
# if resourcemanager HA is enabled, list the HA ips; if resourcemanager is single, leave this empty
yarnHaIps="master.eights.com"

# if resourcemanager HA is enabled or resourcemanager is not used, skip this value; if resourcemanager is single, set it to the actual resourcemanager hostname
singleYarnIp="master.eights.com"

# resource store path on HDFS/S3; resource files are uploaded to this path, so make sure the directory exists on HDFS with read/write permission. /dolphinscheduler is recommended
resourceUploadPath="/dolphinscheduler"

# who has permission to create directories under the HDFS/S3 root path
# Note: if kerberos is enabled, configure hdfsRootUser accordingly
hdfsRootUser="hdfs"

# kerberos config
# whether kerberos is enabled; if it is, the following four items must be configured, otherwise ignore them
kerberosStartUp="false"
# kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"
# keytab username
keytabUserName="hdfs-mycluster@ESZ.COM"
# username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"

# api server port
apiServerPort="12345"

# install hosts
# Note: the list of hostnames to install on; for a pseudo-distributed setup just write that one hostname
ips="master.eights.com,dn1.eights.com,dn2.eights.com"

# ssh port, default 22
# Note: if the ssh port is not the default, modify it here
sshPort="22"

# run master machines
# Note: list of hostnames on which to deploy masters
masters="master.eights.com,dn1.eights.com"

# run worker machines
# 1.3.1 moved worker grouping from MySQL into ZooKeeper, so each worker has to be labelled with its group here
# a worker cannot yet belong to more than one group
# note: give each worker its group name; the default value is "default"
workers="master.eights.com:default,dn2.eights.com:sqoop"

# run alert machine
# note: hostname of the machine deploying the alert server
alertServer="dn1.eights.com"

# run api machines
# note: list of hostnames deploying the api server
apiServers="dn1.eights.com"
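
Since resourceStorageType is HDFS and resourceUploadPath points at /dolphinscheduler, that directory must exist on HDFS and be writable by the deployment user before the first resource upload (a sketch, run on a node where the hdfs superuser is available):

sudo -u hdfs hdfs dfs -mkdir -p /dolphinscheduler
sudo -u hdfs hdfs dfs -chown -R dscheduler:dscheduler /dolphinscheduler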

Add the Hadoop cluster configuration files

  • If the cluster does not use HA, the addresses written in install_config.conf are sufficient
  • If HA is enabled, copy Hadoop's hdfs-site.xml and core-site.xml into the conf directory

Adjust the JVM parameters

  • Two files:
    • bin/dolphinscheduler-daemon.sh
    • script/dolphinscheduler-daemon.sh
export DOLPHINSCHEDULER_OPTS="-server -Xmx16g -Xms1g -Xss512k -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=70"

One-click deployment, process check, and per-service start/stop

# Run the DS deployment script
sh install.sh
# Check the processes
jps
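
With the cluster plan above, jps should roughly show the following DS processes on each host (a sketch of the expected layout, not literal output):

# master.eights.com : MasterServer, WorkerServer, LoggerServer
# dn1.eights.com    : MasterServer, ApiApplicationServer, AlertServer
# dn2.eights.com    : WorkerServer, LoggerServer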

Starting and stopping services

# Stop everything
sh ./bin/stop-all.sh
# Start everything
sh ./bin/start-all.sh
# Start/stop the master
sh ./bin/dolphinscheduler-daemon.sh start master-server
sh ./bin/dolphinscheduler-daemon.sh stop master-server
# Start/stop a worker
sh ./bin/dolphinscheduler-daemon.sh start worker-server
sh ./bin/dolphinscheduler-daemon.sh stop worker-server
# Start/stop the api-server
sh ./bin/dolphinscheduler-daemon.sh start api-server
sh ./bin/dolphinscheduler-daemon.sh stop api-server
# Start/stop the logger
sh ./bin/dolphinscheduler-daemon.sh start logger-server
sh ./bin/dolphinscheduler-daemon.sh stop logger-server
# Start/stop the alert server
sh ./bin/dolphinscheduler-daemon.sh start alert-server
sh ./bin/dolphinscheduler-daemon.sh stop alert-server

Accessing the web UI

The 1.3.1 frontend no longer needs nginx; the UI is served by the API server directly at apiserver:12345/dolphinscheduler. Log in with the account admin and the password dolphinscheduler123.
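
A quick reachability check against the API server (dn1.eights.com hosts the api server in this plan; expect a 2xx or 3xx status code; a sketch):

curl -s -o /dev/null -w "%{http_code}\n" http://dn1.eights.com:12345/dolphinscheduler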

檢查worker分組

  • In 1.3.1 the worker groups are defined through install_config.conf and cannot be changed from the web UI.

Check the services

Pre-built Dolphin Scheduler 1.3.1-cdh5.16.2 package

Baidu Netdisk link: https://pan.baidu.com/s/1gEwEF2R2XJVRv76SgiW0hA

Extraction code: joyq

6. Summary

The 1.3.1 release greatly trims the install.sh configuration, so users can get a cluster running quickly. Anyone upgrading from an earlier version, however, still needs to edit the per-module configuration files under conf for anything beyond the basics.

The frontend also no longer needs a separate nginx, which makes for a smoother deployment experience.
