
Integrating a CDH 5.15.0 + Spark 1.6.0 + Hive 1.1 cluster with Zeppelin 0.8.1 and spark-notebook: pitfalls and fixes


The prebuilt "all" binary packages of Zeppelin are mostly built for Spark 2 / Scala 2.11, so to match this cluster's Hadoop, Hive, YARN, and Spark versions you have to compile from source. See the previous article for fetching the source from git and working through build errors; below is the installation process after building a compatible version.
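For reference, a sketch of the kind of source build this implies (profile names as documented for the Zeppelin 0.8 branch; the Scala-version switch script and the CDH-flavored hadoop.version string are assumptions based on this cluster):

git clone https://github.com/apache/zeppelin.git && cd zeppelin
git checkout v0.8.1
# switch the build to Scala 2.10 to match /opt/soft/scala-2.10.5
./dev/change_scala_version.sh 2.10
mvn clean package -DskipTests -Pbuild-distr -Pvendor-repo \
    -Pspark-1.6 -Pscala-2.10 -Phadoop-2.6 \
    -Dhadoop.version=2.6.0-cdh5.15.0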

1. zeppelin081/conf/zeppelin-env.sh:

export MASTER=local[2]  # yarn-client
#export SCALA_HOME=/usr/share/scala
export SCALA_HOME=/opt/soft/scala-2.10.5
export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
#export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2
export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
if [ -n "$HADOOP_HOME" ]; then
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HADOOP_HOME}/lib/native
fi

#export SPARK_CONF_DIR=/etc/spark2/conf
export SPARK_CONF_DIR=/etc/spark/conf
export HIVE_CONF_DIR=/etc/hive/conf
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-$SPARK_CONF_DIR/yarn-conf}
HIVE_CONF_DIR=${HIVE_CONF_DIR:-/etc/hive/conf}
if [ -d "$HIVE_CONF_DIR" ]; then
  HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HIVE_CONF_DIR"
fi
export HADOOP_CONF_DIR

export ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/conf
#export ZEPPELIN_INTP_CLASSPATH_OVERRIDES=:/etc/hive/conf:/usr/share/java/mysql-connector-java.jar:/opt/cloudera/parcels/CDH/lib/spark/lib/spark-assembly-1.6.0-cdh5.15.0-hadoop2.6.0-cdh5.15.0.jar:/opt/cloudera/parcels/CDH/jars/*:/opt/cloudera/parcels/CDH/lib/hive/lib/*:/opt/soft/zeppelin081/interpreter/spark/spark-interpreter-0.8.1.jar

2. ln -s /etc/hive/conf/hive-site.xml conf/

3. Edit conf/zeppelin-site.xml to change the listening port.

4. Start it with bin/zeppelin-daemon.sh restart; the logs, run, and webapp directories are generated automatically.
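Before digging into specific errors, it is worth confirming that the daemon actually came up (zeppelin-daemon.sh also accepts a status subcommand; log file names are the ones used throughout this article):

bin/zeppelin-daemon.sh status
tail -n 50 logs/zeppelin-root-master.log logs/zeppelin-root-master.out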

5. Check the logs for errors:

vi logs/zeppelin-root-master.log:

Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/collect/Queues

Caused by: java.lang.ClassNotFoundException: com.google.common.collect.Queues
Fix: replace the bundled guava jar with the matching version from the CDH lib directory:

cp /opt/cloudera/parcels/CDH/lib/hive/lib/guava-14.0.1.jar lib/

It then still errored, complaining that guava-21 was wanted.
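When juggling guava versions it helps to confirm which jar actually provides the class from the stack trace; a minimal check under the paths used here:

# confirm the replacement jar really contains the missing class
unzip -l lib/guava-14.0.1.jar | grep 'com/google/common/collect/Queues'
# and see which guava versions the CDH parcel offers
ls /opt/cloudera/parcels/CDH/jars/ | grep -i guava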

vi logs/zeppelin-root-master.out:

MultiException[java.lang.NoClassDefFoundError: com/fasterxml/jackson/core/Versioned, java.lang.NoClassDefFoundError: org/glassfish/jersey/jackson/internal/jackson/jaxrs/json/JacksonJaxbJsonProvider]

Fix: replace the jackson jars with the matching versions from the CDH lib directory:

ls lib/|grep jackson
google-http-client-jackson-1.23.0.jar
google-http-client-jackson2-1.23.0.jar
jackson-annotations-2.8.0.jar.bak
jackson-core-2.8.10.jar.bak
jackson-core-asl-1.9.13.jar
jackson-databind-2.8.11.1.jar.bak
jackson-jaxrs-1.8.8.jar
jackson-mapper-asl-1.9.13.jar
jackson-module-jaxb-annotations-2.8.10.jar.bak
jackson-xc-1.8.8.jar
jersey-media-json-jackson-2.27.jar
[root@master zeppelin081]# cp /opt/cloudera/parcels/CDH/jars/jackson-annotations-2.1.0.jar lib/
[root@master zeppelin081]# cp /opt/cloudera/parcels/CDH/jars/jackson-core-2.1.0.jar lib/
[root@master zeppelin081]# cp /opt/cloudera/parcels/CDH/jars/jackson-databind-2.1.0.jar lib/
[root@master zeppelin081]# cp /opt/cloudera/parcels/CDH/jars/jackson-module-jaxb-annotations-2.1.0.jar lib/
After trying multiple versions without success, it turned out the Scala module build (2.2.3) was the one required:

cp /opt/cloudera/parcels/CDH/jars/jackson*2.2.3*.jar lib/
[root@master zeppelin081]# ls lib/jackson-*
jackson-annotations-2.1.0.jar.bak
jackson-annotations-2.2.2.jar.bak
jackson-annotations-2.2.3.jar
jackson-annotations-2.3.1.jar.bak
jackson-annotations-2.8.0.jar.bak
jackson-core-2.1.0.jar.bak
jackson-core-2.2.2.jar.bak
jackson-core-2.2.3.jar
jackson-core-2.8.10.jar.bak
jackson-core-asl-1.9.13.jar
jackson-databind-2.1.0.jar.bak
jackson-databind-2.2.2.jar.bak
jackson-databind-2.2.3.jar
jackson-databind-2.8.11.1.jar.bak
jackson-jaxrs-1.8.8.jar
jackson-mapper-asl-1.9.13.jar
jackson-module-jaxb-annotations-2.1.0.jar.bak
jackson-module-jaxb-annotations-2.8.10.jar.bak
jackson-module-scala_2.10-2.2.3.jar
jackson-xc-1.8.8.jar
Finally, it works!
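The .bak suffixes in the listings above show the pattern: conflicting bundled versions are renamed aside rather than deleted, so they can be restored later. The same steps as a script (file names taken from the listings above):

cd /opt/soft/zeppelin081
# park the jackson versions Zeppelin shipped with, keeping only the 2.2.3 set
for f in lib/jackson-annotations-2.8.0.jar lib/jackson-core-2.8.10.jar \
         lib/jackson-databind-2.8.11.1.jar lib/jackson-module-jaxb-annotations-2.8.10.jar; do
  mv "$f" "$f.bak"
done
cp /opt/cloudera/parcels/CDH/jars/jackson*2.2.3*.jar lib/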

====== Testing each interpreter ======

spark interpreter:

master    yarn-client

Dependencies (artifact / exclude):
/usr/share/java/mysql-connector-java.jar
/opt/cloudera/parcels/CDH/lib/spark/lib/spark-assembly-1.6.0-cdh5.15.0-hadoop2.6.0-cdh5.15.0.jar
sc.getConf.toDebugString.split("\n").foreach(println)
sqlContext.sql("show tables").show

%sql
select area, count(cid) from default.dimcity group by area

presto interpreter (a new JDBC interpreter):

default.driver    com.facebook.presto.jdbc.PrestoDriver
default.url       jdbc:presto://master:19000/hive/
default.user      root

Dependencies (artifact / exclude):
com.facebook.presto:presto-jdbc:0.100
%presto
-- SHOW SCHEMAS
select area, count(cid) from default.dimcity group by area

phoenix interpreter (a new JDBC interpreter):

default.driver    org.apache.phoenix.jdbc.PhoenixDriver
default.url       jdbc:phoenix:master:2181:/hbase
default.user      hdfs

Dependencies (artifact / exclude):
org.apache.phoenix:phoenix-core:4.7.0-HBase-1.1
org.apache.phoenix:phoenix-server-client:4.7.0-HBase-1.1
%phoenix
-- !tables       -- not supported
-- SHOW SCHEMAS  -- not supported
select * from SYSTEM.CATALOG
-- dim_channels
-- tc_district   // HBase table names must be uppercase to be queryable
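The default.url packs the ZooKeeper quorum, port, and HBase znode into one string. If a query misbehaves, the same connect string can be tried outside Zeppelin with Phoenix's sqlline client (the Phoenix install path below is hypothetical):

# same string as default.url, minus the jdbc:phoenix: prefix
/opt/phoenix/bin/sqlline.py master:2181:/hbase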

hbase interpreter:

hbase.home                /opt/cloudera/parcels/CDH/lib/hbase
hbase.ruby.sources        lib/ruby
zeppelin.hbase.test.mode  false

Dependencies (Zeppelin compiles against HBase 1.0 by default, so either rebuild with the right version pinned or load the jars below to override) (artifact / exclude):
/opt/cloudera/parcels/CDH/lib/hbase/lib/hbase-client-1.2.0-cdh5.15.0.jar
/opt/cloudera/parcels/CDH/lib/hbase/lib/hbase-common-1.2.0-cdh5.15.0.jar
/opt/cloudera/parcels/CDH/lib/hbase/lib/hbase-protocol-1.2.0-cdh5.15.0.jar
%hbase
desc 'car_brand'
list

elasticsearch interpreter: (reference: http://cwiki.apachecn.org/pages/viewpage.action?pageId=10030782; the default transport port is 9300)

elasticsearch.client.type     http
elasticsearch.cluster.name    tuanchees
elasticsearch.host            172.16.60.182
elasticsearch.port            9200


%elasticsearch search /
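Because elasticsearch.client.type is http, the endpoint can be sanity-checked with curl before blaming the interpreter settings:

# cluster info: cluster_name should come back as tuanchees
curl -s http://172.16.60.182:9200/
# list indices to confirm there is data to query
curl -s http://172.16.60.182:9200/_cat/indices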

file interpreter:

hdfs.maxlength    1000
hdfs.url          http://master:50070/webhdfs/v1/
hdfs.user

Dependencies (artifact / exclude):
/opt/cloudera/parcels/CDH/jars/jersey-client-1.9.jar
/opt/cloudera/parcels/CDH/jars/jersey-core-1.9.jar
/opt/cloudera/parcels/CDH/jars/jersey-guice-1.9.jar
/opt/cloudera/parcels/CDH/jars/jersey-server-1.9.jar
/opt/cloudera/parcels/CDH/jars/jersey-json-1.9.jar


%file ls /
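The file interpreter is a thin client over the WebHDFS REST endpoint set in hdfs.url, so the same listing can be reproduced directly (user.name is an assumption here, since hdfs.user was left empty above):

# equivalent of "%file ls /" against WebHDFS itself
curl -s 'http://master:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs'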

flink interpreter:

host    localhost
port    6123

%flink
val text = benv.fromElements(          // some lines of text to analyze
  "In the time of chimpanzees, I was a monkey",
  "Butane in my veins and I'm out to cut the junkie",
  "With the plastic eyeballs, spray paint the vegetables",
  "Dog food stalls with the beefcake pantyhose",
  "Kill the headlights and put it in neutral",
  "Stock car flamin' with a loser in the cruise control",
  "Baby's in Reno with the Vitamin D",
  "Got a couple of couches, sleep on the love seat",
  "Someone came in sayin' I'm insane to complain",
  "About a shotgun wedding and a stain on my shirt",
  "Don't believe everything that you breathe",
  "You get a parking violation and a maggot on your sleeve",
  "So shave your face with some mace in the dark",
  "Savin' all your food stamps and burnin' down the trailer park",
  "Yo, cut it")

val counts = text.flatMap { _.toLowerCase.split("\\W+") }
  .map { (_, 1) }
  .groupBy(0)
  .sum(1)

counts.collect().foreach(println(_))

// Streaming Example
// case class WordWithCount(word: String, count: Long)
// val text = env.socketTextStream(host, port, '\n')
// val windowCounts = text.flatMap { w => w.split("\\s") }
//   .map { w => WordWithCount(w, 1) }
//   .keyBy("word")
//   .timeWindow(Time.seconds(5))
//   .sum("count")
// windowCounts.print()

// Batch Example
// case class WordWithCount(word: String, count: Long)
// val text = env.readTextFile(path)
// val counts = text.flatMap { w => w.split("\\s") }
//   .map { w => WordWithCount(w, 1) }
//   .groupBy("word")
//   .sum("count")
// counts.writeAsCsv(outputPath)


=========spark-notebook============

spark-notebook is comparatively simple: download and extract the build that matches the cluster, Scala [2.10.5] Spark [1.6.0] Hadoop [2.6.0] {Hive ✓} {Parquet ✓}.

Link hive-site.xml the same way: ln -s /etc/hive/conf/hive-site.xml conf/

Change the port: vi conf/application.ini

It can be started as-is and configured afterwards, but to make restarts easier I wrote a script:

bin/start.sh:

#!/bin/bash
export MASTER=local[2]  # yarn-client
#export SCALA_HOME=/usr/share/scala
export SCALA_HOME=/opt/soft/scala-2.10.5
export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
if [ -n "$HADOOP_HOME" ]; then
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HADOOP_HOME}/lib/native
fi

export SPARK_CONF_DIR=/etc/spark/conf
export HIVE_CONF_DIR=/etc/hive/conf
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-$SPARK_CONF_DIR/yarn-conf}
HIVE_CONF_DIR=${HIVE_CONF_DIR:-/etc/hive/conf}
if [ -d "$HIVE_CONF_DIR" ]; then
  HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HIVE_CONF_DIR"
fi
export HADOOP_CONF_DIR

workdir=/opt/soft/spark-notebook
kill -9 `cat ${workdir}/RUNNING_PID`
rm -rf ${workdir}/derby.log ${workdir}/metastore_db ${workdir}/RUNNING_PID
${workdir}/bin/spark-notebook > snb.log 2>&1 &

At first it could not connect to Hive; it worked after configuring the notebook metadata as follows (for the notebook metadata reference see http://master151:9002/assets/docs/clusters_clouds.html):

{"name": "test","user_save_timestamp": "1970-01-01T08:00:00.000Z","auto_save_timestamp": "1970-01-01T08:00:00.000Z","language_info": {"name": "scala","file_extension": "scala","codemirror_mode": "text/x-scala"},"trusted": true,"customLocalRepo": null,"customRepos": null,"customDeps": null,"customImports": ["import scala.util._","import org.apache.spark.SparkContext._"],"customArgs": null,"customSparkConf": {"spark.master": "local[2]","hive.metastore.warehouse.dir": "/user/hive/warehouse","hive.metastore.uris": "thrift://master:9083","spark.sql.hive.metastore.version": "1.1.0","spark.sql.hive.metastore.jars": "/opt/cloudera/parcels/CDH/lib/hadoop/../hive/lib/*","hive.metastore.schema.verification": "false","spark.jars": "/usr/share/java/mysql-connector-java.jar,/opt/cloudera/parcels/CDH/lib/spark/lib/spark-assembly-1.6.0-cdh5.15.0-hadoop2.6.0-cdh5.15.0.jar","spark.driver.extraClassPath": "/etc/spark/conf:/etc/spark/conf/yarn-conf:/etc/hadoop/conf:/etc/hive/conf:/opt/cloudera/parcels/CDH/lib/hadoop/*:/opt/cloudera/parcels/CDH/lib/hadoop/../hadoop-hdfs/*:/opt/cloudera/parcels/CDH/lib/hadoop/../hadoop-mapreduce/*:/opt/cloudera/parcels/CDH/lib/hadoop/../hadoop-yarn/*:/opt/cloudera/parcels/CDH/lib/hadoop/../hive/lib/*:/opt/cloudera/parcels/CDH/jars/*:/opt/soft/spark-notebook/lib/*","spark.executor.extraClassPath": "/etc/spark/conf:/etc/spark/conf/yarn-conf:/etc/hadoop/conf:/etc/hive/conf:/opt/cloudera/parcels/CDH/lib/hadoop/*:/opt/cloudera/parcels/CDH/lib/hadoop/../hadoop-hdfs/*:/opt/cloudera/parcels/CDH/lib/hadoop/../hadoop-mapreduce/*:/opt/cloudera/parcels/CDH/lib/hadoop/../hadoop-yarn/*:/opt/cloudera/parcels/CDH/lib/hadoop/../hive/lib/*:/opt/cloudera/parcels/CDH/jars/*:/opt/soft/spark-notebook/lib/*"},"kernelspec": {"name": "spark","display_name": "Scala [2.10.5] Spark [1.6.0] Hadoop [2.6.0] {Hive ?} {Parquet ?}"} }

The error had been:

java.lang.NoClassDefFoundError: org/apache/hadoop/hive/metastore/api/AlreadyExistsException

Investigation showed that spark-notebook is compiled against the Hive 1.2 metastore client by default, while CDH ships 1.1, and the two are incompatible; that is why the metadata above pins spark.sql.hive.metastore.version to 1.1.0 and points spark.sql.hive.metastore.jars at the CDH Hive libs.
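To double-check which metastore client version the cluster actually ships before pinning, the parcel can be inspected directly:

# list the metastore client jars provided by the CDH parcel
ls /opt/cloudera/parcels/CDH/lib/hive/lib/ | grep hive-metastore
# expect something like hive-metastore-1.1.0-cdh5.15.0.jar (exact file name assumed)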

Source: https://my.oschina.net/hblt147/blog/3015713
