
Spark connecting to MySQL: including the MySQL connector in a spark-submit job


I have a Scala object that queries a MySQL table, performs a join, and writes the data to S3. When I test my code locally it runs fine, but when I submit it to the cluster it throws the following error:

Exception in thread "main" java.sql.SQLException: No suitable driver
    at java.sql.DriverManager.getDriver(DriverManager.java:315)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:54)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:54)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:53)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:123)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:117)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:53)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:122)
    at QuaterlyAudit$.main(QuaterlyAudit.scala:51)
    at QuaterlyAudit.main(QuaterlyAudit.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
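For context, the JDBC read that the trace points at (around QuaterlyAudit.scala:51) presumably looks something like the minimal sketch below; the URL, table name, and credentials are placeholders, not the actual values. Note that explicitly setting the "driver" option makes Spark load the named class itself instead of going through java.sql.DriverManager.getDriver, the call that throws "No suitable driver" above, though the connector jar still has to be on the driver's classpath:

import org.apache.spark.sql.SparkSession

// Minimal sketch of the kind of JDBC read that hits this code path
// (hypothetical URL, table, and credentials; the real QuaterlyAudit code is not shown).
val spark = SparkSession.builder().appName("QuaterlyAudit").getOrCreate()

val mysqlDf = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://mysql-host:3306/mydb") // placeholder endpoint
  .option("dbtable", "some_table")                    // placeholder table
  .option("user", "user")
  .option("password", "password")
  // Naming the driver class explicitly bypasses DriverManager.getDriver,
  // the call that fails in the stack trace above.
  .option("driver", "com.mysql.jdbc.Driver")
  .load()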

Below is my spark-submit command:

nohup spark-submit --class QuaterlyAudit --master yarn-client --num-executors 8 \
  --driver-memory 16g --executor-memory 20g --executor-cores 10 /mypath/campaign.jar &

I am using sbt, and I include the MySQL connector in the sbt assembly. Below is my build.sbt file:

name := "mobilewalla"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq("org.apache.spark" %% "spark-core" % "2.0.0" % "provided",

"org.apache.spark" %% "spark-sql" % "2.0.0" % "provided",

"org.apache.hadoop" % "hadoop-aws" % "2.6.0" intransitive(),

"mysql" % "mysql-connector-java" % "5.1.37")

assemblyMergeStrategy in assembly := {

case PathList("META-INF", xs@_*) =>

xs.map(_.toLowerCase) match {

case ("manifest.mf" :: Nil) |

("index.list" :: Nil) |

("dependencies" :: Nil) |

("license" :: Nil) |

("notice" :: Nil) => MergeStrategy.discard

case _ => MergeStrategy.first // was 'discard' previousely

}

case "reference.conf" => MergeStrategy.concat

case _ => MergeStrategy.first

}

assemblyJarName in assembly := "campaign.jar"
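A build like this assumes the sbt-assembly plugin is enabled in project/plugins.sbt, roughly as below (the plugin version shown is illustrative for an sbt 0.13 / Spark 2.0-era setup):

// project/plugins.sbt (illustrative version)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")

Running sbt assembly then emits target/scala-2.11/campaign.jar, the fat jar passed to spark-submit above.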

I also tried:

nohup spark-submit --driver-class-path /mypath/mysql-connector-java-5.1.37.jar \
  --class QuaterlyAudit --master yarn-client --num-executors 8 --driver-memory 16g \
  --executor-memory 20g --executor-cores 10 /mypath/campaign.jar &

But still no luck. What am I missing here?
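One difference between the two invocations above is worth noting: --driver-class-path only prepends the jar to the driver JVM's classpath, whereas --jars distributes the listed jars to the driver and every executor. A variant of the same command using --jars would look like the following sketch (same placeholder paths; not verified against this cluster):

nohup spark-submit --jars /mypath/mysql-connector-java-5.1.37.jar \
  --class QuaterlyAudit --master yarn-client --num-executors 8 --driver-memory 16g \
  --executor-memory 20g --executor-cores 10 /mypath/campaign.jar &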
