

Spark SQL: Running the Thrift JDBC/ODBC server


Running the Thrift JDBC/ODBC server

1: Start the server

./sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.server2.thrift.bind.host=feng02 --master spark://feng02:7077 --driver-class-path /home/jifeng/hadoop/spark-1.2.0-bin-2.4.1/lib/mysql-connector-java-5.1.32-bin.jar --executor-memory 1g

Port: 10000

Host: feng02

Spark master: spark://feng02:7077

driver-class-path: the MySQL JDBC driver jar (required by the Hive metastore configuration)


[jifeng@feng02 spark-1.2.0-bin-2.4.1]$ ./sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.server2.thrift.bind.host=feng02 --master spark://feng02:7077 --driver-class-path /home/jifeng/hadoop/spark-1.2.0-bin-2.4.1/lib/mysql-connector-java-5.1.32-bin.jar
starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /home/jifeng/hadoop/spark-1.2.0-bin-2.4.1/sbin/../logs/spark-jifeng-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-feng02.out
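Before pointing any client at the server, it can be useful to confirm that the Thrift port is actually reachable. The sketch below (not part of the original post) simply opens a TCP connection to the host and port configured above; the class name ThriftPortCheck is made up for illustration.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ThriftPortCheck {
    public static void main(String[] args) {
        // Host and port match the --hiveconf values passed to start-thriftserver.sh above.
        String host = "feng02";
        int port = 10000;
        try (Socket socket = new Socket()) {
            // Try to open a plain TCP connection with a 3-second timeout.
            socket.connect(new InetSocketAddress(host, port), 3000);
            System.out.println("Thrift server is listening on " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Cannot reach " + host + ":" + port + ": " + e.getMessage());
        }
    }
}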

2: Run beeline

Now you can use beeline to test the Thrift JDBC/ODBC server:

./bin/beeline

[jifeng@feng02 spark-1.2.0-bin-2.4.1]$ ./bin/beeline
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Beeline version ??? by Apache Hive
3: Connect to the server


Reference: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineExample

beeline> !connect jdbc:hive2://feng02:10000 jifeng jifeng org.apache.hive.jdbc.HiveDriver
Connecting to jdbc:hive2://feng02:10000
log4j:WARN No appenders could be found for logger (org.apache.thrift.transport.TSaslTransport).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Connected to: Spark SQL (version 1.2.0)
Driver: null (version null)
Transaction isolation: TRANSACTION_REPEATABLE_READ
4: Run queries

0: jdbc:hive2://feng02:10000> show tables;
+----------------+
|     result     |
+----------------+
| course         |
| hbase_table_1  |
| pokes          |
| student        |
+----------------+
4 rows selected (2.723 seconds)
0: jdbc:hive2://feng02:10000> select * from student;
+-----+----------+------+
| id  |   name   | age  |
+-----+----------+------+
| 1   | nick     | 24   |
| 2   | doping   | 25   |
| 3   | caizhi   | 26   |
| 4   | liaozhi  | 27   |
| 5   | wind     | 30   |
+-----+----------+------+
5 rows selected (10.554 seconds)

0: jdbc:hive2://feng02:10000> select a.*,b.* from student a join course b where a.id=b.id;
+-----+----------+------+-----+-----+-----+-----+-----+
| id  |   name   | age  | id  | c1  | c2  | c3  | c4  |
+-----+----------+------+-----+-----+-----+-----+-----+
| 1   | nick     | 24   | 1   | 英語 | 中文 | 法文 | 日文 |
| 2   | doping   | 25   | 2   | 中文 | 法文 |     |     |
| 3   | caizhi   | 26   | 3   | 中文 | 法文 | 日文 |     |
| 4   | liaozhi  | 27   | 4   | 中文 | 法文 | 拉丁 |     |
| 5   | wind     | 30   | 5   | 中文 | 法文 | 德文 |     |
+-----+----------+------+-----+-----+-----+-----+-----+
5 rows selected (2.33 seconds)

5: Connect from Java via JDBC

package demo.test;

import java.sql.*;

public class Pretest {
    public static void main(String args[]) throws SQLException, ClassNotFoundException {
        String jdbcdriver = "org.apache.hive.jdbc.HiveDriver";
        String jdbcurl = "jdbc:hive2://feng02:10000";
        String username = "scott";
        String password = "tiger";

        // Load the Hive JDBC driver and open a connection to the Thrift server.
        Class.forName(jdbcdriver);
        Connection c = DriverManager.getConnection(jdbcurl, username, password);
        Statement st = c.createStatement();
        print("num should be 1 ", st.executeQuery("select * from student"));
        // TODO indexing
    }

    static void print(String name, ResultSet res) throws SQLException {
        System.out.println(name);
        ResultSetMetaData meta = res.getMetaData();
        // System.out.println("\t" + res.getRow() + " rows");

        // Print the column names as a header row.
        String str = "";
        for (int i = 1; i <= meta.getColumnCount(); i++) {
            str += meta.getColumnName(i) + " ";
        }
        System.out.println("\t" + str);

        // Print every row, column by column.
        str = "";
        while (res.next()) {
            for (int i = 1; i <= meta.getColumnCount(); i++) {
                str += res.getString(i) + " ";
            }
            System.out.println("\t" + str);
            str = "";
        }
    }
}

The above shows the program and the connection parameters it uses.

The output is:

num should be 1 
	id name age 
	1 nick 24 
	2 doping 25 
	3 caizhi 26 
	4 liaozhi 27 
	5 wind 30 
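As a follow-up, here is a slightly more defensive variant of the same program, sketched with try-with-resources so the connection, statement, and result set are always closed. This is only an illustrative rewrite under the same assumptions as above (server on feng02:10000, Hive JDBC driver on the classpath); the class name QueryStudent and the jifeng/jifeng credentials mirror the beeline example and are not from the original code.

package demo.test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryStudent {
    public static void main(String[] args) throws SQLException, ClassNotFoundException {
        // Load the Hive JDBC driver, as in the original example.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Same Thrift server endpoint as in the beeline example above.
        String url = "jdbc:hive2://feng02:10000";

        // try-with-resources closes connection, statement and result set even if the query throws.
        try (Connection conn = DriverManager.getConnection(url, "jifeng", "jifeng");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from student")) {

            ResultSetMetaData meta = rs.getMetaData();

            // Header row: column names separated by tabs.
            StringBuilder header = new StringBuilder();
            for (int i = 1; i <= meta.getColumnCount(); i++) {
                header.append(meta.getColumnName(i)).append('\t');
            }
            System.out.println(header);

            // Data rows.
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    row.append(rs.getString(i)).append('\t');
                }
                System.out.println(row);
            }
        }
    }
}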

