[Spark][Hive][Python][SQL] A small example of reading a Hive table with Spark
$ cat customers.txt
1 Ali us
2 Bsb ca
3 Carls mx
(The fields in customers.txt are tab-separated, matching the '\t' delimiter declared in the CREATE TABLE statement below.)
$ hive
hive>
> CREATE TABLE IF NOT EXISTS customers(
> cust_id string,
> name string,
> country string
> )
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
hive> load data local inpath '/home/training/customers.txt' into table customers;
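Before exiting, a quick SELECT is a handy sanity check that the rows actually landed in the table. The expected output below simply echoes the three tab-delimited rows from customers.txt shown above:
hive> SELECT * FROM customers;
1 Ali us
2 Bsb ca
3 Carls mx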
hive> exit;
$ pyspark
from pyspark.sql import HiveContext
sqlContext = HiveContext(sc)
filterDF = sqlContext.sql("SELECT * FROM customers WHERE name LIKE 'A%'")
filterDF.limit(3).show()
+-------+----+-------+
|cust_id|name|country|
+-------+----+-------+
|      1| Ali|     us|
+-------+----+-------+
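The same query can also be written with the DataFrame API instead of a SQL string. Below is a minimal sketch against the HiveContext created above (pyspark 1.x API; the table and column names assume the schema defined earlier):

# Equivalent filter via the DataFrame API
from pyspark.sql.functions import col

customersDF = sqlContext.table("customers")             # read the Hive table as a DataFrame
filterDF2 = customersDF.filter(col("name").like("A%"))  # same predicate as the SQL version
filterDF2.limit(3).show()

Both forms produce the same result; the SQL string is convenient for ad-hoc queries, while the DataFrame form composes more naturally inside Python code.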
Reposted from: https://www.cnblogs.com/gaojian/p/7634234.html