
java写spark碰到输出为[Ljava.lang.String;@889a8a8的情况

Published: 2023/12/31

This article describes a case where Java Spark code prints [Ljava.lang.String;@889a8a8 instead of an array's contents, and how to fix it.

The original code is as follows:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;

public class sampling_salting {
    public static void main(String[] args) {
        // Silence noisy INFO logging from Hadoop and Spark
        Logger.getLogger("org.apache.hadoop").setLevel(Level.WARN);
        Logger.getLogger("org.apache.spark").setLevel(Level.WARN);
        Logger.getLogger("org.project-spark").setLevel(Level.WARN);

        SparkConf conf = new SparkConf().setMaster("local").setAppName("join");
        JavaSparkContext sc = new JavaSparkContext(conf);

        List<Tuple2<String, String>> stus = Arrays.asList(
                new Tuple2<>("w1", "1"), new Tuple2<>("w2", "2"), new Tuple2<>("w3", "3"),
                new Tuple2<>("w2", "22"), new Tuple2<>("w1", "11"));
        List<Tuple2<String, String>> scores = Arrays.asList(
                new Tuple2<>("w1", "a1"), new Tuple2<>("w2", "a2"), new Tuple2<>("w2", "a22"),
                new Tuple2<>("w1", "a11"), new Tuple2<>("w3", "a3"));

        JavaPairRDD<String, String> stusRdd = sc.parallelizePairs(stus);
        JavaPairRDD<String, String> scoresRdd = sc.parallelizePairs(scores);
        JavaPairRDD<String, Tuple2<String, String>> result = stusRdd.join(scoresRdd);
        System.out.println(result.collect());

        JavaRDD<String[]> rdd1 = sc.textFile("hdfs://Desktop:9000/rdd1.csv")
                .map(line -> line.split(","));
        // BUG: prints the array's default toString(), e.g. [Ljava.lang.String;@889a8a8
        System.out.println(rdd1.collect().get(0));
    }
}

The output is:

[(w2,(2,a2)), (w2,(2,a22)), (w2,(22,a2)), (w2,(22,a22)), (w3,(3,a3)), (w1,(1,a1)), (w1,(1,a11)), (w1,(11,a1)), (w1,(11,a11))]
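Note the repeated keys in the first line: Spark's join is an inner join that pairs every left-side value for a key with every right-side value for that key, so w2 (two values on each side) produces 2 × 2 = 4 tuples. The same per-key cross product can be sketched in plain Java without Spark (the class and helper names here are illustrative, not part of the original code):

```java
import java.util.*;

public class JoinSketch {
    // Cross-product inner join of two key -> values multimaps,
    // mimicking what JavaPairRDD.join does per key
    static List<String> join(Map<String, List<String>> left, Map<String, List<String>> right) {
        List<String> out = new ArrayList<>();
        for (String key : left.keySet()) {
            if (!right.containsKey(key)) continue; // inner join: key must exist on both sides
            for (String l : left.get(key))
                for (String r : right.get(key))
                    out.add("(" + key + ",(" + l + "," + r + "))");
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> stus = new HashMap<>();
        stus.put("w2", Arrays.asList("2", "22"));
        Map<String, List<String>> scores = new HashMap<>();
        scores.put("w2", Arrays.asList("a2", "a22"));
        // 2 values x 2 values = 4 joined tuples for w2
        System.out.println(join(stus, scores));
        // [(w2,(2,a2)), (w2,(2,a22)), (w2,(22,a2)), (w2,(22,a22))]
    }
}
```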
[Ljava.lang.String;@889a8a8

Solution: print the array through Arrays.toString instead of passing it to println directly:

System.out.println(Arrays.toString(rdd1.collect().get(0)));
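The strange output is not Spark-specific: Java arrays inherit Object.toString(), which prints "[L" + the element type + ";@" + the identity hash code in hex, which is exactly where [Ljava.lang.String;@889a8a8 comes from. A minimal standalone demonstration (the class name is illustrative, no Spark required):

```java
import java.util.Arrays;

public class ArrayPrintDemo {
    public static void main(String[] args) {
        String[] fields = "w1,1".split(",");
        // Arrays use Object.toString(): "[Ljava.lang.String;@" + hex identity hash
        System.out.println(fields);
        // Arrays.toString renders the elements themselves
        System.out.println(Arrays.toString(fields)); // [w1, 1]
    }
}
```

For nested arrays such as String[][], use Arrays.deepToString; plain Arrays.toString would again print the inner arrays' default toString().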


Summary

That covers the case of Java Spark code printing [Ljava.lang.String;@889a8a8; hopefully it helps you solve the problem you ran into.
