
spark: implementing secondary sort with sortByKey


I recently ran into a secondary-sort requirement in a project. As with any other Spark application, the workflow was the usual one: check the API, write the code, debug, and verify the result. Having used the Spark API before, I knew that sortByKey() accepts a custom ordering, so a secondary sort can be implemented by supplying a custom comparison rule.
To illustrate, here is a simple example. Each key consists of two parts; we sort by the first part of the key in descending order and by the second part in ascending order:

JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);
List<Integer> data = Arrays.asList(5, 1, 1, 4, 4, 2, 2);
JavaRDD<Integer> javaRDD = javaSparkContext.parallelize(data);
final Random random = new Random(100);
// Build pairs whose key is "<value> <random digit>" and whose value is another random digit.
JavaPairRDD<String, Integer> javaPairRDD = javaRDD.mapToPair(new PairFunction<Integer, String, Integer>() {
    @Override
    public Tuple2<String, Integer> call(Integer integer) throws Exception {
        return new Tuple2<String, Integer>(Integer.toString(integer) + " " + random.nextInt(10), random.nextInt(10));
    }
});
// Custom ordering: first part of the key descending, second part ascending.
JavaPairRDD<String, Integer> sortByKeyRDD = javaPairRDD.sortByKey(new Comparator<String>() {
    @Override
    public int compare(String o1, String o2) {
        String[] o1s = o1.split(" ");
        String[] o2s = o2.split(" ");
        if (o1s[0].compareTo(o2s[0]) == 0)
            return o1s[1].compareTo(o2s[1]);
        else
            return -o1s[0].compareTo(o2s[0]);
    }
});
System.out.println("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~" + sortByKeyRDD.collect());

The code above is syntactically fine, but running it produces the following error:

java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:248)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:158)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:107)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:166)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:107)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:166)
    at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:107)
    at org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:66)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:41)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:81)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:312)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:305)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1891)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1764)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1779)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:885)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:884)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:335)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:47)

So I went back to the Spark Java API documentation, but found nothing there that pointed to the mistake. That left digging into the source. In JavaPairRDD:

def sortByKey(comp: Comparator[K], ascending: Boolean): JavaPairRDD[K, V] = {
  implicit val ordering = comp  // Allow implicit conversion of Comparator to Ordering.
  fromRDD(new OrderedRDDFunctions[K, V, (K, V)](rdd).sortByKey(ascending))
}

The OrderedRDDFunctions class has an implicit member, private val ordering = implicitly[Ordering[K]], which is the default sort order; the comp we pass in replaces that default. Nothing obviously wrong so far, but note that OrderedRDDFunctions extends Logging with Serializable. Going back to the error above and scanning for "Serializable", the cause becomes clear: our Comparator implementation does not extend Serializable. All that is needed is a serializable comparator, e.g. public interface SerializableComparator<T> extends Comparator<T>, Serializable { }.
Concretely:

private static class Comp implements Comparator<String>, Serializable {
    @Override
    public int compare(String o1, String o2) {
        String[] o1s = o1.split(" ");
        String[] o2s = o2.split(" ");
        if (o1s[0].compareTo(o2s[0]) == 0)
            return o1s[1].compareTo(o2s[1]);
        else
            return -o1s[0].compareTo(o2s[0]);
    }
}

JavaPairRDD<String, Integer> sortByKeyRDD = javaPairRDD.sortByKey(new Comp());
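
The SerializableComparator interface mentioned above offers a slightly more compact variant. The snippet below is only a sketch that reuses the javaPairRDD from the example; it relies on the fact that a Java 8+ lambda whose target type extends Serializable is itself serializable:

// A comparator that is also Serializable, as described above.
public interface SerializableComparator<T> extends Comparator<T>, Serializable { }

// Because the target type extends Serializable, this lambda can be shipped to executors.
SerializableComparator<String> comp = (o1, o2) -> {
    String[] o1s = o1.split(" ");
    String[] o2s = o2.split(" ");
    if (o1s[0].compareTo(o2s[0]) == 0)
        return o1s[1].compareTo(o2s[1]);
    return -o1s[0].compareTo(o2s[0]);
};
JavaPairRDD<String, Integer> sortedRDD = javaPairRDD.sortByKey(comp);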

To sum up: when passing a Comparator to the Spark Java API, check whether it has to be serializable; operations such as sortByKey() and repartitionAndSortWithinPartitions() require a serializable comparator. A minimal sketch for the latter follows below.
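
For instance, repartitionAndSortWithinPartitions() can reuse the serializable Comp class defined above. This is only an illustrative sketch; the HashPartitioner and the partition count of 2 are arbitrary choices, not part of the original example:

// Hash-partition the pairs into 2 partitions (org.apache.spark.HashPartitioner) and
// sort the keys within each partition using the serializable Comp comparator above.
JavaPairRDD<String, Integer> partitionedAndSorted =
        javaPairRDD.repartitionAndSortWithinPartitions(new HashPartitioner(2), new Comp());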
