
Getting Started with Spark (15): Finding the Minimum per Group

發(fā)布時(shí)間:2023/12/3 编程问答 26 豆豆
生活随笔 收集整理的這篇文章主要介紹了 Spark入门(十五)之分组求最小值 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

1. Finding the minimum per group

計(jì)算文本里面的每個(gè)key分組求最小值,輸出結(jié)果。


2. Maven configuration

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.mk</groupId>
    <artifactId>spark-test</artifactId>
    <version>1.0</version>
    <name>spark-test</name>
    <url>http://spark.mk.com</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <scala.version>2.11.1</scala.version>
        <spark.version>2.4.4</spark.version>
        <hadoop.version>2.6.0</hadoop.version>
    </properties>

    <dependencies>
        <!-- Scala dependency -->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <!-- Spark dependencies -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <pluginManagement>
            <plugins>
                <plugin>
                    <artifactId>maven-clean-plugin</artifactId>
                    <version>3.1.0</version>
                </plugin>
                <plugin>
                    <artifactId>maven-resources-plugin</artifactId>
                    <version>3.0.2</version>
                </plugin>
                <plugin>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.8.0</version>
                </plugin>
                <plugin>
                    <artifactId>maven-surefire-plugin</artifactId>
                    <version>2.22.1</version>
                </plugin>
                <plugin>
                    <artifactId>maven-jar-plugin</artifactId>
                    <version>3.0.2</version>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>
</project>


3. Code

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.sql.SparkSession;

import scala.Tuple2;

public class GroupByMinApp implements SparkConfInfo {
    public static void main(String[] args) {
        String filePath = "E:\\spark\\groubByNumber.txt";
        SparkSession sparkSession = new GroupByMinApp().getSparkConf("groubByNumber");
        JavaPairRDD<String, Integer> numbers = sparkSession.sparkContext()
                .textFile(filePath, 4)
                .toJavaRDD()
                .flatMap(v -> Arrays.asList(v.split("\n")).iterator())
                .mapToPair(v -> {
                    String[] data = v.split("\\s+");
                    if (data.length != 2) {
                        return null;
                    }
                    if (!data[1].matches("-?[0-9]+(\\.[0-9]+)?"))
                        return null;
                    return new Tuple2<>(data[0], Integer.valueOf(data[1]));
                })
                .filter(v -> v != null)
                .cache();

        // groupByKey materializes every value per key, which can overflow
        // memory on large data sets:
//        numbers.groupByKey()
//                .sortByKey(true)
//                .mapValues(v -> {
//                    Integer min = null;
//                    Iterator<Integer> it = v.iterator();
//                    while (it.hasNext()) {
//                        Integer val = it.next();
//                        if (min == null || min > val) {
//                            min = val;
//                        }
//                    }
//                    return min;
//                })
//                .collect()
//                .forEach(v -> System.out.println(v._1 + ":" + v._2));

        // combineByKey aggregates within each partition first, then merges
        numbers.combineByKey(
                min -> min,               // createCombiner: the first value is the initial minimum
                (min, val) -> {           // mergeValue: within-partition aggregation
                    if (min > val) {
                        min = val;
                    }
                    return min;
                },
                (a, b) -> Math.min(a, b)) // mergeCombiners: cross-partition aggregation
                .sortByKey(true)
                .collect()
                .forEach(v -> System.out.println(v._1 + ":" + v._2));

        sparkSession.stop();
    }
}

public interface SparkConfInfo {
    default SparkSession getSparkConf(String appName) {
        SparkConf sparkConf = new SparkConf();
        if (System.getProperty("os.name").toLowerCase().contains("win")) {
            sparkConf.setMaster("local[4]");
            System.out.println("Using local mode to simulate Spark");
        } else {
            sparkConf.setMaster("spark://hadoop01:7077,hadoop02:7077,hadoop03:7077");
            // local IP; must be mutually reachable with the Spark cluster, e.g. on the same LAN
            sparkConf.set("spark.driver.host", "192.168.150.1");
            // path of the jar produced by the project build
            sparkConf.setJars(new String[]{".\\out\\artifacts\\spark_test\\spark-test.jar"});
        }
        return SparkSession.builder()
                .appName(appName)
                .config(sparkConf)
                .getOrCreate();
    }
}
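The per-key-minimum logic can be sanity-checked without a Spark cluster. The sketch below is plain Java, not Spark API; the class and method names (`GroupMinSketch`, `minByKey`) are made up for illustration. It parses the same "key value" line format and keeps the smaller value per key with a merge function, mirroring what the job computes.

```java
import java.util.*;
import java.util.stream.*;

public class GroupMinSketch {

    // Group "key value" lines and keep the minimum value per key,
    // mirroring the Spark job's result on a single JVM.
    static Map<String, Integer> minByKey(List<String> lines) {
        return lines.stream()
                .map(line -> line.trim().split("\\s+"))
                .filter(p -> p.length == 2 && p[1].matches("-?[0-9]+"))
                .collect(Collectors.toMap(
                        p -> p[0],                  // key
                        p -> Integer.valueOf(p[1]), // value
                        Integer::min,               // on duplicate key, keep the smaller value
                        TreeMap::new));             // TreeMap sorts keys, like sortByKey(true)
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("A 100", "A 24", "B 43", "B 6", "C 774");
        minByKey(lines).forEach((k, v) -> System.out.println(k + ":" + v));
    }
}
```

The merge function passed to `Collectors.toMap` plays the same role as `mergeValue`/`mergeCombiners` in the Spark job: whenever a key is seen again, the smaller value wins.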

Contents of groubByNumber.txt:

A 100
A 24
B 43
C 774
D 43
D 37
D 78
E 42
C 68
F 89
G 49
F 543
H 36
E 888
A 258
A 538
B 79
B 6
H 67
C 99

Output:

A:24
B:6
C:68
D:37
E:42
F:89
G:49
H:36


4. The combineByKey method

<C> JavaPairRDD<K, C> combineByKey(Function<V, C> createCombiner, Function2<C, V, C> mergeValue, Function2<C, C, C> mergeCombiners);


The three parameters are:

* `createCombiner`, which turns a V into a C (e.g., creates a one-element list).
Called with the first value seen for a key in a partition; you can do extra work here (such as a type conversion) and return the result. This step acts as initialization.
* `mergeValue`, to merge a V into a C (e.g., adds it to the end of a list).
Merges a value V into the combiner C produced by `createCombiner`. This runs within each partition.
* `mergeCombiners`, to combine two C's into a single one.
Merges two combiners C. This runs across partitions.


Summary

This concludes "Getting Started with Spark (15): Finding the Minimum per Group". The key point: `combineByKey` aggregates values within each partition before merging across partitions, so it avoids the memory pressure that `groupByKey` causes on large data sets when computing a per-key minimum.