Flink WordCount Example

This article walks through a Flink WordCount example: the Maven setup, the Scala streaming job, a local test over a socket, and submitting the jar to a cluster.

pom


The complete pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.msb</groupId>
    <artifactId>StudyFlink</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <flink.version>1.9.2</flink.version>
        <scala.version>2.11.8</scala.version>
        <redis.version>3.2.0</redis.version>
        <hbase.version>1.3.3</hbase.version>
        <mysql.version>5.1.44</mysql.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.bahir</groupId>
            <artifactId>flink-connector-redis_2.11</artifactId>
            <version>1.0</version>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>${redis.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>${hbase.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-filesystem_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-planner_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
            <version>${flink.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- When a Maven project mixes Java and Scala sources, the
                 maven-scala-plugin compiles and packages both together. -->
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <version>2.15.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- Plugin needed to build the jar. -->
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <!-- Setting this to false strips the "-jar-with-dependencies"
                         suffix from the assembled jar's file name. -->
                    <!--<appendAssemblyId>false</appendAssemblyId>-->
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>assembly</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Scala code

The complete code:

package com.zxl.stream

import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.streaming.api.scala._

object WordCount {
  def main(args: Array[String]): Unit = {
    // Prepare the environment
    /**
     * createLocalEnvironment          creates a local execution environment
     * createLocalEnvironmentWithWebUI creates a local execution environment and also opens the Web UI on port 8081
     * getExecutionEnvironment         creates the context based on where the job runs, e.g. local or cluster
     */
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    /**
     * DataStream: a stream of elements of the same type.
     * If the source is a socket, its parallelism can only be 1.
     */
    val initStream: DataStream[String] = env.socketTextStream("node01", 8888)
    val wordStream = initStream.flatMap(_.split(" ")).setParallelism(3)
    val pairStream = wordStream.map((_, 1)).setParallelism(3)
    val keyByStream = pairStream.keyBy(0)
    val restStream = keyByStream.sum(1).setParallelism(3)
    restStream.print()

    /**
     * 6> (msb,1)
     * 1> (,,1)
     * 3> (hello,1)
     * 3> (hello,2)
     * 6> (msb,2)
     * Computation is stateful by default.
     * "6>" identifies which thread produced the record.
     * The same key is always handled by the same thread.
     */

    // Launch the Flink job
    env.execute("first flink job")
  }
}

Running the test

Local run

First start a netcat listener on port 8888 (-l listen, -k keep accepting connections):

nc -lk 8888

Then run the main method.

As you type data into the netcat session, the stream computation runs in real time.

Computation is stateful by default: the previous result for each key is retained and updated.

6> (msb,1)
1> (,,1)
3> (hello,1)
3> (hello,2)
6> (msb,2)

The "n>" prefix identifies the thread that produced each record; the same key is always processed by the same thread.

More threads are not automatically better: with too many, starting the threads can cost more time than the computation itself. With a parallelism of 1, a single thread handles all processing, and the output no longer carries a thread-number prefix, as in the sketch below.
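A minimal sketch of the single-threaded variant (reusing the job above; the object name WordCountParallelismOne is illustrative):

package com.zxl.stream

import org.apache.flink.streaming.api.scala._

object WordCountParallelismOne {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Force every operator to run as a single parallel instance;
    // print() then emits plain "(word,count)" lines without an "n>" prefix.
    env.setParallelism(1)

    env.socketTextStream("node01", 8888)
      .flatMap(_.split(" "))
      .map((_, 1))
      .keyBy(0)
      .sum(1)
      .print()

    env.execute("wordcount parallelism 1")
  }
}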

Running the jar on a cluster

Package the project with Maven's package phase.
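For example, from the project root (standard Maven usage; the jar names below follow from the artifactId and version in the pom):

mvn clean package

With the assembly plugin configured as above, the build should leave two jars in target/: StudyFlink-1.0-SNAPSHOT.jar and StudyFlink-1.0-SNAPSHOT-jar-with-dependencies.jar.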


Pick the plain jar, not the jar-with-dependencies one: the cluster environment already provides these dependencies, so bundling them again would duplicate them.

Submitting the job from the command line

Upload the jar to a cluster node and run:

  • -c specifies the main class
  • -d runs the job in detached mode (in the background)

flink run -c <main-class> -d <path-to-jar>
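For this example, with the class and jar names taken from the code and pom above:

flink run -c com.zxl.stream.WordCount -d StudyFlink-1.0-SNAPSHOT.jar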

Check the Running Jobs page in the web UI.

Send data through the netcat session, click into the running job, and the output can be seen in the task's stdout.

Submitting jobs via the web UI

Job submission through the web UI can be switched off; it defaults to true (enabled):

vim conf/flink-conf.yaml

web.submit.enable: false   # disable submission through the web UI
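The change takes effect after restarting the cluster; with the standalone scripts that ship with Flink:

./bin/stop-cluster.sh
./bin/start-cluster.sh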


Viewing logs

The print() output of a cluster job goes to the TaskManager's stdout, which can be viewed on the TaskManager page of the web UI or in the .out files under the log/ directory.
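For example, on the node running the TaskManager (exact file names vary with user and host):

tail -f log/flink-*-taskexecutor-*.out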

Summary

This article walked through a Flink WordCount example end to end: the Maven pom, the Scala streaming job, stateful word counting over a netcat socket, parallelism and thread output, and packaging and submitting the jar to a cluster.
