

Calling Hadoop 2.6 from Java and Web Programs

Published: 2023/11/27

1. The Hadoop cluster

1.1 System and hardware configuration:

Hadoop version: 2.6. Three virtual machines: node101 (192.168.0.101), node102 (192.168.0.102), node103 (192.168.0.103), each with 2 GB of RAM and 1 CPU core.

node101: NodeManager, NameNode, ResourceManager, DataNode

node102: NodeManager, DataNode, SecondaryNameNode, JobHistoryServer

node103: NodeManager, DataNode

1.2 Problems encountered during configuration:

1) The NodeManager would not start.

The virtual machines were initially given 512 MB of RAM, so yarn.nodemanager.resource.memory-mb in yarn-site.xml was set to 512 (the default is 1024). The log showed this error:

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager, Registration of NodeManager failed, Message from ResourceManager: NodeManager from node101 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.

Raising the value to 1024 or above lets the NodeManager start normally; I set it to 2048.
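For reference, the corresponding yarn-site.xml entry with the 2048 MB value described above (adjust it to your VM size) would be:

```xml
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
```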

2) Jobs could be submitted but never started running.

a. Each VM here has only one core, while the yarn-site.xml default for yarn.nodemanager.resource.cpu-vcores is 8, which breaks resource allocation; set this parameter to 1.

b. The following error appeared:

is running beyond virtual memory limits. Current usage: 96.6 MB of 1.5 GB physical memory used; 1.6 GB of 1.5 GB virtual memory used. Killing container.

This is presumably caused by mismatched memory settings for map, reduce, and the NodeManager. I adjusted them for a long time without finding a combination that worked, so in the end I disabled the check by setting yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml; after that, jobs could be submitted and run.
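As a sanity check on the numbers in that log line: YARN derives each container's virtual-memory cap by multiplying its physical allocation by yarn.nodemanager.vmem-pmem-ratio (default 2.1). The sketch below is just that arithmetic (the class and method names are ours, not a Hadoop API); with a ratio of 1.0, as in the yarn-site.xml of section 1.3, a 1.5 GB container is capped at 1.5 GB of virtual memory, which the 1.6 GB in the log exceeds.

```java
public class VmemCap {
    // Virtual-memory cap YARN enforces on a container:
    // physical allocation (MB) * yarn.nodemanager.vmem-pmem-ratio.
    static double capMb(double physicalMb, double ratio) {
        return physicalMb * ratio;
    }

    public static void main(String[] args) {
        // 1.5 GB (1536 MB) physical allocation, as in the log message above
        System.out.println(capMb(1536, 2.1)); // default ratio: roughly 3226 MB cap
        System.out.println(capMb(1536, 1.0)); // ratio 1.0: a 1536 MB cap, easily exceeded
    }
}
```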

1.3 Configuration files (I would welcome advice on a resource configuration that avoids error b above, rather than simply disabling the check):

1) In hadoop-env.sh and yarn-env.sh, configure the JDK; also set HADOOP_HEAPSIZE and YARN_HEAPSIZE to 512.
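A sketch of the corresponding lines (the JAVA_HOME path is a placeholder; substitute your own JDK location):

```shell
# in hadoop-env.sh:
export JAVA_HOME=/path/to/your/jdk
export HADOOP_HEAPSIZE=512

# in yarn-env.sh:
export JAVA_HOME=/path/to/your/jdk
export YARN_HEAPSIZE=512
```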

2) hdfs-site.xml configures the data storage paths and the SecondaryNameNode host:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/hadoop/hdfs/name</value>
  <description>Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data/hadoop/hdfs/data</value>
  <description>Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
</property>

<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node102:50090</value>
</property>

3) core-site.xml configures the NameNode:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node101:8020</value>
</property>

4) mapred-site.xml configures the map and reduce resources:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
</property>

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>node102:10020</value>
  <description>MapReduce JobHistory Server IPC host:port</description>
</property>

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>

<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx512m</value>
</property>

<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx512m</value>
</property>

5) yarn-site.xml configures the ResourceManager and related resources:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>node101</value>
  <description>The hostname of the RM.</description>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>${yarn.resourcemanager.hostname}:8032</value>
  <description>The address of the applications manager interface in the RM.</description>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>${yarn.resourcemanager.hostname}:8030</value>
  <description>The address of the scheduler interface.</description>
</property>

<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>${yarn.resourcemanager.hostname}:8088</value>
  <description>The http address of the RM web application.</description>
</property>

<property>
  <name>yarn.resourcemanager.webapp.https.address</name>
  <value>${yarn.resourcemanager.hostname}:8090</value>
  <description>The https address of the RM web application.</description>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>${yarn.resourcemanager.hostname}:8031</value>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>${yarn.resourcemanager.hostname}:8033</value>
  <description>The address of the RM admin interface.</description>
</property>

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/hadoop/yarn/local</value>
  <description>List of directories to store localized files in. An application's localized file directory will be found in: ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}. Individual containers' work directories, called container_${contid}, will be subdirectories of this.</description>
</property>

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
  <description>Whether to enable log aggregation</description>
</property>

<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/data/tmp/logs</value>
  <description>Where to aggregate logs to.</description>
</property>

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
  <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
</property>

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>1.0</value>
</property>

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>1</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

2. Calling Hadoop 2.6 from Java to run an MR program:

Two changes are needed.

1) The Configuration in the driver program must be set up as follows:

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
conf.setBoolean("mapreduce.app-submission.cross-platform", true); // enable cross-platform job submission
conf.set("fs.defaultFS", "hdfs://node101:8020");                  // the NameNode
conf.set("mapreduce.framework.name", "yarn");                     // use the YARN framework
conf.set("yarn.resourcemanager.address", "node101:8032");         // the ResourceManager
conf.set("yarn.resourcemanager.scheduler.address", "node101:8030"); // the scheduler

2) Add the required classes to the classpath. (The original post showed these in screenshots that have not survived.)

Nothing else needs to change; with these two modifications the program runs.

3. Calling Hadoop 2.6 from a web program to run an MR program:

The Hadoop-calling code in the web program is the same as the Java version above, essentially unmodified; all the required jars are placed under lib.

One final observation: I ran three maps, but they were not evenly distributed. In one run node103 was assigned two maps and node101 one; in another run node101 got two and node103 one. In both runs node102 received no map task at all, which suggests something is still off in resource management and task assignment.
