
GridGain: A Hadoop Accelerator



In-memory data grid (IMDG) products such as GridGain can do more than serve as a simple cache: accelerating MapReduce computation in Hadoop is one of their highlights. This gives the in-memory computing field another approach and another option, rather than leaving Spark to dominate alone. For an introduction to GridGain's features, see "Open-Source IMDG: GridGain".


1. Installing Hadoop 2.7.1

A long time ago I wrote "Getting Started with Hadoop (1): Hadoop Pseudo-Distributed Installation", back when the version was still 0.20. Now it is already at 2.7.1; Hadoop really does move fast! So the first half of this article focuses on how to set up a pseudo-distributed cluster with the latest 2.7.1 release.

1.1 Passwordless SSH

Configure passwordless SSH login for the current user, then run ssh localhost to check that no password prompt remains.

[root@vm Software]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
28:58:5c:c8:0a:b3:52:83:4f:c1:9a:71:65:12:61:b1 root@BC-VM-edce4ac67d304079868c0bb265337bd4
The key's randomart image is:
+--[ RSA 2048]----+
|  oBBo..         |
|=.*=o.           |
| %Eoo            |
|= oo .           |
|. . . . S        |
| .               |
|                 |
|                 |
|                 |
+-----------------+

[root@vm Software]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[root@vm Software]# ssh localhost
Last login: Wed Sep 9 15:43:19 2015 from localhost

1.2 Environment Variables

Edit ~/.bash_profile or /etc/profile and add a HADOOP_HOME environment variable. Since many startup scripts live in the sbin directory, add both sbin and bin to PATH:

export HADOOP_HOME=/home/hadoop-2.7.1
export PATH=$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$PATH
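After editing, reload the profile so the variables take effect in the current shell, and verify with hadoop version:

source ~/.bash_profile
hadoop version   # the first line of output should read: Hadoop 2.7.1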

Edit etc/hadoop/hadoop-env.sh. If JAVA_HOME is not set in your environment, or you want to point Hadoop at a specific JDK, replace the default export JAVA_HOME=${JAVA_HOME} line with an explicit path (the path below is just an example):

export JAVA_HOME=/usr/java/jdk1.7.0_71
Hadoop's Java version requirements

"Hadoop requires Java 7 or a late version of Java 6. It is built and tested on both OpenJDK and Oracle (HotSpot)'s JDK/JRE." As this official description shows, Hadoop runs fine on OpenJDK as well as Oracle's JDK or JRE, and the supported versions are the last few updates of Java 6 plus anything from Java 7 up. Note, however, that starting with Hadoop 2.7, Java 7 or later is required.
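As a quick sanity check before installing, you can verify the JVM version from the shell. A minimal sketch (it assumes the pre-Java 9 "1.x" version string that java -version printed at the time):

java -version 2>&1 | head -n 1        # e.g. java version "1.7.0_71"
# extract the minor digit of the "1.x" scheme and complain if it is below 7
ver=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}' | cut -d. -f2)
[ "$ver" -ge 7 ] || echo "Hadoop 2.7+ requires Java 7 or later"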

1.3 core-site.xml

Edit etc/hadoop/core-site.xml. hadoop.tmp.dir sets the base directory for HDFS data, and fs.defaultFS tells clients where the NameNode's RPC endpoint is:

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/opt/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

1.4 hdfs-site.xml

Edit etc/hadoop/hdfs-site.xml. A pseudo-distributed cluster has only one DataNode, so set the replication factor to 1:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

1.5 yarn-site.xml

Edit etc/hadoop/yarn-site.xml. (Strictly speaking, the official pseudo-distributed setup guide puts mapreduce.framework.name in etc/hadoop/mapred-site.xml instead; placing it here also worked in this test, as the YARN job submission below shows.)

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

With that, a pseudo-distributed Hadoop cluster is fully configured!


2. Starting the Hadoop Cluster

2.1 Formatting the NameNode

Before starting Hadoop for the first time, be sure to format the NameNode:

[root@vm hadoop-2.7.1]# hdfs namenode -format
15/09/09 13:03:08 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = BC-vm/192.168.1.111
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   classpath = /root/Software/hadoop-2.7.1/etc/hadoop:/root/Software/hadoop-2.7.1/share/hadoop/common/lib/commons-digester-1.8.jar:...
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_71
************************************************************/
15/09/09 13:03:08 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/09/09 13:03:08 INFO namenode.NameNode: createNameNode [-format]
15/09/09 13:03:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-7fbd2609-fb3e-459d-bbcf-c24d32473ffb
...
15/09/09 13:03:09 INFO util.ExitUtil: Exiting with status 0
15/09/09 13:03:09 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at BC-vm/192.168.1.111
************************************************************/

2.2 Starting HDFS

Note: sbin/start-all.sh itself states "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh", so don't use it to start Hadoop anymore. After a successful start, check the running Java processes with the jps command; there should be three: NameNode, SecondaryNameNode, and DataNode.

[root@vm hadoop-2.7.1]# start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /root/Software/hadoop-2.7.1/logs/hadoop-root-namenode-BC-VM-edce4ac67d304079868c0bb265337bd4.out
localhost: starting datanode, logging to /root/Software/hadoop-2.7.1/logs/hadoop-root-datanode-BC-VM-edce4ac67d304079868c0bb265337bd4.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /root/Software/hadoop-2.7.1/logs/hadoop-root-secondarynamenode-BC-VM-edce4ac67d304079868c0bb265337bd4.out

[root@BC-vm hadoop-2.7.1]# jps
20128 Jps
19825 DataNode
19688 NameNode
20007 SecondaryNameNode

2.3 Starting YARN

Hadoop 2 split resource management out into a separate component, YARN (Yet Another Resource Negotiator). After starting YARN you will see two more Java processes: NodeManager and ResourceManager.

[root@vm hadoop-2.7.1]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /root/Software/hadoop-2.7.1/logs/yarn-root-resourcemanager-BC-VM-edce4ac67d304079868c0bb265337bd4.out
localhost: starting nodemanager, logging to /root/Software/hadoop-2.7.1/logs/yarn-root-nodemanager-BC-VM-edce4ac67d304079868c0bb265337bd4.out

[root@vm hadoop-2.7.1]# jps
20212 ResourceManager
19825 DataNode
20630 Jps
19688 NameNode
20007 SecondaryNameNode
20507 NodeManager

Detailed logs are all under HADOOP_HOME/logs.
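For example, to follow the NameNode daemon log while troubleshooting (the exact file name varies with the user and hostname):

tail -f $HADOOP_HOME/logs/hadoop-root-namenode-*.log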

3. Testing MapReduce

As usual, let's take the classic WordCount example for a quick test of Hadoop 2's performance.

3.1 Uploading the Data File

big.txt again serves as the test file; I used it before in "Applications of Tries and Optimizing a Spelling Corrector", which may be worth a look. Also note that the output folder must not be created in advance, or Hadoop will report an error saying it already exists.

[root@vm hadoop-2.7.1]# wget http://www.norvig.com/big.txt
[root@vm hadoop-2.7.1]# hadoop fs -mkdir -p /test/wordcount/input
[root@vm hadoop-2.7.1]# hadoop fs -put big.txt /test/wordcount/input
[root@vm hadoop-2.7.1]# hadoop fs -ls /test/wordcount/input
Found 1 items
-rw-r--r--   1 root supergroup        124 2015-09-09 14:21 /test/wordcount/input/big.txt

3.2 Running the WordCount Job

Same place as always: the WordCount job lives in share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar. big.txt is only a bit over 6 MB, so the run is quick: roughly 7 seconds to start up and 15 seconds to compute, about 22 seconds in total. You can generate a roughly 1 GB bigbig.txt test file with seq 150 | xargs -i cat big.txt >> bigbig.txt; on that input Hadoop took 214 seconds.

[root@vm hadoop-2.7.1]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /test/wordcount/input /test/wordcount/output
15/09/09 15:23:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/09/09 15:23:51 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/09/09 15:23:52 INFO input.FileInputFormat: Total input paths to process : 1
15/09/09 15:23:52 INFO mapreduce.JobSubmitter: number of splits:1
15/09/09 15:23:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1441775536578_0003
15/09/09 15:23:52 INFO impl.YarnClientImpl: Submitted application application_1441775536578_0003
15/09/09 15:23:52 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1441775536578_0003/
15/09/09 15:23:52 INFO mapreduce.Job: Running job: job_1441775536578_0003
15/09/09 15:23:57 INFO mapreduce.Job: Job job_1441775536578_0003 running in uber mode : false
15/09/09 15:23:57 INFO mapreduce.Job:  map 0% reduce 0%
15/09/09 15:24:05 INFO mapreduce.Job:  map 100% reduce 0%
15/09/09 15:24:12 INFO mapreduce.Job:  map 100% reduce 100%
15/09/09 15:24:12 INFO mapreduce.Job: Job job_1441775536578_0003 completed successfully
15/09/09 15:24:12 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=1251830
		FILE: Number of bytes written=2734521
...

3.3 Verifying the Result

Now let's check the result. Listing the top 20 words with sort and head (sort -rn -k 2 sorts in reverse numeric order by the second column, i.e. the count), they are, unsurprisingly, all function words:

[root@vm hadoop-2.7.1]# hadoop fs -cat /test/wordcount/output/part-r-00000 | sort -rn -k 2 | head -n 20
the	71744
of	39169
and	35968
to	27895
a	19811
in	19515
that	11216
was	11129
his	9561
he	9362
with	9358
is	9247
as	7333
had	7275
it	6545
by	6384
for	6358
at	6237
not	6201
The	6149

Repeating the test is easy: delete the output folder with hadoop fs -rm -r /test/wordcount/output and the WordCount job can be run again; a small helper is sketched below.
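A minimal helper for timed reruns might look like the following sketch (the rerun-wordcount.sh name is just for illustration; run it from HADOOP_HOME):

#!/bin/bash
# rerun-wordcount.sh: drop the old output directory, then time a fresh run
hadoop fs -rm -r /test/wordcount/output
time hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar \
    wordcount /test/wordcount/input /test/wordcount/output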


4. Using the GridGain Accelerator

After all the groundwork above, we finally arrive at the main topic of this post.

4.1 Installing GridGain

First download GridGain's Hadoop Acceleration edition. This is a separate distribution, not the same "fabric" edition used when exploring GridGain's data-grid features.

GridGain has a few environment requirements:

  • Java 7 or later
  • JAVA_HOME set, pointing at a JDK or JRE
  • Hadoop 2.2 or later
  • HADOOP_HOME set (a sample export is sketched right after this list)
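For reference, the corresponding environment setup might look like this (the paths below are examples from this article's VM, not requirements; substitute your own install locations):

export JAVA_HOME=/usr/java/jdk1.7.0_71
export HADOOP_HOME=/root/Software/hadoop-2.7.1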

Now run the bin/setup-hadoop.sh script, which symlinks the Ignite JARs into Hadoop and replaces Hadoop's configuration files:

[root@vm gridgain-community-hadoop-1.3.3]# bin/setup-hadoop.sh

   __________  ________________
  /  _/ ___/ |/ /  _/_  __/ __/
 _/ // (7 7    // /  / / / _/
/___/\___/_/|_/___/ /_/ /___/
       for Apache Hadoop
ver. 1.3.3#20150803-sha1:7d747d2a
2015 Copyright(C) Apache Software Foundation

> IGNITE_HOME is set to '/root/Software/gridgain-community-hadoop-1.3.3'.
> HADOOP_HOME is set to '/root/Software/hadoop-2.7.1'.
> HADOOP_COMMON_HOME is not set, will use '/root/Software/hadoop-2.7.1/share/hadoop/common'.
< Ignite JAR files are not found in Hadoop 'lib' directory. Create appropriate symbolic links? (Y/N): Y
> Yes.
> Creating symbolic link '/root/Software/hadoop-2.7.1/share/hadoop/common/lib/ignite-shmem-1.0.0.jar'.
> Creating symbolic link '/root/Software/hadoop-2.7.1/share/hadoop/common/lib/ignite-core-1.3.3.jar'.
> Creating symbolic link '/root/Software/hadoop-2.7.1/share/hadoop/common/lib/ignite-hadoop-1.3.3.jar'.
< Replace 'core-site.xml' and 'mapred-site.xml' files with preconfigured templates (existing files will be backed up)? (Y/N): Y
> Yes.
> Replacing file '/root/Software/hadoop-2.7.1/etc/hadoop/core-site.xml'.
> Replacing file '/root/Software/hadoop-2.7.1/etc/hadoop/mapred-site.xml'.
> Apache Hadoop setup is complete.

After the setup completes, first start two GridGain nodes:

[root@vm gridgain-community-hadoop-1.3.3]# nohup bin/ignite.sh &
[root@vm gridgain-community-hadoop-1.3.3]# nohup bin/ignite.sh &
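Both nodes run as ordinary JVM processes, so jps should show them alongside the Hadoop daemons (assuming GridGain's ignite.sh uses Ignite's command-line startup class, each node would appear as a CommandLineStartup entry):

jps   # expect two CommandLineStartup entries, one per GridGain node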

Then start Hadoop:

[root@BC-VM-edce4ac67d304079868c0bb265337bd4 hadoop-2.7.1]# start-dfs.sh
15/09/09 17:11:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to /root/Software/hadoop-2.7.1/logs/hadoop-root-namenode-BC-VM-edce4ac67d304079868c0bb265337bd4.out
localhost: starting datanode, logging to /root/Software/hadoop-2.7.1/logs/hadoop-root-datanode-BC-VM-edce4ac67d304079868c0bb265337bd4.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /root/Software/hadoop-2.7.1/logs/hadoop-root-secondarynamenode-BC-VM-edce4ac67d304079868c0bb265337bd4.out
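The "Incorrect configuration" warning is expected here: the preconfigured core-site.xml no longer points fs.defaultFS at hdfs://localhost:9000 but routes the default file system through IGFS, the in-memory file system of GridGain/Ignite. The replaced file should look roughly like this sketch, based on the Apache Ignite Hadoop Accelerator template (check the generated etc/hadoop/core-site.xml for the exact contents):

<configuration>
    <!-- default FS goes through IGFS, which is backed by the running GridGain nodes -->
    <property>
        <name>fs.default.name</name>
        <value>igfs://igfs@localhost</value>
    </property>
    <!-- IGFS bindings for the old and new Hadoop FileSystem APIs -->
    <property>
        <name>fs.igfs.impl</name>
        <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
    </property>
    <property>
        <name>fs.AbstractFileSystem.igfs.impl</name>
        <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
    </property>
</configuration>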

4.2 Running the Test

Now test the GridGain accelerator by running the job exactly as before. In my VM the results were underwhelming: on one to two GB of data, the GridGain accelerator, whether with one node or two, performed about the same as plain Hadoop, and sometimes slower. It may be an environment or implementation issue, or perhaps the difference only shows up clearly on much larger datasets.

[root@BC-VM-edce4ac67d304079868c0bb265337bd4 hadoop-2.7.1]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /test/wordcount/input /test/wordcount/output
15/09/09 15:58:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/09/09 15:58:58 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/09/09 15:58:59 INFO input.FileInputFormat: Total input paths to process : 1
15/09/09 15:58:59 INFO mapreduce.JobSubmitter: number of splits:9
15/09/09 15:59:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1441785297218_0002
15/09/09 15:59:00 INFO impl.YarnClientImpl: Submitted application application_1441785297218_0002
15/09/09 15:59:00 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1441785297218_0002/
15/09/09 15:59:00 INFO mapreduce.Job: Running job: job_1441785297218_0002
15/09/09 15:59:07 INFO mapreduce.Job: Job job_1441785297218_0002 running in uber mode : false
15/09/09 15:59:07 INFO mapreduce.Job:  map 0% reduce 0%
15/09/09 15:59:20 INFO mapreduce.Job:  map 2% reduce 0%
15/09/09 15:59:23 INFO mapreduce.Job:  map 3% reduce 0%
...
15/09/09 16:01:24 INFO mapreduce.Job:  map 96% reduce 26%
15/09/09 16:01:26 INFO mapreduce.Job:  map 96% reduce 30%
15/09/09 16:01:28 INFO mapreduce.Job:  map 100% reduce 30%
15/09/09 16:01:29 INFO mapreduce.Job:  map 100% reduce 45%
15/09/09 16:01:31 INFO mapreduce.Job:  map 100% reduce 100%
15/09/09 16:01:31 INFO mapreduce.Job: Job job_1441785297218_0002 completed successfully

Summary

This post walked through building a pseudo-distributed Hadoop 2.7.1 cluster, running WordCount on it, and then plugging in GridGain's Hadoop accelerator. In this small single-VM test the accelerator brought no clear speedup, but IMDG-based MapReduce acceleration remains an interesting alternative to Spark that is worth evaluating on larger workloads.