Hadoop Installation and Deployment (Pseudo-Distributed and Cluster)
@(HADOOP)[hadoop]
- Hadoop installation and deployment (pseudo-distributed and cluster)
    - Part One: Pseudo-Distributed Mode
        - I. Environment Preparation
        - II. Installing HDFS
        - III. Installing YARN
    - Part Two: Cluster Installation
        - I. Planning
            - (1) Hardware
            - (2) Basics
        - II. Environment Configuration
            - (1) Unify the username and password, and grant jediael permission to run all commands
            - (2) Create the directory /mnt/jediael
            - (3) Set the hostname and edit /etc/hosts
Part One: Pseudo-Distributed Mode

I. Environment Preparation

1. Install Linux and the JDK

2. Download hadoop-2.6.0 and unpack it

3. Configure passwordless SSH

(1) Check whether passwordless login already works: run `ssh localhost`; if it logs in without prompting for a password, skip the next step.

(2) If not:
```
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
```

4. Add the following to /etc/profile
```
# hadoop settings
export PATH=$PATH:/mnt/jediael/hadoop-2.6.0/bin:/mnt/jediael/hadoop-2.6.0/sbin
export HADOOP_HOME=/mnt/jediael/hadoop-2.6.0
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
```

II. Installing HDFS
1. Configure etc/hadoop/core-site.xml:
```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

2. Configure etc/hadoop/hdfs-site.xml:
```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```

3. Format the NameNode
```
$ bin/hdfs namenode -format
```

4. Start HDFS
```
$ sbin/start-dfs.sh
```

5. Open the web UI to verify that HDFS is up
http://localhost:50070/
6. Run the bundled example

(1) Create the HDFS working directories (per the stock example, e.g. `bin/hdfs dfs -mkdir -p /user/<username>`)

(2) Copy files into HDFS

```
$ bin/hdfs dfs -put etc/hadoop input
```

(3) Run the example
```
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input output 'dfs[a-z.]+'
```

(4) Check the output
```
$ bin/hdfs dfs -cat output/*
6 dfs.audit.logger
4 dfs.class
3 dfs.server.namenode.
2 dfs.period
2 dfs.audit.log.maxfilesize
2 dfs.audit.log.maxbackupindex
1 dfsmetrics.log
1 dfsadmin
1 dfs.servers
1 dfs.replication
1 dfs.file
```

(5) Stop HDFS
```
$ sbin/stop-dfs.sh
```

III. Installing YARN
1. Configure etc/hadoop/mapred-site.xml
```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```

2. Configure etc/hadoop/yarn-site.xml
```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

3. Start YARN
```
$ sbin/start-yarn.sh
```

4. Open the web UI to check YARN
http://localhost:8088/
5. Run a MapReduce job

View the results:

```
$ /mnt/jediael/hadoop-2.6.0/bin/hadoop fs -cat /output/*
```

Part Two: Cluster Installation
I. Planning

(1) Hardware
10.171.29.191 master
10.171.94.155 slave1
10.251.0.197 slave3
(2) Basics

User: jediael

Directory: /mnt/jediael/

II. Environment Configuration

(1) Unify the username and password, and grant jediael permission to run all commands
```
# passwd
# useradd jediael
# passwd jediael
# vi /etc/sudoers
```

Add the following line:
```
jediael ALL=(ALL) ALL
```

(2) Create the directory /mnt/jediael
```
$ sudo chown jediael:jediael /opt
$ cd /opt
$ sudo mkdir jediael
```

Note: /opt must be owned by jediael, or formatting the NameNode will fail later.
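The property that matters here is that the hadoop user owns (and can write to) the install directory. A self-contained sketch of that check, using a throwaway directory in place of /opt (the real commands above need root):

```shell
set -e
# stand-in for /opt; on the real machine the chown/mkdir above need sudo
opt=$(mktemp -d)
mkdir "$opt/jediael"
# `hdfs namenode -format` writes here, so verify write access up front
[ -w "$opt/jediael" ] && echo "install dir is writable"
```

On a real node, `ls -ld /opt/jediael` should show jediael as the owner before you format.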
(3) Set the hostname and edit /etc/hosts

1. Edit /etc/sysconfig/network

```
NETWORKING=yes
HOSTNAME=*******
```

2. Edit /etc/hosts
10.171.29.191 master
10.171.94.155 slave1
10.251.0.197 slave3
Note: the hosts file must not map these hostnames to 127.0.0.1, or connections will fail with errors like: `org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.171.29.191:9000. Already tried ...`
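The loopback pitfall above is easy to check for mechanically. A sketch using a temporary copy of the hosts file so it is self-contained (on a real node you would point it at /etc/hosts):

```shell
set -e
# build a sample hosts file matching the cluster plan above
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
10.171.29.191 master
10.171.94.155 slave1
10.251.0.197 slave3
EOF
# fail loudly if any cluster hostname is bound to the loopback address
if grep -E '^127\.0\.0\.1.*(master|slave)' "$hosts"; then
  echo "bad loopback mapping found" >&2
  exit 1
else
  echo "hosts file OK"
fi
```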
3. Set the hostname with the hostname command

```
hostname ****
```

(4) Configure passwordless SSH login
Run the key-generation commands (the same as in Part One) on master as the jediael user.

Notes:

(1) If you get an error that the .ssh directory does not exist, the machine has never used ssh; run ssh once and the .ssh directory will be created.

(2) The .ssh directory must be mode 700 and authorized_keys mode 600; passwordless login fails if the permissions are either looser or tighter than that.
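The permission requirements can be demonstrated on a throwaway directory (on a real host this would be `~/.ssh` and `~/.ssh/authorized_keys`):

```shell
set -e
# throwaway stand-in for the user's home directory
home=$(mktemp -d)
mkdir "$home/.ssh"
touch "$home/.ssh/authorized_keys"
# the modes sshd expects for passwordless login
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
ls -ld "$home/.ssh" | cut -c1-10                  # drwx------
ls -l "$home/.ssh/authorized_keys" | cut -c1-10   # -rw-------
```

sshd (with StrictModes, the default) refuses keys whose files are group- or world-readable, which is why looser permissions silently fall back to password prompts.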
(5) Install Java on each of the three machines and set the related environment variables

See http://blog.csdn.net/jediael_lu/article/details/38925871

(6) Download hadoop-2.6.0.tar.gz and unpack it to /mnt/jediael

```
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
tar -zxvf hadoop-2.6.0.tar.gz
```
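It is worth verifying the tarball before unpacking it. A sketch of the mechanics with a local stand-in file (for the real release you would compare against the checksum file published alongside it on the Apache mirrors; the filenames below are illustrative):

```shell
set -e
# stand-in file in place of the real hadoop-2.6.0.tar.gz download
d=$(mktemp -d)
printf 'stand-in for hadoop-2.6.0.tar.gz\n' > "$d/hadoop-2.6.0.tar.gz"
# in practice the .sha256 file comes from the Apache distribution site
( cd "$d" && sha256sum hadoop-2.6.0.tar.gz > hadoop-2.6.0.tar.gz.sha256 )
# -c re-hashes the file and compares it to the recorded value
( cd "$d" && sha256sum -c hadoop-2.6.0.tar.gz.sha256 )
```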
III. Edit the Configuration Files

[Do this on all three machines; usually you complete the configuration on one machine and then copy it to the others with scp.]

(1) hadoop-env.sh (typically JAVA_HOME is set here)
(2) Edit core-site.xml

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/mnt/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>4096</value>
</property>
```

(3) Edit hdfs-site.xml
```xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```

(4) Edit mapred-site.xml
```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <final>true</final>
</property>
<property>
  <name>mapreduce.jobtracker.http.address</name>
  <value>master:50030</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>master:19888</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>http://master:9001</value>
</property>
```

(5) Edit yarn-site.xml
```xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master:8088</value>
</property>
```

(6) Edit slaves
The slaves file lists the worker hostnames, one per line; per the planning table above, here it contains slave1 and slave3.
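The "configure once, copy everywhere" step can be sketched as a loop over the slave hosts. Local directories stand in for the remote machines here so the sketch is runnable; on a real cluster the `cp -r` would be something like `scp -r etc/hadoop $h:/mnt/jediael/hadoop-2.6.0/etc/` (an assumption, not a command from the article):

```shell
set -e
# build a minimal config directory containing only the slaves file
conf=$(mktemp -d)/hadoop
mkdir -p "$conf"
printf 'slave1\nslave3\n' > "$conf/slaves"
# push the finished config to each "host"
for h in slave1 slave3; do
  dest=$(mktemp -d)/$h            # stand-in for the remote host
  mkdir -p "$dest"
  cp -r "$conf" "$dest/"          # scp -r on a real cluster
  echo "copied config to $h"
done
```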
IV. Start and Verify

1. Format the NameNode
```
[jediael@master hadoop-2.6.0]$ bin/hdfs namenode -format
```

2. Start Hadoop [this step is only run on master]
```
[jediael@master hadoop-2.6.0]$ sbin/start-all.sh
```

3. Verification 1: write to HDFS
```
[jediael@master hadoop-2.6.0]$ bin/hadoop fs -ls /
[jediael@master hadoop-2.6.0]$ bin/hadoop fs -mkdir /test
[jediael@master hadoop-2.6.0]$ bin/hadoop fs -ls /
Found 1 items
drwxr-xr-x   - jediael supergroup          0 2015-04-19 23:41 /test
```

4. Verification 2: open the web UI
NameNode: http://<master-ip>:50070
5. Check the Java processes on each host

(1) master:

(2) slave1:
```
$ jps
1913 NodeManager
2673 Jps
1801 DataNode
```

(3) slave3:
```
$ jps
1942 NodeManager
2252 Jps
1840 DataNode
```

V. Run a complete MapReduce program: the bundled wordcount example
```
$ bin/hadoop fs -mkdir /input
$ bin/hadoop fs -ls /
Found 2 items
drwxr-xr-x   - jediael supergroup          0 2015-04-20 18:04 /input
drwxr-xr-x   - jediael supergroup          0 2015-04-19 23:41 /test
$ bin/hadoop fs -copyFromLocal etc/hadoop/mapred-site.xml.template /input
$ pwd
/mnt/jediael/hadoop-2.6.0/share/hadoop/mapreduce
$ /mnt/jediael/hadoop-2.6.0/bin/hadoop jar hadoop-mapreduce-examples-2.6.0.jar wordcount /input /output
15/04/20 18:15:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/20 18:15:48 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/04/20 18:15:48 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/04/20 18:15:49 INFO input.FileInputFormat: Total input paths to process : 1
15/04/20 18:15:49 INFO mapreduce.JobSubmitter: number of splits:1
15/04/20 18:15:49 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local657082309_0001
15/04/20 18:15:50 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/04/20 18:15:50 INFO mapreduce.Job: Running job: job_local657082309_0001
15/04/20 18:15:50 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/04/20 18:15:50 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/04/20 18:15:50 INFO mapred.LocalJobRunner: Waiting for map tasks
15/04/20 18:15:50 INFO mapred.LocalJobRunner: Starting task: attempt_local657082309_0001_m_000000_0
15/04/20 18:15:50 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/04/20 18:15:50 INFO mapred.MapTask: Processing split: hdfs://master:9000/input/mapred-site.xml.template:0+2268
15/04/20 18:15:51 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/04/20 18:15:51 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/04/20 18:15:51 INFO mapred.MapTask: soft limit at 83886080
15/04/20 18:15:51 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/04/20 18:15:51 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/04/20 18:15:51 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/04/20 18:15:51 INFO mapred.LocalJobRunner:
15/04/20 18:15:51 INFO mapred.MapTask: Starting flush of map output
15/04/20 18:15:51 INFO mapred.MapTask: Spilling map output
15/04/20 18:15:51 INFO mapred.MapTask: bufstart = 0; bufend = 1698; bufvoid = 104857600
15/04/20 18:15:51 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26213916(104855664); length = 481/6553600
15/04/20 18:15:51 INFO mapred.MapTask: Finished spill 0
15/04/20 18:15:51 INFO mapred.Task: Task:attempt_local657082309_0001_m_000000_0 is done. And is in the process of committing
15/04/20 18:15:51 INFO mapred.LocalJobRunner: map
15/04/20 18:15:51 INFO mapred.Task: Task 'attempt_local657082309_0001_m_000000_0' done.
15/04/20 18:15:51 INFO mapred.LocalJobRunner: Finishing task: attempt_local657082309_0001_m_000000_0
15/04/20 18:15:51 INFO mapred.LocalJobRunner: map task executor complete.
15/04/20 18:15:51 INFO mapred.LocalJobRunner: Waiting for reduce tasks
15/04/20 18:15:51 INFO mapred.LocalJobRunner: Starting task: attempt_local657082309_0001_r_000000_0
15/04/20 18:15:51 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
15/04/20 18:15:51 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@39be5e01
15/04/20 18:15:51 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=363285696, maxSingleShuffleLimit=90821424, mergeThreshold=239768576, ioSortFactor=10, memToMemMergeOutputsThreshold=10
15/04/20 18:15:51 INFO reduce.EventFetcher: attempt_local657082309_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
15/04/20 18:15:51 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local657082309_0001_m_000000_0 decomp: 1566 len: 1570 to MEMORY
15/04/20 18:15:51 INFO reduce.InMemoryMapOutput: Read 1566 bytes from map-output for attempt_local657082309_0001_m_000000_0
15/04/20 18:15:51 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 1566, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->1566
15/04/20 18:15:51 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
15/04/20 18:15:51 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/04/20 18:15:51 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
15/04/20 18:15:51 INFO mapred.Merger: Merging 1 sorted segments
15/04/20 18:15:51 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 1560 bytes
15/04/20 18:15:51 INFO reduce.MergeManagerImpl: Merged 1 segments, 1566 bytes to disk to satisfy reduce memory limit
15/04/20 18:15:51 INFO reduce.MergeManagerImpl: Merging 1 files, 1570 bytes from disk
15/04/20 18:15:51 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
15/04/20 18:15:51 INFO mapred.Merger: Merging 1 sorted segments
15/04/20 18:15:51 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 1560 bytes
15/04/20 18:15:51 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/04/20 18:15:51 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
15/04/20 18:15:51 INFO mapreduce.Job: Job job_local657082309_0001 running in uber mode : false
15/04/20 18:15:51 INFO mapreduce.Job: map 100% reduce 0%
15/04/20 18:15:51 INFO mapred.Task: Task:attempt_local657082309_0001_r_000000_0 is done. And is in the process of committing
15/04/20 18:15:51 INFO mapred.LocalJobRunner: 1 / 1 copied.
15/04/20 18:15:51 INFO mapred.Task: Task attempt_local657082309_0001_r_000000_0 is allowed to commit now
15/04/20 18:15:51 INFO output.FileOutputCommitter: Saved output of task 'attempt_local657082309_0001_r_000000_0' to hdfs://master:9000/output/_temporary/0/task_local657082309_0001_r_000000
15/04/20 18:15:51 INFO mapred.LocalJobRunner: reduce > reduce
15/04/20 18:15:51 INFO mapred.Task: Task 'attempt_local657082309_0001_r_000000_0' done.
15/04/20 18:15:51 INFO mapred.LocalJobRunner: Finishing task: attempt_local657082309_0001_r_000000_0
15/04/20 18:15:51 INFO mapred.LocalJobRunner: reduce task executor complete.
15/04/20 18:15:52 INFO mapreduce.Job: map 100% reduce 100%
15/04/20 18:15:52 INFO mapreduce.Job: Job job_local657082309_0001 completed successfully
15/04/20 18:15:52 INFO mapreduce.Job: Counters: 38
	File System Counters
		FILE: Number of bytes read=544164
		FILE: Number of bytes written=1040966
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=4536
		HDFS: Number of bytes written=1196
		HDFS: Number of read operations=15
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=4
	Map-Reduce Framework
		Map input records=43
		Map output records=121
		Map output bytes=1698
		Map output materialized bytes=1570
		Input split bytes=114
		Combine input records=121
		Combine output records=92
		Reduce input groups=92
		Reduce shuffle bytes=1570
		Reduce input records=92
		Reduce output records=92
		Spilled Records=184
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=123
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=269361152
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=2268
	File Output Format Counters
$ /mnt/jediael/hadoop-2.6.0/bin/hadoop fs -cat /output/*
```
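The wordcount job above is essentially a distributed version of a classic shell pipeline. A local sketch of the same counting logic on a tiny stand-in file (the sample text is made up for illustration; the output pairs are count-first, the transpose of what the job writes):

```shell
set -e
# sample input standing in for the HDFS file the job processed
sample=$(mktemp)
printf 'dfs hadoop dfs\nyarn hadoop\n' > "$sample"
# one word per line -> group identical words -> count -> sort by count
tr -s ' \t' '\n' < "$sample" | sort | uniq -c | sort -rn
```

In MapReduce terms, `tr` plays the mapper (emit each word), `sort` is the shuffle, and `uniq -c` is the reducer (sum per key).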