3.6-3.8 Starting and Testing the Distributed Environment
I. HDFS
1. Format the filesystem
# On master
[root@master hadoop-2.5.0]# pwd
/opt/app/hadoop-2.5.0
[root@master hadoop-2.5.0]# bin/hdfs namenode -format
# Seeing "successfully" in the last few lines of output means the format worked

2. Start HDFS
# Start the HDFS daemons
[root@master hadoop-2.5.0]# sbin/start-dfs.sh

A cluster-wide batch command script (it could also be written as a loop):
vim xcall.sh
#!/bin/bash

params=$@
AP=$(which $@)

echo ====== master $params ======
ssh master $AP
echo ====== slave1 $params ======
ssh slave1 $AP
echo ====== slave2 $params ======
ssh slave2 $AP

Make it executable and add a symlink onto the PATH:
chmod +x xcall.sh
ln -s /usr/local/hadoop_shell/xcall.sh /usr/local/bin/xcall

Check that everything started:
[root@master hadoop-2.5.0]# xcall jps
====== master jps ======
6808 Jps
2549 DataNode
2425 NameNode
====== slave1 jps ======
5287 Jps
2324 DataNode
====== slave2 jps ======
2389 SecondaryNameNode
2327 DataNode
7120 Jps
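As noted above, the unrolled script can also be written as a loop. A minimal sketch (the function form and host list are our own illustration; host names come from this cluster, so adding a node only means extending the list):

```shell
#!/bin/bash
# Loop-based variant of xcall.sh (sketch, hosts: master, slave1, slave2)
xcall() {
    local cmd
    # Resolve the command to a path first, like the original script does,
    # because the ssh session's PATH may differ from the interactive one
    cmd=$(which "$1") || { echo "command not found: $1" >&2; return 1; }
    shift
    local host
    for host in master slave1 slave2; do
        echo "====== $host $cmd $* ======"
        ssh "$host" "$cmd" "$@"
    done
}
```

Symlink it onto the PATH the same way as the unrolled version.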
3. Directory and file operations

Create directories:
# Create the user's home directory
[root@master hadoop-2.5.0]# bin/hdfs dfs -mkdir -p /user/root/
# Create a test directory
[root@master hadoop-2.5.0]# bin/hdfs dfs -mkdir -p /user/root/tmp/conf
[root@master hadoop-2.5.0]# bin/hdfs dfs -ls -R /user
19/04/17 09:45:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
drwxr-xr-x   - root supergroup          0 2019-04-17 09:45 /user/root
drwxr-xr-x   - root supergroup          0 2019-04-17 09:45 /user/root/tmp
drwxr-xr-x   - root supergroup          0 2019-04-17 09:45 /user/root/tmp/conf

Upload test files:
[root@master hadoop-2.5.0]# bin/hdfs dfs -put etc/hadoop/*-site.xml /user/root/tmp/conf
[root@master hadoop-2.5.0]# bin/hdfs dfs -ls -R /user/root/tmp/conf
19/04/17 10:04:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-rw-r--r--   3 root supergroup       1083 2019-04-17 10:02 /user/root/tmp/conf/core-site.xml
-rw-r--r--   3 root supergroup        883 2019-04-17 10:02 /user/root/tmp/conf/hdfs-site.xml
-rw-r--r--   3 root supergroup        620 2019-04-17 10:02 /user/root/tmp/conf/httpfs-site.xml
-rw-r--r--   3 root supergroup       1069 2019-04-17 10:02 /user/root/tmp/conf/mapred-site.xml
-rw-r--r--   3 root supergroup       1372 2019-04-17 10:02 /user/root/tmp/conf/yarn-site.xml

II. YARN
1. Start YARN
# On slave1
[root@slave1 hadoop-2.5.0]# sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/app/hadoop-2.5.0/logs/yarn-root-resourcemanager-slave1.out
[root@slave1 hadoop-2.5.0]# sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /opt/app/hadoop-2.5.0/logs/yarn-root-nodemanager-slave1.out

# On master
[root@master hadoop-2.5.0]# sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /opt/app/hadoop-2.5.0/logs/yarn-root-nodemanager-master.out

# On slave2
[root@slave2 hadoop-2.5.0]# sbin/yarn-daemon.sh start nodemanager
starting nodemanager, logging to /opt/app/hadoop-2.5.0/logs/yarn-root-nodemanager-slave2.out

# Check from master
[root@master hadoop-2.5.0]# xcall jps
====== master jps ======
2549 DataNode
2425 NameNode
7919 Jps
7750 NodeManager
====== slave1 jps ======
5644 ResourceManager
5899 NodeManager
2324 DataNode
6094 Jps
====== slave2 jps ======
7743 Jps
2389 SecondaryNameNode
7575 NodeManager
2327 DataNode

Alternatively, start YARN from master with start-yarn.sh and then go to slave1 to start the resourcemanager.
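Eyeballing jps output across three nodes is error-prone. A small helper that checks one node's jps listing against the daemons you expect could look like this (a sketch; `check_daemons` is our own helper, not a Hadoop tool):

```shell
#!/bin/bash
# Verify that every expected daemon name appears in a jps listing.
# Usage: check_daemons "<jps output>" Daemon1 Daemon2 ...
check_daemons() {
    local listing="$1"; shift
    local d missing=0
    for d in "$@"; do
        # -w matches the daemon name as a whole word, so "NameNode"
        # does not accidentally match "SecondaryNameNode"
        if ! printf '%s\n' "$listing" | grep -qw "$d"; then
            echo "MISSING: $d"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "all expected daemons are running"
    fi
    return "$missing"
}
```

For example, `check_daemons "$(ssh slave1 jps)" ResourceManager NodeManager DataNode` would flag anything that failed to start on slave1.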
2. Test
# Create a test directory
[root@master hadoop-2.5.0]# bin/hdfs dfs -mkdir -p /user/root/mapreduce/wordcount/input
# Upload a test file
[root@master hadoop-2.5.0]# bin/hdfs dfs -put /opt/app/hadoop-2.5.0/wc.input /user/root/mapreduce/wordcount/input
# Run the MapReduce job on YARN
[root@master hadoop-2.5.0]# bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /user/root/mapreduce/wordcount/input /user/root/mapreduce/wordcount/output
# Check the result
[root@master hadoop-2.5.0]# bin/hdfs dfs -text /user/root/mapreduce/wordcount/output/part-r-00000
19/04/17 10:26:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
data    1
hadoop  1
hive    1
hue     1
node    2
yarn    2
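As a sanity check of the numbers, the wordcount job computes the same thing as this local pipeline (our own illustration; it obviously does not scale the way the MapReduce job does):

```shell
#!/bin/bash
# Local equivalent of the wordcount example: split input on whitespace,
# then count occurrences of each word (tab-separated, like part-r-00000)
wordcount() {
    tr -s ' \t' '\n' | sort | uniq -c | awk '{print $2 "\t" $1}'
}
```

Running `wordcount < wc.input` on master should produce the same counts as the `-text` output above.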
3. The YARN web UI

The resourcemanager runs on slave1, so entering slave1's IP and port (the resourcemanager web UI defaults to port 8088) in a browser should open the web page.
III. Cluster benchmarking
The basic testing is already done above.

Basic tests check whether the cluster works; benchmark tests measure how well it performs.
For HDFS that means:
- read throughput
- write throughput

There are plenty of examples online worth looking at.
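One widely used example is TestDFSIO, which ships in the jobclient tests jar of the Hadoop distribution. A sketch of the usual invocation (jar name and flags assumed from the 2.5.0 release; run the jar without arguments to confirm the exact usage on your version):

```shell
# HDFS write benchmark: 10 files of 128 MB each
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.0-tests.jar \
  TestDFSIO -write -nrFiles 10 -fileSize 128MB

# HDFS read benchmark over the files written above
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.0-tests.jar \
  TestDFSIO -read -nrFiles 10 -fileSize 128MB

# Remove the benchmark data when done
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.0-tests.jar \
  TestDFSIO -clean
```

Throughput and average IO rate are printed when each job finishes.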
IV. Cluster time synchronization
1. If synchronizing against an external time source is insecure or inconvenient, you can designate one machine on the internal network as the time server and have all the other machines sync against it on a schedule, for example once every ten minutes.
2. Here master serves as the time server.
# Check whether ntp is installed
[root@master hadoop-2.5.0]# rpm -qa | grep ntp
ntp-4.2.6p5-15.el6.centos.x86_64
fontpackages-filesystem-1.41-1.1.el6.noarch
ntpdate-4.2.6p5-15.el6.centos.x86_64

# Configure master as the time server: three changes in /etc/ntp.conf
vim /etc/ntp.conf

# Change 1: uncomment this line and set it to the cluster's subnet
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Change 2: comment out these lines
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Change 3: add these two lines
server 127.127.1.0    # local clock
fudge 127.127.1.0 stratum 10

# Also sync the hardware clock
[root@master hadoop-2.5.0]# vim /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
SYNC_HWCLOCK=yes    # add this line
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"

# Start ntpd and enable it at boot
[root@master hadoop-2.5.0]# service ntpd status
[root@master hadoop-2.5.0]# service ntpd start
[root@master hadoop-2.5.0]# chkconfig ntpd on
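Before wiring up the cron jobs, it is worth verifying the server manually (a sketch using the ntp and ntpdate packages shown installed above; note ntpd may need a few minutes after startup before it will serve time to clients):

```shell
# On master: list ntpd's peers; the local clock (127.127.1.0) should appear
ntpq -p

# On a slave: dry-run query against master (-q only queries, it does not set the clock)
/usr/sbin/ntpdate -q master
```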
3. Set up a time-sync cron job on the slaves

# slave1
[root@slave1 hadoop-2.5.0]# crontab -l
# Sync time from master, once every 10 minutes
0-59/10 * * * * /usr/sbin/ntpdate master

# slave2
[root@slave2 hadoop-2.5.0]# crontab -l
# Sync time from master, once every 10 minutes
0-59/10 * * * * /usr/sbin/ntpdate master

Reposted from: https://www.cnblogs.com/weiyiming007/p/10722829.html