

Hadoop Daily Operations

Published: 2024/1/23


@(HADOOP)[hadoop]

(I) Back up the namenode metadata

The metadata on the namenode is critical: if it is lost or corrupted, the entire filesystem becomes unusable. Back the metadata up frequently, preferably off-site.
1. Copy the metadata to a remote site
(1) The following script copies the secondary namenode's metadata into a directory named after the current timestamp, then sends it to another machine with scp:

#!/bin/bash

# Copy the secondary namenode's checkpoint into a directory named after
# the current hour, then ship it to a remote machine and clean up.
dirname=/mnt/tmphadoop/dfs/namesecondary/current/$(date +%y%m%d%H)
if [ ! -d ${dirname} ]
then
    mkdir ${dirname}
    cp /mnt/tmphadoop/dfs/namesecondary/current/* ${dirname}
fi
scp -r ${dirname} slave1:/mnt/namenode_backup/
rm -r ${dirname}

(2) Configure crontab to run this job on a schedule:
0 0,8,14,20 * * * bash /mnt/scripts/namenode_backup_script.sh

2. At the remote site, start a local namenode daemon and try to load the backed-up files, to verify that the backup was made correctly.
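Before going through a full load test, a quick sanity check of the backup directory can catch obviously broken copies. The sketch below is illustrative only: the `backup_ok` helper and the example path are assumptions, not part of the article's script, and the authoritative verification remains loading the files into a scratch namenode as described above.

```shell
#!/bin/bash
# Sketch (hypothetical helper): a usable Hadoop 1.x-era checkpoint directory
# should contain non-empty fsimage and edits files.
backup_ok() {    # $1 = backup directory
    [ -s "$1/fsimage" ] && [ -s "$1/edits" ]
}

# Example path is illustrative; replace with a real backup directory.
if backup_ok /mnt/namenode_backup/2015030120; then
    echo "backup looks complete"
else
    echo "backup is missing fsimage or edits" >&2
fi
```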

(II) Data backup

For important data you cannot rely on HDFS alone; keep separate backups. Note the following:
(1) Back up off-site whenever possible.
(2) If you back up to another HDFS cluster with distcp, avoid running the same Hadoop version on both clusters, so that a bug in Hadoop itself cannot corrupt both copies.
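A distcp-based backup might be sketched as follows. The cluster addresses and paths are placeholders, and the live command is left commented out so the script is safe to run anywhere; `-update` copies only changed files, which keeps repeated nightly backups cheap.

```shell
#!/bin/bash
# Hypothetical distcp backup sketch; namenode addresses are examples only.
SRC=${SRC:-hdfs://prod-master:8020/important-data}
DST=${DST:-hdfs://backup-master:8020/backup/important-data}

# Uncomment on a machine with hadoop installed and access to both clusters:
# hadoop distcp -update "$SRC" "$DST"
echo "would run: hadoop distcp -update $SRC $DST"
```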

(III) Filesystem checks (fsck)

Run HDFS's fsck tool over the whole filesystem periodically to proactively look for missing or corrupt blocks.
Running it once a day is recommended.

[jediael@master ~]$ hadoop fsck /
……output omitted (errors, if any, appear here; otherwise only dots are printed, one per file)……
.........Status: HEALTHY
 Total size:    14466494870 B
 Total dirs:    502
 Total files:   1592 (Files currently being written: 2)
 Total blocks (validated):      1725 (avg. block size 8386373 B)
 Minimally replicated blocks:   1725 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       648 (37.565216 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Corrupt blocks:                0
 Missing replicas:              760 (22.028986 %)
 Number of data-nodes:          2
 Number of racks:               1
FSCK ended at Sun Mar 01 20:17:57 CST 2015 in 608 milliseconds

The filesystem under path '/' is HEALTHY
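To automate the daily check, a small wrapper can parse this output and alert when the status is not HEALTHY. The helper below is an illustrative sketch; the cron wiring and mail recipient in the comments are assumptions.

```shell
#!/bin/bash
# Sketch: extract the Status field from `hadoop fsck` output.
parse_fsck_status() {
    # fsck prints a line containing "Status: HEALTHY" (or "Status: CORRUPT").
    grep -o 'Status: [A-Z]*' | head -n 1 | awk '{print $2}'
}

# Intended daily usage on a live cluster (illustrative):
# status=$(hadoop fsck / 2>/dev/null | parse_fsck_status)
# [ "$status" = "HEALTHY" ] || echo "HDFS fsck status: $status" | mail -s "fsck alert" admin@example.com
```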

(1) If dfs.replication in hdfs-site.xml is set to 3 while only 2 datanodes are actually running, fsck reports errors like the following:
/hbase/Mar0109_webpage/59ad1be6884739c29d0624d1d31a56d9/il/43e6cd4dc61b49e2a57adf0c63921c09: Under replicated blk_-4711857142889323098_6221. Target Replicas is 3 but found 2 replica(s).
Note: dfs.replication was originally 3; after one datanode was decommissioned, dfs.replication was lowered to 2, but files created earlier still record a replication factor of 3. This produces the error above and accounts for the "Under-replicated blocks: 648 (37.565216 %)" figure.
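One common way to clear these stale targets, not mentioned in the original text, is to rewrite the recorded replication factor of existing files with `hadoop fs -setrep`. The sketch below only builds the command string (via a hypothetical `build_setrep_cmd` helper) so it can be inspected without a cluster; on a live cluster you would run the printed command directly.

```shell
#!/bin/bash
# Sketch: lower the recorded replication factor of existing files to match
# the number of live datanodes. -R recurses; -w waits for replication to settle.
build_setrep_cmd() {   # $1 = new replication factor, $2 = HDFS path
    echo "hadoop fs -setrep -R -w $1 $2"
}

build_setrep_cmd 2 /
```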

(2) The fsck tool can also report which blocks a file consists of and where each block is stored:

[jediael@master conf]$ hadoop fsck /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 -files -blocks -racks
FSCK started by jediael from /10.171.29.191 for path /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 at Sun Mar 01 20:39:35 CST 2015
/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 21507169 bytes, 1 block(s): Under replicated blk_7117944555454804881_3655. Target Replicas is 3 but found 2 replica(s).
0. blk_7117944555454804881_3655 len=21507169 repl=2 [/default-rack/10.171.94.155:50010, /default-rack/10.251.0.197:50010]

Status: HEALTHY
 Total size:    21507169 B
 Total dirs:    0
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21507169 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (100.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Corrupt blocks:                0
 Missing replicas:              1 (50.0 %)
 Number of data-nodes:          2
 Number of racks:               1
FSCK ended at Sun Mar 01 20:39:35 CST 2015 in 0 milliseconds

The filesystem under path '/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7' is HEALTHY

The command's usage is as follows:

[jediael@master ~]$ hadoop fsck -files
Usage: DFSck <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]
        <path>          start checking from this path
        -move           move corrupted files to /lost+found
        -delete         delete corrupted files
        -files          print out files being checked
        -openforwrite   print out files opened for write
        -blocks         print out block report
        -locations      print out locations for every block
        -racks          print out network topology for data-node locations
By default fsck ignores files opened for write, use -openforwrite to report such files. They are usually tagged CORRUPT or HEALTHY depending on their block allocation status.

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|jobtracker:port>    specify a job tracker
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>   specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

See Hadoop: The Definitive Guide, p. 376, for a detailed explanation.

(IV) The balancer

As time passes, the distribution of blocks across datanodes becomes more and more uneven. This reduces MapReduce data locality and leaves some datanodes disproportionately busy.

The balancer is a Hadoop daemon that moves blocks from busy datanodes to relatively idle ones, while still honoring the replica placement policy of spreading replicas across machines and racks.

Run the balancer periodically, for example daily or weekly.

(1) Start the balancer with the following command:

[jediael@master log]$ start-balancer.sh
starting balancer, logging to /var/log/hadoop/hadoop-jediael-balancer-master.out

The log looks like this:

[jediael@master hadoop]$ pwd
/var/log/hadoop
[jediael@master hadoop]$ ls
hadoop-jediael-balancer-master.log  hadoop-jediael-balancer-master.out
[jediael@master hadoop]$ cat hadoop-jediael-balancer-master.log
2015-03-01 21:08:08,027 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.251.0.197:50010
2015-03-01 21:08:08,028 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/10.171.94.155:50010
2015-03-01 21:08:08,028 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 over utilized nodes:
2015-03-01 21:08:08,028 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 under utilized nodes:

(2) The balancer brings each datanode's utilization close to the cluster-wide average; how close is controlled by the -threshold argument, which defaults to 10%.
(3) The bandwidth available for copying data between nodes is limited, 1 MB/s by default. It can be set via the dfs.balance.bandwidthPerSec property in hdfs-site.xml (in bytes per second).
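For example, raising the limit to 10 MB/s would look like this in hdfs-site.xml; the 10 MB/s figure is an illustrative choice, not a recommendation from the original text:

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <!-- bytes per second; 10485760 = 10 MB/s (default 1048576 = 1 MB/s) -->
  <value>10485760</value>
</property>
```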

(V) The datanode block scanner

Every datanode runs a block scanner that periodically verifies all blocks stored on that node. When it finds an error (e.g. a checksum mismatch), it reports the block to the namenode, which then re-replicates or repairs the data.
The scan period is set by dfs.datanode.scan.period.hours and defaults to three weeks (504 hours).
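For instance, shortening the scan period to one week would be configured as follows; the one-week value is an illustrative example:

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.datanode.scan.period.hours</name>
  <!-- 168 hours = 1 week; the default 504 hours = 3 weeks -->
  <value>168</value>
</property>
```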
Scan information can be viewed at the following addresses:
(1) http://datanode:50075/blockScannerReport
Shows the overall scan status:

Total Blocks                 : 1919
Verified in last hour        : 4
Verified in last day         : 170
Verified in last week        : 535
Verified in last four weeks  : 535
Verified in SCAN_PERIOD      : 535
Not yet verified             : 1384
Verified since restart       : 559
Scans since restart          : 91
Scan errors since restart    : 0
Transient scan errors        : 0
Current scan rate limit KBps : 1024
Progress this period         : 113%
Time left in cur period      : 97.14%
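Because this report is plain text, it is easy to watch from a script. The helper below is an illustrative sketch that pulls out the scan-error counter, so monitoring can alert when it becomes non-zero; the curl invocation in the comment assumes a datanode reachable at the usual 50075 port.

```shell
#!/bin/bash
# Sketch: extract "Scan errors since restart" from a blockScannerReport page.
# On a live datanode you would feed it e.g.:
#   curl -s http://datanode:50075/blockScannerReport | scan_errors
scan_errors() {
    grep -o 'Scan errors since restart : [0-9]*' | awk -F' : ' '{print $2}'
}
```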

(2) http://123.56.92.95:50075/blockScannerReport?listblocks
Lists every block together with its latest verification status:
blk_8482244195562050998_3796 : status : ok type : none scan time : 0 not yet verified
blk_3985450615149803606_7952 : status : ok type : none scan time : 0 not yet verified
Blocks that have not yet been verified appear as above. See The Definitive Guide, p. 379, for the meaning of each field.

版權(quán)聲明:本文為博主原創(chuàng)文章,未經(jīng)博主允許不得轉(zhuǎn)載。
