
Hadoop HDFS Command-Line Operations


Reposted from: https://blog.csdn.net/sjhuangx/article/details/79796388

Hadoop file system shell command reference: https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html

To see the list of supported commands, run hadoop fs with no arguments:

[hadoop@mini1 mapreduce]$ hadoop fs
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
        [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] <path> ...]
        [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] [-x] <path> ...]
        [-expunge]
        [-find <path> ... <expression> ...]
        [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
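For the detailed syntax of any single command, the shell's built-in help can be used; the command name below (ls) is just an example:

# Show detailed help for one command
hadoop fs -help ls

# Show only the usage line (available in recent Hadoop versions)
hadoop fs -usage ls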

1. Uploading a file to HDFS

# Upload XXX.zip to the / directory
hadoop fs -put XXX.zip /
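A couple of common variations of the upload command (the file name here is a placeholder): -f overwrites an existing destination file, and -copyFromLocal behaves like -put but only accepts local files as the source.

# Overwrite the destination if it already exists
hadoop fs -put -f XXX.zip /

# Equivalent upload using -copyFromLocal
hadoop fs -copyFromLocal XXX.zip /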


2. Listing files and directories

[hadoop@mini1 ~]$ hadoop fs -ls /
Found 2 items
-rw-r--r--   2 hadoop supergroup  354635831 2018-04-02 10:27 /jdk-9.0.4_linux-x64_bin.tar.gz
-rw-r--r--   2 hadoop supergroup        279 2018-04-02 10:30 /test.txt
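Two flags from the -ls usage above are worth knowing: -h prints sizes in human-readable units and -R lists directories recursively. A quick sketch:

# Human-readable sizes, recursive listing from the root
hadoop fs -ls -h -R /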


3. Downloading a file: hadoop fs -get /XXX

[hadoop@mini2 ~]$ hadoop fs -get /jdk-9.0.4_linux-x64_bin.tar.gz
[hadoop@mini2 ~]$ ls
hadoop-2.9.0  hdfsdata  jdk-9.0.4_linux-x64_bin.tar.gz
[hadoop@mini2 ~]$
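-get also accepts an explicit local destination path, and -getmerge concatenates all files under an HDFS directory into a single local file; the paths below are only examples:

# Download to a specific local path
hadoop fs -get /test.txt /tmp/test.txt

# Merge every file under an HDFS directory into one local file
hadoop fs -getmerge /wordCount/output /tmp/wordcount-output.txt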
4. Creating a directory: hadoop fs -mkdir /XXX

[hadoop@mini1 ~]$ hadoop fs -mkdir /wordCount
[hadoop@mini1 ~]$ hadoop fs -ls /
Found 3 items
-rw-r--r--   2 hadoop supergroup  354635831 2018-04-02 10:27 /jdk-9.0.4_linux-x64_bin.tar.gz
-rw-r--r--   2 hadoop supergroup        279 2018-04-02 10:30 /test.txt
drwxr-xr-x   - hadoop supergroup          0 2018-04-02 10:44 /wordCount
[hadoop@mini1 ~]$
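As with the Linux mkdir, the -p flag creates any missing parent directories in one call (the path below is hypothetical):

# Create parent directories as needed
hadoop fs -mkdir -p /data/input/2018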


5. Running a MapReduce jar: hadoop jar XXX.jar YYY /wordCount/ /wordCount/output

[hadoop@mini1 mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.9.0.jar wordcount /wordCount/ /wordCount/output
18/04/02 10:48:26 INFO client.RMProxy: Connecting to ResourceManager at mini1/192.168.241.100:8032
18/04/02 10:48:27 INFO input.FileInputFormat: Total input files to process : 2
18/04/02 10:48:28 INFO mapreduce.JobSubmitter: number of splits:2
18/04/02 10:48:28 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
18/04/02 10:48:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1522678018314_0001
18/04/02 10:48:29 INFO impl.YarnClientImpl: Submitted application application_1522678018314_0001
18/04/02 10:48:29 INFO mapreduce.Job: The url to track the job: http://mini1:8088/proxy/application_1522678018314_0001/
18/04/02 10:48:29 INFO mapreduce.Job: Running job: job_1522678018314_0001
18/04/02 10:48:36 INFO mapreduce.Job: Job job_1522678018314_0001 running in uber mode : false
18/04/02 10:48:36 INFO mapreduce.Job:  map 0% reduce 0%
18/04/02 10:48:48 INFO mapreduce.Job:  map 100% reduce 0%
18/04/02 10:48:53 INFO mapreduce.Job:  map 100% reduce 100%
18/04/02 10:48:54 INFO mapreduce.Job: Job job_1522678018314_0001 completed successfully

[hadoop@mini1 mapreduce]$ hadoop fs -cat /wordCount/output/part-r-00000
Apr     2
Don't   1
Jan     1
Mar     1
a       1
and     1
for     2
hadoop  9
[hadoop@mini1 mapreduce]$
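Note that re-running the job with the same output path fails, since MapReduce refuses to write into an output directory that already exists. A minimal sketch of cleaning up before a re-run, assuming the paths from the example above:

# Remove the previous output directory, then submit the job again
hadoop fs -rm -r /wordCount/output
hadoop jar hadoop-mapreduce-examples-2.9.0.jar wordcount /wordCount/ /wordCount/output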


6. Setting the file replication factor: setrep

hadoop fs -setrep 10 XXX.yyy

The replication factor set here is only the value recorded in the NameNode's metadata. The actual number of replicas depends on whether the cluster has enough machines to hold them: if it does, the real replica count matches the metadata; otherwise it can only reach the maximum the available machines allow.
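-setrep also accepts -w to block until the requested replication has actually been reached, and the replication factor recorded for a file can be checked with -stat (the file name below is a placeholder; the %r format of -stat is available in Hadoop 2.x):

# Wait until the new replication factor takes effect
hadoop fs -setrep -w 10 XXX.yyy

# Show the replication factor recorded for the file
hadoop fs -stat %r XXX.yyy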
