Installing hadoop2-client on mac


Contents

    • 1. Background
      • 1. The company's big data platform is built on CDH
      • 2. Kerberos is used for access control
    • 2. Installation
      • 1. A test install on another CentOS server
        • 1. Install and configure Kerberos
        • 2. Install the client package
      • 2. Installing the hadoop2 client on mac
        • 1. Install and configure Kerberos
        • 2. Install the hadoop client
        • 3. The nightmare begins
        • 4. Turning on debug output
        • 5. mac and CentOS store tickets in different places
        • 6. And then an encryption problem

1. Background

1. The company's big data platform is built on CDH

There are many Hadoop distributions: Huawei's, Intel's, Cloudera's (CDH), MapR's, Hortonworks', and so on. All of them are derived from Apache Hadoop; so many versions exist because Apache Hadoop's open-source license allows anyone to modify it and release or sell the result as an open-source or commercial product.
The main free distributions are these three (all from foreign vendors):

  • The Cloudera distribution (Cloudera's Distribution Including Apache Hadoop), "CDH" for short.
  • The Apache Foundation's own Hadoop.
  • The Hortonworks distribution (Hortonworks Data Platform), "HDP" for short.

Our company's platform is built mainly on CDH, with Hadoop at version 2.6:

    [deploy@hbase03 ~]$ hadoop version
    Hadoop 2.6.0-cdh5.15.1

The servers run CentOS 6.8:

    [deploy@hbase03 ~]$ lsb_release -a
    LSB Version:    :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch
    Distributor ID: CentOS
    Description:    CentOS release 6.8 (Final)
    Release:        6.8
    Codename:       Final

2. Kerberos is used for access control

I hadn't looked into Kerberos much before; I only knew roughly that it is a key-management model along the lines of public/private keys. I won't expand on it here;
see here and here if you want the details.
This section only covers the setup the client side needs:

  • an xxx.keytab file (stores the user's principal and keys)
  • a krb5.conf file (defines basic Kerberos settings, such as the realm, encryption types, and so on)
  • xxx.keytab usually goes to /etc/keytab/xxx.keytab;
    krb5.conf must be placed at /etc/krb5.conf
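Before wiring anything into hadoop, it can save time to check what the keytab actually contains. A small sketch using the standard MIT Kerberos tools (the path matches the example layout above):

    # List the principals and key version numbers stored in the keytab
    klist -kt /etc/keytab/xxx.keytab
    # After kinit, -e additionally prints the encryption types of the cached tickets
    klist -e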

2. Installation

My goal was to install a hadoop client locally (on macOS) that could reach the staging Hadoop cluster. I never expected it to be this hard to pull off; it nearly reduced me to tears.

1. A test install on another CentOS server

In truth, before this step I had already spent a while fighting with the mac and getting nowhere, so I fell back to installing on a CentOS server to test.
The user on this CentOS server is deploy.

1. Install and configure Kerberos

Install the packages:

    yum install krb5-workstation krb5-libs krb5-auth-dialog -y

Copy the xxx.keytab and krb5.conf files from one of the hadoop machines into /etc/keytab and /etc respectively:

    mv deploy.keytab /etc/keytab/deploy.keytab
    mv krb5.conf /etc/krb5.conf

Run kinit to obtain a ticket:

    [deploy@server-02 ~]$ kinit -kt /etc/keytab/deploy.keytab deploy
    [deploy@server-02 ~]$ klist
    Ticket cache: FILE:/tmp/krb5cc_501
    Default principal: deploy@SAMPLE.COM

    Valid starting       Expires              Service principal
    07/31/20 10:22:32    08/01/20 10:22:32    krbtgt/SAMPLE.COM@SAMPLE.COM
            renew until 07/31/20 10:22:32

2. Install the client package

The CDH-packaged client looks fairly involved, with lots of shell scripts wrapping everything. Here is a quick look at the layout.

The directory is /home/deploy/hadoop_client_test/hadoop2-client/:

    [deploy@server-02 hadoop_client_test]$ pwd
    /home/deploy/hadoop_client_test
    [deploy@server-02 hadoop_client_test]$ tree -L 3 hadoop2-client/
    hadoop2-client/
    ├── bin
    │   ├── hadoop
    │   ├── hdfs
    │   └── yarn.cmd
    ├── conf -> hadoop2-conf
    ├── hadoop1-conf
    │   ...
    │   └── topology.py
    ├── hadoop2-conf
    │   ├── core-site.xml
    │   ├── hadoop-env.sh
    │   ├── hdfs-site.xml
    │   ├── log4j.properties
    │   ├── ssl-client.xml
    │   ├── topology.map
    │   └── topology.py
    ├── lib
    ├── libexec
    │   ├── hadoop-config.sh
    │   ├── hdfs-config.sh
    │   ├── httpfs-config.sh
    │   ├── kms-config.sh
    │   ├── mapred-config.sh
    │   └── yarn-config.sh
    ├── sbin
    └── share

To make this easier to follow, I stripped out everything unrelated to how the shell scripts work and kept only the relevant parts.
The walkthrough below is mainly about how the hdfs command ends up choosing its hadoop configuration.

The overall call chain works like this, with the current directory at hadoop2-client:
when I run ./bin/hdfs, hdfs is a shell script that does the following (a condensed sketch of this logic appears after the example below):

  • sets BIN_DIR=${./bin}
  • sets HADOOP_LIBEXEC_DIR=${./libexec} and exports it (export HADOOP_LIBEXEC_DIR=xxx) so that child processes can see it
  • runs ${HADOOP_LIBEXEC_DIR}/hdfs-config.sh, i.e. ./libexec/hdfs-config.sh above
  • that script has no logic of its own and simply runs ${HADOOP_LIBEXEC_DIR}/hadoop-config.sh, i.e. ./libexec/hadoop-config.sh above
  • which sets DEFAULT_CONF_DIR, i.e. the directory holding the cluster-related configuration
  • if ./conf/hadoop-env.sh exists, then DEFAULT_CONF_DIR=conf; via the symlink above, conf points to hadoop2-conf, and with the current layout this is the branch taken
  • otherwise, if ./conf/hadoop-env.sh does not exist, ./etc/hadoop is used; on CDH cluster machines this is the path taken
  • after that it dispatches the various hdfs subcommands according to the arguments
  • then run the following command:

    [deploy@server-02 hadoop2-client]$ ./bin/hdfs dfs -ls /
    Found 5 items
    drwxrwxrwx   - deploy supergroup          0 2020-05-29 19:16 /flink
    drwx------   - hbase  hbase               0 2020-06-30 10:23 /hbase
    drwxrwx---   - mapred supergroup          0 2020-07-15 10:40 /home
    drwxrwxrwt   - hdfs   supergroup          0 2020-07-28 10:04 /tmp
    drwxrwxrwx   - hdfs   supergroup          0 2020-07-28 10:04 /user

With that, the hadoop2 client is installed on CentOS and can operate on HDFS.
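To pin down the configuration-selection logic described above, here is a minimal shell sketch of what libexec/hadoop-config.sh effectively does. It is a simplification under stated assumptions, not the literal CDH script; variable handling is reduced to the conf-dir decision:

    # Sketch of the conf-dir resolution in libexec/hadoop-config.sh (simplified)
    this="$(cd "$(dirname "$0")" && pwd)"      # .../hadoop2-client/libexec
    HADOOP_PREFIX="$(dirname "$this")"         # .../hadoop2-client

    if [ -e "${HADOOP_PREFIX}/conf/hadoop-env.sh" ]; then
      DEFAULT_CONF_DIR="conf"                  # the conf -> hadoop2-conf symlink wins
    else
      DEFAULT_CONF_DIR="etc/hadoop"            # the fallback CDH cluster nodes use
    fi

    # An explicitly set HADOOP_CONF_DIR still takes precedence over the default
    export HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-${HADOOP_PREFIX}/${DEFAULT_CONF_DIR}}"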

2. Installing the hadoop2 client on mac

The user on the mac is admin.

1. Install and configure Kerberos

    brew install krb5
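A caveat from my side, not from the original steps: Homebrew's krb5 is keg-only, and macOS ships its own Heimdal-based kinit/klist, so which binaries you actually run depends on PATH. Something like the following (assuming the default Intel Homebrew prefix) puts the MIT tools first:

    # krb5 is keg-only, so Homebrew does not link it into /usr/local/bin;
    # prepend its bin dir so MIT kinit/klist shadow the Heimdal ones in /usr/bin
    export PATH="/usr/local/opt/krb5/bin:$PATH"
    which kinit    # should now point into the Homebrew prefix, not /usr/bin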

The rest of the configuration is the same as on CentOS.

Run kinit:

    ➜ ~ kinit -kt /etc/keytab/deploy.keytab deploy
    ➜ ~ klist
    Ticket cache: KCM:501
    Default principal: deploy@SAMPLE.COM

    Valid starting       Expires              Service principal
    07/31/20 15:50:54    08/01/20 15:50:54    krbtgt/SAMPLE.COM@SAMPLE.COM
            renew until 07/31/20 15:50:54

As you can see, a ticket was produced normally here too.

2. Install the hadoop client

Same routine as on CentOS, so I won't repeat it.

3. The nightmare begins

After running the hdfs command, the nightmare began:

    ./bin/hdfs dfs -ls /

It failed as follows:

    20/07/30 19:04:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    20/07/30 19:04:38 WARN security.UserGroupInformation: PriviledgedActionException as:admin (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    20/07/30 19:04:38 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    20/07/30 19:04:38 WARN security.UserGroupInformation: PriviledgedActionException as:admin (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    20/07/30 19:04:38 WARN security.UserGroupInformation: PriviledgedActionException as:admin (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    20/07/30 19:04:38 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    20/07/30 19:04:38 WARN security.UserGroupInformation: PriviledgedActionException as:admin (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    20/07/30 19:04:38 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over bj3-stag-all-hbase02.tencn/10.76.0.100:8020 after 1 fail over attempts. Trying to fail over immediately.
    java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "localhost/127.0.0.1"; destination host is: "bj3-stag-all-hbase02.tencn":8020;
            at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
            at org.apache.hadoop.ipc.Client.call(Client.java:1472)
            at org.apache.hadoop.ipc.Client.call(Client.java:1399)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
            ...
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
            at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
    Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
            at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:680)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.apache.hadoop.ipc.Client.call(Client.java:1438)
            ... 28 more
    Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
            ... 31 more
    Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
            at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
            at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
            at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
            at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
            at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
            at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
            at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
            ... 40 more

I've only quoted part of the exception and trimmed some of the stack traces; otherwise it would be far too long.
The error plainly says there is no usable Kerberos TGT, i.e. authentication failed. The as:admin in the output made me wonder for a while: was it because my local account was admin rather than deploy?
Some material I had read suggested hadoop does dedicated handling of the user in the keytab. My understanding there was shaky, so I suspected this could be the problem; see here for notes on how hadoop parses keytabs. Honestly, I still don't fully understand it, and the later configuration suggested the keytab user has little to do with the local user, unless my understanding is still off somewhere.
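As an aside (my addition, not part of the original debugging): hadoop derives a local short name from the Kerberos principal via its auth_to_local rules, and it ships a small tester for exactly this mapping. A sketch, using this article's example principal:

    # Print the short name hadoop derives from a principal via auth_to_local rules
    ./bin/hadoop org.apache.hadoop.security.HadoopKerberosName deploy@SAMPLE.COM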

At that point I gritted my teeth and created a local deploy user. After wading through a pile of permission problems and finally getting them sorted, I ran the hdfs command again and got exactly the same error, just with admin replaced by deploy. Despair.

Not giving up, I kept googling and eventually found an approach that had solved a similar problem.

4. Turning on debug output

It turns out hadoop can switch on Kerberos debugging at run time, which gives a much closer look at where things go wrong:

    export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true ${HADOOP_OPTS}"
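Separately (a feature of MIT Kerberos itself, not something from the original write-up), the kinit/klist side can be traced as well, which helps distinguish cache problems from KDC problems:

    # MIT Kerberos tools honor KRB5_TRACE for verbose tracing of cache and KDC access
    KRB5_TRACE=/dev/stdout kinit -kt /etc/keytab/deploy.keytab deploy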

With the debug flag set, run the same hdfs command again:

    ./bin/hdfs dfs -ls /

The errors were basically the same as before, but this now appeared at the very top:

    Java config name: null
    Native config name: /etc/krb5.conf
    Loaded from native config
    >>>KinitOptions cache name is /tmp/krb5cc_501
    >> Acquire default native Credentials
    default etypes for default_tkt_enctypes: 17 16 23.
    >>> Found no TGT's in LSA
    20/07/30 20:16:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    ...
    ...

It says KinitOptions cache name is /tmp/krb5cc_501,
i.e. it looks for the ticket in the file /tmp/krb5cc_501. I remembered seeing similar information in the klist output.

5. mac and CentOS store tickets in different places

Running klist on the mac:

    ➜ ~ klist
    Ticket cache: KCM:501
    Default principal: deploy@SAMPLE.COM

    Valid starting       Expires              Service principal
    07/31/20 15:50:54    08/01/20 15:50:54    krbtgt/SAMPLE.COM@SAMPLE.COM
            renew until 07/31/20 15:50:54

The cache location is KCM:501.

Now look at CentOS again:

    [deploy@server-02 ~]$ klist
    Ticket cache: FILE:/tmp/krb5cc_501
    Default principal: deploy@SAMPLE.COM

    Valid starting       Expires              Service principal
    07/31/20 10:22:32    08/01/20 10:22:32    krbtgt/SAMPLE.COM@SAMPLE.COM
            renew until 07/31/20 10:22:32
    [deploy@server-02 ~]$

Here it is FILE:/tmp/krb5cc_501. So this inconsistency was the cause.

Since the debug log shows that /tmp/krb5cc_501 is the cache being looked up,
we need to generate the ticket into that location:

    kinit -c FILE:/tmp/krb5cc_501 -kt /etc/keytab/deploy.keytab deploy
    ➜ klist -c FILE:/tmp/krb5cc_501
    Ticket cache: FILE:/tmp/krb5cc_501
    Default principal: deploy@SAMPLE.COM

    Valid starting       Expires              Service principal
    07/30/20 20:50:08    07/31/20 20:50:08    krbtgt/SAMPLE.COM@SAMPLE.COM
            renew until 07/30/20 20:50:08
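A variant worth knowing (my note, not from the original steps): instead of passing -c every time, the standard KRB5CCNAME environment variable makes a file cache the default for the Kerberos tools, and the JVM's Kerberos code consults it too:

    # Make FILE:/tmp/krb5cc_501 the default credential cache for this shell
    export KRB5CCNAME=FILE:/tmp/krb5cc_501
    kinit -kt /etc/keytab/deploy.keytab deploy
    klist    # should now show the FILE: cache without needing -c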

Joy. Finally a thread to pull on.

Gingerly, I ran:

    ./bin/hdfs dfs -ls /

And sure enough, another screenful of errors. 😞

6. And then an encryption problem

Since debug mode was still in effect, the top of the error output now looked like this:

    Java config name: null
    Native config name: /etc/krb5.conf
    Loaded from native config
    >>>KinitOptions cache name is /tmp/krb5cc_501
    >>>DEBUG <CCacheInputStream> client principal is deploy@SAMPLE.COM
    >>>DEBUG <CCacheInputStream> server principal is krbtgt/SAMPLE.COM@SAMPLE.COM
    >>>DEBUG <CCacheInputStream> key type: 18
    >>>DEBUG <CCacheInputStream> auth time: Fri Jul 31 16:51:50 CST 2020
    >>>DEBUG <CCacheInputStream> start time: Fri Jul 31 16:51:50 CST 2020
    >>>DEBUG <CCacheInputStream> end time: Sat Aug 01 16:51:50 CST 2020
    >>>DEBUG <CCacheInputStream> renew_till time: Fri Jul 31 16:51:50 CST 2020
    >>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL;
    >>>DEBUG <CCacheInputStream> client principal is deploy@SAMPLE.COM
    >>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/SAMPLE.COM@SAMPLE.COM@SAMPLE.COM
    >>>DEBUG <CCacheInputStream> key type: 0
    >>>DEBUG <CCacheInputStream> auth time: Thu Jan 01 08:00:00 CST 1970
    >>>DEBUG <CCacheInputStream> start time: null
    >>>DEBUG <CCacheInputStream> end time: Thu Jan 01 08:00:00 CST 1970
    >>>DEBUG <CCacheInputStream> renew_till time: null
    >>> CCacheInputStream: readFlags()
    >>> KrbCreds found the default ticket granting ticket in credential cache.
    >>> unsupported key type found the default TGT: 18
    >> Acquire default native Credentials
    default etypes for default_tkt_enctypes: 17 16 23.
    >>> Found no TGT's in LSA
    20/07/31 16:51:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    ...
    ...

Here it looks like the deploy@SAMPLE.COM principal is found at first; the failure happens further along:

    >>> unsupported key type found the default TGT: 18
    >> Acquire default native Credentials
    default etypes for default_tkt_enctypes: 17 16 23.
    >>> Found no TGT's in LSA

Googling unsupported key type found the default TGT: 18 revealed that the local JDK may not support the encryption types configured in krb5.conf; see here.
By that explanation, key type 18 is an AES-256 encryption type, which the current JDK does not support, so the ticket cannot be decrypted.
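Two notes of my own, beyond what the original troubleshooting covered: enctype 18 is aes256-cts-hmac-sha1-96, and on older JDK 8 builds AES-256 Kerberos keys require the JCE Unlimited Strength policy files (from 8u151 the crypto.policy=unlimited security property works instead, and from 8u161 unlimited is the default). A quick sketch to check what your JVM permits:

    # jrunscript ships with the JDK; this prints the maximum AES key length allowed.
    # 128 means the restricted policy is active; 2147483647 means unlimited.
    jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'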

Adjust the encryption settings in /etc/krb5.conf:

    [libdefaults]
      default_realm = SAMPLE.COM
      dns_lookup_kdc = false
      dns_lookup_realm = false
      ticket_lifetime = 604800
      renew_lifetime = 604800
      forwardable = true

      #default_tgs_enctypes = aes256-cts aes128-cts des3-hmac-sha1 arcfour-hmac des-hmac-sha1 des-cbc-md5 des-cbc-crc
      #default_tkt_enctypes = aes256-cts aes128-cts des3-hmac-sha1 arcfour-hmac des-hmac-sha1 des-cbc-md5 des-cbc-crc
      #permitted_enctypes = aes256-cts aes128-cts des3-hmac-sha1 arcfour-hmac des-hmac-sha1 des-cbc-md5 des-cbc-crc

      default_tgs_enctypes = aes256-cts aes128-cts arcfour-hmac-md5 des-cbc-md5 des-cbc-crc
      default_tkt_enctypes = arcfour-hmac-md5 aes256-cts aes128-cts des-cbc-md5 des-cbc-crc
      permitted_enctypes = aes256-cts aes128-cts arcfour-hmac-md5 des-cbc-md5 des-cbc-crc

      udp_preference_limit = 1
      kdc_timeout = 3000

    [realms]
      SAMPLE.COM = {
        kdc = kdc1.service.kk.srv
        admin_server = kdc1.service.kk.srv
      }

    [domain_realm]

Regenerate the ticket:

    kdestroy -c FILE:/tmp/krb5cc_501
    kinit -c FILE:/tmp/krb5cc_501 -kt /etc/keytab/deploy.keytab deploy

Gingerly, I ran:

    ./bin/hdfs dfs -ls /

Output:

    Java config name: null
    Native config name: /etc/krb5.conf
    Loaded from native config
    >>>KinitOptions cache name is /tmp/krb5cc_501
    >>>DEBUG <CCacheInputStream> client principal is deploy@SAMPLE.COM
    >>>DEBUG <CCacheInputStream> server principal is krbtgt/SAMPLE.COM@SAMPLE.COM
    >>>DEBUG <CCacheInputStream> key type: 23
    >>>DEBUG <CCacheInputStream> auth time: Thu Jul 30 20:50:08 CST 2020
    >>>DEBUG <CCacheInputStream> start time: Thu Jul 30 20:50:08 CST 2020
    >>>DEBUG <CCacheInputStream> end time: Fri Jul 31 20:50:08 CST 2020
    >>>DEBUG <CCacheInputStream> renew_till time: Thu Jul 30 20:50:08 CST 2020
    >>> CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL;
    >>>DEBUG <CCacheInputStream> client principal is deploy@SAMPLE.COM
    >>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/SAMPLE.COM@SAMPLE.COM@SAMPLE.COM
    >>>DEBUG <CCacheInputStream> key type: 0
    >>>DEBUG <CCacheInputStream> auth time: Thu Jan 01 08:00:00 CST 1970
    >>>DEBUG <CCacheInputStream> start time: null
    >>>DEBUG <CCacheInputStream> end time: Thu Jan 01 08:00:00 CST 1970
    >>>DEBUG <CCacheInputStream> renew_till time: null
    >>> CCacheInputStream: readFlags()
    >>> KrbCreds found the default ticket granting ticket in credential cache.
    >>> Obtained TGT from LSA: Credentials:
          client=deploy@SAMPLE.COM
          server=krbtgt/SAMPLE.COM@SAMPLE.COM
          authTime=20200730125008Z
          startTime=20200730125008Z
          endTime=20200731125008Z
          renewTill=20200730125008Z
          flags=FORWARDABLE;RENEWABLE;INITIAL
          EType (skey)=23
          (tkt key)=18
    20/07/30 20:51:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Found ticket for deploy@SAMPLE.COM to go to krbtgt/SAMPLE.COM@SAMPLE.COM expiring on Fri Jul 31 20:50:08 CST 2020
    Entered Krb5Context.initSecContext with state=STATE_NEW
    Found ticket for deploy@SAMPLE.COM to go to krbtgt/SAMPLE.COM@SAMPLE.COM expiring on Fri Jul 31 20:50:08 CST 2020
    Service ticket not found in the subject
    >>> Credentials acquireServiceCreds: same realm
    default etypes for default_tgs_enctypes: 17 23.
    >>> CksumType: sun.security.krb5.internal.crypto.RsaMd5CksumType
    >>> EType: sun.security.krb5.internal.crypto.ArcFourHmacEType
    >>> KdcAccessibility: reset
    >>> KrbKdcReq send: kdc=kdc1.service.kk.srv TCP:88, timeout=3000, number of retries =3, #bytes=655
    >>> KDCCommunication: kdc=kdc1.service.kk.srv TCP:88, timeout=3000,Attempt =1, #bytes=655
    >>>DEBUG: TCPClient reading 642 bytes
    >>> KrbKdcReq send: #bytes read=642
    >>> KdcAccessibility: remove kdc1.service.kk.srv
    >>> EType: sun.security.krb5.internal.crypto.ArcFourHmacEType
    >>> KrbApReq: APOptions are 00100000 00000000 00000000 00000000
    >>> EType: sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType
    Krb5Context setting mySeqNumber to: 919666512
    Created InitSecContextToken:
    0000: 01 00 6E 82 02 32 30 82   02 2E A0 03 02 ...
    ...
    Entered Krb5Context.initSecContext with state=STATE_IN_PROCESS
    >>> EType: sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType
    Krb5Context setting peerSeqNumber to: 860175268
    Krb5Context.unwrap: token=[05 04 01 ff 00 0c 00 00 00 00 00 00 33 45 3b a4 01 01 00 00 c6 aa 28 08 f7 4a 07 3a 76 ca 47 e7 ]
    Krb5Context.unwrap: data=[01 01 00 00 ]
    Krb5Context.wrap: data=[01 01 00 00 ]
    Krb5Context.wrap: token=[05 04 00 ff 00 0c 00 00 00 00 00 00 16 b8 57 a2 01 01 00 00 fe 2c 1e ba 43 fc 1d 9f 9d 84 22 12 ]
    Found 5 items
    drwxrwxrwx   - deploy supergroup          0 2020-05-29 19:16 /flink
    drwx------   - hbase  hbase               0 2020-06-30 10:23 /hbase
    drwxrwx---   - mapred supergroup          0 2020-07-15 10:40 /home
    drwxrwxrwt   - hdfs   supergroup          0 2020-07-28 10:04 /tmp
    drwxrwxrwx   - hdfs   supergroup          0 2020-07-28 10:04 /user

I could almost weep with joy 🤦‍♂️

References:
    https://community.cloudera.com/t5/Community-Articles/Connect-Hadoop-client-on-Mac-OS-X-to-Kerberized-HDP-cluster/ta-p/248917
    https://www.jianshu.com/p/cc523d5a715d
