A problem with the Hive component (MySQL metastore) when installing Hadoop via Ambari
While deploying Hadoop recently, I ran into a problem with the way Ambari deploys the Hive component. I don't know whether anyone else has hit it.
Problem description: I set up a fully distributed Hadoop 2.0 cluster with Ambari. While testing Hive, checking the FS root with the command below, as described in the official documentation, always failed with a MySQL connection error (java.sql.SQLException: Access denied for user 'hive'@'hdb3.yc.com' (using password: YES)):
[root@hdb3 bin]# /usr/lib/hive/bin/metatool -listFSRoot
The key part of the error output:
14/02/20 13:21:09 WARN bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
14/02/20 13:21:09 ERROR Datastore.Schema: Failed initialising database.
Unable to open a test connection to the given database. JDBC url = jdbc:mysql://hdb3.yc.com/hive?createDatabaseIfNotExist=true, username = hive. Terminating connection pool. Original Exception: ------
java.sql.SQLException: Access denied for user 'hive'@'hdb3.yc.com' (using password: YES)
The error message suggested that the hive user lacked permission to access the MySQL database. But when I tested it, connecting to the MySQL server as the hive user with that password worked fine, and I had already granted privileges to the hive user in MySQL before the installation:
# Create the hive account and grant privileges:
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive_passwd';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER 'hive'@'hdb3.yc.com' IDENTIFIED BY 'hive_passwd';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hdb3.yc.com';
Query OK, 0 rows affected (0.00 sec)
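As a sanity check (my addition, not part of the original session), SHOW GRANTS can confirm which grants MySQL will actually apply for the hive user; a quick sketch, run as the MySQL root user:

# List the grants MySQL holds for each hive account string
mysql -u root -p -e "SHOW GRANTS FOR 'hive'@'%';"
mysql -u root -p -e "SHOW GRANTS FOR 'hive'@'hdb3.yc.com';"

Note that MySQL matches the most specific host entry first, so the 'hive'@'hdb3.yc.com' account, including its password, is the one that applies to connections from that host.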
Testing the connection worked fine:
[root@hda3 ~]# mysql -h hdb3.yc.com -u hive -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 556
Server version: 5.1.73 Source distribution
Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
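The same test can also be scripted non-interactively and pointed at the metastore database itself; a small sketch (assuming the hive database already exists, and with the password passed inline only for testing):

# Connect as hive to the hive metastore database and list its tables
mysql -h hdb3.yc.com -u hive -phive_passwd -D hive -e 'SHOW TABLES;'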
So I checked Hive's configuration. On the Ambari management page, shown below, the correct password was entered in the Hive configuration. Yet the test still failed with the same error after restarting the Hive service.
[Screenshot: the Hive database connection settings, including the password field, in the Ambari web UI]
后來檢查服務器上的配置文件找到的問題原由:
上面那個web頁面里配置了hive 數據庫的連接密碼,但是實際上hive服務器的配置文件里/usr/lib/hive/conf/hive-site.xml并沒有密碼的相關配置,也就是沒有下面這一段參數配置:
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive_passwd</value>
</property>
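A quick way to verify whether this property is actually present in the on-disk file (worth re-running after any Ambari-driven restart) is a simple grep:

# Print the password property and the following line, if present
grep -A 1 'javax.jdo.option.ConnectionPassword' /usr/lib/hive/conf/hive-site.xml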
I manually edited the configuration file to add this password property. Then, without restarting the Hive service, I ran the command to check the FS root again, and this time it completed without errors:
[root@hdb3 bin]# /usr/lib/hive/bin/metatool -listFSRoot
Initializing HiveMetaTool..
14/02/20 14:29:22 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/02/20 14:29:22 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/02/20 14:29:22 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/02/20 14:29:22 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/02/20 14:29:22 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/02/20 14:29:22 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/02/20 14:29:22 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/02/20 14:29:23 INFO metastore.ObjectStore: ObjectStore, initialize called
14/02/20 14:29:23 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/02/20 14:29:24 WARN bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
14/02/20 14:29:24 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
14/02/20 14:29:24 INFO metastore.ObjectStore: Initialized ObjectStore
14/02/20 14:29:25 WARN bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
Listing FS Roots..
hdfs://hda1.yc.com:8020/apps/hive/warehouse
However, after restarting the Hive service through the Ambari management UI, the property was automatically stripped out again.
That leaves /usr/lib/hive/bin/metatool -listFSRoot unable to connect to the MySQL database once more. I don't know whether something in my configuration is wrong or whether this is an Ambari bug.
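One possible workaround, if your Ambari release ships the configs.sh helper under /var/lib/ambari-server/resources/scripts/ (an assumption; check your installation), is to set the password in Ambari's own desired configuration, so that Ambari writes it into hive-site.xml on restart instead of stripping it. A rough sketch, where admin/admin and CLUSTER_NAME are placeholders for your Ambari login and cluster name:

# Hypothetical: push the metastore password into Ambari's hive-site config
# so restarts regenerate hive-site.xml with the password included.
cd /var/lib/ambari-server/resources/scripts
./configs.sh -u admin -p admin set localhost CLUSTER_NAME hive-site \
    javax.jdo.option.ConnectionPassword hive_passwd

If the script is available and the call succeeds, restarting Hive from Ambari should then write the property out rather than remove it.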