Hadoop 2.4.1 Single-Node / Pseudo-Distributed Installation Guide
Reproduced from the official documentation; for the latest version see: http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-common/SingleCluster.html
Note: it is recommended to add the following environment variables.
#hadoop configuration
export PATH=$PATH:/home/jediael/hadoop-2.4.1/bin:/home/jediael/hadoop-2.4.1/sbin
export HADOOP_HOME=/home/jediael/hadoop-2.4.1
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
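The same settings can be written with the install prefix factored into a single variable, so upgrading Hadoop only means changing one line. This is a sketch; /home/jediael/hadoop-2.4.1 is the example path used in this document, so substitute your own.

```shell
#!/bin/sh
# Sketch: the variables above, with the install prefix factored out.
# /home/jediael/hadoop-2.4.1 is the example path from this document.
HADOOP_HOME=/home/jediael/hadoop-2.4.1
export HADOOP_HOME
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
export HADOOP_COMMON_HOME="$HADOOP_HOME"
export HADOOP_HDFS_HOME="$HADOOP_HOME"
export HADOOP_MAPRED_HOME="$HADOOP_HOME"
export HADOOP_YARN_HOME="$HADOOP_HOME"
export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
export HADOOP_COMMON_LIB_NATIVE_DIR="$HADOOP_HOME/lib/native"
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# Print the derived paths so a typo in the prefix is caught immediately.
echo "$HADOOP_CONF_DIR"
echo "$HADOOP_COMMON_LIB_NATIVE_DIR"
```

Appending this to ~/.bashrc (and re-sourcing it) keeps every derived path consistent with the prefix.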
Hadoop MapReduce Next Generation - Setting up a Single Node Cluster.
- Hadoop MapReduce Next Generation - Setting up a Single Node Cluster.
- Purpose
- Prerequisites
- Supported Platforms
- Required Software
- Installing Software
- Download
- Prepare to Start the Hadoop Cluster
- Standalone Operation
- Pseudo-Distributed Operation
- Configuration
- Setup passphraseless ssh
- Execution
- YARN on Single Node
- Fully-Distributed Operation
Purpose
This document describes how to set up and configure a single-node Hadoop installation so that you can quickly perform simple operations using Hadoop MapReduce and the Hadoop Distributed File System (HDFS).
Prerequisites
Supported Platforms
- GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.
- Windows is also a supported platform, but the following steps are for Linux only. To set up Hadoop on Windows, see the wiki page.
Required Software
Required software for Linux includes:
- Java™ must be installed.
- ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons.
Installing Software
If your cluster doesn't have the requisite software you will need to install it.
For example on Ubuntu Linux:
$ sudo apt-get install ssh
$ sudo apt-get install rsync
Download
To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors.
Prepare to Start the Hadoop Cluster
Unpack the downloaded Hadoop distribution. In the distribution, edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest
# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
Skipping the second setting (HADOOP_PREFIX) appears to have no effect.
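The edit above can also be scripted. The following sketch appends the two settings to a stand-in file in the current directory so it can run anywhere; point CONF at your real etc/hadoop/hadoop-env.sh, and adjust JAVA_HOME to your actual Java root.

```shell
#!/bin/sh
# Sketch: append the two settings from the guide to hadoop-env.sh.
# CONF points at a stand-in file so the snippet is self-contained;
# replace it with etc/hadoop/hadoop-env.sh in your unpacked distribution.
CONF=./hadoop-env.sh.example
: > "$CONF"   # start from an empty stand-in file
cat >> "$CONF" <<'EOF'
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest
# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
EOF
grep -c '^export' "$CONF"   # prints 2
```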
Try the following command:
$ bin/hadoop
This will display the usage documentation for the hadoop script.
Now you are ready to start your Hadoop cluster in one of the three supported modes:
- Local (Standalone) Mode
- Pseudo-Distributed Mode
- Fully-Distributed Mode
Standalone Operation
By default, Hadoop is configured to run in a non-distributed mode, as a single Java process. This is useful for debugging.
The following example copies the unpacked conf directory to use as input and then finds and displays every match of the given regular expression. Output is written to the given output directory.
$ mkdir input
$ cp etc/hadoop/*.xml input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
$ cat output/*
Pseudo-Distributed Operation
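To see what the example job computes without starting Hadoop at all, the same regular expression can be applied with plain grep. This sketch uses a tiny stand-in input file, since the real job scans the copied etc/hadoop/*.xml files:

```shell
#!/bin/sh
# Sketch: approximate the grep example job with plain grep.
# The MapReduce job counts matches of 'dfs[a-z.]+'; here sort | uniq -c
# performs the same aggregation on a stand-in input file.
mkdir -p input
cat > input/sample.xml <<'EOF'
<property><name>dfs.replication</name><value>1</value></property>
<property><name>dfs.permissions</name><value>false</value></property>
EOF
grep -ohE 'dfs[a-z.]+' input/*.xml | sort | uniq -c
```

Each output line is a count followed by a matched property name, which is exactly the shape of the job's output in output/*.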
Hadoop can also be run on a single-node in a pseudo-distributed mode where each Hadoop daemon runs in a separate Java process.
Configuration
Use the following:
etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
Setup passphraseless ssh
Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Execution
The following instructions are to run a MapReduce job locally. If you want to execute a job on YARN, see YARN on Single Node.
1. Format the filesystem:
$ bin/hdfs namenode -format
2. Start NameNode daemon and DataNode daemon:
$ sbin/start-dfs.sh
The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs).
3. Browse the web interface for the NameNode; by default it is available at:
- NameNode - http://localhost:50070/
4. Make the HDFS directories required to execute MapReduce jobs:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
5. Copy the input files into the distributed filesystem:
$ bin/hdfs dfs -put etc/hadoop input
6. Run some of the examples provided:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
7. Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hdfs dfs -get output output
$ cat output/*
or
View the output files on the distributed filesystem:
$ bin/hdfs dfs -cat output/*
When you're done, stop the daemons with:
$ sbin/stop-dfs.sh
YARN on Single Node
You can run a MapReduce job on YARN in a pseudo-distributed mode by setting a few parameters and running ResourceManager daemon and NodeManager daemon in addition.
The following instructions assume that steps 1 through 4 of the above instructions have already been executed.
etc/hadoop/mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Start ResourceManager daemon and NodeManager daemon:
$ sbin/start-yarn.sh
Browse the web interface for the ResourceManager; by default it is available at:
- ResourceManager - http://localhost:8088/
Run a MapReduce job. When you're done, stop the daemons with:
$ sbin/stop-yarn.sh
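The two YARN-related configuration files in this section can be generated from the shell with exactly the property values shown. This sketch writes them into a scratch directory so it is self-contained; in a real installation, DIR would be $HADOOP_CONF_DIR (etc/hadoop).

```shell
#!/bin/sh
# Sketch: write mapred-site.xml and yarn-site.xml with the values from
# this section. DIR is a scratch directory so the snippet is runnable
# anywhere; use $HADOOP_CONF_DIR (etc/hadoop) in a real installation.
DIR=./conf-sketch
mkdir -p "$DIR"
cat > "$DIR/mapred-site.xml" <<'EOF'
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
EOF
cat > "$DIR/yarn-site.xml" <<'EOF'
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
EOF
# Show the configured values for a quick visual check.
grep -h '<value>' "$DIR"/*.xml
```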