CentOS 7.3 + Server JRE 1.8 + Hadoop-2.8.0

This article is an excerpt from the Netkiller Linux Handbook. Because of work, I have not used Hadoop for the past few years. When I checked today, the version had reached 2.8, which is quite different from a few years ago. So I took the opportunity to update this document, reviewing the old while learning the new.

137.1. Standalone installation (CentOS 7 + Server JRE 1.8 + Hadoop-2.8.0)

This chapter covers installing the latest Hadoop release, 2.8.0, on CentOS 7.

Create a hadoop user. This user will be used to start and manage Hadoop.

[root@localhost ~]# adduser hadoop
[root@localhost ~]# passwd hadoop
Changing password for user hadoop.
New password: 
Retype new password: 
passwd: all authentication tokens updated successfully.

Create the HDFS storage directory. It is recommended to use the btrfs subvolume feature: create a subvolume and mount it at /opt/hadoop.

[root@localhost ~]# mkdir -p /opt/hadoop/volume/{namenode,datanode}
[root@localhost ~]# chown -R hadoop:hadoop /opt/hadoop/
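The steps above can be sketched as a small script. The btrfs commands appear only as comments because they need root and a btrfs filesystem, and the device `/dev/sdb1` and subvolume name `hadoop` in them are hypothetical examples; the directory layout itself mirrors the commands above, demonstrated under a temporary prefix so the sketch is safe to run anywhere.

```shell
# Hypothetical btrfs subvolume setup (run as root on a real server):
#   btrfs subvolume create /mnt/btrfs/hadoop
#   mount -o subvol=hadoop /dev/sdb1 /opt/hadoop
#
# The HDFS directory layout, demonstrated under a temporary prefix
# (on the real server PREFIX would be /opt/hadoop):
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/volume/namenode" "$PREFIX/volume/datanode"
# chown -R hadoop:hadoop "$PREFIX"   # as root on the real server
echo "layout created under $PREFIX"
```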

Create an SSH key:

[root@localhost ~]# su - hadoop
Last login: Wed Jun 28 20:53:16 EDT 2017 on pts/0
Last failed login: Wed Jun 28 20:58:20 EDT 2017 from localhost on ssh:notty
There were 10 failed login attempts since the last successful login.
[hadoop@localhost ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
87:d5:6f:4f:b5:ac:d0:35:76:77:6e:5e:98:ae:92:2a [email protected]
The key's randomart image is:
(randomart image omitted)
[hadoop@localhost ~]$ ssh-copy-id hadoop@localhost
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@localhost's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'hadoop@localhost'"
and check to make sure that only the key(s) you wanted were added.
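If passwordless login still prompts for a password after ssh-copy-id, the usual culprit is wrong permissions on ~/.ssh. A small check helper (a sketch: the function name is invented, and the 700/600 expectations reflect OpenSSH's usual requirements; the demo runs against a throwaway directory standing in for /home/hadoop/.ssh):

```shell
# check_ssh_perms DIR: verify DIR is mode 700 and DIR/authorized_keys is 600,
# the permissions sshd normally requires before honoring key-based login.
check_ssh_perms() {
    dir_mode=$(stat -c %a "$1")
    key_mode=$(stat -c %a "$1/authorized_keys")
    if [ "$dir_mode" = "700" ] && [ "$key_mode" = "600" ]; then
        echo "OK"
    else
        echo "BAD: dir=$dir_mode authorized_keys=$key_mode"
    fi
}

# Demo against a throwaway directory (stand-in for /home/hadoop/.ssh):
demo=$(mktemp -d)
chmod 700 "$demo"
touch "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"
check_ssh_perms "$demo"
```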

137.1.1. Install the Java environment

Here you can choose either OpenJDK or Oracle's Server JRE.

[root@localhost ~]# yum install -y java-1.8.0-openjdk

Configure the JAVA environment variables:

[root@localhost ~]# cat > /etc/profile.d/java.sh <<"EOF"
################################################
### Java environment by neo<[email protected]>
################################################
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export JAVA_OPTS="-Xms2048m -Xmx4096m"
EOF
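The fragment can be sanity-checked by writing a reduced copy to a throwaway file and sourcing it (a sketch; the real file lives in /etc/profile.d/). Note the quoted "EOF": it stops the shell from expanding $JAVA_HOME while writing the file, so the variables are only evaluated at login time.

```shell
# Write a reduced copy of the profile fragment to a temporary file,
# source it, and show that JAVA_HOME is set as expected.
tmp="$(mktemp)"
cat > "$tmp" <<"EOF"
export JAVA_HOME=/usr/java/default
export PATH=$PATH:$JAVA_HOME/bin
EOF
. "$tmp"
echo "JAVA_HOME=$JAVA_HOME"
```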

If you use Oracle Java, download the Server JRE from the Oracle website and configure the Java classpath accordingly.

# wget --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/server-jre-8u131-linux-x64.tar.gz

If wget is not installed on the server, you can also download it with curl:

# curl -LO -H "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/server-jre-8u131-linux-x64.tar.gz?AuthParam=1498632436_9d59c44e2a4f0ca5bd359423ac5b790b

If you do not need to compile anything and only run Hadoop in a server environment, there is no need to download the full JDK; the Server JRE is sufficient.

137.1.2. Install Hadoop

[root@localhost ~]# cd /usr/local/src
[root@localhost src]# wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz
[root@localhost src]# tar zxf hadoop-2.8.0.tar.gz
[root@localhost src]# mv hadoop-2.8.0 /srv/
[root@localhost src]# ln -s /srv/hadoop-2.8.0 /srv/hadoop
[root@localhost src]# chown -R hadoop:hadoop /srv/hadoop*
[root@localhost src]# su - hadoop
[hadoop@localhost ~]$ cd /srv/hadoop
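The versioned-directory-plus-symlink layout above makes a future upgrade a matter of repointing one link. The same steps as a reusable function (a sketch: the function name is invented, and the demo uses throwaway directories standing in for /usr/local/src and /srv):

```shell
# install_release SRC PREFIX: move an unpacked release directory under PREFIX
# and point PREFIX/hadoop at it, mirroring the mv + ln -s steps above.
install_release() {
    src="$1"; prefix="$2"
    mv "$src" "$prefix"/
    ln -sfn "$prefix/$(basename "$src")" "$prefix/hadoop"
}

# Demo with throwaway directories (stand-ins for /usr/local/src and /srv):
work=$(mktemp -d)
mkdir -p "$work/src/hadoop-2.8.0" "$work/srv"
install_release "$work/src/hadoop-2.8.0" "$work/srv"
readlink "$work/srv/hadoop"
```

Using `ln -sfn` means re-running the function against a newer release directory atomically replaces the link.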

By default Hadoop uses the JAVA_HOME setting. If you want to run Hadoop on the Server JRE instead, you can configure it:

[hadoop@localhost hadoop]$ cp /srv/hadoop/etc/hadoop/hadoop-env.sh{,.original}
[hadoop@localhost hadoop]$ sed -i "25s:\${JAVA_HOME}:/srv/java:" hadoop-env.sh

Edit the etc/hadoop/core-site.xml configuration file and set the HDFS address:

[hadoop@localhost hadoop]$ cp /srv/hadoop/etc/hadoop/core-site.xml{,.original}
[hadoop@localhost hadoop]$ cat > /srv/hadoop/etc/hadoop/core-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000/</value>
    </property>
</configuration>
EOF

Edit the /srv/hadoop/etc/hadoop/hdfs-site.xml configuration file and set the HDFS volume directories:

[hadoop@localhost hadoop]$ cp /srv/hadoop/etc/hadoop/hdfs-site.xml{,.original}
[hadoop@localhost hadoop]$ cat > /srv/hadoop/etc/hadoop/hdfs-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.data.dir</name>
        <value>file:///opt/hadoop/volume/datanode</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>file:///opt/hadoop/volume/namenode</value>
    </property>
</configuration>
EOF

Edit the /srv/hadoop/etc/hadoop/mapred-site.xml configuration file:

[hadoop@localhost hadoop]$ cp /srv/hadoop/etc/hadoop/mapred-site.xml{,.original}
[hadoop@localhost hadoop]$ cat > /srv/hadoop/etc/hadoop/mapred-site.xml <<"EOF"
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
EOF

Edit the /srv/hadoop/etc/hadoop/yarn-site.xml configuration file:

[hadoop@localhost hadoop]$ cp /srv/hadoop/etc/hadoop/yarn-site.xml{,.original}
[hadoop@localhost hadoop]$ cat > /srv/hadoop/etc/hadoop/yarn-site.xml <<"EOF"
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
EOF

In a standalone setup the slaves file needs no changes; keep the default:

[hadoop@localhost hadoop]$ cat /srv/hadoop/etc/hadoop/slaves
localhost

At this point the Hadoop configuration is complete.

137.1.3. Start Hadoop

Initialize (format) HDFS:

[hadoop@localhost hadoop]$ bin/hdfs namenode -format
17/06/28 06:16:58 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = localhost/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.8.0
STARTUP_MSG:   classpath = /srv/hadoop-2.8.0/etc/hadoop:... (trimmed)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 91f2b7a13d1e97be65db92ddabc627cc29ac0009; compiled by 'jdu' on 2017-03-17T04:12Z
STARTUP_MSG:   java = 1.8.0_131
************************************************************/
17/06/28 06:16:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/06/28 06:16:58 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-1c208cbf-7b56-41d5-9736-4e9a1f02aadb
17/06/28 06:16:58 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
17/06/28 06:16:58 INFO namenode.FSNamesystem: supergroup          = supergroup
17/06/28 06:16:58 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/06/28 06:16:58 INFO namenode.FSNamesystem: HA Enabled: false
... (block manager and GSet configuration output trimmed)
17/06/28 06:16:59 INFO namenode.FSImage: Allocated new BlockPoolId: BP-577822066-127.0.0.1-1498645019088
17/06/28 06:16:59 INFO common.Storage: Storage directory /opt/hadoop/volume/namenode has been successfully formatted.
17/06/28 06:16:59 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/hadoop/volume/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
17/06/28 06:16:59 INFO namenode.FSImageFormatProtobuf: Image file /opt/hadoop/volume/namenode/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/06/28 06:16:59 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/06/28 06:16:59 INFO util.ExitUtil: Exiting with status 0
17/06/28 06:16:59 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/

If everything went well, you can see the newly created directories after the format:

[hadoop@localhost hadoop]$ find /opt/hadoop/volume/
/opt/hadoop/volume/
/opt/hadoop/volume/namenode
/opt/hadoop/volume/namenode/current
/opt/hadoop/volume/namenode/current/VERSION
/opt/hadoop/volume/namenode/current/seen_txid
/opt/hadoop/volume/namenode/current/fsimage_0000000000000000000.md5
/opt/hadoop/volume/namenode/current/fsimage_0000000000000000000
/opt/hadoop/volume/datanode

137.1.4. Start and stop Hadoop

Start or stop all services: start-all.sh / stop-all.sh

You can also start the services individually as needed.

Start/stop HDFS: start-dfs.sh / stop-dfs.sh

[hadoop@localhost ~]$ /srv/hadoop/sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /srv/hadoop-2.8.0/logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /srv/hadoop-2.8.0/logs/hadoop-hadoop-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is f4:b8:65:48:55:4f:a5:a7:0f:16:ad:66:ef:98:f3:1f.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /srv/hadoop-2.8.0/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
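The interactive host-key prompt above can be avoided by seeding ~/.ssh/known_hosts before the first start, e.g. from `ssh-keyscan` output. A sketch of an idempotent append helper (the function name is invented, and the demo uses a dummy key line and a temporary file rather than a real scan):

```shell
# seed_known_host FILE LINE: append the host-key LINE to FILE unless it is
# already present. On a real host LINE would come from e.g.:
#   ssh-keyscan -t ecdsa localhost 0.0.0.0
seed_known_host() {
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

kh=$(mktemp)                                        # stand-in for ~/.ssh/known_hosts
entry="0.0.0.0 ecdsa-sha2-nistp256 AAAAexamplekey"  # dummy entry for the demo
seed_known_host "$kh" "$entry"
seed_known_host "$kh" "$entry"   # second call is a no-op
wc -l < "$kh"
```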

Start/stop YARN: start-yarn.sh / stop-yarn.sh

[hadoop@localhost ~]$ /srv/hadoop/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /srv/hadoop-2.8.0/logs/yarn-hadoop-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /srv/hadoop-2.8.0/logs/yarn-hadoop-nodemanager-localhost.localdomain.out

Check the Hadoop Java processes:

[hadoop@localhost ~]$ jps
25294 NodeManager
25189 ResourceManager
24770 DataNode
25665 Jps
24942 SecondaryNameNode
24634 NameNode
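The jps check can be scripted so a missing daemon is reported by name. A sketch (the function name is invented; it takes the jps output as a string, so it can be tried without a running cluster and used live as `check_daemons "$(jps)"`):

```shell
# check_daemons "OUTPUT": report any expected Hadoop daemon missing from the
# given `jps` output; prints OK when all five daemons are present.
check_daemons() {
    ok=1
    for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
        echo "$1" | grep -qw "$d" || { echo "missing: $d"; ok=0; }
    done
    [ "$ok" = 1 ] && echo "OK"
}

# Demo with the sample output shown above; live use: check_daemons "$(jps)"
sample="25294 NodeManager
25189 ResourceManager
24770 DataNode
25665 Jps
24942 SecondaryNameNode
24634 NameNode"
check_daemons "$sample"
```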

Check the listening ports:

[hadoop@localhost ~]$ ss -lnt | egrep "(9000|50070|8088|8042)"
LISTEN     0      128      127.0.0.1:9000        *:*
LISTEN     0      128              *:50070       *:*
LISTEN     0      128             :::8042       :::*
LISTEN     0      128             :::8088       :::*
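The port check can also be scripted by extracting the local-port column from `ss -lnt`. A sketch (the function name is invented; in this ss output field 4 is the local address, and the port is its last colon-separated component, which also handles IPv6 `:::8088`-style addresses):

```shell
# listening_ports "OUTPUT": print the local port of every LISTEN line in the
# given `ss -lnt` output ($4 is local addr; port is its last ":"-field).
listening_ports() {
    echo "$1" | awk '$1 == "LISTEN" { n = split($4, a, ":"); print a[n] }'
}

# Demo with sample output; live use: listening_ports "$(ss -lnt)"
sample='LISTEN 0 128 127.0.0.1:9000 *:*
LISTEN 0 128 *:50070 *:*
LISTEN 0 128 :::8042 :::*
LISTEN 0 128 :::8088 :::*'
listening_ports "$sample"
```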
