Two Ways to Configure the Hadoop SecondaryNameNode
Below are two ways to configure the Hadoop SecondaryNameNode; the Hadoop version used is hadoop-1.0.4.
Cluster role assignment:
- master: JobTracker && NameNode
- node1: SecondaryNameNode
- node2: TaskTracker && DataNode
- node3: TaskTracker && DataNode
- node4: TaskTracker && DataNode
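All five machines must be able to resolve one another's hostnames. A minimal /etc/hosts sketch for each node; the IP addresses below are placeholders, not from the original post:

192.168.1.100  master
192.168.1.101  node1
192.168.1.102  node2
192.168.1.103  node3
192.168.1.104  node4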
Configuration 1:

1. conf/core-site.xml:

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadooptmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
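With fs.default.name set this way, every daemon and client resolves HDFS paths against hdfs://master:9000. Once the cluster is running (a usage note, not part of the original post), the two commands below address the same filesystem:

bin/hadoop fs -ls /
bin/hadoop fs -ls hdfs://master:9000/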
2. conf/hadoop-env.sh:

export JAVA_HOME=/home/hadoop/jdk1.x.x_xx
3. conf/hdfs-site.xml (note: the property for the secondary's HTTP address in hadoop-1.0.4 is dfs.secondary.http.address):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoopfs/data</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoopfs/name</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/hadoop/hadoopcheckpoint</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
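fs.checkpoint.dir is where the SecondaryNameNode (node1 here) builds the merged fsimage. To sanity-check it without waiting for the default one-hour fs.checkpoint.period, a checkpoint can be forced; a sketch to run on node1, assuming the secondary daemon is not already running (otherwise it would contend for the same directories and HTTP port):

# perform one checkpoint cycle and exit
bin/hadoop secondarynamenode -checkpoint force

# the merged image should now sit under fs.checkpoint.dir
ls /home/hadoop/hadoopcheckpoint/current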
4. conf/mapred-site.xml:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1000m</value>
  </property>
</configuration>
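A quick sizing check, not from the original post: with 4 map slots plus 4 reduce slots per TaskTracker and -Xmx1000m per child JVM, a fully loaded node can demand up to (4 + 4) * 1000 MB, roughly 8 GB of task heap, in addition to the DataNode and TaskTracker daemons, so these values should be lowered on machines with less memory.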
5. conf/masters: (with the change to bin/start-dfs.sh in step 8 below, this file no longer controls where the SecondaryNameNode starts)

6. conf/secondarynamenode (this is a newly created file):

node1

7. conf/slaves:

node2
node3
node4

8. bin/start-dfs.sh (change the secondarynamenode line to read the new hosts file):

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode

9. bin/stop-dfs.sh (the matching change):

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop datanode
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode stop secondarynamenode
Configuration 2:
Steps 1 through 4 (conf/core-site.xml, conf/hadoop-env.sh, conf/hdfs-site.xml, and conf/mapred-site.xml) are identical to steps 1 through 4 of Configuration 1 above.
5. conf/masters:

node1

6. conf/slaves:

node2
node3
node4

No edits to bin/start-dfs.sh or bin/stop-dfs.sh are needed here, because the stock scripts already start and stop the SecondaryNameNode on the hosts listed in conf/masters.

One more point about yesterday's write-up on using the SecondaryNameNode: it should also include a step that copies the SecondaryNameNode's checkpoint files over to the NameNode. Yesterday's test ran everything on a single machine, so the copy step was skipped; only after testing on the cluster today did it become clear that the copy is required. A sketch of that step follows below.
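A minimal sketch of that copy-and-import step, assuming the paths configured above (the use of scp is an assumption; hadoop namenode -importCheckpoint is the stock hadoop-1.0.4 mechanism for loading an image from fs.checkpoint.dir):

# on master, with the NameNode stopped and dfs.name.dir empty or lost
scp -r hadoop@node1:/home/hadoop/hadoopcheckpoint /home/hadoop/

# read the checkpoint from fs.checkpoint.dir and save it into dfs.name.dir
bin/hadoop namenode -importCheckpoint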
Both approaches above have been tested on the cluster and shown to work.
Share, enjoy, grow.
Source: http://blog.csdn.net/fansy1990/article/details/8990206