[Lab] Pseudo-Distributed Installation of Hadoop 2.6.0
JDK download (jdk-7u79-linux-x64.gz): http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
1 Set the IP address
[root@test1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:0c:29:51:cc:37
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=192.168.23.131
GATEWAY=192.168.23.1
USERCTL=no
IPV6INIT=no
PEERDNS=yes
Verify: ifconfig
2 Stop the firewall
Run: service iptables stop
Verify: service iptables status
3 Disable the firewall at boot
Run: chkconfig iptables off
Verify: chkconfig --list | grep iptables
4 Set the hostname
Run:
(1) hostname hadoop1
(2) vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=hadoop1
5 Bind the IP to the hostname
Run: (1) vi /etc/hosts and add:
192.168.23.131 hadoop1.localdomain hadoop1
Verify: ping hadoop1
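The /etc/hosts edit can also be scripted so that rerunning it does not add duplicate lines; a minimal sketch (it writes to a scratch copy at /tmp/hosts.demo so it is safe to run anywhere; point HOSTS at /etc/hosts on the real machine):

```shell
# Append the hostname mapping only if it is not already present.
HOSTS=/tmp/hosts.demo
cp /etc/hosts "$HOSTS" 2>/dev/null || : > "$HOSTS"
grep -q "hadoop1" "$HOSTS" || \
  echo "192.168.23.131 hadoop1.localdomain hadoop1" >> "$HOSTS"
grep "hadoop1" "$HOSTS"
```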
6 Set up passwordless SSH login
Run:
(1) ssh-keygen -t rsa
(2) cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Verify:
[root@test1 ~]# ssh hadoop1
The authenticity of host 'hadoop1 (192.168.23.131)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop1,192.168.23.131' (RSA) to the list of known hosts.
Last login: Thu Feb 11 20:54:11 2016 from 192.168.23.1
[root@hadoop1 ~]# ssh hadoop1
Last login: Thu Feb 11 20:57:56 2016 from hadoop1.localdomain
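If ssh still prompts for a password after this, the usual culprit is file permissions: sshd rejects a ~/.ssh directory or authorized_keys file that is group/world writable. A hedged sketch of the standard fix, demonstrated on a scratch directory /tmp/ssh.demo so it can be run anywhere (use ~/.ssh on the real machine):

```shell
# The conventional permissions are 700 on the .ssh directory and 600
# on authorized_keys.
SSH_DIR=/tmp/ssh.demo
mkdir -p "$SSH_DIR"
: > "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"
stat -c '%a %n' "$SSH_DIR" "$SSH_DIR/authorized_keys"
```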
7 Install the JDK (reference: http://my.oschina.net/gaowm/blog/275184)
(1) Run:
[root@hadoop1 ~]# cd /usr/share/java
[root@hadoop1 java]# cp /tmp/jdk-7u79-linux-x64.gz ./
[root@hadoop1 java]# tar -xzvf jdk-7u79-linux-x64.gz
(2) Edit /etc/profile and append:
export JAVA_HOME=/usr/share/java/jdk1.7.0_79
export PATH=.:$JAVA_HOME/bin:$PATH
(3) source /etc/profile
Verify: java -version
8 Install Hadoop
(1) Run:
[root@hadoop1 ~]# cd /usr/local/
[root@hadoop1 local]# cp /tmp/hadoop-2.6.0.tar.gz ./
[root@hadoop1 local]# tar -zxvf hadoop-2.6.0.tar.gz
[root@hadoop1 local]# mv hadoop-2.6.0 hadoop
(2) Edit /etc/profile and append:
export JAVA_HOME=/usr/share/java/jdk1.7.0_79
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
(3) source /etc/profile
(4) Edit the configuration files hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml under /usr/local/hadoop/etc/hadoop:
[root@hadoop1 hadoop]# vi hadoop-env.sh
export JAVA_HOME=/usr/share/java/jdk1.7.0_79

[root@hadoop1 hadoop]# vi core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>

[root@hadoop1 hadoop]# vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

[root@hadoop1 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop1 hadoop]# vi mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop1:9001</value>
  </property>
</configuration>
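One caveat: mapred.job.tracker is a Hadoop 1.x (MRv1) property. On Hadoop 2.x, MapReduce jobs are normally routed to YARN via the mapreduce.framework.name property instead. A hedged sketch of that variant of mapred-site.xml, generated into a scratch file /tmp/mapred-site.xml.demo for inspection (the real target would be /usr/local/hadoop/etc/hadoop/mapred-site.xml):

```shell
# Write a YARN-based mapred-site.xml to a scratch path.
cat > /tmp/mapred-site.xml.demo <<'EOF'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF
cat /tmp/mapred-site.xml.demo
```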
(5) hadoop namenode -format
(6) start-all.sh
[root@hadoop1 hadoop]# cd sbin
[root@hadoop1 sbin]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/02/11 21:40:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop1.out
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop1.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-hadoop1.out
16/02/11 21:41:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-hadoop1.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop1.out
[root@hadoop1 sbin]# jps
7192 SecondaryNameNode
7432 NodeManager
7468 Jps
6913 NameNode
7333 ResourceManager
7036 DataNode
Verify: (1) Run jps; you should see five new Java processes: NameNode, SecondaryNameNode, DataNode, ResourceManager, and NodeManager.
(2) Check in the browser:
Hadoop web console ports:
50070: HDFS file management  http://192.168.23.131:50070
8088: ResourceManager  http://192.168.23.131:8088
8042: NodeManager  http://192.168.23.131:8042
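The jps verification can also be scripted; a minimal sketch (the check_daemons helper is hypothetical, and here it is fed a sample listing so the function can be exercised without a running cluster; pass it "$(jps)" on the real machine):

```shell
# Report which of the five expected Hadoop daemons are missing from a
# jps listing.
check_daemons() {
  missing=""
  for d in NameNode SecondaryNameNode DataNode ResourceManager NodeManager; do
    echo "$1" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then
    echo "all daemons running"
  else
    echo "missing:$missing"
  fi
}
check_daemons "6913 NameNode
7036 DataNode
7192 SecondaryNameNode
7333 ResourceManager
7432 NodeManager"
```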
9 Possible reasons the NameNode does not appear after startup:
(1) HDFS was not formatted
(2) Environment variables are set incorrectly
(3) The IP-to-hostname binding failed
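When the NameNode is missing, the quickest diagnosis is usually its log file; a small sketch (the check_namenode_log helper is hypothetical, and the log path matches the start-all.sh output above):

```shell
# Print recent errors from a NameNode log, or a notice if the log does
# not exist yet.
check_namenode_log() {
  if [ -f "$1" ]; then
    grep -iE "error|exception" "$1" | tail -n 20
  else
    echo "log not found: $1"
  fi
}
check_namenode_log /usr/local/hadoop/logs/hadoop-root-namenode-hadoop1.log
```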
References:
http://stark-summer.iteye.com/blog/2184123
http://www.aboutyun.com/thread-7513-1-1.html