Deploying Hadoop in Pseudo-Distributed Mode on Linux

Hadoop version: 1.2.1. The Linux distribution is Fedora 19, and the installation is performed under the hadoop account.

Step 1: Configure an SSH key for local login (even in pseudo-distributed mode, Hadoop still communicates over SSH)

[hadoop@promote ~]$ which ssh
/usr/bin/ssh
[hadoop@promote ~]$ which ssh-keygen
/usr/bin/ssh-keygen
[hadoop@promote ~]$ which sshd
/usr/sbin/sshd
[hadoop@promote ~]$ ssh-keygen -t rsa

Then press Enter at every prompt:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Passphrases do not match.  Try again.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
2f:a9:60:c7:dc:38:8f:c7:bb:70:de:d4:39:c3:39:87 hadoop@promote.cache-dns.local
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|                 |
|        S        |
|  o o o o +      |
| o B.= o E .     |
|. o Oo+ =        |
| o.=o.           |
+-----------------+

This generates the private key id_rsa and the public key id_rsa.pub under /home/hadoop/.ssh/:

[hadoop@promote .ssh]$ cd /home/hadoop/.ssh/
[hadoop@promote .ssh]$ ls
id_rsa  id_rsa.pub

Edit the sshd service configuration:

[hadoop@promote .ssh]$ su root
Password:
[root@promote .ssh]# vi /etc/ssh/sshd_config

Enable RSA public-key authentication (remove the leading # from these lines):

RSAAuthentication yes
PubkeyAuthentication yes
# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys

Save and exit, then restart the sshd service:

[root@promote .ssh]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service
[root@promote .ssh]# ps -ef|grep sshd
root      1995     1  0 22:33 ?      00:00:00 sshd: hadoop [priv]
hadoop    2009  1995  0 22:33 ?      00:00:00 sshd: hadoop@pts/0
root      4171     1  0 23:11 ?      00:00:00 /usr/sbin/sshd -D
root      4175  3397  0 23:12 pts/0  00:00:00 grep --color=auto sshd

Switch back to the hadoop user and append the public key to /home/hadoop/.ssh/authorized_keys:

[root@promote .ssh]# su hadoop
[hadoop@promote .ssh]$ cat id_rsa.pub >> authorized_keys

Change the permissions of authorized_keys to 644 (this step must not be skipped):

[hadoop@promote .ssh]$ chmod 644 authorized_keys
[hadoop@promote .ssh]$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is 25:1f:be:72:7b:83:8e:c7:96:b6:71:35:fc:5d:2e:7d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Thu Feb 13 23:42:43 2014

The first login stores the host key in /home/hadoop/.ssh/known_hosts; subsequent logins need no password:

[hadoop@promote .ssh]$ ssh localhost
Last login: Thu Feb 13 23:46:04 2014 from localhost.localdomain

The SSH key configuration is now complete.

Step 2: Install the JDK

[hadoop@promote ~]$ java -version
java version "1.7.0_25"
OpenJDK Runtime Environment (fedora-2.3.10.3.fc19-i386)
OpenJDK Client VM (build 23.7-b01, mixed mode)

Replace OpenJDK with Oracle's Java SE:

[hadoop@promote .ssh]$ cd ~
[hadoop@promote ~]$ uname -i
i386

Download jdk-6u45-linux-i586.bin from Oracle's website and upload it to the server, make it executable, run the installer, and finally delete the installer:

[hadoop@promote ~]$ chmod u+x jdk-6u45-linux-i586.bin
[hadoop@promote ~]$ ./jdk-6u45-linux-i586.bin
[hadoop@promote ~]$ rm -rf jdk-6u45-linux-i586.bin
[hadoop@promote conf]$ export PATH=$PATH:/home/hadoop/jdk1.6.0_45/bin

The following output shows the JDK was installed successfully:

[hadoop@promote ~]$ java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) Client VM (build 20.45-b01, mixed mode, sharing)

Step 3: Install Hadoop

Download hadoop-1.2.1.tar.gz from the Hadoop website and upload it to /home/hadoop on the server:

[hadoop@promote ~]$ tar -xzf hadoop-1.2.1.tar.gz
[hadoop@promote ~]$ rm -rf hadoop-1.2.1.tar.gz
[hadoop@promote ~]$ cd hadoop-1.2.1/conf/
[hadoop@promote conf]$ vi hadoop-env.sh

Point JAVA_HOME at the directory where the JDK from Step 2 was installed:

# The java implementation to use.  Required.
export JAVA_HOME=/home/hadoop/jdk1.6.0_45

Save and exit.

Step 4: Modify the Hadoop configuration files

Edit core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

Edit mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>

Edit hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

Both the SecondaryNameNode listed in masters and the slave nodes listed in slaves point to the local machine:

[hadoop@promote conf]$ cat masters
localhost
[hadoop@promote conf]$ cat slaves
localhost

Step 5: Start Hadoop

[hadoop@promote bin]$ cd ../conf/
[hadoop@promote conf]$ cd ../bin
[hadoop@promote bin]$ sh start-all.sh
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-promote.cache-dns.local.out
localhost: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-promote.cache-dns.local.out
localhost: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-promote.cache-dns.local.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-promote.cache-dns.local.out
localhost: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-promote.cache-dns.local.out

All Hadoop daemons are now running.

For the latest updates, see the author's GitHub page: http://qaseven.github.io/
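The interactive key setup in Step 1 can also be run as one non-interactive script. The sketch below is not from the original article: it reproduces the same generate/append/chmod sequence in a throwaway directory (a hypothetical /tmp path) so it can be tried without touching the real /home/hadoop/.ssh, and it assumes ssh-keygen is installed.

```shell
# Sketch of the Step 1 sequence, non-interactively.
# KEY_DIR is a scratch directory, NOT the real ~/.ssh (assumption for the demo).
KEY_DIR="${KEY_DIR:-/tmp/hadoop_ssh_demo}"
mkdir -p "$KEY_DIR"
rm -f "$KEY_DIR/id_rsa" "$KEY_DIR/id_rsa.pub" "$KEY_DIR/authorized_keys"
# -N '' supplies an empty passphrase, replacing the interactive prompts; -q silences output
ssh-keygen -t rsa -N '' -f "$KEY_DIR/id_rsa" -q
# Append the public key and fix the permissions, as in the article
cat "$KEY_DIR/id_rsa.pub" >> "$KEY_DIR/authorized_keys"
chmod 644 "$KEY_DIR/authorized_keys"
```

To apply it for real, the files would go under ~/.ssh instead, and sshd additionally expects that directory itself to be writable only by its owner.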
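Why the chmod 644 in Step 1 is mandatory: sshd runs with StrictModes enabled by default, and it refuses public-key login when authorized_keys is writable by group or others. A quick way to confirm a file's mode, using a stand-in file rather than the real key file:

```shell
# Create a stand-in for authorized_keys and verify its mode is 644
# (rw-r--r--), the permission pattern StrictModes accepts.
touch authorized_keys_demo
chmod 644 authorized_keys_demo
stat -c '%a' authorized_keys_demo
# prints 644
```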
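One caveat about the export PATH=... line in Step 2: an export typed at the prompt only lasts for the current shell session and is lost on logout. The sketch below persists it by appending to the shell startup file; using ~/.bashrc is an assumption about the login shell, and the JDK directory is the one installed in Step 2 ($HOME/jdk1.6.0_45 equals /home/hadoop/jdk1.6.0_45 for the hadoop account).

```shell
# Persist JAVA_HOME and PATH across sessions (bash assumed).
# The quoted 'EOF' keeps $HOME/$PATH unexpanded so they resolve at login time.
cat >> "$HOME/.bashrc" <<'EOF'
export JAVA_HOME=$HOME/jdk1.6.0_45
export PATH=$PATH:$JAVA_HOME/bin
EOF
```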
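The three XML files edited in Step 4 can also be generated from a script, which makes the configuration step repeatable. This is a sketch rather than part of the original article; CONF_DIR is an introduced variable defaulting to the conf directory used above, and the file contents are exactly those from Step 4.

```shell
# Sketch: recreate the Step 4 site files non-interactively.
CONF_DIR="${CONF_DIR:-$HOME/hadoop-1.2.1/conf}"
mkdir -p "$CONF_DIR"

# HDFS namenode address (fs.default.name is the Hadoop 1.x property name)
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

# JobTracker address
cat > "$CONF_DIR/mapred-site.xml" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF

# Single replica, since everything runs on one node
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
```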
