
A High-Availability Hadoop Cluster Based on ZooKeeper

Date: 2017-11-11

1. Prepare the ZooKeeper servers

#node1,node2,node3
# For installation, see http://suyanzhu.blog.51cto.com/8050189/1946580
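
For reference, a minimal zoo.cfg sketch for this three-node ensemble (2181/2888/3888 are the ZooKeeper defaults; dataDir is an assumed path, not from the original post):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

Each node also needs a myid file under dataDir holding its server id, e.g. echo 1 > /opt/zookeeper/data/myid on node1 (2 and 3 on node2 and node3).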


2. Prepare the NameNode nodes

#node1,node4


3. Prepare the JournalNode nodes

#node2,node3,node4


4. Prepare the DataNode nodes

#node2,node3,node4
# Start a DataNode with: hadoop-daemon.sh start datanode


5. Edit Hadoop's hdfs-site.xml configuration file

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>yunshuocluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.yunshuocluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.yunshuocluster.nn1</name>
        <value>node1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.yunshuocluster.nn2</name>
        <value>node4:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.yunshuocluster.nn1</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.yunshuocluster.nn2</name>
        <value>node4:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node2:8485;node3:8485;node4:8485/yunshuocluster</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.yunshuocluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_dsa</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/journalnode/</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
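
Once the file is in place, the values can be sanity-checked with hdfs getconf, a standard HDFS utility:

hdfs getconf -confKey dfs.nameservices                 # should print yunshuocluster
hdfs getconf -confKey dfs.ha.namenodes.yunshuocluster  # should print nn1,nn2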


6. Edit Hadoop's core-site.xml configuration file

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://yunshuocluster</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.5</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
</configuration>


7. Configure the slaves file (one hostname per line)

node2
node3
node4


8. Start ZooKeeper (on node1, node2, and node3)

zkServer.sh start
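
After starting all three, the ensemble can be verified on each node with the status subcommand that ships with ZooKeeper:

zkServer.sh status   # expect "Mode: leader" on one node and "Mode: follower" on the other two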


9. Start the JournalNodes (run the command below on node2, node3, and node4)

hadoop-daemon.sh start journalnode
# To stop: hadoop-daemon.sh stop journalnode


10. Check the JournalNodes by inspecting their logs

cd /home/hadoop-2.5.1/logs
ls
tail -200 hadoop-root-journalnode-node2.log
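
The running process can also be confirmed with jps (part of the JDK):

jps   # a JournalNode process should be listed on node2, node3, and node4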


11. Format the NameNode (only one of the two; here, node4 is formatted)

hdfs namenode -format
cd /opt/hadoop-2.5
# Sync the metadata directory to the other NameNode
scp -r /opt/hadoop-2.5/* root@node1:/opt/hadoop-2.5/
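
As an alternative to copying the metadata directory with scp, HDFS has a built-in way to seed the second NameNode; a sketch, run on the NameNode that was not formatted (node1 here), with the JournalNodes already up:

hdfs namenode -bootstrapStandby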


12. Initialize ZKFC

hdfs zkfc -formatZK
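
To confirm the failover znode was created, browse ZooKeeper with its CLI (/hadoop-ha is the default parent znode):

zkCli.sh -server node1:2181
# then, at the zkCli prompt:
ls /hadoop-ha   # should list yunshuocluster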


13. Start the services

start-dfs.sh
# To stop the services: stop-dfs.sh
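
Once HDFS is up, the active/standby roles can be checked with hdfs haadmin, using the NameNode IDs configured above:

hdfs haadmin -getServiceState nn1   # prints "active" or "standby"
hdfs haadmin -getServiceState nn2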


Reposted from the 51CTO blog of 素颜猪. Original post: http://blog.51cto.com/suyanzhu/1946843
Source: https://yq.aliyun.com/articles/561215