Hadoop 2.7 in Practice v1.0: HDFS HA
Current environment: Hadoop + ZooKeeper (NameNode and ResourceManager HA)
NameNode           | serviceId | initial status
sht-sgmhadoopnn-01 | nn1       | active
sht-sgmhadoopnn-02 | nn2       | standby
Reference: http://blog.csdn.net/u011414200/article/details/50336735
I. Check whether each NameNode is active or standby
1. Open the NameNode web UI on each node (default port 50070) and check the state shown on the overview page; a command-line alternative is sketched below.
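Besides opening the pages in a browser, the state can also be read from the NameNode's JMX servlet. This is a minimal sketch, assuming the default web UI port 50070 on these two hosts; the tag.HAState attribute of the FSNamesystem bean reports active or standby:

# Query the HA state via the JMX servlet (assumes default dfs.namenode.http-address port 50070)
curl -s 'http://sht-sgmhadoopnn-01:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' | grep 'tag.HAState'
curl -s 'http://sht-sgmhadoopnn-02:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' | grep 'tag.HAState'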
2. Check the ZKFC logs
[root@sht-sgmhadoopnn-01 logs]# more hadoop-root-zkfc-sht-sgmhadoopnn-01.telenav.cn.log
...
2016-02-28 00:24:00,692 INFO org.apache.hadoop.ha.ZKFailoverController: Trying to make NameNode at sht-sgmhadoopnn-01/172.16.101.55:8020 active...
2016-02-28 00:24:01,762 INFO org.apache.hadoop.ha.ZKFailoverController: Successfully transitioned NameNode at sht-sgmhadoopnn-01/172.16.101.55:8020 to active state

[root@sht-sgmhadoopnn-02 logs]# more hadoop-root-zkfc-sht-sgmhadoopnn-02.telenav.cn.log
...
2016-02-28 00:24:01,186 INFO org.apache.hadoop.ha.ZKFailoverController: ZK Election indicated that NameNode at sht-sgmhadoopnn-02/172.16.101.56:8020 should become standby
2016-02-28 00:24:01,209 INFO org.apache.hadoop.ha.ZKFailoverController: Successfully transitioned NameNode at sht-sgmhadoopnn-02/172.16.101.56:8020 to standby state
3. Check with the command hdfs haadmin -getServiceState
### $HADOOP_HOME/etc/hadoop/hdfs-site.xml: dfs.ha.namenodes.[nameservice ID]
<!-- Define the NameNode IDs; this release supports at most two NameNodes per nameservice -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
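These values can also be read back from the live configuration with hdfs getconf. The commands below are illustrative; the output noted in the comments is what would be expected for this cluster rather than captured output:

# Sanity-check the HA configuration from the command line (illustrative)
hdfs getconf -namenodes                            # lists the NameNode hosts of the nameservice
hdfs getconf -confKey dfs.ha.namenodes.mycluster   # expected to print: nn1,nn2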
[root@sht-sgmhadoopnn-02 logs]# hdfs haadmin -getServiceState nn1
active
[root@sht-sgmhadoopnn-02 logs]# hdfs haadmin -getServiceState nn2
standby
II. Basic commands
Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  classpath            prints the classpath
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  mover                run a utility to move block replicas across storage types
  oiv                  apply the offline fsimage viewer to an fsimage
  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                       Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
  cacheadmin           configure the HDFS cache
  crypto               configure HDFS encryption zones
  storagepolicies      list/get/set block storage policies
  version              print the version
###########################################################################
[root@sht-sgmhadoopnn-02 logs]# hdfs namenode --help
Usage: java NameNode [-backup] |
    [-checkpoint] |
    [-format [-clusterid cid ] [-force] [-nonInteractive] ] |
    [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
    [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
    [-rollback] |
    [-rollingUpgrade <rollback|downgrade|started> ] |
    [-finalize] |
    [-importCheckpoint] |
    [-initializeSharedEdits] |
    [-bootstrapStandby] |
    [-recover [ -force] ] |
    [-metadataVersion ] ]
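Among these options, -bootstrapStandby and -initializeSharedEdits are the HA-specific ones. As an illustration only (not a step carried out in this article), a typical sequence for bringing up the second NameNode of an HA pair might look like the following, assuming the first NameNode has already been formatted and started:

# Sketch of a typical HA bootstrap, assuming the first NameNode is formatted and running
# When converting an existing non-HA NameNode to HA: write the edit log to the JournalNodes first
# hdfs namenode -initializeSharedEdits
# On the standby host (here sht-sgmhadoopnn-02): copy the current namespace from the active NameNode
hdfs namenode -bootstrapStandby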
###########################################################################
[root@sht-sgmhadoopnn-02 logs]# hdfs haadmin --help
-help: Unknown command
Usage: haadmin
    [-transitionToActive [--forceactive] <serviceId>]
    [-transitionToStandby <serviceId>]
    [-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
    [-getServiceState <serviceId>]
    [-checkHealth <serviceId>]
    [-help <command>]
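Note that haadmin expects single-dash options (e.g. -help), which is why --help is reported as an unknown command above. Of these subcommands, -failover and -getServiceState are the ones used most in an HA cluster. The lines below are a sketch using this cluster's serviceIds, to be run only when a switchover is actually intended:

# Sketch: graceful switchover of the active role from nn1 to nn2 via the failover controllers
hdfs haadmin -failover nn1 nn2
# Verify the new states afterwards
hdfs haadmin -getServiceState nn1   # expected: standby
hdfs haadmin -getServiceState nn2   # expected: active
# Manual transitions bypass the ZKFC checks and need --forcemanual when automatic failover is enabled:
# hdfs haadmin -transitionToActive --forcemanual nn2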
