Hadoop 2.7 in Action v1.0: Dynamically Removing a DataNode (and NodeManager) Node (with a dfs.replication Change)
1. Edit hdfs-site.xml on the active NameNode
- [root@sht-sgmhadoopnn-01 hadoop]# vi hdfs-site.xml
-
- <!-- file listing the datanodes permitted to connect to the namenode -->
- <property>
-   <name>dfs.hosts</name>
-   <value>/hadoop/hadoop-2.7.2/etc/hadoop/include_datanode</value>
- </property>
-
- <!-- file listing the datanodes refused by the namenode -->
- <property>
-   <name>dfs.hosts.exclude</name>
-   <value>/hadoop/hadoop-2.7.2/etc/hadoop/exclude_datanode</value>
- </property>
### The standby NameNode does not strictly need this change, but keeping it in sync is cleaner (I sync it here)
- [root@sht-sgmhadoopnn-01 hadoop]# scp hdfs-site.xml root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2/etc/hadoop/
- hdfs-site.xml 100% 4711 4.6KB/s 00:00
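The two properties above are easy to sanity-check before refreshing the NameNode. A minimal sketch in Python; the sample XML mirrors this article's configuration, and nothing here is a Hadoop API:

```python
import xml.etree.ElementTree as ET

def host_file_props(hdfs_site_xml: str) -> dict:
    """Pull the dfs.hosts / dfs.hosts.exclude values out of hdfs-site.xml text."""
    props = {}
    for prop in ET.fromstring(hdfs_site_xml).iter("property"):
        name = prop.findtext("name")
        if name in ("dfs.hosts", "dfs.hosts.exclude"):
            props[name] = prop.findtext("value")
    return props

sample = """<configuration>
  <property>
    <name>dfs.hosts</name>
    <value>/hadoop/hadoop-2.7.2/etc/hadoop/include_datanode</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/hadoop/hadoop-2.7.2/etc/hadoop/exclude_datanode</value>
  </property>
</configuration>"""
print(host_file_props(sample))
```

Running the function against the real file (`open(path).read()`) confirms both paths are absolute, as the documentation in step 14 requires.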
2. Create the include_datanode and exclude_datanode files
- [root@sht-sgmhadoopnn-01 hadoop]# vi /hadoop/hadoop-2.7.2/etc/hadoop/include_datanode
- sht-sgmhadoopdn-01
- sht-sgmhadoopdn-02
- sht-sgmhadoopdn-03
- sht-sgmhadoopdn-04
# List every datanode host allowed to connect to the namenode
- [root@sht-sgmhadoopnn-01 hadoop]# vi /hadoop/hadoop-2.7.2/etc/hadoop/exclude_datanode
- sht-sgmhadoopdn-04
# List every datanode host the namenode should refuse, i.e. the nodes to decommission
### The standby NameNode does not strictly need this change, but keeping it in sync is cleaner (I sync it here)
- [root@sht-sgmhadoopnn-01 hadoop]# scp include_datanode exclude_datanode root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2/etc/hadoop/
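The relationship between the two files matters: with a non-empty dfs.hosts, a host that appears in exclude_datanode but not in include_datanode is refused outright rather than decommissioned gracefully. A small consistency check, sketched in Python under that assumption:

```python
def hosts(lines):
    """Hostnames from a dfs.hosts-style file body: one per line,
    blank lines and '#'-commented lines ignored."""
    return {l.strip() for l in lines if l.strip() and not l.strip().startswith("#")}

def decommission_mismatch(include_lines, exclude_lines):
    """Hosts slated for decommission that are missing from the include list;
    an empty result means the two files are consistent."""
    return hosts(exclude_lines) - hosts(include_lines)

include = ["sht-sgmhadoopdn-01", "sht-sgmhadoopdn-02",
           "sht-sgmhadoopdn-03", "sht-sgmhadoopdn-04"]
exclude = ["sht-sgmhadoopdn-04"]
print(decommission_mismatch(include, exclude))  # set()
```

An empty set here matches the files created above: sht-sgmhadoopdn-04 is in both lists, so it will be decommissioned rather than dropped.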
3. Check the current replication factor
My test environment currently has 4 datanodes and a replication factor of 4; we will lower it from 4 to 3.
- [root@sht-sgmhadoopnn-01 hadoop]# more hdfs-site.xml
- <property>
- <name>dfs.replication</name>
- <value>4</value>
- </property>
- [root@sht-sgmhadoopnn-01 hadoop]# hdfs fsck /
- 16/03/06 21:49:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- Connecting to namenode via http://sht-sgmhadoopnn-01:50070/fsck?ugi=root&path=%2F
- FSCK started by root (auth:SIMPLE) from /172.16.101.55 for path / at Sun Mar 06 21:49:12 CST 2016
- ...............Status: HEALTHY
- Total size: 580152025 B
- Total dirs: 17
- Total files: 15
- Total symlinks: 0
- Total blocks (validated): 14 (avg. block size 41439430 B)
- Minimally replicated blocks: 14 (100.0 %)
- Over-replicated blocks: 0 (0.0 %)
- Under-replicated blocks: 0 (0.0 %)
- Mis-replicated blocks: 0 (0.0 %)
- Default replication factor: 3
- Average block replication: 4.0
- Corrupt blocks: 0
- Missing replicas: 0 (0.0 %)
- Number of data-nodes: 4
- Number of racks: 1
- FSCK ended at Sun Mar 06 21:49:12 CST 2016 in 8 milliseconds
### fsck reports `Default replication factor: 3` while hdfs-site.xml sets dfs.replication to 4 — the value was changed on disk, but the cluster was never restarted, so it never took effect.
Therefore this experiment needs only two actions, neither of which requires a cluster restart: edit dfs.replication in hdfs-site.xml, and lower the `Average block replication` of existing files from 4 to 3 with `hdfs dfs -setrep -w 3 -R /`.
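The two summary fields compared above can also be extracted programmatically when scripting this check. A minimal sketch against the plain-text `hdfs fsck /` output shown in step 3 (the line labels are copied from that output):

```python
import re

def fsck_replication(fsck_output: str):
    """Extract 'Default replication factor' and 'Average block replication'
    from the summary section of `hdfs fsck /` output."""
    default = int(re.search(r"Default replication factor:\s*(\d+)", fsck_output).group(1))
    average = float(re.search(r"Average block replication:\s*([\d.]+)", fsck_output).group(1))
    return default, average

sample = """ Default replication factor: 3
 Average block replication: 4.0
 Corrupt blocks: 0"""
print(fsck_replication(sample))  # (3, 4.0)
```

When the average exceeds the target factor, as it does here, a `-setrep` pass over existing files is still needed even after the config change.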
4. Change the parameter
- [root@sht-sgmhadoopnn-01 hadoop]# more hdfs-site.xml
- <property>
- <name>dfs.replication</name>
- <value>3</value>
- </property>
- [root@sht-sgmhadoopnn-01 hadoop]# scp hdfs-site.xml root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2/etc/hadoop/
-
- [root@sht-sgmhadoopnn-01 hadoop]# hdfs dfs -setrep -w 3 -R /
### If the filesystem is very large, run this command during an off-peak window — it is time-consuming.
A problem I ran into:
When lowering the replication factor, the `Replication set` messages appear quickly, but the `Waiting for` phase can run for a very long time without completing.
I eventually had to interrupt it with Ctrl+C; my guess is that HDFS was still working through the files, deleting one replica's worth of data.
So plan the dfs.replication value carefully up front, and try to avoid ever having to lower it.
### Step 4 triggered block deletion on the datanode1 node
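How long `-setrep -w` waits depends roughly on how much replica data has to be deleted: about the total data size times the drop in replication factor. A back-of-the-envelope sketch using the fsck figures from step 3:

```python
def setrep_delete_estimate(total_size_bytes: int, old_rep: int, new_rep: int) -> int:
    """Rough number of replica bytes the cluster must delete when the
    replication of all existing data drops from old_rep to new_rep."""
    return max(0, total_size_bytes * (old_rep - new_rep))

# fsck in step 3 reported Total size: 580152025 B at an average replication
# of 4.0; dropping to 3 removes roughly one full copy of the data.
print(setrep_delete_estimate(580152025, 4, 3))  # 580152025
```

On this tiny test cluster that is only ~553 MB, but on a production filesystem the same arithmetic explains why the command can take hours.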
5. Run hdfs fsck / again
- [root@sht-sgmhadoopnn-01 hadoop]# hdfs fsck /
- 16/03/06 22:45:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- Connecting to namenode via http://sht-sgmhadoopnn-01:50070/fsck?ugi=root&path=%2F
- FSCK started by root (auth:SIMPLE) from /172.16.101.55 for path / at Sun Mar 06 22:45:47 CST 2016
- ................Status: HEALTHY
- Total size: 580152087 B
- Total dirs: 17
- Total files: 16
- Total symlinks: 0
- Total blocks (validated): 15 (avg. block size 38676805 B)
- Minimally replicated blocks: 15 (100.0 %)
- Over-replicated blocks: 0 (0.0 %)
- Under-replicated blocks: 0 (0.0 %)
- Mis-replicated blocks: 0 (0.0 %)
- Default replication factor: 3
- Average block replication: 3.0
- Corrupt blocks: 0
- Missing replicas: 0 (0.0 %)
- Number of data-nodes: 4
- Number of racks: 1
- FSCK ended at Sun Mar 06 22:45:47 CST 2016 in 7 milliseconds
-
- The filesystem under path '/' is HEALTHY
- You have mail in /var/spool/mail/root
- [root@sht-sgmhadoopnn-01 hadoop]#
### Average block replication is now 3.0
6. First dynamic refresh: hdfs dfsadmin -refreshNodes
- [root@sht-sgmhadoopnn-01 hadoop]# hdfs dfsadmin -refreshNodes
- Refresh nodes successful for sht-sgmhadoopnn-01/172.16.101.55:8020
- Refresh nodes successful for sht-sgmhadoopnn-02/172.16.101.56:8020
7. Monitor progress via hdfs dfsadmin -report or the web UI:
http://172.16.101.55:50070/dfshealth.html#tab-datanode
### The node first shows as Decommission In Progress while its data is rebalanced (datanode1 currently shows used: 138.88 KB, blocks: 0; the blocks are re-replicated to the remaining nodes)
### After a while the state changes to Decommissioned
Important:
Stop all Hadoop jobs before removing the node; otherwise jobs keep writing data to the node being removed, and the Decommissioned state may never be reached.
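Rather than watching the web page, the decommission state can be polled by parsing `hdfs dfsadmin -report`. A minimal sketch; the sample text imitates Hadoop 2.7's plain-text report layout, so verify it against your real output before relying on it:

```python
def decommission_status(report_text: str, hostname: str) -> str:
    """Return the 'Decommission Status' value for one datanode in the
    plain-text `hdfs dfsadmin -report` output (layout assumed from 2.7)."""
    lines = report_text.splitlines()
    for i, line in enumerate(lines):
        if hostname in line:
            # the status line follows within the node's detail block
            for follow in lines[i:i + 6]:
                if "Decommission Status" in follow:
                    return follow.split(":", 1)[1].strip()
    return "unknown"

sample = """Name: 172.16.101.66:50010 (sht-sgmhadoopdn-04)
Hostname: sht-sgmhadoopdn-04
Decommission Status : Decommission in progress
Configured Capacity: 0 (0 B)"""
print(decommission_status(sample, "sht-sgmhadoopdn-04"))
```

In practice you would feed it `subprocess.check_output(["hdfs", "dfsadmin", "-report"])` in a loop and proceed to step 8 once it returns "Decommissioned".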
8. Once the state is Decommissioned, run hadoop-daemon.sh stop datanode (or simply kill -9 the DataNode process)
- [root@sht-sgmhadoopdn-04 sbin]# jps
- 14508 DataNode
- 11025 Jps
- 15517 NodeManager
- [root@sht-sgmhadoopdn-04 sbin]# ./hadoop-daemon.sh stop datanode
- stopping datanode
- [root@sht-sgmhadoopdn-04 sbin]# jps
- 11056 Jps
- 15517 NodeManager
- [root@sht-sgmhadoopdn-04 sbin]#
9. Hadoop 2.x introduced the YARN framework, so every compute node is also managed through a NodeManager. The procedure mirrors HDFS: to add a node, run sbin/yarn-daemon.sh start nodemanager on it and it joins the cluster; to remove one, run sbin/yarn-daemon.sh stop nodemanager by hand. On the ResourceManager, check the cluster state with yarn node -list.
- [root@sht-sgmhadoopdn-04 sbin]# ./yarn-daemon.sh stop nodemanager
- stopping nodemanager
- [root@sht-sgmhadoopdn-01 ~]# yarn node -list
- 16/03/06 23:39:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- Total Nodes:4
- Node-Id Node-State Node-Http-Address Number-of-Running-Containers
- sht-sgmhadoopdn-04.telenav.cn:54705 RUNNING sht-sgmhadoopdn-04.telenav.cn:23999 0
- sht-sgmhadoopdn-03.telenav.cn:7573 RUNNING sht-sgmhadoopdn-03.telenav.cn:23999 0
- sht-sgmhadoopdn-02.telenav.cn:38316 RUNNING sht-sgmhadoopdn-02.telenav.cn:23999 0
- sht-sgmhadoopdn-01.telenav.cn:43903 RUNNING sht-sgmhadoopdn-01.telenav.cn:23999 0
- [root@sht-sgmhadoopdn-04 sbin]# jps
- 11158 Jps
- [root@sht-sgmhadoopdn-04 sbin]#
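The `yarn node -list` output above is easy to post-process when scripting the check, for example to see which NodeManagers are still RUNNING. A small sketch over a shortened sample of that output:

```python
def running_nodes(node_list_output: str):
    """Node-Ids of NodeManagers reported as RUNNING by `yarn node -list`."""
    return [line.split()[0] for line in node_list_output.splitlines()
            if "RUNNING" in line]

sample = """Total Nodes:2
         Node-Id      Node-State Node-Http-Address Number-of-Running-Containers
sht-sgmhadoopdn-03.telenav.cn:7573 RUNNING sht-sgmhadoopdn-03.telenav.cn:23999 0
sht-sgmhadoopdn-02.telenav.cn:38316 RUNNING sht-sgmhadoopdn-02.telenav.cn:23999 0"""
print(running_nodes(sample))
```

Note that, as the transcript above shows, the stopped NodeManager may still be listed as RUNNING for a while; the ResourceManager only drops it after a timeout or a `yarn rmadmin -refreshNodes` (step 13).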
10. Comment out the datanode being removed from the cluster
- [root@sht-sgmhadoopnn-01 hadoop]# vi include_datanode
- sht-sgmhadoopdn-01
- sht-sgmhadoopdn-02
- sht-sgmhadoopdn-03
- #sht-sgmhadoopdn-04
- [root@sht-sgmhadoopnn-01 hadoop]# vi exclude_datanode
- #sht-sgmhadoopdn-04
- [root@sht-sgmhadoopnn-01 hadoop]# scp include_datanode exclude_datanode root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2/etc/hadoop/
### I do not also comment the host out of the slaves file here, because dfs.hosts takes precedence over slaves; that said, commenting sht-sgmhadoopdn-04 out of slaves as well would be perfectly reasonable.
### The main point is that the earlier steps were all synced from nn-01 to nn-02, so this step should be kept consistent on both nodes too.
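Commenting a host out of two files on two NameNodes by hand is repetitive and error-prone; the edit itself is trivial to script. A hypothetical helper (pure string manipulation, no Hadoop involved):

```python
def comment_out(lines, host):
    """Return host-file lines with the exact hostname commented out;
    already-commented or non-matching lines pass through unchanged."""
    return ["#" + line if line.strip() == host else line for line in lines]

before = ["sht-sgmhadoopdn-03", "sht-sgmhadoopdn-04"]
print(comment_out(before, "sht-sgmhadoopdn-04"))
# ['sht-sgmhadoopdn-03', '#sht-sgmhadoopdn-04']
```

Wrapped around `open(path).read().splitlines()` and a rewrite of the file, the same helper handles include_datanode, exclude_datanode, and slaves identically.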
Open question: how do you clear the Decommissioned datanode information — by running hdfs dfsadmin -refreshNodes, or by restarting the cluster?
11. Second dynamic refresh: hdfs dfsadmin -refreshNodes (the correct approach; no cluster restart needed, suitable for production)
[root@sht-sgmhadoopnn-01 hadoop]# hdfs dfsadmin -refreshNodes
### The Decommissioned datanode information is cleared!
12. Verify via a cluster restart (also correct, but it requires a restart, so it is unsuitable for production; the hosts that should no longer connect to the namenode must be commented out of the slaves file)
Stop the cluster
[root@sht-sgmhadoopnn-01 sbin]# stop-yarn.sh
[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh stop resourcemanager
[root@sht-sgmhadoopnn-01 sbin]# stop-dfs.sh
Restart the cluster
[root@sht-sgmhadoopnn-01 sbin]# start-dfs.sh
[root@sht-sgmhadoopnn-01 sbin]# start-yarn.sh
[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh start resourcemanager
### The Decommissioned datanode information is cleared!
13. Run yarn rmadmin -refreshNodes to clear the sht-sgmhadoopdn-04 NodeManager information
Check via the command line or the web UI:
yarn node -list
http://172.16.101.55:8088/cluster/nodes
[root@sht-sgmhadoopnn-01 bin]# yarn rmadmin -refreshNodes
### Refresh the web page
14. Official explanation of the parameters
http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
【dfs.hosts】 :
Names a file that contains a list of hosts that are permitted to connect to the namenode.
The full pathname of the file must be specified. If the value is empty, all hosts are permitted.
【dfs.hosts.exclude】 :
Names a file that contains a list of hosts that are not permitted to connect to the namenode.
The full pathname of the file must be specified. If the value is empty, no hosts are excluded.
[root@sht-sgmhadoopnn-01 hadoop]# hadoop dfsadmin -help
-refreshNodes: Updates the namenode with the set of datanodes allowed to connect to the namenode.
Namenode re-reads datanode hostnames from the file defined by
dfs.hosts, dfs.hosts.exclude configuration parameters.
Hosts defined in dfs.hosts are the datanodes that are part of
the cluster. If there are entries in dfs.hosts, only the hosts
in it are allowed to register with the namenode.
Entries in dfs.hosts.exclude are datanodes that need to be
decommissioned. Datanodes complete decommissioning when
all the replicas from them are replicated to other datanodes.
Decommissioned nodes are not automatically shutdown and
are not chosen for writing new replicas.
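The help text above implies a small decision table for each datanode host. A sketch of that logic as I read it — my interpretation, not a Hadoop API, and in particular the handling of an empty include list follows the hdfs-default.xml wording ("If the value is empty, all hosts are permitted"):

```python
def node_state(host, include_hosts, exclude_hosts):
    """One reading of the dfs.hosts / dfs.hosts.exclude semantics quoted above."""
    if include_hosts and host not in include_hosts:
        return "rejected"        # non-empty dfs.hosts: only listed hosts register
    if host in exclude_hosts:
        return "decommission"    # allowed to connect, but slated for decommission
    return "allowed"

print(node_state("sht-sgmhadoopdn-04",
                 {"sht-sgmhadoopdn-01", "sht-sgmhadoopdn-04"},
                 {"sht-sgmhadoopdn-04"}))  # decommission
```

This is why step 2 listed sht-sgmhadoopdn-04 in both files: present in both, it is decommissioned; present only in exclude, it would simply be refused.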
