Getting into the Swing of Things: Fixing a Hadoop NameNode That Fails to Start
Notice the error line below?
java.io.FileNotFoundException: /app/hadoop/tmp/dfs/name/current/VERSION (Permission denied)
As I recall, I had just tried starting it with the root account.
Damn. When I switched back to the hduser account, it refused to start.
Looking at the directory containing VERSION, I found the cause: after each startup, the files under the name directory become owned by whichever user started the process, in this case root.
So I changed the ownership back to hduser.
Everything was OK.
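For reference, a minimal sketch of that fix, assuming the name directory is /app/hadoop/tmp/dfs/name as shown in the log below, and that hduser belongs to a hadoop group (the group name is an assumption from a typical single-node setup; adjust it to your environment):

# Check who currently owns the name directory and the VERSION file
ls -ld /app/hadoop/tmp/dfs/name /app/hadoop/tmp/dfs/name/current/VERSION
# Recursively hand ownership back to hduser (group "hadoop" is assumed)
sudo chown -R hduser:hadoop /app/hadoop/tmp/dfs/name

After that, start the NameNode as hduser rather than root, so the ownership does not flip back on the next startup.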
~~~~~~~~~~~~~~~~~~~~~~~
2013-04-11 01:52:51,606 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = Master/192.168.7.238
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.1.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
2013-04-11 01:52:53,212 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-11 01:52:53,324 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-04-11 01:52:53,333 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-11 01:52:53,333 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-04-11 01:52:55,098 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-11 01:52:55,408 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-04-11 01:52:55,442 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-04-11 01:52:55,771 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-04-11 01:52:55,772 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2013-04-11 01:52:55,772 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-04-11 01:52:55,772 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-04-11 01:52:56,135 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
2013-04-11 01:52:56,136 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-04-11 01:52:56,136 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-04-11 01:52:56,219 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-04-11 01:52:56,220 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-04-11 01:52:58,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-04-11 01:52:59,185 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-04-11 01:52:59,351 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.FileNotFoundException: /app/hadoop/tmp/dfs/name/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:219)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:215)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:314)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:284)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:536)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1410)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1419)
2013-04-11 01:52:59,418 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.FileNotFoundException: /app/hadoop/tmp/dfs/name/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:219)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:215)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:314)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:284)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:536)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1410)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1419)
2013-04-11 01:52:59,437 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Master/192.168.7.238
************************************************************/
