
Featured Articles

优秀的个人博客,低调大师

Handling "ClusterId read in ZooKeeper is null"

ClusterId read in ZooKeeper is null. Re-running the program after fixing issue 1 results in the following error in the log file (oddly, logged at INFO level):

13/12/11 09:45:33 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x207f5580
13/12/11 09:45:33 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x207f5580 connecting to ZooKeeper ensemble=localhost:2181
13/12/11 09:45:33 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (java.lang.SecurityException: Unable to locate a login configuration)
13/12/11 09:45:33 INFO zookeeper.ClientCnxn: Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
13/12/11 09:45:33 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x142e28373f3000c, negotiated timeout = 40000
13/12/11 09:45:33 INFO client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null

Solution

HBase clients discover the running HBase cluster using the following two properties:

hbase.zookeeper.quorum: used to connect to the ZooKeeper cluster
zookeeper.znode.parent: tells which znode keeps the data (and the address of the HMaster) for the cluster

The value of zookeeper.znode.parent in HBASE_CONF/hbase-site.xml is specified as /hbase-unsecure (see below), which is correct, but for some reason (still trying to figure this out) the value being printed is /hbase. So for now I have overridden it programmatically in the client program by adding the following line:

conf.set("zookeeper.znode.parent", "/hbase-unsecure");
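For reference, the hbase-site.xml entry the post describes (with the /hbase-unsecure value it mentions) would look like this sketch, using the standard Hadoop configuration-file layout:

```xml
<!-- HBASE_CONF/hbase-site.xml (fragment) -->
<property>
  <!-- Parent znode under which HBase stores cluster state -->
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
```

If this file is on the client's classpath and the value still resolves to /hbase at runtime, the programmatic conf.set(...) override above takes precedence.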


Handling an interrupted Hive executeTask

The exception is as follows:

java.io.IOException: Failed on local exception: java.nio.channels.ClosedByInterruptException; Host Details : local host is: "hadoop008/192.168.28.77"; destination host is: "hadoop004":8020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
    at org.apache.hadoop.ipc.Client.call(Client.java:1415)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy17.setReplication(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setReplication(ClientNamenodeProtocolTranslatorPB.java:322)
    at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy18.setReplication(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.setReplication(DFSClient.java:1768)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:465)
    at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:461)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setReplication(DistributedFileSystem.java:461)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyRemoteFiles(JobSubmitter.java:142)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:214)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:388)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:481)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:564)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:559)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:559)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:550)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1554)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1321)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1139)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:957)
    at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:145)
    at org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:69)
    at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
    at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:213)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:681)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
    at org.apache.hadoop.ipc.Client.call(Client.java:1382)
    ... 54 more

Solution: set an appropriate RPC timeout for HiveOptions.callTimeout.

References:
https://issues.apache.org/jira/browse/FLUME-1748
http://stackoverflow.com/questions/32186212/hive-is-failing-to-write-data-in-hdfs
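In Flume's Hive sink (the context of the FLUME-1748 ticket above), this timeout surfaces as the sink's callTimeout property. A sketch of an agent configuration raising it follows; the agent/sink names and the metastore host are hypothetical examples, not from the post:

```properties
# flume.conf (fragment) -- agent "a1" and sink "k1" are example names
a1.sinks.k1.type = hive
a1.sinks.k1.hive.metastore = thrift://hadoop004:9083
a1.sinks.k1.hive.database = default
a1.sinks.k1.hive.table = example_table
# Raise the Hive call/RPC timeout (milliseconds; the default is 10000)
a1.sinks.k1.callTimeout = 60000
```

A larger value gives slow HDFS/metastore calls time to finish instead of being interrupted mid-connect, which is what produces the ClosedByInterruptException above.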


Books on big data processing

Apache YARN:
http://www.amazon.com/Apache-Hadoop-YARN-Processing-Addison-Wesley/dp/0321934504/ref=sr_1_1?ie=UTF8&qid=1383118090&sr=8-1&keywords=apache+yarn

ZooKeeper:
http://www.amazon.com/ZooKeeper-Distributed-coordination-Flavio-Junqueira/dp/1449361307/ref=sr_1_sc_3?ie=UTF8&qid=1384052593&sr=8-3-spell&keywords=zoomkeeper

Data Analysis with Open Source Tools:
http://www.amazon.com/Data-Analysis-Open-Source-Tools/dp/0596802358/ref=sr_1_13?ie=UTF8&qid=1384052694&sr=8-13&keywords=hbase

Fast Data Processing with Spark:
http://www.amazon.com/Fast-Processing-Spark-Holden-Karau/dp/1782167064/ref=sr_1_2?ie=UTF8&qid=1383118177&sr=8-2&keywords=apache+spark

Learning Spark:
http://www.amazon.com/Learning-Spark-Lightning-fast-data-analytics/dp/1449358624/ref=pd_sim_sbs_b_3/190-4875999-4382525

Related project directories:
http://www.oschina.net/project/tag/191/distributed-and-grid?lang=0&os=0&sort=view&p=1
http://www.oschina.net/project/tag/144/logging

Log correlation engine:
http://www.oschina.net/p/masslogprocess

Kibana:
http://product.china-pub.com/3768532#ml
http://product.china-pub.com/3768650
http://product.china-pub.com/3768791


Fixing a dead "Browse the filesystem" link in Hadoop

After installing HDFS and getting it running normally, visiting http://xxxxxxxxx:50070 brings up the page shown below, where "Browse the filesystem" is the entry point for browsing the file system. The link, however, turned out to be dead. Chrome's developer tools show that it points to nn_browsedfscontent.jsp. Here is the code of nn_browsedfscontent.jsp:

<%@ page
  contentType="text/html; charset=UTF-8"
  import="java.io.*"
  import="java.security.PrivilegedExceptionAction"
  import="java.util.*"
  import="javax.servlet.*"
  import="javax.servlet.http.*"
  import="org.apache.hadoop.conf.Configuration"
  import="org.apache.hadoop.hdfs.*"
  import="org.apache.hadoop.hdfs.server.namenode.*"
  import="org.apache.hadoop.hdfs.server.datanode.*"
  import="org.apache.hadoop.hdfs.protocol.*"
  import="org.apache.hadoop.hdfs.security.token.delegation.*"
  import="org.apache.hadoop.io.Text"
  import="org.apache.hadoop.security.UserGroupInformation"
  import="org.apache.hadoop.security.token.Token"
  import="org.apache.hadoop.util.*"
  import="java.text.DateFormat"
  import="java.net.InetAddress"
  import="java.net.URLEncoder"
%>
<%!
  static String getDelegationToken(final NameNode nn,
      HttpServletRequest request, Configuration conf)
      throws IOException, InterruptedException {
    final UserGroupInformation ugi = JspHelper.getUGI(request, conf);
    Token<DelegationTokenIdentifier> token =
        ugi.doAs(
            new PrivilegedExceptionAction<Token<DelegationTokenIdentifier>>() {
              public Token<DelegationTokenIdentifier> run() throws IOException {
                return nn.getDelegationToken(new Text(ugi.getUserName()));
              }
            });
    return token.encodeToUrlString();
  }

  public void redirectToRandomDataNode(
      NameNode nn,
      HttpServletRequest request,
      HttpServletResponse resp,
      Configuration conf) throws IOException, InterruptedException {
    String tokenString = null;
    if (UserGroupInformation.isSecurityEnabled()) {
      tokenString = getDelegationToken(nn, request, conf);
    }
    FSNamesystem fsn = nn.getNamesystem();
    String datanode = fsn.randomDataNode();
    String redirectLocation;
    String nodeToRedirect;
    int redirectPort;
    if (datanode != null) {
      redirectPort = Integer.parseInt(datanode.substring(datanode.indexOf(':') + 1));
      nodeToRedirect = datanode.substring(0, datanode.indexOf(':'));
    } else {
      nodeToRedirect = nn.getHttpAddress().getHostName();
      redirectPort = nn.getHttpAddress().getPort();
    }
    String fqdn = InetAddress.getByName(nodeToRedirect).getCanonicalHostName();
    redirectLocation = "http://" + fqdn + ":" + redirectPort +
        "/browseDirectory.jsp?namenodeInfoPort=" +
        nn.getHttpAddress().getPort() +
        "&dir=/" +
        (tokenString == null ? "" :
         JspHelper.getDelegationTokenUrlParam(tokenString));
    resp.sendRedirect(redirectLocation);
  }
%>

<html>
<title></title>
<body>
<%
  NameNode nn = (NameNode) application.getAttribute("name.node");
  Configuration conf = (Configuration) application.getAttribute(JspHelper.CURRENT_CONF);
  redirectToRandomDataNode(nn, request, response, conf);
%>
<hr>
<h2>Local logs</h2>
<a href="/logs/">Log</a> directory
<%
  out.println(ServletUtil.htmlFooter());
%>

As the code shows, the page actually redirects to browseDirectory.jsp on one of the datanodes, e.g.: http://xxxxxxx:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/ where xxxxxxx is the hostname of a datanode in the cluster. So there are two things to check when the link is dead:
1. Is port 50075 listening on the datanode?
2. Can the machine you are browsing from reach the datanode by hostname?
It turned out neither was configured in my case. The problem was fixed by adding the following to hdfs-site.xml:

<property>
  <name>dfs.datanode.http.address</name>
  <value>10.0.0.234:50075</value>
</property>

and adding the datanode's hostname and IP to the hosts file. Noting this down for the record. If this post helped you, click "Recommend" at the bottom right.
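For the second check, the machine running the browser must resolve the datanode's hostname. A sketch of the hosts entry, using the IP from the post; the hostname "datanode1" is a hypothetical example:

```
# /etc/hosts on the client machine (C:\Windows\System32\drivers\etc\hosts on Windows)
10.0.0.234   datanode1
```

With this in place, the redirect URL http://datanode1:50075/browseDirectory.jsp?... resolves and the "Browse the filesystem" link works.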


OpenAI launches a dedicated translation web page with image and document support

OpenAI recently quietly launched a new feature called ChatGPT Translate. The web interface closely resembles Google Translate and is open to all users; no paid account is required. The tool can be accessed directly at chatgpt.com/translate. Although the interface is minimal, its capabilities go well beyond traditional translation software:

Multimodal input: supports pasting text, voice input, attaching files, and even uploading photos of signs or menus.
Meaning-first translation: unlike word-for-word translation, the feature aims to preserve the deeper meaning of the original text, and users can adjust the tone in real time, e.g. "business formal", "academic", or "plain language".
Conversational fine-tuning: the biggest difference from traditional translation tools is that, after a translation is produced, users can keep chatting with it to query and fine-tune specific words or phrasings.

OpenAI has not yet said which model powers the feature, prompting outside speculation about whether it is driven by GPT5.2. For now the feature is web-only; the Android and iOS clients have no dedicated toggle yet.
