UnsupportedOperationException: Currently the writer can only accept BytesRefArrayWritable
Inserting into a Hive table fails with the following error:
java.lang.RuntimeException: java.lang.UnsupportedOperationException: Currently the writer can only accept BytesRefArrayWritable
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.lang.UnsupportedOperationException: Currently the writer can only accept BytesRefArrayWritable
    at org.apache.hadoop.hive.ql.io.RCFile$Writer.append(RCFile.java:880)
    at org.apache.hadoop.hive.ql.io.RCFileOutputFormat$2.write(RCFileOutputFormat.java:140)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:606)
This looks like a problem with the table's storage format (see http://comments.gmane.org/gmane.comp.java.hadoop.hive.user/2849): the table declares the RCFile input and output formats, but ROW FORMAT DELIMITED means rows are serialized by the default LazySimpleSerDe, which emits Text. The RCFile writer only accepts BytesRefArrayWritable, the type produced by ColumnarSerDe, hence the exception.
The table was created with:
    CREATE TABLE client_user_type_installtime (
        userkey string,
        mos string,
        type string)
    PARTITIONED BY (
        dt string,
        installtime_type string)
    ROW FORMAT DELIMITED
        FIELDS TERMINATED BY '9'
        LINES TERMINATED BY '10'
    STORED AS
        INPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileInputFormat'
        OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'
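The mismatch can be confirmed on the existing table before changing anything: DESCRIBE FORMATTED (or DESCRIBE EXTENDED on older Hive releases) prints the SerDe library alongside the input/output formats. For the definition above it should show something like the abridged output below, with LazySimpleSerDe paired with the RCFile formats:

    DESCRIBE FORMATTED client_user_type_installtime;

    -- Abridged expected output:
    --   SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
    --   InputFormat:   org.apache.hadoop.hive.ql.io.RCFileInputFormat
    --   OutputFormat:  org.apache.hadoop.hive.ql.io.RCFileOutputFormat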
The corrected statement switches the table to plain text storage:
    CREATE TABLE client_user_type_installtime (
        userkey string,
        mos string,
        type string)
    PARTITIONED BY (
        dt string,
        installtime_type string)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
    STORED AS
        INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
        OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
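The original failing INSERT is not shown in the post; a hypothetical example of the kind of partition insert that would have triggered the error (the source table and partition values are invented for illustration):

    -- Hypothetical insert; some_source_table and the partition
    -- values are placeholders, not from the original post.
    INSERT OVERWRITE TABLE client_user_type_installtime
        PARTITION (dt = '20150301', installtime_type = 'new')
    SELECT userkey, mos, type
    FROM some_source_table;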
After dropping and recreating the table with this definition, the insert succeeded.
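Falling back to text storage sidesteps the problem but gives up RCFile's columnar layout. If the table should stay in RCFile format, the alternative fix is to pair the RCFile formats with ColumnarSerDe, whose serialized rows are exactly the BytesRefArrayWritable the writer expects. A minimal sketch with the same columns (on many Hive versions plain STORED AS RCFILE already selects a columnar SerDe by default):

    -- Keep RCFile storage, but use a SerDe that produces
    -- BytesRefArrayWritable, matching the RCFile writer.
    CREATE TABLE client_user_type_installtime (
        userkey string,
        mos string,
        type string)
    PARTITIONED BY (
        dt string,
        installtime_type string)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe'
    STORED AS RCFILE;

    -- STORED AS RCFILE is shorthand for:
    --   STORED AS
    --     INPUTFORMAT  'org.apache.hadoop.hive.ql.io.RCFileInputFormat'
    --     OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.RCFileOutputFormat'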
