Big Data Tools: Flume 1.4 Installation and Deployment Guide

1. Introduction

flume-ng is a distributed, highly reliable, and efficient log collection system. The "ng" stands for "next generation"; at the time of writing, flume-ng 1.4 is the latest release. flume-ng is a major departure from the original Flume. I had stayed on Flume 0.9 for a long time and never upgraded; a recent project finally forced the move, and I ran into a number of problems along the way, which I am recording here to share.

2. Version

flume-ng 1.4.0

3. Installation steps

Downloading, unpacking, installing the JDK, and setting environment variables are covered in plenty of introductory material and are not repeated here. One point worth calling out: flume-ng does not need ZooKeeper, so there is nothing to configure for it.

4. flume-ng bug

Running flume-ng after installation fails with errors caused by the launcher shell script. My corrected flume-ng script is reproduced in full below; the lines following each #zhangzl marker are the ones that were modified:

#!/bin/bash
#
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

################################
# constants
################################

FLUME_AGENT_CLASS="org.apache.flume.node.Application"
FLUME_AVRO_CLIENT_CLASS="org.apache.flume.client.avro.AvroCLIClient"
FLUME_VERSION_CLASS="org.apache.flume.tools.VersionInfo"
FLUME_TOOLS_CLASS="org.apache.flume.tools.FlumeToolsMain"

CLEAN_FLAG=1
################################
# functions
################################

info() {
  if [ ${CLEAN_FLAG} -ne 0 ]; then
    local msg=$1
    echo "Info: $msg" >&2
  fi
}

warn() {
  if [ ${CLEAN_FLAG} -ne 0 ]; then
    local msg=$1
    echo "Warning: $msg" >&2
  fi
}

error() {
  local msg=$1
  local exit_code=$2

  echo "Error: $msg" >&2

  if [ -n "$exit_code" ] ; then
    exit $exit_code
  fi
}

# If avail, add Hadoop paths to the FLUME_CLASSPATH and to the
# FLUME_JAVA_LIBRARY_PATH env vars.
# Requires Flume jars to already be on FLUME_CLASSPATH.
add_hadoop_paths() {
  local HADOOP_IN_PATH=$(PATH="${HADOOP_HOME:-${HADOOP_PREFIX}}/bin:$PATH" \
      which hadoop 2>/dev/null)

  if [ -f "${HADOOP_IN_PATH}" ]; then
    info "Including Hadoop libraries found via ($HADOOP_IN_PATH) for HDFS access"

    # determine hadoop java.library.path and use that for flume
    local HADOOP_CLASSPATH=""
    local HADOOP_JAVA_LIBRARY_PATH=$(HADOOP_CLASSPATH="$FLUME_CLASSPATH" \
        ${HADOOP_IN_PATH} org.apache.flume.tools.GetJavaProperty \
        java.library.path)

    # look for the line that has the desired property value
    # (considering extraneous output from some GC options that write to stdout)
    # IFS = InternalFieldSeparator (set to recognize only newline char as delimiter)
    IFS=$'\n'
    for line in $HADOOP_JAVA_LIBRARY_PATH; do
      #if [[ $line =~ ^java\.library\.path=(.*)$ ]]; then
      if [[ "$line" =~ "^java\.library\.path=(.*)$" ]]; then
        HADOOP_JAVA_LIBRARY_PATH=${BASH_REMATCH[1]}
        break
      fi
    done
    unset IFS

    if [ -n "${HADOOP_JAVA_LIBRARY_PATH}" ]; then
      FLUME_JAVA_LIBRARY_PATH="$FLUME_JAVA_LIBRARY_PATH:$HADOOP_JAVA_LIBRARY_PATH"
    fi

    # determine hadoop classpath
    HADOOP_CLASSPATH=$($HADOOP_IN_PATH classpath)

    # hack up and filter hadoop classpath
    local ELEMENTS=$(sed -e 's/:/ /g' <<<${HADOOP_CLASSPATH})
    local ELEMENT
    for ELEMENT in $ELEMENTS; do
      local PIECE
      for PIECE in $(echo $ELEMENT); do
        #zhangzl
        if [[ $PIECE =~ "slf4j-(api|log4j12).*\.jar" ]]; then
          info "Excluding $PIECE from classpath"
          continue
        else
          FLUME_CLASSPATH="$FLUME_CLASSPATH:$PIECE"
        fi
      done
    done

  fi
}
add_HBASE_paths() {
  local HBASE_IN_PATH=$(PATH="${HBASE_HOME}/bin:$PATH" \
      which hbase 2>/dev/null)

  if [ -f "${HBASE_IN_PATH}" ]; then
    info "Including HBASE libraries found via ($HBASE_IN_PATH) for HBASE access"

    # determine HBASE java.library.path and use that for flume
    local HBASE_CLASSPATH=""
    local HBASE_JAVA_LIBRARY_PATH=$(HBASE_CLASSPATH="$FLUME_CLASSPATH" \
        ${HBASE_IN_PATH} org.apache.flume.tools.GetJavaProperty \
        java.library.path)

    # look for the line that has the desired property value
    # (considering extraneous output from some GC options that write to stdout)
    # IFS = InternalFieldSeparator (set to recognize only newline char as delimiter)
    IFS=$'\n'
    for line in $HBASE_JAVA_LIBRARY_PATH; do
      #zhangzl
      if [[ $line =~ "^java\.library\.path=(.*)$" ]]; then
        HBASE_JAVA_LIBRARY_PATH=${BASH_REMATCH[1]}
        break
      fi
    done
    unset IFS

    if [ -n "${HBASE_JAVA_LIBRARY_PATH}" ]; then
      FLUME_JAVA_LIBRARY_PATH="$FLUME_JAVA_LIBRARY_PATH:$HBASE_JAVA_LIBRARY_PATH"
    fi

    # determine HBASE classpath
    HBASE_CLASSPATH=$($HBASE_IN_PATH classpath)

    # hack up and filter HBASE classpath
    local ELEMENTS=$(sed -e 's/:/ /g' <<<${HBASE_CLASSPATH})
    local ELEMENT
    for ELEMENT in $ELEMENTS; do
      local PIECE
      for PIECE in $(echo $ELEMENT); do
        #zhangzl
        if [[ $PIECE =~ "slf4j-(api|log4j12).*\.jar" ]]; then
          info "Excluding $PIECE from classpath"
          continue
        else
          FLUME_CLASSPATH="$FLUME_CLASSPATH:$PIECE"
        fi
      done
    done
    FLUME_CLASSPATH="$FLUME_CLASSPATH:$HBASE_HOME/conf"

  fi
}

set_LD_LIBRARY_PATH(){
  #Append the FLUME_JAVA_LIBRARY_PATH to whatever the user may have specified in
  #flume-env.sh
  if [ -n "${FLUME_JAVA_LIBRARY_PATH}" ]; then
    export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${FLUME_JAVA_LIBRARY_PATH}"
  fi
}

display_help() {
  cat <<EOF
Usage: $0 <command> [options]...

commands:
  help                  display this help text
  agent                 run a Flume agent
  avro-client           run an avro Flume client
  version               show Flume version info

global options:
  --conf,-c <conf>      use configs in <conf> directory
  --classpath,-C <cp>   append to the classpath
  --dryrun,-d           do not actually start Flume, just print the command
  --plugins-path <dirs> colon-separated list of plugins.d directories. See the
                        plugins.d section in the user guide for more details.
                        Default: \$FLUME_HOME/plugins.d
  -Dproperty=value      sets a Java system property value
  -Xproperty=value      sets a Java -X option

agent options:
  --conf-file,-f <file> specify a config file (required)
  --name,-n <name>      the name of this agent (required)
  --help,-h             display help text

avro-client options:
  --rpcProps,-P <file>   RPC client properties file with server connection params
  --host,-H <host>       hostname to which events will be sent
  --port,-p <port>       port of the avro source
  --dirname <dir>        directory to stream to avro source
  --filename,-F <file>   text file to stream to avro source (default: std input)
  --headerFile,-R <file> File containing event headers as key/value pairs on each new line
  --help,-h              display help text

  Either --rpcProps or both --host and --port must be specified.

Note that if <conf> directory is specified, then it is always included first
in the classpath.

EOF
}

run_flume() {
  local FLUME_APPLICATION_CLASS

  if [ "$#" -gt 0 ]; then
    FLUME_APPLICATION_CLASS=$1
    shift
  else
    error "Must specify flume application class" 1
  fi

  if [ ${CLEAN_FLAG} -ne 0 ]; then
    set -x
  fi
  $EXEC $JAVA_HOME/bin/java $JAVA_OPTS -cp "$FLUME_CLASSPATH" \
      -Djava.library.path=$FLUME_JAVA_LIBRARY_PATH "$FLUME_APPLICATION_CLASS" $*
}

################################
# main
################################

# set default params
FLUME_CLASSPATH=""
FLUME_JAVA_LIBRARY_PATH=""
JAVA_OPTS="-Xmx20m"
LD_LIBRARY_PATH=""

opt_conf=""
opt_classpath=""
opt_plugins_dirs=""
opt_java_props=""
opt_dryrun=""

mode=$1
shift

case "$mode" in
  help)
    display_help
    exit 0
    ;;
  agent)
    opt_agent=1
    ;;
  node)
    opt_agent=1
    warn "The \"node\" command is deprecated. Please use \"agent\" instead."
    ;;
  avro-client)
    opt_avro_client=1
    ;;
  tool)
    opt_tool=1
    ;;
  version)
    opt_version=1
    CLEAN_FLAG=0
    ;;
  *)
    error "Unknown or unspecified command '$mode'"
    echo
    display_help
    exit 1
    ;;
esac

args=""
while [ -n "$*" ] ; do
  arg=$1
  shift

  case "$arg" in
    --conf|-c)
      [ -n "$1" ] || error "Option --conf requires an argument" 1
      opt_conf=$1
      shift
      ;;
    --classpath|-C)
      [ -n "$1" ] || error "Option --classpath requires an argument" 1
      opt_classpath=$1
      shift
      ;;
    --dryrun|-d)
      opt_dryrun="1"
      ;;
    --plugins-path)
      opt_plugins_dirs=$1
      shift
      ;;
    -D*)
      opt_java_props="$opt_java_props $arg"
      ;;
    -X*)
      opt_java_props="$opt_java_props $arg"
      ;;
    *)
      args="$args $arg"
      ;;
  esac
done

# make opt_conf absolute
if [[ -n "$opt_conf" && -d "$opt_conf" ]]; then
  opt_conf=$(cd $opt_conf; pwd)
fi

# allow users to override the default env vars via conf/flume-env.sh
if [ -z "$opt_conf" ]; then
  warn "No configuration directory set! Use --conf <dir> to override."
elif [ -f "$opt_conf/flume-env.sh" ]; then
  info "Sourcing environment configuration script $opt_conf/flume-env.sh"
  source "$opt_conf/flume-env.sh"
fi

# append command-line java options to stock or env script JAVA_OPTS
if [ -n "${opt_java_props}" ]; then
  JAVA_OPTS="${JAVA_OPTS} ${opt_java_props}"
fi

# prepend command-line classpath to env script classpath
if [ -n "${opt_classpath}" ]; then
  if [ -n "${FLUME_CLASSPATH}" ]; then
    FLUME_CLASSPATH="${opt_classpath}:${FLUME_CLASSPATH}"
  else
    FLUME_CLASSPATH="${opt_classpath}"
  fi
fi

if [ -z "${FLUME_HOME}" ]; then
  FLUME_HOME=$(cd $(dirname $0)/..; pwd)
fi

# prepend $FLUME_HOME/lib jars to the specified classpath (if any)
if [ -n "${FLUME_CLASSPATH}" ] ; then
  FLUME_CLASSPATH="${FLUME_HOME}/lib/*:$FLUME_CLASSPATH"
else
  FLUME_CLASSPATH="${FLUME_HOME}/lib/*"
fi

# load plugins.d directories
PLUGINS_DIRS=""
if [ -n "${opt_plugins_dirs}" ]; then
  PLUGINS_DIRS=$(sed -e 's/:/ /g' <<<${opt_plugins_dirs})
else
  PLUGINS_DIRS="${FLUME_HOME}/plugins.d"
fi

unset plugin_lib plugin_libext plugin_native
for PLUGINS_DIR in $PLUGINS_DIRS; do
  if [[ -d ${PLUGINS_DIR} ]]; then
    for plugin in ${PLUGINS_DIR}/*; do
      if [[ -d "$plugin/lib" ]]; then
        plugin_lib="${plugin_lib}${plugin_lib+:}${plugin}/lib/*"
      fi
      if [[ -d "$plugin/libext" ]]; then
        plugin_libext="${plugin_libext}${plugin_libext+:}${plugin}/libext/*"
      fi
      if [[ -d "$plugin/native" ]]; then
        plugin_native="${plugin_native}${plugin_native+:}${plugin}/native"
      fi
    done
  fi
done

if [[ -n "${plugin_lib}" ]]
then
  FLUME_CLASSPATH="${FLUME_CLASSPATH}:${plugin_lib}"
fi

if [[ -n "${plugin_libext}" ]]
then
  FLUME_CLASSPATH="${FLUME_CLASSPATH}:${plugin_libext}"
fi

if [[ -n "${plugin_native}" ]]
then
  if [[ -n "${FLUME_JAVA_LIBRARY_PATH}" ]]
  then
    FLUME_JAVA_LIBRARY_PATH="${FLUME_JAVA_LIBRARY_PATH}:${plugin_native}"
  else
    FLUME_JAVA_LIBRARY_PATH="${plugin_native}"
  fi
fi

# find java
if [ -z "${JAVA_HOME}" ] ; then
  warn "JAVA_HOME is not set!"
  # Try to use Bigtop to autodetect JAVA_HOME if it's available
  if [ -e /usr/libexec/bigtop-detect-javahome ] ; then
    . /usr/libexec/bigtop-detect-javahome
  elif [ -e /usr/lib/bigtop-utils/bigtop-detect-javahome ] ; then
    . /usr/lib/bigtop-utils/bigtop-detect-javahome
  fi

  # Using java from path if bigtop is not installed or couldn't find it
  if [ -z "${JAVA_HOME}" ] ; then
    JAVA_DEFAULT=$(type -p java)
    [ -n "$JAVA_DEFAULT" ] || error "Unable to find java executable. Is it in your PATH?" 1
    JAVA_HOME=$(cd $(dirname $JAVA_DEFAULT)/..; pwd)
  fi
fi

# look for hadoop libs
add_hadoop_paths
add_HBASE_paths

# prepend conf dir to classpath
if [ -n "$opt_conf" ]; then
  FLUME_CLASSPATH="$opt_conf:$FLUME_CLASSPATH"
fi

set_LD_LIBRARY_PATH
# allow dryrun
EXEC="exec"
if [ -n "${opt_dryrun}" ]; then
  warn "Dryrun mode enabled (will not actually initiate startup)"
  EXEC="echo"
fi

# finally, invoke the appropriate command
if [ -n "$opt_agent" ] ; then
  run_flume $FLUME_AGENT_CLASS $args
elif [ -n "$opt_avro_client" ] ; then
  run_flume $FLUME_AVRO_CLIENT_CLASS $args
elif [ -n "${opt_version}" ] ; then
  run_flume $FLUME_VERSION_CLASS $args
elif [ -n "${opt_tool}" ] ; then
  run_flume $FLUME_TOOLS_CLASS $args
else
  error "This message should never appear" 1
fi

exit 0
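All of the modified lines quote the pattern on the right-hand side of bash's =~ operator. Whether the quoted or unquoted form is the right one depends on the bash version on your host: since bash 3.2, a quoted right-hand side is matched as a literal string rather than as a regular expression, while older releases treat it as a regex either way. A minimal check you can run to see how your shell behaves (the sample line here is hypothetical):

line='java.library.path=/usr/lib64'
# Unquoted: always interpreted as a regular expression.
if [[ $line =~ ^java\.library\.path=(.*)$ ]]; then
  echo "unquoted pattern matched, captured: ${BASH_REMATCH[1]}"
fi
# Quoted: literal string match on bash >= 3.2, regex on older bash.
if [[ $line =~ "^java\.library\.path=(.*)$" ]]; then
  echo "quoted pattern matched (this bash treats quoted patterns as regexes)"
else
  echo "quoted pattern did not match (this bash matches quoted patterns literally)"
fi

If the quoted variants do not behave as expected on your machine, keep the original unquoted form instead.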
5. Test configuration file

Create a file named example-conf.properties in the conf directory with the following contents:

# Describe the source
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
# Write events to the log
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

6. Running

6.1 Start the agent

[hadoop@hadoop1 conf]$ flume-ng agent -n a1 -f example-conf.properties

6.2 Start an avro-client to send data to the agent (run this in a separate terminal window)

[hadoop@hadoop1 conf]$ flume-ng avro-client -H localhost -p 44444 -F file01
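The avro-client command streams the local file file01 to the avro source. The article does not show the file's contents, but judging from the event body logged in the next section, a file like this would reproduce the output:

echo "hello world" > file01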

7. Checking the result

14/01/16 22:26:34 INFO ipc.NettyServer: [id: 0x0100c7e4, /127.0.0.1:54289 => /127.0.0.1:44444] OPEN
14/01/16 22:26:34 INFO ipc.NettyServer: [id: 0x0100c7e4, /127.0.0.1:54289 => /127.0.0.1:44444] BOUND: /127.0.0.1:44444
14/01/16 22:26:34 INFO ipc.NettyServer: [id: 0x0100c7e4, /127.0.0.1:54289 => /127.0.0.1:44444] CONNECTED: /127.0.0.1:54289
14/01/16 22:26:36 INFO ipc.NettyServer: [id: 0x0100c7e4, /127.0.0.1:54289 :> /127.0.0.1:44444] DISCONNECTED
14/01/16 22:26:36 INFO ipc.NettyServer: [id: 0x0100c7e4, /127.0.0.1:54289 :> /127.0.0.1:44444] UNBOUND
14/01/16 22:26:36 INFO ipc.NettyServer: [id: 0x0100c7e4, /127.0.0.1:54289 :> /127.0.0.1:44444] CLOSED
14/01/16 22:26:36 INFO ipc.NettyServer: Connection to /127.0.0.1:54289 disconnected.
14/01/16 22:26:38 INFO sink.LoggerSink: Event: { headers:{} body: 68 65 6C 6C 6F 20 77 6F 72 6C 64 hello world }

Big Data Tools: A Complete Guide to Integrating Hive with MySQL

1. Introduction

Hive can keep its metastore in an RDBMS. This article walks through integrating Hive with MySQL as the metastore database.

2. Install the JDBC driver

The latest MySQL Java driver at the time of writing is mysql-connector-java-5.1.28-bin.jar. Download it and copy it into Hive's lib directory.

3. Install MySQL

3.1 Version

RHEL 5 + mysql-5.5.35-1.i386.rpm

3.2 Installation order

MySQL-shared-compat-5.5.35-1.rhel5.i386.rpm
MySQL-server-5.5.35-1.rhel5.i386.rpm
MySQL-client-5.5.35-1.rhel5.i386.rpm

4. Configuration

Edit Hive's configuration file hive-site.xml; after the change, the relevant properties look like this:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hivedb?characterEncoding=UTF-8</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.PersistenceManagerFactoryClass</name>
  <value>org.datanucleus.jdo.JDOPersistenceManagerFactory</value>
  <description>class implementing the jdo persistence</description>
</property>

<property>
  <name>javax.jdo.option.DetachAllOnCommit</name>
  <value>true</value>
  <description>detaches all objects from session so that they can be used after transaction is committed</description>
</property>

<property>
  <name>javax.jdo.option.NonTransactionalRead</name>
  <value>true</value>
  <description>reads outside of transactions</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
  <description>password to use against metastore database</description>
</property>
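MySQL does not create the hivedb database named in the JDBC URL above on its own. A minimal preparation sketch, assuming the root/root credentials from the configuration (adjust both to your environment):

mysql -u root -proot <<'SQL'
-- Database name taken from javax.jdo.option.ConnectionURL above
CREATE DATABASE IF NOT EXISTS hivedb DEFAULT CHARACTER SET utf8;
SQL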

5. Verifying the result

Once installation is complete, connect with the mysql client to verify that it succeeded. Note how the tables differ from those an ordinary relational application would create.

mysql> show tables;

+----------------+
| Tables_in_hive |
+----------------+
| BUCKETING_COLS |
| COLUMNS        |
| DBS            |
| PARTITION_KEYS |
| SDS            |
| SD_PARAMS      |
| SEQUENCE_TABLE |
| SERDES         |
| SERDE_PARAMS   |
| SORT_COLS      |
| TABLE_PARAMS   |
| TBLS           |
+----------------+

6. Notes

I have been asked privately: why can't you choose, from inside Hive, which MySQL database to use? The point to understand is that a "database" in Hive is a different concept from an RDBMS database. Which MySQL database holds the metastore is fixed by the configuration file, whereas a Hive database is merely a namespace, similar to a grouping mechanism. The databases created inside Hive show up in the MySQL metastore and can be viewed with SELECT * FROM DBS.
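To see this concretely, you could create a database from the Hive CLI and watch it appear as a row in the DBS table shown above (the database name here is hypothetical):

hive> CREATE DATABASE testdb;

mysql> USE hivedb;          -- the metastore database from the configuration
mysql> SELECT * FROM DBS;   -- testdb should now appear as a row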

Big Data Tools: A Complete Guide to Integrating Hive with HBase

1. Introduction

At a recent training session, a user specifically asked how files already stored in HDFS can be imported into HBase. Writing through the HBase Java API was covered in an earlier article and is not repeated here. This article uses Hive to bulk-import data from HDFS into HBase and explains how to integrate Hive with HBase. A lot has been written on this topic, but version differences make much of it hard to reproduce; everything below is based on the versions in the following table.

No.  Software  Version
1    Hive      0.10.0
2    HBase     0.94.0
3    Hadoop    1.0.1

2. Versions

See the table above.

3. Configuration

3.1 Create the configuration files

cp conf/hive-default.xml.template conf/hive-default.xml
cp conf/hive-default.xml.template conf/hive-site.xml

3.2 Fix the configuration file

The hive-site.xml produced by copying hive-default.xml.template is broken, mainly because of mismatched <description></description> tags. Fix them by following the error messages Hive prints; the corrected file is reproduced in full after the note below.
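Hive reports these tag errors one at a time at startup; a faster way to find all of them is to run a well-formedness check on the file after each round of edits. A small sketch, assuming libxml2's xmllint tool is available on the machine:

xmllint --noout conf/hive-site.xml && echo "hive-site.xml is well-formed"

xmllint prints the line number of each mismatched tag, which maps directly onto the <description> problems described above.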
Variable substitution will only be invoked at the hive 70 cli startup.</description> 71 </property> 72 73 <property> 74 <name>hive.exec.scratchdir</name> 75 <value>/tmp/hive-${user.name}</value> 76 <description>Scratch space for Hive jobs</description> 77 </property> 78 79 <property> 80 <name>hive.exec.local.scratchdir</name> 81 <value>/tmp/${user.name}</value> 82 <description>Local scratch space for Hive jobs</description> 83 </property> 84 85 <property> 86 <name>hive.test.mode</name> 87 <value>false</value> 88 <description>whether hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename</description> 89 </property> 90 91 <property> 92 <name>hive.test.mode.prefix</name> 93 <value>test_</value> 94 <description>if hive is running in test mode, prefixes the output table by this string</description> 95 </property> 96 97 <!-- If the input table is not bucketed, the denominator of the tablesample is determinied by the parameter below --> 98 <!-- For example, the following query: --> 99 <!-- INSERT OVERWRITE TABLE dest --> 100 <!-- SELECT col1 from src --> 101 <!-- would be converted to --> 102 <!-- INSERT OVERWRITE TABLE test_dest --> 103 <!-- SELECT col1 from src TABLESAMPLE (BUCKET 1 out of 32 on rand(1)) --> 104 <property> 105 <name>hive.test.mode.samplefreq</name> 106 <value>32</value> 107 <description>if hive is running in test mode and table is not bucketed, sampling frequency</description> 108 </property> 109 110 <property> 111 <name>hive.test.mode.nosamplelist</name> 112 <value></value> 113 <description>if hive is running in test mode, dont sample the above comma seperated list of tables</description> 114 </property> 115 116 <property> 117 <name>hive.metastore.uris</name> 118 <value></value> 119 <description>Thrift uri for the remote metastore. 
Used by metastore client to connect to remote metastore.</description> 120 </property> 121 122 <property> 123 <name>javax.jdo.option.ConnectionURL</name> 124 <value>jdbc:derby:;databaseName=metastore_db;create=true</value> 125 <description>JDBC connect string for a JDBC metastore</description> 126 </property> 127 128 <property> 129 <name>javax.jdo.option.ConnectionDriverName</name> 130 <value>org.apache.derby.jdbc.EmbeddedDriver</value> 131 <description>Driver class name for a JDBC metastore</description> 132 </property> 133 134 <property> 135 <name>javax.jdo.PersistenceManagerFactoryClass</name> 136 <value>org.datanucleus.jdo.JDOPersistenceManagerFactory</value> 137 <description>class implementing the jdo persistence</description> 138 </property> 139 140 <property> 141 <name>javax.jdo.option.DetachAllOnCommit</name> 142 <value>true</value> 143 <description>detaches all objects from session so that they can be used after transaction is committed</description> 144 </property> 145 146 <property> 147 <name>javax.jdo.option.NonTransactionalRead</name> 148 <value>true</value> 149 <description>reads outside of transactions</description> 150 </property> 151 152 <property> 153 <name>javax.jdo.option.ConnectionUserName</name> 154 <value>APP</value> 155 <description>username to use against metastore database</description> 156 </property> 157 158 <property> 159 <name>javax.jdo.option.ConnectionPassword</name> 160 <value>mine</value> 161 <description>password to use against metastore database</description> 162 </property> 163 164 <property> 165 <name>javax.jdo.option.Multithreaded</name> 166 <value>true</value> 167 <description>Set this to true if multiple threads access metastore through JDO concurrently.</description> 168 </property> 169 170 <property> 171 <name>datanucleus.connectionPoolingType</name> 172 <value>DBCP</value> 173 <description>Uses a DBCP connection pool for JDBC metastore</description> 174 </property> 175 176 <property> 177 <name>datanucleus.validateTables</name> 178 <value>false</value> 179 <description>validates existing schema against code. turn this on if you want to verify existing schema </description> 180 </property> 181 182 <property> 183 <name>datanucleus.validateColumns</name> 184 <value>false</value> 185 <description>validates existing schema against code. turn this on if you want to verify existing schema </description> 186 </property> 187 188 <property> 189 <name>datanucleus.validateConstraints</name> 190 <value>false</value> 191 <description>validates existing schema against code. turn this on if you want to verify existing schema </description> 192 </property> 193 194 <property> 195 <name>datanucleus.storeManagerType</name> 196 <value>rdbms</value> 197 <description>metadata store type</description> 198 </property> 199 200 <property> 201 <name>datanucleus.autoCreateSchema</name> 202 <value>true</value> 203 <description>creates necessary schema on a startup if one doesn't exist. set this to false, after creating it once</description> 204 </property> 205 206 <property> 207 <name>datanucleus.autoStartMechanismMode</name> 208 <value>checked</value> 209 <description>throw exception if metadata tables are incorrect</description> 210 </property> 211 212 <property> 213 <name>datanucleus.transactionIsolation</name> 214 <value>read-committed</value> 215 <description>Default transaction isolation level for identity generation. </description> 216 </property> 217 218 <property> 219 <name>datanucleus.cache.level2</name> 220 <value>false</value> 221 <description>Use a level 2 cache. 
Turn this off if metadata is changed independently of hive metastore server</description> 222 </property> 223 224 <property> 225 <name>datanucleus.cache.level2.type</name> 226 <value>SOFT</value> 227 <description>SOFT=soft reference based cache, WEAK=weak reference based cache.</description> 228 </property> 229 230 <property> 231 <name>datanucleus.identifierFactory</name> 232 <value>datanucleus</value> 233 <description>Name of the identifier factory to use when generating table/column names etc. 'datanucleus' is used for backward compatibility</description> 234 </property> 235 236 <property> 237 <name>datanucleus.plugin.pluginRegistryBundleCheck</name> 238 <value>LOG</value> 239 <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description> 240 </property> 241 242 <property> 243 <name>hive.metastore.warehouse.dir</name> 244 <value>/user/hive/warehouse</value> 245 <description>location of default database for the warehouse</description> 246 </property> 247 248 <property> 249 <name>hive.metastore.execute.setugi</name> 250 <value>false</value> 251 <description>In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored.</description> 252 </property> 253 254 <property> 255 <name>hive.metastore.event.listeners</name> 256 <value></value> 257 <description>list of comma seperated listeners for metastore events.</description> 258 </property> 259 260 <property> 261 <name>hive.metastore.partition.inherit.table.properties</name> 262 <value></value> 263 <description>list of comma seperated keys occurring in table properties which will get inherited to newly created partitions. * implies all the keys will get inherited.</description> 264 </property> 265 266 <property> 267 <name>hive.metadata.export.location</name> 268 <value></value> 269 <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, it is the location to which the metadata will be exported. The default is an empty string, which results in the metadata being exported to the current user's home directory on HDFS.</description> 270 </property> 271 272 <property> 273 <name>hive.metadata.move.exported.metadata.to.trash</name> 274 <value></value> 275 <description>When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener, this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory alongside the dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.</description> 276 </property> 277 278 <property> 279 <name>hive.metastore.partition.name.whitelist.pattern</name> 280 <value></value> 281 <description>Partition names will be checked against this regex pattern and rejected if not matched. 
To use, enable hive.metastore.pre.event.listeners=org.apache.hadoop.hive.metastore.PartitionNameWhitelistPreEventListener Listener will not register if this property value is empty.</description> 282 </property> 283 284 <property> 285 <name>hive.metastore.end.function.listeners</name> 286 <value></value> 287 <description>list of comma separated listeners for the end of metastore functions.</description> 288 </property> 289 290 <property> 291 <name>hive.metastore.event.expiry.duration</name> 292 <value>0</value> 293 <description>Duration after which events expire from events table (in seconds)</description> 294 </property> 295 296 <property> 297 <name>hive.metastore.event.clean.freq</name> 298 <value>0</value> 299 <description>Frequency at which timer task runs to purge expired events in metastore(in seconds).</description> 300 </property> 301 302 <property> 303 <name>hive.metastore.connect.retries</name> 304 <value>5</value> 305 <description>Number of retries while opening a connection to metastore</description> 306 </property> 307 308 <property> 309 <name>hive.metastore.failure.retries</name> 310 <value>3</value> 311 <description>Number of retries upon failure of Thrift metastore calls</description> 312 </property> 313 314 <property> 315 <name>hive.metastore.client.connect.retry.delay</name> 316 <value>1</value> 317 <description>Number of seconds for the client to wait between consecutive connection attempts</description> 318 </property> 319 320 <property> 321 <name>hive.metastore.client.socket.timeout</name> 322 <value>20</value> 323 <description>MetaStore Client socket timeout in seconds</description> 324 </property> 325 326 <property> 327 <name>hive.metastore.rawstore.impl</name> 328 <value>org.apache.hadoop.hive.metastore.ObjectStore</value> 329 <description>Name of the class that implements org.apache.hadoop.hive.metastore.rawstore interface. This class is used to store and retrieval of raw metadata objects such as table, database</description> 330 </property> 331 332 <property> 333 <name>hive.metastore.batch.retrieve.max</name> 334 <value>300</value> 335 <description>Maximum number of objects (tables/partitions) can be retrieved from metastore in one batch. The higher the number, the less the number of round trips is needed to the Hive metastore server, but it may also cause higher memory requirement at the client side.</description> 336 </property> 337 338 <property> 339 <name>hive.metastore.batch.retrieve.table.partition.max</name> 340 <value>1000</value> 341 <description>Maximum number of table partitions that metastore internally retrieves in one batch.</description> 342 </property> 343 344 <property> 345 <name>hive.default.fileformat</name> 346 <value>TextFile</value> 347 <description>Default file format for CREATE TABLE statement. Options are TextFile and SequenceFile. Users can explicitly say CREATE TABLE ... 
STORED AS &lt;TEXTFILE|SEQUENCEFILE&gt; to override</description> 348 </property> 349 350 <property> 351 <name>hive.fileformat.check</name> 352 <value>true</value> 353 <description>Whether to check file format or not when loading data files</description> 354 </property> 355 356 <property> 357 <name>hive.map.aggr</name> 358 <value>true</value> 359 <description>Whether to use map-side aggregation in Hive Group By queries</description> 360 </property> 361 362 <property> 363 <name>hive.groupby.skewindata</name> 364 <value>false</value> 365 <description>Whether there is skew in data to optimize group by queries</description> 366 </property> 367 368 <property> 369 <name>hive.groupby.mapaggr.checkinterval</name> 370 <value>100000</value> 371 <description>Number of rows after which size of the grouping keys/aggregation classes is performed</description> 372 </property> 373 374 <property> 375 <name>hive.mapred.local.mem</name> 376 <value>0</value> 377 <description>For local mode, memory of the mappers/reducers</description> 378 </property> 379 380 <property> 381 <name>hive.mapjoin.followby.map.aggr.hash.percentmemory</name> 382 <value>0.3</value> 383 <description>Portion of total memory to be used by map-side grup aggregation hash table, when this group by is followed by map join</description> 384 </property> 385 386 <property> 387 <name>hive.map.aggr.hash.force.flush.memory.threshold</name> 388 <value>0.9</value> 389 <description>The max memory to be used by map-side grup aggregation hash table, if the memory usage is higher than this number, force to flush data</description> 390 </property> 391 392 <property> 393 <name>hive.map.aggr.hash.percentmemory</name> 394 <value>0.5</value> 395 <description>Portion of total memory to be used by map-side grup aggregation hash table</description> 396 </property> 397 398 <property> 399 <name>hive.map.aggr.hash.min.reduction</name> 400 <value>0.5</value> 401 <description>Hash aggregation will be turned off if the ratio between hash 402 table size and input rows is bigger than this number. Set to 1 to make sure 403 hash aggregation is never turned off.</description> 404 </property> 405 406 <property> 407 <name>hive.optimize.cp</name> 408 <value>true</value> 409 <description>Whether to enable column pruner</description> 410 </property> 411 412 <property> 413 <name>hive.optimize.index.filter</name> 414 <value>false</value> 415 <description>Whether to enable automatic use of indexes</description> 416 </property> 417 418 <property> 419 <name>hive.optimize.index.groupby</name> 420 <value>false</value> 421 <description>Whether to enable optimization of group-by queries using Aggregate indexes.</description> 422 </property> 423 424 <property> 425 <name>hive.optimize.ppd</name> 426 <value>true</value> 427 <description>Whether to enable predicate pushdown</description> 428 </property> 429 430 <property> 431 <name>hive.optimize.ppd.storage</name> 432 <value>true</value> 433 <description>Whether to push predicates down into storage handlers. 
Ignored when hive.optimize.ppd is false.</description> 434 </property> 435 436 <property> 437 <name>hive.ppd.recognizetransivity</name> 438 <value>true</value> 439 <description>Whether to transitively replicate predicate filters over equijoin conditions.</description> 440 </property> 441 442 <property> 443 <name>hive.optimize.groupby</name> 444 <value>true</value> 445 <description>Whether to enable the bucketed group by from bucketed partitions/tables.</description> 446 </property> 447 448 <property> 449 <name>hive.optimize.skewjoin.compiletime</name> 450 <value>false</value> 451 <description>Whether to create a separate plan for skewed keys for the tables in the join. 452 This is based on the skewed keys stored in the metadata. At compile time, the plan is broken 453 into different joins: one for the skewed keys, and the other for the remaining keys. And then, 454 a union is performed for the 2 joins generated above. So unless the same skewed key is present 455 in both the joined tables, the join for the skewed key will be performed as a map-side join. 456 457 The main difference between this paramater and hive.optimize.skewjoin is that this parameter 458 uses the skew information stored in the metastore to optimize the plan at compile time itself. 459 If there is no skew information in the metadata, this parameter will not have any affect. 460 Both hive.optimize.skewjoin.compiletime and hive.optimize.skewjoin should be set to true. 461 Ideally, hive.optimize.skewjoin should be renamed as hive.optimize.skewjoin.runtime, but not doing 462 so for backward compatibility. 463 464 If the skew information is correctly stored in the metadata, hive.optimize.skewjoin.compiletime 465 would change the query plan to take care of it, and hive.optimize.skewjoin will be a no-op. 466 </description> 467 </property> 468 469 <property> 470 <name>hive.optimize.union.remove</name> 471 <value>false</value> 472 <description> 473 Whether to remove the union and push the operators between union and the filesink above 474 union. This avoids an extra scan of the output by union. This is independently useful for union 475 queries, and specially useful when hive.optimize.skewjoin.compiletime is set to true, since an 476 extra union is inserted. 477 478 The merge is triggered if either of hive.merge.mapfiles or hive.merge.mapredfiles is set to true. 479 If the user has set hive.merge.mapfiles to true and hive.merge.mapredfiles to false, the idea was the 480 number of reducers are few, so the number of files anyway are small. However, with this optimization, 481 we are increasing the number of files possibly by a big margin. So, we merge aggresively. 482 </description> 483 </property> 484 485 <property> 486 <name>hive.mapred.supports.subdirectories</name> 487 <value>false</value> 488 <description>Whether the version of hadoop which is running supports sub-directories for tables/partitions. 489 Many hive optimizations can be applied if the hadoop version supports sub-directories for 490 tables/partitions. It was added by MAPREDUCE-1501 491 </description> 492 </property> 493 494 <property> 495 <name>hive.multigroupby.singlemr</name> 496 <value>false</value> 497 <description>Whether to optimize multi group by query to generate single M/R 498 job plan. 
If the multi group by query has common group by keys, it will be 499 optimized to generate single M/R job.</description> 500 </property> 501 502 <property> 503 <name>hive.map.groupby.sorted</name> 504 <value>false</value> 505 <description>If the bucketing/sorting properties of the table exactly match the grouping key, whether to 506 perform the group by in the mapper by using BucketizedHiveInputFormat. The only downside to this 507 is that it limits the number of mappers to the number of files. 508 </description> 509 </property> 510 511 <property> 512 <name>hive.join.emit.interval</name> 513 <value>1000</value> 514 <description>How many rows in the right-most join operand Hive should buffer before emitting the join result. </description> 515 </property> 516 517 <property> 518 <name>hive.join.cache.size</name> 519 <value>25000</value> 520 <description>How many rows in the joining tables (except the streaming table) should be cached in memory. </description> 521 </property> 522 523 <property> 524 <name>hive.mapjoin.bucket.cache.size</name> 525 <value>100</value> 526 <description>How many values in each keys in the map-joined table should be cached in memory. </description> 527 </property> 528 529 <property> 530 <name>hive.mapjoin.cache.numrows</name> 531 <value>25000</value> 532 <description>How many rows should be cached by jdbm for map join. </description> 533 </property> 534 535 <property> 536 <name>hive.optimize.skewjoin</name> 537 <value>false</value> 538 <description>Whether to enable skew join optimization. 539 The algorithm is as follows: At runtime, detect the keys with a large skew. Instead of 540 processing those keys, store them temporarily in a hdfs directory. In a follow-up map-reduce 541 job, process those skewed keys. The same key need not be skewed for all the tables, and so, 542 the follow-up map-reduce job (for the skewed keys) would be much faster, since it would be a 543 map-join. 544 </description> 545 </property> 546 547 <property> 548 <name>hive.exec.list.bucketing.default.dir</name> 549 <value>HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME</value> 550 <description>Default directory name used in list bucketing. 551 List bucketing feature will create sub-directory for each skewed-value and a default directory 552 for non-skewed value. This config specifies the default name for the default directory. 553 Sub-directory is created by list bucketing DML and under partition directory. User doesn't need 554 to know how to construct the canonical path. It just gives user choice if they want to change 555 the default directory name. 556 For example, there are 2 skewed column c1 and c2. 2 skewed value: (1,a) and (2,b). subdirectory: 557 <partition-dir>/c1=1/c2=a/</partition-dir> 558 <partition-dir>/c1=2/c2=b/</partition-dir> 559 <partition-dir>/HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME/HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME/</partition-dir> 560 Note: This config won't impact users if they don't list bucketing. 561 </description> 562 </property> 563 564 <property> 565 <name>hive.skewjoin.key</name> 566 <value>100000</value> 567 <description>Determine if we get a skew key in join. If we see more 568 than the specified number of rows with the same key in join operator, 569 we think the key as a skew join key. </description> 570 </property> 571 572 <property> 573 <name>hive.skewjoin.mapjoin.map.tasks</name> 574 <value>10000</value> 575 <description> Determine the number of map task used in the follow up map join job 576 for a skew join. 
It should be used together with hive.skewjoin.mapjoin.min.split 577 to perform a fine grained control.</description> 578 </property> 579 580 <property> 581 <name>hive.skewjoin.mapjoin.min.split</name> 582 <value>33554432</value> 583 <description> Determine the number of map task at most used in the follow up map join job 584 for a skew join by specifying the minimum split size. It should be used together with 585 hive.skewjoin.mapjoin.map.tasks to perform a fine grained control.</description> 586 </property> 587 588 <property> 589 <name>hive.mapred.mode</name> 590 <value>nonstrict</value> 591 <description>The mode in which the hive operations are being performed. 592 In strict mode, some risky queries are not allowed to run. They include: 593 Cartesian Product. 594 No partition being picked up for a query. 595 Comparing bigints and strings. 596 Comparing bigints and doubles. 597 Orderby without limit. 598 </description> 599 </property> 600 601 <property> 602 <name>hive.enforce.bucketmapjoin</name> 603 <value>false</value> 604 <description>If the user asked for bucketed map-side join, and it cannot be performed, 605 should the query fail or not ? For eg, if the buckets in the tables being joined are 606 not a multiple of each other, bucketed map-side join cannot be performed, and the 607 query will fail if hive.enforce.bucketmapjoin is set to true. 608 </description> 609 </property> 610 611 <property> 612 <name>hive.exec.script.maxerrsize</name> 613 <value>100000</value> 614 <description>Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task). This prevents runaway scripts from filling logs partitions to capacity </description> 615 </property> 616 617 <property> 618 <name>hive.exec.script.allow.partial.consumption</name> 619 <value>false</value> 620 <description> When enabled, this option allows a user script to exit successfully without consuming all the data from the standard input. 621 </description> 622 </property> 623 624 <property> 625 <name>hive.script.operator.id.env.var</name> 626 <value>HIVE_SCRIPT_OPERATOR_ID</value> 627 <description> Name of the environment variable that holds the unique script operator ID in the user's transform function (the custom mapper/reducer that the user has specified in the query) 628 </description> 629 </property> 630 631 <property> 632 <name>hive.script.operator.truncate.env</name> 633 <value>false</value> 634 <description>Truncate each environment variable for external script in scripts operator to 20KB (to fit system limits)</description> 635 </property> 636 637 <property> 638 <name>hive.exec.compress.output</name> 639 <value>false</value> 640 <description> This controls whether the final outputs of a query (to a local/hdfs file or a hive table) is compressed. The compression codec and other options are determined from hadoop config variables mapred.output.compress* </description> 641 </property> 642 643 <property> 644 <name>hive.exec.compress.intermediate</name> 645 <value>false</value> 646 <description> This controls whether intermediate files produced by hive between multiple map-reduce jobs are compressed. 
The compression codec and other options are determined from hadoop config variables mapred.output.compress* </description> 647 </property> 648 649 <property> 650 <name>hive.exec.parallel</name> 651 <value>false</value> 652 <description>Whether to execute jobs in parallel</description> 653 </property> 654 655 <property> 656 <name>hive.exec.parallel.thread.number</name> 657 <value>8</value> 658 <description>How many jobs at most can be executed in parallel</description> 659 </property> 660 661 <property> 662 <name>hive.exec.rowoffset</name> 663 <value>false</value> 664 <description>Whether to provide the row offset virtual column</description> 665 </property> 666 667 <property> 668 <name>hive.task.progress</name> 669 <value>false</value> 670 <description>Whether Hive should periodically update task progress counters during execution. Enabling this allows task progress to be monitored more closely in the job tracker, but may impose a performance penalty. This flag is automatically set to true for jobs with hive.exec.dynamic.partition set to true.</description> 671 </property> 672 673 <property> 674 <name>hive.hwi.war.file</name> 675 <value>lib/hive-hwi-0.10.0.war</value> 676 <description>This sets the path to the HWI war file, relative to ${HIVE_HOME}. </description> 677 </property> 678 679 <property> 680 <name>hive.hwi.listen.host</name> 681 <value>0.0.0.0</value> 682 <description>This is the host address the Hive Web Interface will listen on</description> 683 </property> 684 685 <property> 686 <name>hive.hwi.listen.port</name> 687 <value>9999</value> 688 <description>This is the port the Hive Web Interface will listen on</description> 689 </property> 690 691 <property> 692 <name>hive.exec.pre.hooks</name> 693 <value></value> 694 <description>Comma-separated list of pre-execution hooks to be invoked for each statement. A pre-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.</description> 695 </property> 696 697 <property> 698 <name>hive.exec.post.hooks</name> 699 <value></value> 700 <description>Comma-separated list of post-execution hooks to be invoked for each statement. A post-execution hook is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.</description> 701 </property> 702 703 <property> 704 <name>hive.exec.failure.hooks</name> 705 <value></value> 706 <description>Comma-separated list of on-failure hooks to be invoked for each statement. An on-failure hook is specified as the name of Java class which implements the org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.</description> 707 </property> 708 709 <property> 710 <name>hive.client.stats.publishers</name> 711 <value></value> 712 <description>Comma-separated list of statistics publishers to be invoked on counters on each job. A client stats publisher is specified as the name of a Java class which implements the org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface.</description> 713 </property> 714 715 <property> 716 <name>hive.client.stats.counters</name> 717 <value></value> 718 <description>Subset of counters that should be of interest for hive.client.stats.publishers (when one wants to limit their publishing). 
Non-display names should be used</description> 719 </property> 720 721 <property> 722 <name>hive.merge.mapfiles</name> 723 <value>true</value> 724 <description>Merge small files at the end of a map-only job</description> 725 </property> 726 727 <property> 728 <name>hive.merge.mapredfiles</name> 729 <value>false</value> 730 <description>Merge small files at the end of a map-reduce job</description> 731 </property> 732 733 <property> 734 <name>hive.mergejob.maponly</name> 735 <value>true</value> 736 <description>Try to generate a map-only job for merging files if CombineHiveInputFormat is supported.</description> 737 </property> 738 739 <property> 740 <name>hive.heartbeat.interval</name> 741 <value>1000</value> 742 <description>Send a heartbeat after this interval - used by mapjoin and filter operators</description> 743 </property> 744 745 <property> 746 <name>hive.merge.size.per.task</name> 747 <value>256000000</value> 748 <description>Size of merged files at the end of the job</description> 749 </property> 750 751 <property> 752 <name>hive.merge.smallfiles.avgsize</name> 753 <value>16000000</value> 754 <description>When the average output file size of a job is less than this number, Hive will start an additional map-reduce job to merge the output files into bigger files. This is only done for map-only jobs if hive.merge.mapfiles is true, and for map-reduce jobs if hive.merge.mapredfiles is true.</description> 755 </property> 756 757 <property> 758 <name>hive.mapjoin.smalltable.filesize</name> 759 <value>25000000</value> 760 <description>The threshold for the input file size of the small tables; if the file size is smaller than this threshold, it will try to convert the common join into map join</description> 761 </property> 762 763 <property> 764 <name>hive.mapjoin.localtask.max.memory.usage</name> 765 <value>0.90</value> 766 <description>This number means how much memory the local task can take to hold the key/value into in-memory hash table; If the local task's memory usage is more than this number, the local task will be abort by themself. It means the data of small table is too large to be hold in the memory.</description> 767 </property> 768 769 <property> 770 <name>hive.mapjoin.followby.gby.localtask.max.memory.usage</name> 771 <value>0.55</value> 772 <description>This number means how much memory the local task can take to hold the key/value into in-memory hash table when this map join followed by a group by; If the local task's memory usage is more than this number, the local task will be abort by themself. It means the data of small table is too large to be hold in the memory.</description> 773 </property> 774 775 <property> 776 <name>hive.mapjoin.check.memory.rows</name> 777 <value>100000</value> 778 <description>The number means after how many rows processed it needs to check the memory usage</description> 779 </property> 780 781 <property> 782 <name>hive.auto.convert.join</name> 783 <value>false</value> 784 <description>Whether Hive enable the optimization about converting common join into mapjoin based on the input file size</description> 785 </property> 786 787 788 <property> 789 <name>hive.script.auto.progress</name> 790 <value>false</value> 791 <description>Whether Hive Tranform/Map/Reduce Clause should automatically send progress information to TaskTracker to avoid the task getting killed because of inactivity. Hive sends progress information when the script is outputting to stderr. 
This option removes the need of periodically producing stderr messages, but users should be cautious because this may prevent infinite loops in the scripts to be killed by TaskTracker. </description> 792 </property> 793 794 <property> 795 <name>hive.script.serde</name> 796 <value>org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe</value> 797 <description>The default serde for trasmitting input data to and reading output data from the user scripts. </description> 798 </property> 799 800 <property> 801 <name>hive.binary.record.max.length</name> 802 <value>1000</value> 803 <description>Read from a binary stream and treat each hive.binary.record.max.length bytes as a record. 804 The last record before the end of stream can have less than hive.binary.record.max.length bytes</description> 805 </property> 806 807 808 <property> 809 <name>hive.script.recordreader</name> 810 <value>org.apache.hadoop.hive.ql.exec.TextRecordReader</value> 811 <description>The default record reader for reading data from the user scripts. </description> 812 </property> 813 814 <property> 815 <name>hive.script.recordwriter</name> 816 <value>org.apache.hadoop.hive.ql.exec.TextRecordWriter</value> 817 <description>The default record writer for writing data to the user scripts. </description> 818 </property> 819 820 <property> 821 <name>hive.input.format</name> 822 <value>org.apache.hadoop.hive.ql.io.CombineHiveInputFormat</value> 823 <description>The default input format. Set this to HiveInputFormat if you encounter problems with CombineHiveInputFormat.</description> 824 </property> 825 826 <property> 827 <name>hive.udtf.auto.progress</name> 828 <value>false</value> 829 <description>Whether Hive should automatically send progress information to TaskTracker when using UDTF's to prevent the task getting killed because of inactivity. Users should be cautious because this may prevent TaskTracker from killing tasks with infinte loops. </description> 830 </property> 831 832 <property> 833 <name>hive.mapred.reduce.tasks.speculative.execution</name> 834 <value>true</value> 835 <description>Whether speculative execution for reducers should be turned on. </description> 836 </property> 837 838 <property> 839 <name>hive.exec.counters.pull.interval</name> 840 <value>1000</value> 841 <description>The interval with which to poll the JobTracker for the counters the running job. The smaller it is the more load there will be on the jobtracker, the higher it is the less granular the caught will be.</description> 842 </property> 843 844 <property> 845 <name>hive.querylog.location</name> 846 <value>/tmp/${user.name}</value> 847 <description> 848 Location of Hive run time structured log file 849 </description> 850 </property> 851 852 <property> 853 <name>hive.querylog.enable.plan.progress</name> 854 <value>true</value> 855 <description> 856 Whether to log the plan's progress every time a job's progress is checked. 857 These logs are written to the location specified by hive.querylog.location 858 </description> 859 </property> 860 861 <property> 862 <name>hive.querylog.plan.progress.interval</name> 863 <value>60000</value> 864 <description> 865 The interval to wait between logging the plan's progress in milliseconds. 866 If there is a whole number percentage change in the progress of the mappers or the reducers, 867 the progress is logged regardless of this value. 
868 The actual interval will be the ceiling of (this value divided by the value of 869 hive.exec.counters.pull.interval) multiplied by the value of hive.exec.counters.pull.interval 870 I.e. if it is not divide evenly by the value of hive.exec.counters.pull.interval it will be 871 logged less frequently than specified. 872 This only has an effect if hive.querylog.enable.plan.progress is set to true. 873 </description> 874 </property> 875 876 <property> 877 <name>hive.enforce.bucketing</name> 878 <value>false</value> 879 <description>Whether bucketing is enforced. If true, while inserting into the table, bucketing is enforced. </description> 880 </property> 881 882 <property> 883 <name>hive.enforce.sorting</name> 884 <value>false</value> 885 <description>Whether sorting is enforced. If true, while inserting into the table, sorting is enforced. </description> 886 </property> 887 888 <property> 889 <name>hive.enforce.sortmergebucketmapjoin</name> 890 <value>false</value> 891 <description>If the user asked for sort-merge bucketed map-side join, and it cannot be performed, 892 should the query fail or not ? 893 </description> 894 </property> 895 896 <property> 897 <name>hive.metastore.ds.connection.url.hook</name> 898 <value></value> 899 <description>Name of the hook to use for retriving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used </description> 900 </property> 901 902 <property> 903 <name>hive.metastore.ds.retry.attempts</name> 904 <value>1</value> 905 <description>The number of times to retry a metastore call if there were a connection error</description> 906 </property> 907 908 <property> 909 <name>hive.metastore.ds.retry.interval</name> 910 <value>1000</value> 911 <description>The number of miliseconds between metastore retry attempts</description> 912 </property> 913 914 <property> 915 <name>hive.metastore.server.min.threads</name> 916 <value>200</value> 917 <description>Minimum number of worker threads in the Thrift server's pool.</description> 918 </property> 919 920 <property> 921 <name>hive.metastore.server.max.threads</name> 922 <value>100000</value> 923 <description>Maximum number of worker threads in the Thrift server's pool.</description> 924 </property> 925 926 <property> 927 <name>hive.metastore.server.tcp.keepalive</name> 928 <value>true</value> 929 <description>Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections.</description> 930 </property> 931 932 <property> 933 <name>hive.metastore.sasl.enabled</name> 934 <value>false</value> 935 <description>If true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos.</description> 936 </property> 937 938 <property> 939 <name>hive.metastore.thrift.framed.transport.enabled</name> 940 <value>false</value> 941 <description>If true, the metastore thrift interface will use TFramedTransport. When false (default) a standard TTransport is used.</description> 942 </property> 943 944 <property> 945 <name>hive.metastore.kerberos.keytab.file</name> 946 <value></value> 947 <description>The path to the Kerberos Keytab file containing the metastore thrift server's service principal.</description> 948 </property> 949 950 <property> 951 <name>hive.metastore.kerberos.principal</name> 952 <value>hive-metastore/_HOST@EXAMPLE.COM</value> 953 <description>The service principal for the metastore thrift server. 
<property>
  <name>hive.cluster.delegation.token.store.class</name>
  <value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>
  <description>The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for a load-balanced cluster.</description>
</property>

<property>
  <name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
  <value>localhost:2181</value>
  <description>The ZooKeeper token store connect string.</description>
</property>

<property>
  <name>hive.cluster.delegation.token.store.zookeeper.znode</name>
  <value>/hive/cluster/delegation</value>
  <description>The root path for token store data.</description>
</property>

<property>
  <name>hive.cluster.delegation.token.store.zookeeper.acl</name>
  <value>sasl:hive/host1@EXAMPLE.COM:cdrwa,sasl:hive/host2@EXAMPLE.COM:cdrwa</value>
  <description>ACL for token store entries. List all server principals for the cluster, comma separated.</description>
</property>

<property>
  <name>hive.metastore.cache.pinobjtypes</name>
  <value>Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order</value>
  <description>List of comma separated metastore object types that should be pinned in the cache.</description>
</property>

<property>
  <name>hive.optimize.reducededuplication</name>
  <value>true</value>
  <description>Remove extra map-reduce jobs if the data is already clustered by the same key which needs to be used again. This should always be set to true. Since it is a new feature, it has been made configurable.</description>
</property>

<property>
  <name>hive.exec.dynamic.partition</name>
  <value>true</value>
  <description>Whether or not to allow dynamic partitions in DML/DDL.</description>
</property>

<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>strict</value>
  <description>In strict mode, the user must specify at least one static partition, in case the user accidentally overwrites all partitions.</description>
</property>

<property>
  <name>hive.exec.max.dynamic.partitions</name>
  <value>1000</value>
  <description>Maximum number of dynamic partitions allowed to be created in total.</description>
</property>

<property>
  <name>hive.exec.max.dynamic.partitions.pernode</name>
  <value>100</value>
  <description>Maximum number of dynamic partitions allowed to be created in each mapper/reducer node.</description>
</property>

<property>
  <name>hive.exec.max.created.files</name>
  <value>100000</value>
  <description>Maximum number of HDFS files created by all mappers/reducers in a MapReduce job.</description>
</property>

<property>
  <name>hive.exec.default.partition.name</name>
  <value>__HIVE_DEFAULT_PARTITION__</value>
  <description>The default partition name used when the dynamic partition column value is null/empty string or any other value that cannot be escaped. This value must not contain any special character used in HDFS URIs (e.g. ':', '%', '/' etc.).
  The user has to be aware that the dynamic partition value should not contain this value, to avoid confusion.</description>
</property>
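<!-- Statistics collection (hive.stats.*): a temporary stats database populated automatically by INSERT OVERWRITE when hive.stats.autogather is true -->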
<property>
  <name>hive.stats.dbclass</name>
  <value>jdbc:derby</value>
  <description>The default database that stores temporary hive statistics.</description>
</property>

<property>
  <name>hive.stats.autogather</name>
  <value>true</value>
  <description>A flag to gather statistics automatically during the INSERT OVERWRITE command.</description>
</property>

<property>
  <name>hive.stats.jdbcdriver</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  <description>The JDBC driver for the database that stores temporary hive statistics.</description>
</property>

<property>
  <name>hive.stats.dbconnectionstring</name>
  <value>jdbc:derby:;databaseName=TempStatsStore;create=true</value>
  <description>The default connection string for the database that stores temporary hive statistics.</description>
</property>

<property>
  <name>hive.stats.default.publisher</name>
  <value></value>
  <description>The Java class (implementing the StatsPublisher interface) that is used by default if hive.stats.dbclass is not JDBC or HBase.</description>
</property>

<property>
  <name>hive.stats.default.aggregator</name>
  <value></value>
  <description>The Java class (implementing the StatsAggregator interface) that is used by default if hive.stats.dbclass is not JDBC or HBase.</description>
</property>

<property>
  <name>hive.stats.jdbc.timeout</name>
  <value>30</value>
  <description>Timeout value (in seconds) used by JDBC connections and statements.</description>
</property>

<property>
  <name>hive.stats.retries.max</name>
  <value>0</value>
  <description>Maximum number of retries when the stats publisher/aggregator gets an exception updating the intermediate database. Default is no retries on failures.</description>
</property>

<property>
  <name>hive.stats.retries.wait</name>
  <value>3000</value>
  <description>The base waiting window (in milliseconds) before the next retry. The actual wait time is calculated by baseWindow * failures + baseWindow * (failures + 1) * (random number between [0.0,1.0]).</description>
</property>

<property>
  <name>hive.stats.reliable</name>
  <value>false</value>
  <description>Whether queries will fail because stats cannot be collected completely accurately.
  If this is set to true, reading/writing from/into a partition may fail because the stats
  could not be computed accurately.
  </description>
</property>

<property>
  <name>hive.stats.collect.tablekeys</name>
  <value>false</value>
  <description>Whether join and group by keys on tables are derived and maintained in the QueryPlan.
  This is useful to identify how tables are accessed and to determine if they should be bucketed.
  </description>
</property>

<property>
  <name>hive.stats.ndv.error</name>
  <value>20.0</value>
  <description>Standard error expressed as a percentage. Provides a tradeoff between accuracy and compute cost. A lower value for the error indicates higher accuracy and a higher compute cost.
  </description>
</property>
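<!-- Concurrency and locking: the hive.lock.* and hive.zookeeper.* settings below only take effect when hive.support.concurrency is true, and the default lock manager needs a running ZooKeeper quorum -->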
<property>
  <name>hive.support.concurrency</name>
  <value>false</value>
  <description>Whether hive supports concurrency or not. A zookeeper instance must be up and running for the default hive lock manager to support read-write locks.</description>
</property>

<property>
  <name>hive.lock.numretries</name>
  <value>100</value>
  <description>The number of times you want to try to get all the locks.</description>
</property>

<property>
  <name>hive.unlock.numretries</name>
  <value>10</value>
  <description>The number of times you want to retry to do one unlock.</description>
</property>

<property>
  <name>hive.lock.sleep.between.retries</name>
  <value>60</value>
  <description>The sleep time (in seconds) between various retries.</description>
</property>

<property>
  <name>hive.zookeeper.quorum</name>
  <value></value>
  <description>The list of zookeeper servers to talk to. This is only needed for read/write locks.</description>
</property>

<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
  <description>The port of the zookeeper servers to talk to. This is only needed for read/write locks.</description>
</property>

<property>
  <name>hive.zookeeper.session.timeout</name>
  <value>600000</value>
  <description>ZooKeeper client's session timeout. The client is disconnected, and as a result all locks are released, if a heartbeat is not sent within the timeout.</description>
</property>

<property>
  <name>hive.zookeeper.namespace</name>
  <value>hive_zookeeper_namespace</value>
  <description>The parent node under which all zookeeper nodes are created.</description>
</property>

<property>
  <name>hive.zookeeper.clean.extra.nodes</name>
  <value>false</value>
  <description>Clean extra nodes at the end of the session.</description>
</property>

<property>
  <name>fs.har.impl</name>
  <value>org.apache.hadoop.hive.shims.HiveHarFileSystem</value>
  <description>The implementation for accessing Hadoop Archives. Note that this is not applicable to Hadoop versions less than 0.20.</description>
</property>

<property>
  <name>hive.archive.enabled</name>
  <value>false</value>
  <description>Whether archiving operations are permitted.</description>
</property>

<property>
  <name>hive.fetch.output.serde</name>
  <value>org.apache.hadoop.hive.serde2.DelimitedJSONSerDe</value>
  <description>The serde used by FetchTask to serialize the fetch output.</description>
</property>

<property>
  <name>hive.exec.mode.local.auto</name>
  <value>false</value>
  <description>Let hive determine whether to run in local mode automatically.</description>
</property>

<property>
  <name>hive.exec.drop.ignorenonexistent</name>
  <value>true</value>
  <description>
    Do not report an error if DROP TABLE/VIEW specifies a non-existent table/view.
  </description>
</property>

<property>
  <name>hive.exec.show.job.failure.debug.info</name>
  <value>true</value>
  <description>
    If a job fails, whether to provide a link in the CLI to the task with the
    most failures, along with debugging hints if applicable.
  </description>
</property>
<property>
  <name>hive.auto.progress.timeout</name>
  <value>0</value>
  <description>
    How long to run the autoprogressor for the script/UDTF operators (in seconds).
    Set to 0 for forever.
  </description>
</property>

<!-- HBase Storage Handler Parameters -->

<property>
  <name>hive.hbase.wal.enabled</name>
  <value>true</value>
  <description>Whether writes to HBase should be forced to the write-ahead log. Disabling this improves HBase write performance at the risk of lost writes in case of a crash.</description>
</property>

<property>
  <name>hive.table.parameters.default</name>
  <value></value>
  <description>Default property values for newly created tables.</description>
</property>

<property>
  <name>hive.entity.separator</name>
  <value>@</value>
  <description>Separator used to construct names of tables and partitions. For example, dbname@tablename@partitionname.</description>
</property>

<property>
  <name>hive.ddl.createtablelike.properties.whitelist</name>
  <value></value>
  <description>Table properties to copy over when executing a Create Table Like.</description>
</property>

<property>
  <name>hive.variable.substitute</name>
  <value>true</value>
  <description>This enables substitution using syntax like ${var}, ${system:var} and ${env:var}.</description>
</property>

<property>
  <name>hive.variable.substitute.depth</name>
  <value>40</value>
  <description>The maximum number of replacements the substitution engine will do.</description>
</property>

<property>
  <name>hive.conf.validation</name>
  <value>true</value>
  <description>Enables type checking for registered hive configurations.</description>
</property>

<property>
  <name>hive.security.authorization.enabled</name>
  <value>false</value>
  <description>Enable or disable the hive client authorization.</description>
</property>

<property>
  <name>hive.security.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider</value>
  <description>The hive client authorization manager class name.
  A user-defined authorization class should implement the interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.
  </description>
</property>

<property>
  <name>hive.security.metastore.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.DefaultHiveMetastoreAuthorizationProvider</value>
  <description>Authorization manager class name to be used in the metastore for authorization.
  A user-defined authorization class should implement the interface org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider.
  </description>
</property>
<property>
  <name>hive.security.authenticator.manager</name>
  <value>org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator</value>
  <description>Hive client authenticator manager class name.
  A user-defined authenticator should implement the interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.</description>
</property>

<property>
  <name>hive.security.metastore.authenticator.manager</name>
  <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
  <description>Authenticator manager class name to be used in the metastore for authentication.
  A user-defined authenticator should implement the interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.</description>
</property>

<property>
  <name>hive.security.authorization.createtable.user.grants</name>
  <value></value>
  <description>The privileges automatically granted to some users whenever a table gets created.
  An example like "userX,userY:select;userZ:create" will grant select privilege to userX and userY,
  and grant create privilege to userZ, whenever a new table is created.</description>
</property>

<property>
  <name>hive.security.authorization.createtable.group.grants</name>
  <value></value>
  <description>The privileges automatically granted to some groups whenever a table gets created.
  An example like "groupX,groupY:select;groupZ:create" will grant select privilege to groupX and groupY,
  and grant create privilege to groupZ, whenever a new table is created.</description>
</property>

<property>
  <name>hive.security.authorization.createtable.role.grants</name>
  <value></value>
  <description>The privileges automatically granted to some roles whenever a table gets created.
  An example like "roleX,roleY:select;roleZ:create" will grant select privilege to roleX and roleY,
  and grant create privilege to roleZ, whenever a new table is created.</description>
</property>

<property>
  <name>hive.security.authorization.createtable.owner.grants</name>
  <value></value>
  <description>The privileges automatically granted to the owner whenever a table gets created.
  An example like "select,drop" will grant select and drop privilege to the owner of the table.</description>
</property>
<property>
  <name>hive.metastore.authorization.storage.checks</name>
  <value>false</value>
  <description>Should the metastore do authorization checks against the underlying storage
  for operations like drop-partition (disallow the drop-partition if the user in
  question doesn't have permissions to delete the corresponding directory
  on the storage).</description>
</property>

<property>
  <name>hive.error.on.empty.partition</name>
  <value>false</value>
  <description>Whether to throw an exception if a dynamic partition insert generates empty results.</description>
</property>

<property>
  <name>hive.index.compact.file.ignore.hdfs</name>
  <value>false</value>
  <description>If true, the HDFS location stored in the index file will be ignored at runtime.
  If the data got moved or the name of the cluster got changed, the index data should still be usable.</description>
</property>

<property>
  <name>hive.optimize.index.filter.compact.minsize</name>
  <value>5368709120</value>
  <description>Minimum size (in bytes) of the inputs on which a compact index is automatically used.</description>
</property>

<property>
  <name>hive.optimize.index.filter.compact.maxsize</name>
  <value>-1</value>
  <description>Maximum size (in bytes) of the inputs on which a compact index is automatically used.
  A negative number is equivalent to infinity.</description>
</property>

<property>
  <name>hive.index.compact.query.max.size</name>
  <value>10737418240</value>
  <description>The maximum number of bytes that a query using the compact index can read. A negative value is equivalent to infinity.</description>
</property>

<property>
  <name>hive.index.compact.query.max.entries</name>
  <value>10000000</value>
  <description>The maximum number of index entries to read during a query that uses the compact index.
  A negative value is equivalent to infinity.</description>
</property>
<property>
  <name>hive.index.compact.binary.search</name>
  <value>true</value>
  <description>Whether or not to use a binary search to find the entries in an index table that match the filter, where possible.</description>
</property>

<property>
  <name>hive.exim.uri.scheme.whitelist</name>
  <value>hdfs,pfile</value>
  <description>A comma separated list of acceptable URI schemes for import and export.</description>
</property>

<property>
  <name>hive.lock.mapred.only.operation</name>
  <value>false</value>
  <description>This parameter controls whether to only acquire locks for queries
  that need to execute at least one mapred job.</description>
</property>

<property>
  <name>hive.limit.row.max.size</name>
  <value>100000</value>
  <description>When trying a smaller subset of data for simple LIMIT, the minimum size we need to guarantee
  for each row.</description>
</property>

<property>
  <name>hive.limit.optimize.limit.file</name>
  <value>10</value>
  <description>When trying a smaller subset of data for simple LIMIT, the maximum number of files we can
  sample.</description>
</property>

<property>
  <name>hive.limit.optimize.enable</name>
  <value>false</value>
  <description>Whether to enable the optimization of trying a smaller subset of data for simple LIMIT first.</description>
</property>

<property>
  <name>hive.limit.optimize.fetch.max</name>
  <value>50000</value>
  <description>Maximum number of rows allowed for a smaller subset of data for simple LIMIT, if it is a fetch query.
  Insert queries are not restricted by this limit.</description>
</property>

<property>
  <name>hive.rework.mapredwork</name>
  <value>false</value>
  <description>Whether to rework the mapred work or not.
  This was first introduced by SymlinkTextInputFormat to replace symlink files with real paths at compile time.</description>
</property>

<property>
  <name>hive.exec.concatenate.check.index</name>
  <value>true</value>
  <description>If this is set to true, hive will throw an error when doing
  'alter table tbl_name [partSpec] concatenate' on a table/partition
  that has indexes on it. The reason the user wants to set this to true
  is that it can help avoid handling all the index drop, recreation and
  rebuild work. This is very helpful for tables with thousands of partitions.</description>
</property>

<property>
  <name>hive.sample.seednumber</name>
  <value>0</value>
  <description>A number used for percentage sampling. By changing this number, the user will change the subsets
  of data sampled.</description>
</property>

<property>
  <name>hive.io.exception.handlers</name>
  <value></value>
  <description>A list of io exception handler class names. This is used
  to construct a list of exception handlers to handle exceptions thrown
  by record readers.</description>
</property>

<property>
  <name>hive.autogen.columnalias.prefix.label</name>
  <value>_c</value>
  <description>String used as a prefix when auto generating column aliases.
  By default the prefix label will be appended with a column position number to form the column alias.
  Auto generation happens if an aggregate function is used in a select clause without an explicit alias.</description>
</property>
<property>
  <name>hive.autogen.columnalias.prefix.includefuncname</name>
  <value>false</value>
  <description>Whether to include the function name in the column alias auto generated by hive.</description>
</property>

<property>
  <name>hive.exec.perf.logger</name>
  <value>org.apache.hadoop.hive.ql.log.PerfLogger</value>
  <description>The class responsible for logging client side performance metrics. Must be a subclass of org.apache.hadoop.hive.ql.log.PerfLogger.</description>
</property>

<property>
  <name>hive.start.cleanup.scratchdir</name>
  <value>false</value>
  <description>Whether to clean up the hive scratch dir while starting the hive server.</description>
</property>

<property>
  <name>hive.output.file.extension</name>
  <value></value>
  <description>String used as a file extension for output files. If not set, defaults to the codec extension for text files (e.g. ".gz"), or no extension otherwise.</description>
</property>

<property>
  <name>hive.insert.into.multilevel.dirs</name>
  <value>false</value>
  <description>Whether to allow insert into multilevel directories like
  "insert directory '/HIVEFT25686/chinna/' from table".</description>
</property>

<property>
  <name>hive.warehouse.subdir.inherit.perms</name>
  <value>false</value>
  <description>Set this to true if the table directories should inherit the
  permission of the warehouse or database directory instead of being created
  with the permissions derived from the dfs umask.</description>
</property>

<property>
  <name>hive.exec.job.debug.capture.stacktraces</name>
  <value>true</value>
  <description>Whether or not stack traces parsed from the task logs of a sampled failed task for
  each failed job should be stored in the SessionState.
  </description>
</property>

<property>
  <name>hive.exec.driver.run.hooks</name>
  <value></value>
  <description>A comma separated list of hooks which implement HiveDriverRunHook and will be run at the
  beginning and end of Driver.run; these will be run in the order specified.
  </description>
</property>

<property>
  <name>hive.ddl.output.format</name>
  <value>text</value>
  <description>
    The data format to use for DDL output. One of "text" (for human
    readable text) or "json" (for a json object).
  </description>
</property>

<property>
  <name>hive.transform.escape.input</name>
  <value>false</value>
  <description>
    This adds an option to escape special chars (newlines, carriage returns and
    tabs) when they are passed to the user script. This is useful if the hive tables
    can contain data that contains special characters.
  </description>
</property>

<property>
  <name>hive.exec.rcfile.use.explicit.header</name>
  <value>true</value>
  <description>
    If this is set, the header for RC Files will simply be RCF. If this is not
    set, the header will be the one borrowed from sequence files, e.g. SEQ- followed
    by the input and output RC File formats.
  </description>
</property>
<property>
  <name>hive.multi.insert.move.tasks.share.dependencies</name>
  <value>false</value>
  <description>
    If this is set, all move tasks for tables/partitions (not directories) at the end of a
    multi-insert query will only begin once the dependencies for all these move tasks have been
    met.
    Advantages: If concurrency is enabled, the locks will only be released once the query has
    finished, so with this config enabled, the time when the table/partition is
    generated will be much closer to when the lock on it is released.
    Disadvantages: If concurrency is not enabled, with this disabled, the tables/partitions which
    are produced by this query and finish earlier will be available for querying
    much earlier. Since the locks are only released once the query finishes, this
    does not apply if concurrency is enabled.
  </description>
</property>

<property>
  <name>hive.fetch.task.conversion</name>
  <value>minimal</value>
  <description>
    Some select queries can be converted to a single FETCH task, minimizing latency.
    Currently the query should be single sourced, not have any subquery, and should not have
    any aggregations or distincts (which incur RS), lateral views or joins.
    1. minimal : SELECT STAR, FILTER on partition columns, LIMIT only
    2. more    : SELECT, FILTER, LIMIT only (+TABLESAMPLE, virtual columns)
  </description>
</property>

<property>
  <name>hive.hmshandler.retry.attempts</name>
  <value>1</value>
  <description>The number of times to retry a HMSHandler call if there is a connection error.</description>
</property>

<property>
  <name>hive.hmshandler.retry.interval</name>
  <value>1000</value>
  <description>The number of milliseconds between HMSHandler retry attempts.</description>
</property>

<property>
  <name>hive.server.read.socket.timeout</name>
  <value>10</value>
  <description>Timeout for the HiveServer to close the connection if no response from the client in N seconds; defaults to 10 seconds.</description>
</property>

<property>
  <name>hive.server.tcp.keepalive</name>
  <value>true</value>
  <description>Whether to enable TCP keepalive for the Hive server.
  Keepalive will prevent accumulation of half-open connections.</description>
</property>
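<!-- User-added entry (see section 3.3 below): registers the HBase storage handler, HBase and ZooKeeper jars with Hive -->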
<!--zhangziliang-->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///home/hadoop/source/hive/lib/hive-hbase-handler-0.10.0.jar,file:///home/hadoop/source/hive/lib/hbase-0.94.0.jar,file:///home/hadoop/source/hive/lib/zookeeper-3.4.3.jar</value>
</property>
</configuration>

3.3 Adding the configuration property hive.aux.jars.path

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///home/hadoop/source/hive/lib/hive-hbase-handler-0.10.0.jar,file:///home/hadoop/source/hive/lib/hbase-0.94.0.jar,file:///home/hadoop/source/hive/lib/zookeeper-3.4.3.jar</value>
</property>

3.4 Copying the jars into the Hive lib directory

Copy the following jars into $HIVE_HOME/lib: hbase-0.94.0.jar, zookeeper-3.4.3.jar.

四、Test script: creating a table that HBase can recognize

CREATE TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");

五、Troubleshooting

5.1 Error message

java.lang.NoClassDefFoundError: com/google/protobuf/Message
at org.apache.hadoop.hbase.io.HbaseObjectWritable.(HbaseObjectWritable.java …

5.2 Solution

Copy $HBASE_HOME/lib/protobuf-java-2.4.0a.jar into $HIVE_HOME/lib/.

六、Run results

[hadoop@hadoop1 lib]$ hive -hiveconf hbase.zookeeper.quorum=hadoop1
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/home/hadoop/source/hive/lib/hive-common-0.10.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_201401012315_758621762.txt
hive> CREATE TABLE hbase_table_1(key int, value string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
    > TBLPROPERTIES ("hbase.table.name" = "xyz");
OK
Time taken: 23.246 seconds
hive> show tables;
OK
hbase_table_1
Time taken: 1.346 seconds
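As an extra sanity check, data written through Hive should also be visible from HBase itself. A minimal sketch, assuming a plain Hive source table named pokes with an int column foo and a string column bar already exists (pokes and its columns are hypothetical; hbase_table_1 and the HBase table xyz come from the test script above):

-- Hive: populate the HBase-backed table from the hypothetical source table
INSERT OVERWRITE TABLE hbase_table_1 SELECT foo, bar FROM pokes;
-- Hive: read the data back through the storage handler
SELECT * FROM hbase_table_1;

# HBase shell: the same rows should appear in table "xyz",
# with the Hive "key" column as the row key and "value" stored as cf1:val
scan 'xyz'

If the scan returns the inserted rows under column family cf1, the storage handler, the column mapping and the ZooKeeper connection are working end to end.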

Author: 张子良
Source: http://www.cnblogs.com/hadoopdev
Copyright of this article belongs to the author. Reposting is welcome, but unless the author agrees otherwise, this notice must be retained and a link to the original article must be given in a prominent position on the page; otherwise the author reserves the right to pursue legal liability.