
Hive Configuration

Edit bin/hive-config.sh and set the environment variables:

    vi ./bin/hive-config.sh
    export JAVA_HOME=/usr/local/jdk/jdk1.8.0
    export HADOOP_HOME=/usr/local/hadoop
    export HIVE_HOME=/usr/local/hive/apache-hive-1.2.1

Create hive-site.xml from the template:

    cp hive-default.xml.template hive-site.xml

Then set the following properties in hive-site.xml:

    <property>
      <name>hive.metastore.warehouse.dir</name>
      <value>/usr/local/hive/apache-hive-1.2.1/warehouse</value>
      <description>location of default database for the warehouse</description>
    </property>
    <property>
      <name>hive.exec.scratchdir</name>
      <value>/usr/local/hive/apache-hive-1.2.1/tmp</value>
      <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
    </property>
    <property>
      <name>hive.exec.local.scratchdir</name>
      <value>/usr/local/hive/apache-hive-1.2.1/tmp</value>
      <description>Local scratch space for Hive jobs</description>
    </property>
    <property>
      <name>hive.downloaded.resources.dir</name>
      <value>/usr/local/hive/apache-hive-1.2.1/resources</value>
      <description>Temporary local directory for added resources in the remote file system.</description>
    </property>

Create the log4j configuration from its template, then replace the old EventCounter appender class:

    cp hive-log4j.properties.template hive-log4j.properties
    vi hive-log4j.properties
    #log4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter
    log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter

This article is reposted from skinglzw's 51CTO blog. Original link: http://blog.51cto.com/skinglzw/1870609. For reprinting, please contact the original author.
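As a quick sanity check after editing hive-site.xml, the name/value pairs can be read back programmatically. The sketch below parses an inline fragment mirroring the properties configured above; in practice you would read the real file (e.g. a hypothetical path such as /usr/local/hive/apache-hive-1.2.1/conf/hive-site.xml), which is an assumption, not something the original post does.

```python
# Minimal sketch: extract {name: value} pairs from a hive-site.xml
# document and confirm the warehouse/scratch directories are set.
import xml.etree.ElementTree as ET

# Inline fragment mirroring the properties configured above; swap in
# the contents of your real hive-site.xml when checking a live install.
HIVE_SITE = """<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/usr/local/hive/apache-hive-1.2.1/warehouse</value>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/usr/local/hive/apache-hive-1.2.1/tmp</value>
  </property>
</configuration>"""

def hive_properties(xml_text):
    """Return a {name: value} dict from a hive-site.xml document."""
    props = {}
    for prop in ET.fromstring(xml_text).iter("property"):
        name = prop.findtext("name")
        if name is not None:
            props[name] = prop.findtext("value")
    return props

props = hive_properties(HIVE_SITE)
print(props["hive.metastore.warehouse.dir"])
```

This only verifies that the XML is well-formed and the keys are spelled correctly; Hive itself will still validate the values at startup.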


HUE Configuration

Where is my hue.ini?

    CDH package:         /etc/hue/conf/hue.ini
    A tarball release:   /usr/share/desktop/conf/hue.ini
    Development version: desktop/conf/pseudo-distributed.ini
    Cloudera Manager:    CM generates all the hue.ini for you, so no hassle:
    /var/run/cloudera-scm-agent/process/`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'`/hue.ini

    [beeswax]
      # Host where HiveServer2 is running.
      hive_server_host=localhost

To point to another server, just replace the host value with 'hiveserver.ent.com':

    [beeswax]
      # Host where HiveServer2 is running.
      hive_server_host=hiveserver.ent.com

Note: Any line starting with a # is considered a comment, so it is not used.
Note: The list of mis-configured services is shown on the /about/admin_wizard page.
Note: After each change in the ini file, Hue should be restarted to pick it up.
Note: In some cases, as explained in the "how to configure Hadoop for Hue" documentation, the API of these services needs to be turned on and Hue set as proxy user.

Here are the main sections you will need to update in order to have each service accessible in Hue:

HDFS

This is required for listing or creating files. Replace localhost with the real address of the NameNode (usually http://localhost:50070).

Enter this in hdfs-site.xml to enable WebHDFS in the NameNode and DataNodes:

    <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
    </property>

Configure Hue as a proxy user for all other users and groups, meaning it may submit a request on behalf of any other user.
Add to core-site.xml:

    <property>
      <name>hadoop.proxyuser.hue.hosts</name>
      <value>*</value>
    </property>
    <property>
      <name>hadoop.proxyuser.hue.groups</name>
      <value>*</value>
    </property>

Then, if the NameNode is on a different host than Hue, don't forget to update the hue.ini:

    [hadoop]
      [[hdfs_clusters]]
        [[[default]]]
          # Enter the filesystem uri
          fs_defaultfs=hdfs://localhost:8020
          # Use WebHdfs/HttpFs as the communication mechanism.
          # Domain should be the NameNode or HttpFs host.
          webhdfs_url=http://localhost:50070/webhdfs/v1

YARN

The Resource Manager is often on http://localhost:8088 by default. The ProxyServer and Job History servers also need to be specified. Then Job Browser will let you list and kill running applications and get their logs.

    [hadoop]
      [[yarn_clusters]]
        [[[default]]]
          # Enter the host on which you are running the ResourceManager
          resourcemanager_host=localhost
          # Whether to submit jobs to this cluster
          submit_to=True
          # URL of the ResourceManager API
          resourcemanager_api_url=http://localhost:8088
          # URL of the ProxyServer API
          proxy_api_url=http://localhost:8088
          # URL of the HistoryServer API
          history_server_api_url=http://localhost:19888

Hive

Here we need a running HiveServer2 in order to send SQL queries.

    [beeswax]
      # Host where HiveServer2 is running.
      hive_server_host=localhost

Note: If HiveServer2 is on another machine and you are using security or a customized HiveServer2 configuration, you will need to copy hive-site.xml to the Hue machine too:

    [beeswax]
      # Host where HiveServer2 is running.
      hive_server_host=localhost
      # Hive configuration directory, where hive-site.xml is located
      hive_conf_dir=/etc/hive/conf

Solr Search

We just need to specify the address of a Solr Cloud (or non-Cloud Solr), then interactive dashboard capabilities are unleashed!

    [search]
      # URL of the Solr Server
      solr_url=http://localhost:8983/solr/

Oozie

An Oozie server should be up and running before submitting or monitoring workflows.
    [liboozie]
      # The URL where the Oozie service runs on.
      oozie_url=http://localhost:11000/oozie

HBase

The HBase app works with an HBase Thrift Server version 1. It lets you browse, query and edit HBase tables.

    [hbase]
      # Comma-separated list of HBase Thrift server 1 for clusters in the format of '(name|host:port)'.
      hbase_clusters=(Cluster|localhost:9090)

This article is reposted from yntmdr's 51CTO blog. Original link: http://blog.51cto.com/yntmdr/1743223. For reprinting, please contact the original author.
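Before restarting Hue after edits like the ones above, it can be worth reading the flat sections back to catch typos. Hue itself parses hue.ini with configobj (which understands the nested [[...]] sections); the sketch below is a simplified check using only the standard library on flat sections, with an inline ini fragment standing in for the real file.

```python
# Minimal sketch: sanity-check a few flat hue.ini sections.
# The nested [hadoop] [[...]] blocks need configobj; the flat
# service sections shown above parse fine with configparser.
import configparser

# Inline stand-in for hue.ini; read the real file on a live system.
HUE_INI = """
[beeswax]
hive_server_host=hiveserver.ent.com

[search]
solr_url=http://localhost:8983/solr/

[liboozie]
oozie_url=http://localhost:11000/oozie
"""

config = configparser.ConfigParser()
config.read_string(HUE_INI)

# Each service endpoint Hue talks to should resolve to a host or URL.
print(config["beeswax"]["hive_server_host"])
print(config["liboozie"]["oozie_url"])
```

A missing section or misspelled key raises a KeyError here instead of surfacing as a mis-configured service on the /about/admin_wizard page after a restart.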


