
Featured List

Search results for [container configuration]: 10,035 articles in total

Hive Configuration

Edit bin/hive-config.sh and set the environment variables:

vi ./bin/hive-config.sh
export JAVA_HOME=/usr/local/jdk/jdk1.8.0
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive/apache-hive-1.2.1

Create hive-site.xml from the template and set the warehouse, scratch, and resources directories:

cp hive-default.xml.template hive-site.xml

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/usr/local/hive/apache-hive-1.2.1/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/usr/local/hive/apache-hive-1.2.1/tmp</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/usr/local/hive/apache-hive-1.2.1/tmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/usr/local/hive/apache-hive-1.2.1/resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>

Create the log4j configuration from the template and replace the deprecated EventCounter appender class:

cp hive-log4j.properties.template hive-log4j.properties
vi hive-log4j.properties
#log4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter

This article is reposted from skinglzw's 51CTO blog. Original link: http://blog.51cto.com/skinglzw/1870609. For reprinting, please contact the original author.
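A hedged sketch, not part of the original post: before the first run, the warehouse and scratch directories referenced in hive-site.xml above typically need to exist in HDFS with group-write permission. Assuming HADOOP_HOME/bin is on the PATH and the paths match the values configured above:

# Create the warehouse and scratch directories in HDFS (paths taken from hive-site.xml above)
hdfs dfs -mkdir -p /usr/local/hive/apache-hive-1.2.1/warehouse
hdfs dfs -mkdir -p /usr/local/hive/apache-hive-1.2.1/tmp
# Hive expects these directories to be group-writable
hdfs dfs -chmod g+w /usr/local/hive/apache-hive-1.2.1/warehouse
hdfs dfs -chmod g+w /usr/local/hive/apache-hive-1.2.1/tmp
# Launch the Hive CLI once to confirm the configuration loads
$HIVE_HOME/bin/hive -e "show databases;"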


HUE Configuration

Where is my hue.ini?

CDH package: /etc/hue/conf/hue.ini
Tarball release: /usr/share/desktop/conf/hue.ini
Development version: desktop/conf/pseudo-distributed.ini
Cloudera Manager: CM generates the hue.ini for you, so no hassle:
/var/run/cloudera-scm-agent/process/`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'`/hue.ini

[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=localhost

To point to another server, just replace the host value with 'hiveserver.ent.com':

[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=hiveserver.ent.com

Note: Any line starting with a # is treated as a comment and is not used.
Note: The list of mis-configured services is shown on the /about/admin_wizard page.
Note: After each change in the ini file, Hue should be restarted to pick it up.
Note: In some cases, as explained in the "How to configure Hadoop for Hue" documentation, the API of these services needs to be turned on and Hue set as a proxy user.

Here are the main sections that you will need to update in order to have each service accessible in Hue:

HDFS

This is required for listing or creating files. Replace localhost with the real address of the NameNode (usually http://localhost:50070).

Enter this in hdfs-site.xml to enable WebHDFS in the NameNode and DataNodes:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

Configure Hue as a proxy user for all other users and groups, meaning it may submit a request on behalf of any other user. Add to core-site.xml:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Then, if the NameNode is on a different host than Hue, don't forget to update hue.ini:

[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://localhost:8020
      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      webhdfs_url=http://localhost:50070/webhdfs/v1

YARN

The ResourceManager is often on http://localhost:8088 by default. The ProxyServer and Job History server also need to be specified. Job Browser will then let you list and kill running applications and get their logs.

[hadoop]
  [[yarn_clusters]]
    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=localhost
      # Whether to submit jobs to this cluster
      submit_to=True
      # URL of the ResourceManager API
      resourcemanager_api_url=http://localhost:8088
      # URL of the ProxyServer API
      proxy_api_url=http://localhost:8088
      # URL of the HistoryServer API
      history_server_api_url=http://localhost:19888

Hive

Here we need a running HiveServer2 in order to send SQL queries.

[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=localhost

Note: If HiveServer2 is on another machine and you are using security or a customized HiveServer2 configuration, you will need to copy hive-site.xml onto the Hue machine too:

[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=localhost
  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/etc/hive/conf

Solr Search

We just need to specify the address of a Solr Cloud (or non-Cloud Solr); interactive dashboard capabilities are then unleashed!

[search]
  # URL of the Solr Server
  solr_url=http://localhost:8983/solr/

Oozie

An Oozie server should be up and running before submitting or monitoring workflows.

[liboozie]
  # The URL where the Oozie service runs on.
  oozie_url=http://localhost:11000/oozie

HBase

The HBase app works with an HBase Thrift Server version 1. It lets you browse, query, and edit HBase tables.

[hbase]
  # Comma-separated list of HBase Thrift servers for clusters, in the format '(name|host:port)'.
  hbase_clusters=(Cluster|localhost:9090)

This article is reposted from yntmdr's 51CTO blog. Original link: http://blog.51cto.com/yntmdr/1743223. For reprinting, please contact the original author.
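A hedged sketch, assuming the default NameNode address used throughout this article and a packaged Hue install: once dfs.webhdfs.enabled and the proxy-user entries are in place, WebHDFS can be probed directly, and Hue restarted so it re-reads hue.ini. The service name in the restart command is an assumption and may differ per install.

# Probe WebHDFS on the NameNode (default port 50070 as above); should return a JSON directory listing
curl -s "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"
# Restart Hue so it picks up the edited hue.ini (adjust the service name to your install)
sudo service hue restart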


FastDFS Container Startup

Run as a tracker:

docker run -ti -d --name tracker -v ~/tracker_data:/fastdfs/tracker/data --net=host season/fastdfs tracker

port: the tracker's default port is 22122
base_path: you should map the path /fastdfs/tracker/data to keep the data

Run as a storage node:

docker run -ti --name storage -v ~/storage_data:/fastdfs/storage/data -v ~/store_path:/fastdfs/store_path --net=host -e TRACKER_SERVER=192.168.1.2:22122 season/fastdfs storage

storage_data: equal to "base_path" in store.conf
store_path: equal to "store_path0" in store.conf
TRACKER_SERVER: the tracker's address
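A hedged follow-up sketch, assuming the container names and tracker address used above: after starting both containers, a quick check that the tracker is reachable and the storage node registered can look like this.

# Confirm both containers started and stayed up
docker ps --filter name=tracker --filter name=storage
# The tracker listens on 22122 by default (see note above); check it is reachable
nc -zv 192.168.1.2 22122
# Inspect the logs if the storage node fails to register with the tracker
docker logs storage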

Resource Downloads

Tencent Cloud Software Source

To address slow access to official upstream repositories when installing software dependencies, Tencent Cloud maintains caching mirrors for a number of software sources. You can use the Tencent Cloud mirror site to speed up dependency installation. To give users flexibility in building their service architecture, the mirror site currently supports both public-network and intranet access.
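An illustrative sketch only: pointing a package manager at a mirror is usually a one-line change. The URL below is an assumption about the public Tencent Cloud PyPI mirror; substitute the address actually published on the mirror site you use.

# Install a package via the (assumed) Tencent Cloud PyPI mirror instead of the official index
pip install requests -i https://mirrors.cloud.tencent.com/pypi/simple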

Nacos

Nacos /nɑ:kəʊs/ is short for Dynamic Naming and Configuration Service: a dynamic service discovery, configuration management, and AI agent management platform that makes it easy to build AI agent applications. Nacos helps you discover, configure, and manage microservices and AI agent applications. It provides a simple, easy-to-use feature set for dynamic service discovery, service configuration, service metadata, and traffic management, helping you build, deliver, and manage microservice platforms with greater agility.
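A minimal sketch of what dynamic service discovery and configuration look like against Nacos's HTTP Open API; it assumes a Nacos server with default settings on localhost:8848, and the service and config names below are made up for illustration.

# Register a service instance with the naming service
curl -X POST "http://localhost:8848/nacos/v1/ns/instance?serviceName=demo.service&ip=127.0.0.1&port=8080"
# Publish a configuration entry to the config service
curl -X POST "http://localhost:8848/nacos/v1/cs/configs" -d "dataId=demo.properties&group=DEFAULT_GROUP&content=useCache=true"
# Read the configuration back
curl "http://localhost:8848/nacos/v1/cs/configs?dataId=demo.properties&group=DEFAULT_GROUP"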

Rocky Linux

Rocky Linux (Chinese name: 洛基) is an enterprise-grade Linux distribution started by Gregory Kurtzer in December 2020. It is a community-owned and community-managed open-source replacement for CentOS after its stable releases were discontinued, fully compatible with RHEL (Red Hat Enterprise Linux) and supporting architectures such as x86_64 and aarch64. It provides long-term stability by rebuilding the RHEL source code, uses modular packaging and the SELinux security architecture, ships with the GNOME desktop environment and the XFS file system by default, and offers a ten-year update lifecycle.

Sublime Text

Sublime Text has a polished user interface and powerful features such as a code minimap, Python-based plugins, and snippets. Key bindings, menus, and toolbars are all customizable. Its main features include spell checking, bookmarks, a complete Python API, Goto functionality, instant project switching, multiple selections, and multiple windows. Sublime Text is a cross-platform editor that runs on Windows, Linux, and Mac OS X.
