Search "部署" (deployment): 10,003 articles

TensorFlow Installation and Deployment

1. Environment

OS: CentOS 7

Component     Version
Python        2.7.5
TensorFlow    1.14.0

Python dependency libraries (pip list):

Package               Version
--------------------  ---------
absl-py               0.8.0
astor                 0.8.0
backports.weakref     1.0.post1
enum34                1.1.6
funcsigs              1.0.2
futures               3.3.0
gast                  0.2.2
google-pasta          0.1.7
grpcio                1.23.0
h5py                  2.9.0
Keras-Applications    1.0.8
Keras-Preprocessing   1.1.0
Markdown              3.1.1
mock                  3.0.5
numpy                 1.16.5
pip                   19.2.3
protobuf              3.9.1
setuptools            41.2.0
six                   1.12.0
tensorboard           1.14.0
tensorflow            1.14.0
tensorflow-estimator  1.14.0
termcolor             1.1.0
Werkzeug              0.15.5
wheel                 0.33.6
wrapt                 1.11.2

2. Resources

Installer download, Baidu netdisk link: https://pan.baidu.com/s/1cm9mZpP1JRwyGpq945ffaQ (extraction code: 8bxj)

3. Installation

a) Install python-devel

## Online install
yum install python-devel
## Offline install
rpm -ivh python-devel-2.7.5-80.el7_6.x86_64.rpm

b) Install virtualenv

## Online install
pip install virtualenv
## Offline install

c) Install TensorFlow

Enter the virtual environment and install TensorFlow:

## Step 1: create the virtual environment
virtualenv tensorflow
## Step 2: activate the virtual environment
source ./tensorflow/bin/activate
## Online install
pip install tensorflow
## Offline install
pip install --no-index --find-links=/path/to/tensorflow_1.14.0_package -r tensorflow_requirement.txt

4. Test

Test code (test.py):

import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

Run the test:

python test.py

Test result: the model trains for 5 epochs and then prints the loss and accuracy on the test set.

5. Dependency downloads

## python-devel
http://mirror.centos.org/centos/7/updates/x86_64/Packages/python-devel-2.7.5-80.el7_6.x86_64.rpm
## tensorflow-1.14.0
https://files.pythonhosted.org/packages/d3/59/d88fe8c58ffb66aca21d03c0e290cd68327cc133591130c674985e98a482/tensorflow-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl
## pip-19.2.3
https://files.pythonhosted.org/packages/00/9e/4c83a0950d8bdec0b4ca72afd2f9cea92d08eb7c1a768363f2ea458d08b4/pip-19.2.3.tar.gz
## virtualenv
https://files.pythonhosted.org/packages/f7/69/1ad2d17560c4fc60170056dcd0a568b83f3453a2ac91155af746bcdb9a07/virtualenv-16.7.4-py2.py3-none-any.whl
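As a quick sanity check for step 4, you can confirm that the interpreter inside the virtual environment resolves the expected TensorFlow build before running the MNIST script (a minimal sketch; paths assume the layout above):

## Activate the virtual environment created above, then print the installed version
source ./tensorflow/bin/activate
python -c "import tensorflow as tf; print(tf.__version__)"   # expect: 1.14.0
## Leave the virtual environment when done
deactivate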


Deploying SonarQube with Docker

gitlab-ce + gitlab-runner + sonarqube: run code-quality checks when code is pushed, and refuse commits into GitLab that do not meet the quality bar.

1. Configure docker-compose.yml

version: '3.1'
services:
  gitlab-ce:
    image: 'gitlab/gitlab-ce:latest'
    container_name: gitlab-ce
    restart: always
    hostname: 'gitlab.localhost.com'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://192.168.31.109'
    ports:
      - '80:80'
      - '443:443'
      - '10022:22'
    volumes:
      - '/root/gitlab-ce/home/config:/etc/gitlab'
      - '/root/gitlab-ce/home/logs:/var/log/gitlab'
      - '/root/gitlab-ce/home/data:/var/opt/gitlab'
    networks:
      - 'default'
  gitlab-runner:
    image: 'gitlab/gitlab-runner:latest'
    container_name: gitlab-runner
    depends_on:
      - 'gitlab-ce'
    restart: always
    volumes:
      - '/root/gitlab-ce/runnerconfig:/etc/gitlab-runner'
      - '/var/run/docker.sock:/var/run/docker.sock'
    networks:
      - 'default'
    links:
      - 'gitlab-ce:gitlab.localhost.com'
  mysql:
    image: mysql:5.7.27
    container_name: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /root/gitlab-ce/mysql/data:/var/lib/mysql
      - /root/gitlab-ce/mysql/logs:/logs
      - /root/gitlab-ce/mysql/init:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: root@123456
      MYSQL_USER: test          # create the test user
      MYSQL_PASSWORD: test      # password for the test user
    networks:
      - 'default'
  sonarqube:
    image: sonarqube:7.7-community
    # image: sonarqube:latest
    container_name: sonarqube
    ports:
      - "9000:9000"
      - "9002:9002"
    volumes:
      - "/root/gitlab-ce/sonarqube/conf:/opt/sonarqube/conf"
      - "/root/gitlab-ce/sonarqube/extensions:/opt/sonarqube/extensions"
      - "/root/gitlab-ce/sonarqube/logs:/opt/sonarqube/logs"
      - "/etc/sysctl.conf:/etc/sysctl.conf"
      # - "/root/gitlab-ce/sonarqube/data:/opt/sonarqube/data"
    environment:
      sonar.jdbc.username: root          # MySQL root user
      sonar.jdbc.password: root@123456   # MySQL root password
      sonar.jdbc.url: "jdbc:mysql://mysql:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance&useSSL=false"
    restart: always
    depends_on:
      - mysql
    links:
      - mysql
    networks:
      - 'default'
    sysctls:
      - net.core.somaxconn=1024
    ulimits:
      nproc: 65536
      nofile:
        soft: 65536
        hard: 65536
networks:
  default:
    driver: 'bridge'
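A sketch of bringing the stack up, assuming the file above is saved as docker-compose.yml in the current directory. SonarQube's embedded Elasticsearch typically refuses to start unless vm.max_map_count is raised on the Docker host; the value below is the commonly documented minimum:

## Raise the mmap count for SonarQube's embedded Elasticsearch (run on the host)
sysctl -w vm.max_map_count=262144
## Start all four services in the background, then follow SonarQube's startup log
docker-compose up -d
docker logs -f sonarqube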


Deploying HTTPS with Nginx

OS: Linux CentOS 7.4 x64. Software: Nginx 1.12.2.
Note: apply for the domain and certificate (here via Alibaba Cloud) and download them to the server.
Note: the certificate consists of two files, xxxx.pem and xxxx.key.

Add HTTPS to the Nginx configuration file:

vim /etc/nginx/nginx.conf

http {
    server {
        listen 443 ssl;
        server_name xxx.xxx.com;
        ssl on;
        root /;
        index index.html index.htm;
        ssl_certificate /etc/nginx/cert/215058739960601.pem;
        ssl_certificate_key /etc/nginx/cert/215058739960601.key;
        ssl_session_timeout 5m;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        location / {
            root /;
            index index.html index.htm;
        }
    }
}

The same configuration, annotated:

http {
    server {
        # Listen on port 443 with SSL
        listen 443 ssl;
        # The domain the certificate was issued for
        server_name xxx.xxx.com;
        # Enable the SSL protocol
        ssl on;
        # Document root
        root /;
        # Index files
        index index.html index.htm;
        # The xxx.pem certificate file
        ssl_certificate /etc/nginx/cert/215058739960601.pem;
        # The xxx.key private-key file
        ssl_certificate_key /etc/nginx/cert/215058739960601.key;
        ssl_session_timeout 5m;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        # Served directory and index
        location / {
            root /;
            index index.html index.htm;
        }
    }
}
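After editing the config, a quick way to validate it and verify the TLS handshake from the shell (a sketch; xxx.xxx.com stands in for your real domain):

nginx -t                      # syntax-check the new configuration
nginx -s reload               # apply it without downtime
curl -I https://xxx.xxx.com   # expect an HTTP response with no TLS errors
openssl s_client -connect xxx.xxx.com:443 -servername xxx.xxx.com </dev/null | head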


Deploying and Using Redash

I. Installation

Pull the code from https://github.com/getredash/redash and run docker-compose.production.yml.

1) Adjustments to the compose file

Three main changes:
1. Added host-directory mappings for the Redis and Postgres data files so the data is not lost when the containers stop; with the stock docker-compose.production.yml, all state is gone after docker-compose down.
2. Added port mappings for Redis and Postgres to ease debugging; these can be closed in a production environment.
3. Changed REDASH_COOKIE_SECRET.

The modified docker-compose.production.yml:

# This is an example configuration for Docker Compose. Make sure to at least update
# the cookie secret & postgres database password.
#
# Some other recommendations:
# 1. To persist Postgres data, assign it a volume host location.
# 2. Split the worker service to adhoc workers and scheduled queries workers.
version: '2'
services:
  server:
    image: redash/redash:latest
    command: server
    depends_on:
      - postgres
      - redis
    ports:
      - "5000:5000"
    environment:
      PYTHONUNBUFFERED: 0
      REDASH_LOG_LEVEL: "INFO"
      REDASH_REDIS_URL: "redis://redis:6379/0"
      REDASH_DATABASE_URL: "postgresql://postgres@postgres/postgres"
      REDASH_COOKIE_SECRET: "Q422k6vaXUk8"
      REDASH_WEB_WORKERS: 4
    restart: always
  worker:
    image: redash/redash:latest
    command: scheduler
    environment:
      PYTHONUNBUFFERED: 0
      REDASH_LOG_LEVEL: "INFO"
      REDASH_REDIS_URL: "redis://redis:6379/0"
      REDASH_DATABASE_URL: "postgresql://postgres@postgres/postgres"
      QUEUES: "queries,scheduled_queries,celery"
      WORKERS_COUNT: 2
    restart: always
  redis:
    image: redis:3.0-alpine
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis_data:/data
    restart: always
  postgres:
    image: postgres:9.5.6-alpine
    ports:
      - "5432:5432"
    volumes:
      - ./data/postgresql_data:/var/lib/postgresql/data
    restart: always
  nginx:
    image: redash/nginx:latest
    ports:
      - "88:80"
    depends_on:
      - server
    links:
      - server:redash
    restart: always

2) Create the database

[root@VM_38_115_centos redash]# docker-compose -f docker-compose.production.yml run --rm server create_db
Starting redash_redis_1
Starting redash_postgres_1
[2018-09-11 09:02:39,580][PID:1][INFO][root] Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2018-09-11 09:02:39,601][PID:1][INFO][root] Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt
[2018-09-11 09:02:41,707][PID:1][INFO][alembic.runtime.migration] Context impl PostgresqlImpl.
[2018-09-11 09:02:41,708][PID:1][INFO][alembic.runtime.migration] Will assume transactional DDL.
[2018-09-11 09:02:41,724][PID:1][INFO][alembic.runtime.migration] Running stamp_revision -> 969126bd800f

3) Run

# docker-compose -f docker-compose.production.yml up

This brings up the server, worker, redis, postgres and nginx containers.

4) Mail configuration

For the system to be able to send emails (for example when alerts trigger), you need to set the mail server to use and the host name of your Redash server. If you're using one of our images, you can do this by editing the .env file:

# Note that not all values are required, as they have default values.
export REDASH_MAIL_SERVER=""          # default: localhost
export REDASH_MAIL_PORT=""            # default: 25
export REDASH_MAIL_USE_TLS=""         # default: false
export REDASH_MAIL_USE_SSL=""         # default: false
export REDASH_MAIL_USERNAME=""        # default: None
export REDASH_MAIL_PASSWORD=""        # default: None
export REDASH_MAIL_DEFAULT_SENDER=""  # Email address to send from
export REDASH_HOST=""                 # base address of your Redash instance, for example: "https://demo.redash.io"

Or configure it in the compose file:

server:
  image: redash/redash:latest
  environment:
    ...
    # Mail settings
    REDASH_MAIL_SERVER: "smtp.exmail.qq.com"
    REDASH_MAIL_PORT: 465
    REDASH_MAIL_USE_TLS: "false"
    REDASH_MAIL_USE_SSL: "true"
    REDASH_MAIL_USERNAME: "no-reply@yoursite.com"
    REDASH_MAIL_PASSWORD: "111111"
    REDASH_MAIL_DEFAULT_SENDER: "no-reply@yoursite.com"
    REDASH_HOST: "http://redash.mysite.com"

Test whether the configuration works:

[root@VM_38_115_centos ~]# docker exec -it redash_server_1 python manage.py send_test_mail
[2018-09-11 10:02:28,627][PID:37][INFO][root] Generating grammar tables from /usr/lib/python2.7/lib2to3/Grammar.txt
[2018-09-11 10:02:28,649][PID:37][INFO][root] Generating grammar tables from /usr/lib/python2.7/lib2to3/PatternGrammar.txt

Your own mailbox should receive a "Test message".

PS: the mail settings themselves work, but when an alert fired under Alerts, the configured recipient never received the warning email. Cause not yet found...
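Two small sketches worth keeping alongside the installation steps: generating a random REDASH_COOKIE_SECRET instead of a hand-typed one, and checking the stack after "up" (both assume the compose file above):

## Generate a throwaway cookie secret (paste into REDASH_COOKIE_SECRET)
openssl rand -hex 16
## List the five containers (server, worker, redis, postgres, nginx) and their state
docker-compose -f docker-compose.production.yml ps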
II. Usage

1) Configure a data source.
2) Write query statements, for example fetching a table's last update time; add chart visualizations to a query.
3) Dashboards.

III. User management

Add users.

IV. Features

1) Any query's table or chart can be embedded in other pages via an iframe: click Embed and a dialog pops up. This makes it possible to integrate report views into your own system.

2) Supported visualization types:
- Boxplot
- Line, bar, area, pie and scatter charts
- Cohort
- Counter
- Funnel
- Map
- Pivot table
- Sankey: used to show how a flow is distributed and to compare structures; its original inventor used it to present the flow and distribution of energy.
- Sunburst
- Word cloud

V. Practice

Based on a scrapy log table, a dashboard monitoring panel was designed.

References: Redash mail settings, Redash environment variables, Visualization Types.


Cassandra Installation and Deployment

Python 2.7, JDK 1.8, Cassandra 3.11.2.
Official site: http://cassandra.apache.org/

Download:

wget http://mirrors.tuna.tsinghua.edu.cn/apache/cassandra/3.11.2/apache-cassandra-3.11.2-bin.tar.gz

1. Extract:

tar xvzf apache-cassandra-3.11.2-bin.tar.gz -C ../app/

2. Rename the directory:

mv apache-cassandra-3.11.2/ cassandra-3.11.2/

3. Configure environment variables:

vim ~/.bash_profile
export CASSANDRA_HOME=/home/hadoop/app/cassandra-3.11.2
export PATH=$CASSANDRA_HOME/bin:$PATH
source ~/.bash_profile

4. Edit the configuration file:

vim $CASSANDRA_HOME/conf/cassandra.yaml

Change the following entries to the corresponding hostname:

rpc_address: online101
listen_address: online101
- seeds: "online101"
cluster_name: 'online_01'   # optional

Edit the cqlsh startup script:

vim $CASSANDRA_HOME/bin/cqlsh.py
DEFAULT_HOST = 'online101'

Install the bundled Python driver:

cd $CASSANDRA_HOME/pylib
python setup.py install

Start Cassandra (foreground; -R allows running as root):

cassandra -f -R

Enter cqlsh:

cqlsh
## or, with credentials:
cqlsh -ucassandra -pcassandra
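Once the node is up, a couple of sanity checks with the standard Cassandra tooling (a sketch; the hostname matches the configuration above):

nodetool status   # the node should show as UN (Up/Normal)
cqlsh online101 -e "SELECT release_version FROM system.local;"   # expect 3.11.2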


Spring Deployment Steps

It really is just a few steps: add the jars, write the configuration file, write the corresponding Java classes. If you also need to integrate with Struts 2.0, a few extra details deserve attention.

1. Jars

You can add individual jars such as spring-web.jar and spring-aop.jar, or a single umbrella spring.jar; it just has to contain the classes you use. If you are integrating Spring with Struts 2.0, remember to add struts2-spring-plugin.jar as well.

2. The configuration file

applicationContext.xml, in its basic form:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd
        http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd">
    <bean id="BASIC" class="common.action.BasicAction"></bean>
</beans>

When integrating with Struts, remember to declare in web.xml that Spring manages the classes:

<!-- Spring listener definition -->
<listener>
    <listener-class>
        org.springframework.web.context.ContextLoaderListener
    </listener-class>
</listener>

You can also add an entry defining the location or name of applicationContext.xml (* is a wildcard):

<!-- Location of the Spring configuration files -->
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        /WEB-INF/classes/applicationContext_*.xml
    </param-value>
</context-param>

3. Java classes

Only one thing to watch when writing the Java classes: any property injected by Spring needs getter and setter methods in the class. For example, with this in applicationContext.xml:

<bean id="BASIC" class="common.action.BasicAction">
    <property name="errorbean" ref="eb"/>
</bean>
<bean id="eb" class="common.bean.ErrorBean"></bean>

BasicAction.java must contain the following definitions, otherwise Spring will report an error:

private ErrorBean errorbean;

public ErrorBean getErrorbean() {
    return errorbean;
}

public void setErrorbean(ErrorBean errorbean) {
    this.errorbean = errorbean;
}

With the steps above in place, Spring's IoC features should work normally. Other areas, such as AOP, will be written up separately.

Originally published by 斯然在天边 on the 51CTO blog: http://blog.51cto.com/winters1224/799058. Contact the author for reprint permission.
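A quick way to confirm the deployment actually shipped the jars step 1 calls for (a sketch; the WEB-INF path depends on where your webapp is exploded, and jar names vary by Spring version):

## From the exploded webapp root: Spring jars plus the Struts2 bridge
ls WEB-INF/lib | grep -iE 'spring|struts2-spring-plugin'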


Deploying CouchDB with Docker

About CouchDB: CouchDB is an open-source, document-oriented NoSQL database released by the Apache Foundation. It is written in Erlang and stores its data as JSON. CouchDB exposes its service as a RESTful API, which makes it easy to connect clients written in any language. Its biggest competitor is the familiar MongoDB; a comparison of the two will be written up in another post. CouchDB aims to be the storage system for the next generation of web applications.

Download and install: official site http://couchdb.apache.org/ . At the time of writing (September 2016) the only release is 2.0.0. CouchDB starts automatically after installation and listens on port 5984 by default.

Installing and configuring under Docker:

1. Create a couchdb container and capture its id in COUCH1:

COUCH1=$(docker run -d -p 5984 -v /var/lib/couchdb couchdb)
root@ubuntu:~# echo $COUCH1
6d708f72e25e9f0d693aa5a8ce5afd1a61e945355f728f409bc5a90676e0524c

2. Insert data into CouchDB. Make sure your HOST is reachable:

HOST=localhost   # if localhost does not work here, use the host IP instead
URL="http://$HOST:$(docker port $COUCH1 5984 | grep -o '[1-9][0-9]*$')/_utils/"
echo "Navigate to $URL in your browser, and use the couch interface to add data"

Result:

root@ubuntu:~# HOST=123.xx.xx.x8   # a public IP, redacted here
root@ubuntu:~# URL="http://$HOST:$(docker port $COUCH1 5984 | grep -o '[1-9][0-9]*$')/_utils/"
root@ubuntu:~# docker port $COUCH1 5984   # mapped to host port 32768, listening on all IPs
0.0.0.0:32768
root@ubuntu:~# echo "Navigate to $URL in your browser, and use the couch interface to add data"
Navigate to http://123.xx.xx.x8:32768/_utils/ in your browser, and use the couch interface to add data
# Open http://123.xx.xx.x8:32768/_utils/ in a browser to reach the CouchDB console

Create a database account

The main area on the left of the console lists the current databases and offers a "Create Database ..." action; the sidebar on the right provides, top to bottom, the Tools, Documents, Diagnostics and Current Database menus. At the bottom right sit the version number and the line "Welcome to Admin Party! Everyone is admin. Fix this". What does that mean? By default, every CouchDB user is admin. For those of us used to relational databases, that instinctively feels unsafe, and indeed it is! So click "Fix this" and create a user.

Note: if some buttons are greyed out, or "Compact & Cleanup" just spins forever, first check whether you are an admin.

Click "Create Database ..." and enter a database name to create a new database. On success you are taken to the new database; returning to the Overview page shows it added to the list.

CouchDB insert/update/delete operations [INSERT/UPDATE/DELETE]

In the new database's admin page, click New Document to create a document. It includes a default Id field as the unique key, which we can leave alone; Add Field adds more fields. Suppose we add a person with a few fields. Click the "source" button on the right to see the content as JSON, then click "Save Document" above to save it; back on the main page you can see the newly created data.

Double-click a cell in the Field or Value column to change a field name or value. Click the green check on the right to save the field; switching to the Source tab then shows the data as JSON. After editing, click Save Document to persist the JSON; on success the system automatically adds a rev field, which is the document's current version number.

Updates are just as simple: click the document's Key, or double-click a Value, to enter the edit page, where you can modify field keys and values as well as add and delete fields. Again, click Save Document to save the changes; notice that the document's rev value has changed, meaning its version number was bumped.

Likewise, the record's detail page offers a Delete Document button; clicking it deletes the document. Note, though, that after Delete Document the data looks gone on the surface while the space it occupied is not reclaimed. Go to the database detail page and run Compact Database under "Compact & Cleanup..." to clean up the occupied space; if that just spins forever, go check whether you are an admin.

Create a second couchdb container with Docker:

COUCH2=$(docker run -d -p 5984 --volumes-from $COUCH1 couchdb)

Browse the second instance:

HOST=localhost   # if localhost does not work here, use the host IP instead
URL="http://$HOST:$(docker port $COUCH2 5984 | grep -o '[1-9][0-9]*$')/_utils/"
echo "Navigate to $URL in your browser, and use the couch interface to add data"

Here you can see the data is already present on the second couchdb instance, since it shares the first container's volume.

Using cURL against the RESTful API for create, read, update and delete

Tip: if you are not yet familiar with cURL, look it up; it is a very handy tool and can be downloaded from http://curl.haxx.se/download/
Note: for convenience, we delete the earlier database first.

As introduced above, CouchDB supports operating on data through its RESTful API; for example, opening http://x.x.x.x:32768/demo in a browser shows the demo database's details.

The walkthrough below is borrowed from someone else's session (no time to retype it all); it gives a feel for CouchDB's fluent RESTful style:

# Get CouchDB server info
curl -X GET http://127.0.0.1:5984
{"couchdb":"Welcome","uuid":"a853c053a5a54a4d3ccbaad0d9ffd3b0","version":"1.6.1","vendor":{"version":"1.6.1","name":"The Apache Software Foundation"}}

# Create the demo database (needs admin rights; see the next command)
curl -X PUT http://127.0.0.1:5984/demo
{"error":"unauthorized","reason":"You are not a server admin."}

# Log in with username and password, and create the demo database
curl -X PUT http://username:password@127.0.0.1:5984/demo
{"ok":true}

# List all databases
curl -X GET http://127.0.0.1:5984/_all_dbs
["_replicator","_users","demo"]

# Show info about the demo database
curl -X GET http://127.0.0.1:5984/demo
{"db_name":"demo","doc_count":0,"doc_del_count":0,"update_seq":0,"purge_seq":0,"compact_running":false,"disk_size":79,"data_size":0,"instance_start_time":"1452000207199340","disk_format_version":6,"committed_update_seq":0}

# Add a document to demo (Id auto-generated); note: on Windows the Content-Type: application/json header is required
curl -H "Content-Type: application/json" -X POST http://127.0.0.1:5984/demo -d '{"name":"fooly"}'
{"ok":true,"id":"3ebb59dd78ff448f283f48817800321c","rev":"1-0e4ea534f2c1e7f05e21804b5f2f7a71"}

# List all documents in demo
curl -X GET http://127.0.0.1:5984/demo/_all_docs
{"total_rows":1,"offset":0,"rows":[
{"id":"3ebb59dd78ff448f283f48817800321c","key":"3ebb59dd78ff448f283f48817800321c","value":{"rev":"1-0e4ea534f2c1e7f05e21804b5f2f7a71"}}
]}

# Get a uuid
curl -X GET http://127.0.0.1:5984/_uuids
{"uuids":["3ebb59dd78ff448f283f4881780033c0"]}

# Add a document using the fetched uuid as its Id
curl -H "Content-Type: application/json" -X PUT http://127.0.0.1:5984/demo/3ebb59dd78ff448f283f4881780033c0 -d '{"name":"momo314"}'
{"ok":true,"id":"3ebb59dd78ff448f283f4881780033c0","rev":"1-eb393d36ac1ad38ada8361d94fc5d0b6"}

# Update the document by Id alone (this fails: CouchDB commits by revision, and repeated submits from the same source would conflict, so the document revision must be supplied)
curl -H "Content-Type: application/json" -X PUT http://127.0.0.1:5984/demo/3ebb59dd78ff448f283f4881780033c0 -d '{"name":"momo314","age":18}'
{"error":"conflict","reason":"Document update conflict."}

# Update the document with the Id and the specific revision
curl -H "Content-Type: application/json" -X PUT http://127.0.0.1:5984/demo/3ebb59dd78ff448f283f4881780033c0 -d '{"_rev":"1-eb393d36ac1ad38ada8361d94fc5d0b6","name":"momo314","age":18}'
{"ok":true,"id":"3ebb59dd78ff448f283f4881780033c0","rev":"2-5d081e17588c03c27340035e420edecd"}

# Get the document by Id
curl -X GET http://127.0.0.1:5984/demo/3ebb59dd78ff448f283f4881780033c0
{"_id":"3ebb59dd78ff448f283f4881780033c0","_rev":"2-5d081e17588c03c27340035e420edecd","name":"momo314","age":18}

# Delete the document by Id and rev
curl -X DELETE "http://username:password@127.0.0.1:5984/demo/3ebb59dd78ff448f283f4881780033c0?rev=2-5d081e17588c03c27340035e420edecd"
{"ok":true,"id":"3ebb59dd78ff448f283f4881780033c0","rev":"2-5d081e17588c03c27340035e420edecd"}

# List all documents in demo (the document is indeed gone)
curl -X GET http://127.0.0.1:5984/demo/_all_docs
{"total_rows":1,"offset":0,"rows":[
{"id":"3ebb59dd78ff448f283f48817800321c","key":"3ebb59dd78ff448f283f48817800321c","value":{"rev":"1-0e4ea534f2c1e7f05e21804b5f2f7a71"}}
]}

# Delete the demo database (needs admin rights; see the next command)
curl -X DELETE http://127.0.0.1:5984/demo
{"error":"unauthorized","reason":"You are not a server admin."}

# Log in with username and password, and delete the demo database
curl -X DELETE http://username:password@127.0.0.1:5984/demo
{"ok":true}

# List all databases (demo is indeed gone)
curl -X GET http://127.0.0.1:5984/_all_dbs
["_replicator","_users"]
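The "Compact Database" button mentioned earlier also has a REST equivalent, handy when reclaiming space after deletes like the ones above (a sketch; compaction requires admin credentials and the JSON content type):

# Trigger compaction of the demo database; expect {"ok":true}
curl -X POST http://username:password@127.0.0.1:5984/demo/_compact -H "Content-Type: application/json"
# Poll the database info until compact_running returns false
curl -X GET http://127.0.0.1:5984/demo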
Originally published by kesungang on the 51CTO blog: http://blog.51cto.com/sgk2011/1911814. Contact the author for reprint permission.


Deploying OpenStack Instances with SaltStack

172.25.254.111: salt-master server
172.25.254.112: OpenStack control node

Note: all of the steps below run on the salt-master.

1. Install the plugins:

yum install salt-cloud python-libcloud

2. Create the salt-cloud configuration file:

mkdir /etc/salt/cloud.providers.d/
vim /etc/salt/cloud.providers.d/openstack.conf

my-openstack-config:
  # Set the location of the salt-master
  minion:
    master: 172.25.254.112
  # Configure the OpenStack driver
  identity_url: http://172.25.254.111:5000/v2.0/tokens
  compute_name: nova
  protocol: ipv4
  compute_region: RegionOne
  # Configure OpenStack authentication credentials
  user: demo
  password: demo
  # tenant is the project name
  tenant: demo
  driver: openstack
  provider: openstack
  # skip SSL certificate validation (default false)
  insecure: false

3. List images:

salt-cloud --list-images openstack   # list images
salt-cloud --list-sizes openstack    # list instance flavors

4. Create the SaltStack VM profile:

vim /etc/salt/cloud.profiles.d/web.conf

web-node:                           # profile name
  provider: my-openstack-config     # defined in the provider file above
  size: m1.tiny                     # instance flavor
  image: cirros                     # image name
  ssh_key_file: /root/.ssh/id_rsa   # private key file
  ssh_key_name: mykey               # key pair name
  ssh_interface: private_ips
  networks:
    - fixed:
      - 69200e49-0f8b-47b6-9bb5-2db9bca9a393   # network ID
  minion:                           # auto-install and configure salt-minion on the VM
    master: 172.25.254.111
  grains:
    role: webserver-01

5. Create the OpenStack VM through SaltStack:

salt-cloud -p web-node web-test1 -l debug

-p: profile name
web-test1: name of the VM to create
-l debug: print debug output

Originally published by 铁骑传说 on the 51CTO blog: http://blog.51cto.com/ybzbfs/1957182. Contact the author for reprint permission.
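After the instance comes up and its minion registers, a few standard follow-ups (a sketch; names match the profile above):

salt-key -L                  # the new minion (web-test1) should appear as accepted
salt 'web-test1' test.ping   # verify master-to-minion connectivity
salt-cloud -d web-test1      # tear the instance down again when finished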


Oozie Installation and Deployment

No idle talk, straight to the point!

First, have a look at my earlier post for a fuller picture of what installing Oozie involves: "Notes on installing Oozie".

This post installs Oozie by hand, and to sidestep the tedious compile-and-install of the Apache release it uses the CDH build, already compiled: oozie-4.1.0-cdh5.5.4.tar.gz. If you want the Apache release, you will need to compile it yourself. (The Apache tarball is only 1.7 MB; the CDH one, compiled for us, is about 1.0 GB.)

Step 1: download oozie-4.1.0-cdh5.5.4.tar.gz

http://archive.cloudera.com/cdh5/cdh/5/

You do not have to download locally first as I did here; downloading directly on the server also works.

Step 2: reference docs for compiling Apache Oozie 4.1.0 (if you go that route)

http://oozie.apache.org/docs/4.1.0/DG_QuickStart.html#Building_Oozie

Those are the official minimum requirements. My suggested environment is:
Hadoop 2.6.0, Oozie 4.2.0, JDK 1.7, Maven 3.3.9, Pig 0.15.0, Hive 1.2.1, Sqoop 1.99.6

Step 2 (alternative): compiling Cloudera Oozie 4.1.0 from source (the variant this post is based on)

You can also build from the CDH source package; I will not belabor it. The main thing to note: the source package contains a pom.xml, and it matters. Change the hive, hbase and similar versions in it to the versions on your own machines instead of the defaults. Mind those details and you are fine; try it yourself if interested.

Note: the Apache Oozie release must be compiled by yourself; since my environment is CDH5, I can use the pre-built CDH package directly (oozie-4.1.0-cdh5.5.4.tar.gz).

Oozie Server Architecture

As you can see, the Oozie server side runs inside Tomcat.

Installing the Oozie server

Upload the tarball (this takes a while):

[hadoop@bigdatamaster app]$ pwd
/home/hadoop/app
[hadoop@bigdatamaster app]$ rz
[hadoop@bigdatamaster app]$ ls -l
... (existing entries for flume, elasticsearch, filebeat, hadoop, hbase, hive, hue, jdk, kafka, kibana, logstash, scala, spark, sqoop and zookeeper) ...
-rw-r--r--  1 hadoop hadoop 1737004796 May  7 22:37 oozie-4.1.0-cdh5.5.4.tar.gz

Extract it:

[hadoop@bigdatamaster app]$ tar -zxvf oozie-4.1.0-cdh5.5.4.tar.gz

Create a symlink (so different versions can be swapped in later):

[hadoop@bigdatamaster app]$ ln -s oozie-4.1.0-cdh5.5.4/ oozie
[hadoop@bigdatamaster app]$ ls -l
...
lrwxrwxrwx  1 hadoop hadoop        21 May  8 10:23 oozie -> oozie-4.1.0-cdh5.5.4/
drwxr-xr-x 10 hadoop hadoop      4096 Apr 26  2016 oozie-4.1.0-cdh5.5.4
...

Set environment variables:

[hadoop@bigdatamaster ~]$ su root
Password:
[root@bigdatamaster hadoop]# vim /etc/profile

#oozie
export OOZIE_HOME=/home/hadoop/app/oozie
export PATH=$PATH:$OOZIE_HOME/bin

[root@bigdatamaster hadoop]# source /etc/profile

A first look at Oozie's directory layout:

[hadoop@bigdatamaster app]$ cd oozie
[hadoop@bigdatamaster oozie]$ pwd
/home/hadoop/app/oozie
[hadoop@bigdatamaster oozie]$ ls -l
total 1014180
drwxr-xr-x  2 hadoop hadoop      4096 Apr 26  2016 bin
drwxr-xr-x  4 hadoop hadoop      4096 Apr 26  2016 conf
drwxr-xr-x  6 hadoop hadoop      4096 Apr 26  2016 docs
drwxr-xr-x  2 hadoop hadoop      4096 Apr 26  2016 lib
drwxr-xr-x  2 hadoop hadoop     12288 Apr 26  2016 libtools
-rw-r--r--  1 hadoop hadoop     37664 Apr 26  2016 LICENSE.txt
-rw-r--r--  1 hadoop hadoop       909 Apr 26  2016 NOTICE.txt
drwxr-xr-x  2 hadoop hadoop      4096 Apr 26  2016 oozie-core
-rwxr-xr-x  1 hadoop hadoop     46275 Apr 26  2016 oozie-examples.tar.gz
-rwxr-xr-x  1 hadoop hadoop  77456039 Apr 26  2016 oozie-hadooplibs-4.1.0-cdh5.5.4.tar.gz
drwxr-xr-x  9 hadoop hadoop      4096 Apr 26  2016 oozie-server
-r--r--r--  1 hadoop hadoop 428704179 Apr 26  2016 oozie-sharelib-4.1.0-cdh5.5.4.tar.gz
-r--r--r--  1 hadoop hadoop 429103879 Apr 26  2016 oozie-sharelib-4.1.0-cdh5.5.4-yarn.tar.gz
-rwxr-xr-x  1 hadoop hadoop 103020321 Apr 26  2016 oozie.war
-rw-r--r--  1 hadoop hadoop     83521 Apr 26  2016 release-log.txt
drwxr-xr-x 21 hadoop hadoop      4096 Apr 26  2016 src

Download ExtJS (click the link to download, then upload it; a download manager such as Thunder is recommended):

http://dev.sencha.com/deploy/ext-2.2.zip

For now I upload it to /home/hadoop; ultimately it only needs to land in $OOZIE_HOME/libext (that directory does not exist yet and will be created below):

[hadoop@bigdatamaster ~]$ rz
[hadoop@bigdatamaster ~]$ ls -l
...
-rw-r--r--  1 hadoop hadoop 6800612 Oct  1  2015 ext-2.2.zip
...
Configure the Hadoop proxy-user settings in core-site.xml:

<!-- OOZIE -->
<property>
    <name>hadoop.proxyuser.[OOZIE_SERVER_USER].hosts</name>
    <value>[OOZIE_SERVER_HOSTNAME]</value>
</property>
<property>
    <name>hadoop.proxyuser.[OOZIE_SERVER_USER].groups</name>
    <value>[USER_GROUPS_THAT_ALLOW_IMPERSONATION]</value>
</property>

In my case that is:

<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>bigdatamaster</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>

or (the more common form):

<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>

Only the machine that runs Oozie needs this; here that is bigdatamaster alone. Note: put the configuration in place first and then restart Hadoop, otherwise it will not take effect.

Next, extract the hadooplibs tarball. The docs say: "Expand the Oozie hadooplibs tar.gz in the same location Oozie distribution tar.gz was expanded." Here that is oozie-hadooplibs-4.1.0-cdh5.5.4.tar.gz:

[hadoop@bigdatamaster oozie]$ pwd
/home/hadoop/app/oozie
[hadoop@bigdatamaster oozie]$ tar -zxf oozie-hadooplibs-4.1.0-cdh5.5.4.tar.gz

"A hadooplibs/ directory will be created containing the Hadoop JARs for the versions of Hadoop that the Oozie distribution supports." Oozie supports both MR1 and MR2 (YARN); we are on YARN. The directory was generated successfully:

[hadoop@bigdatamaster hadooplibs]$ pwd
/home/hadoop/app/oozie/oozie-4.1.0-cdh5.5.4/hadooplibs
[hadoop@bigdatamaster hadooplibs]$ ls -l
total 8
drwxr-xr-x 2 hadoop hadoop 4096 Apr 26  2016 hadooplib-2.6.0-cdh5.5.4.oozie-4.1.0-cdh5.5.4
drwxr-xr-x 2 hadoop hadoop 4096 Apr 26  2016 hadooplib-2.6.0-mr1-cdh5.5.4.oozie-4.1.0-cdh5.5.4
[hadoop@bigdatamaster hadooplibs]$ cd hadooplib-2.6.0-cdh5.5.4.oozie-4.1.0-cdh5.5.4
[hadoop@bigdatamaster hadooplib-2.6.0-cdh5.5.4.oozie-4.1.0-cdh5.5.4]$ ls
(the Hadoop client jars and their dependencies: hadoop-common, hadoop-hdfs, hadoop-auth, hadoop-aws, hadoop-client, the hadoop-mapreduce-client-* and hadoop-yarn-* jars, plus avro, commons-*, curator, guava, htrace, httpclient, jackson, jersey, jetty-util, leveldbjni, log4j, netty, protobuf-java, slf4j, snappy-java, xerces, xz and zookeeper-3.4.5-cdh5.5.4.jar)

From the docs:

"The ExtJS library is optional (only required for the Oozie web-console to work)."

"IMPORTANT: all Oozie server scripts (oozie-setup.sh, oozied.sh, oozie-start.sh, oozie-run.sh and oozie-stop.sh) run only under the Unix user that owns the Oozie installation directory; if necessary use sudo -u OOZIE_USER when invoking the scripts. As of Oozie 3.3.2, use of oozie-start.sh, oozie-run.sh, and oozie-stop.sh has been deprecated and will print a warning. The oozied.sh script should be used instead; passing it start, run, or stop as an argument will perform the behaviors of oozie-start.sh, oozie-run.sh, and oozie-stop.sh respectively."

"Create a libext/ directory in the directory where Oozie was expanded."

As that implies, libext does not exist after unpacking, so create it:

[hadoop@bigdatamaster oozie]$ mkdir libext

With the directory in place, copy all the Hadoop jars from hadooplibs into the new libext. The docs again: "If using a version of Hadoop bundled in Oozie hadooplibs/, copy the corresponding Hadoop JARs from hadooplibs/ to the libext/ directory. If using a different version of Hadoop, copy the required Hadoop JARs from such version in the libext/ directory."

[hadoop@bigdatamaster oozie]$ cp -r oozie-4.1.0-cdh5.5.4/hadooplibs/hadooplib-2.6.0-cdh5.5.4.oozie-4.1.0-cdh5.5.4/* libext/

Check that the copy succeeded. Then copy the ext-2.2.zip we parked under /home/hadoop into $OOZIE_HOME/libext. The docs: "If using the ExtJS library copy the ZIP file to the libext/ directory."
The official wording is modest with its "if using"; in practice the library is required, because Oozie's web UI is built on ExtJS.

[hadoop@bigdatamaster libext]$ pwd
/home/hadoop/app/oozie/libext
[hadoop@bigdatamaster libext]$ cp /home/hadoop/ext-2.2.zip /home/hadoop/app/oozie/libext/
[hadoop@bigdatamaster libext]$ ls
(the Hadoop jars copied above, now joined by ext-2.2.zip)

With that done, the copy of ext-2.2.zip under /home/hadoop is no longer needed and can be deleted.

The steps below take the jars we prepared and bake them into the Oozie WAR. From the docs:

"A 'sharelib create -fs fs_default_name [-locallib sharelib]' command is available when running oozie-setup.sh for uploading new sharelib into hdfs, where the first argument is the default fs name and the second argument is the Oozie sharelib to install; it can be a tarball or the expanded version of it. If the second argument is omitted, the Oozie sharelib tarball from the Oozie installation directory will be used. Upgrade command is deprecated; one should use the create command to create a new version of the sharelib. Sharelib files are copied to a new lib_<timestamped> directory. At start, the server picks the sharelib from the latest time-stamp directory. While starting, the server also purges sharelib directories older than the sharelib retention days (defined as oozie.service.ShareLibService.temp.sharelib.retention.days, 7 days by default)."

Continuing:

"'prepare-war [-d directory]' command is for creating war files for oozie with an optional alternative directory other than libext."

"'db create|upgrade|postupgrade -run [-sqlfile <FILE>]' command is for create, upgrade or postupgrade of the oozie db with an optional sql file."

"Run the oozie-setup.sh script to configure Oozie with all the components added to the libext/ directory."

The docs make it clear that if you had not copied everything into $OOZIE_HOME/libext as above, you could still get there by pointing the command at another directory. Since everything is already in place here, run it directly:

bin/oozie-setup.sh prepare-war

[hadoop@bigdatamaster oozie]$ pwd
/home/hadoop/app/oozie
[hadoop@bigdatamaster oozie]$ bin/oozie-setup.sh prepare-war
setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
INFO: Adding extension: /home/hadoop/app/oozie/libext/activation-1.1.jar
INFO: Adding extension: /home/hadoop/app/oozie/libext/apacheds-i18n-2.0.0-M15.jar
... (one "INFO: Adding extension" line per jar in libext) ...
INFO: Adding extension: /home/hadoop/app/oozie/libext/zookeeper-3.4.5-cdh5.5.4.jar
File/Dir does no exist: /home/hadoop/app/sqoop/server/conf/ssl/server.xml

I retried this several times, and oozie.war under oozie-server simply would not be generated. The fix is described in this post:

"CDH Oozie: running bin/oozie-setup.sh prepare-war does not generate oozie.war?"
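For reference, once the WAR builds, the remaining bring-up usually looks like the sketch below (standard Oozie commands; run them only after finishing the oozie-site.xml database settings in the next step). Since oozie-site.xml below uses com.mysql.jdbc.Driver, the MySQL connector jar also has to be available to the server, typically by placing it in libext/ before prepare-war; the connector version shown is illustrative:

cp mysql-connector-java-5.1.*.jar $OOZIE_HOME/libext/   # illustrative version; use your own connector jar
bin/ooziedb.sh create -sqlfile oozie.sql -run           # create the Oozie database schema
bin/oozied.sh start                                     # start the embedded Tomcat
bin/oozie admin -oozie http://localhost:11000/oozie -status   # expect: System mode: NORMAL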
[hadoop@bigdatamaster webapps]$ pwd /home/hadoop/app/oozie-4.1.0-cdh5.5.4/oozie-server/webapps [hadoop@bigdatamaster webapps]$ ll total 122432 -rw-rw-r-- 1 hadoop hadoop 125365511 May 8 16:08 oozie.war drwxr-xr-x 3 hadoop hadoop 4096 Apr 26 2016 ROOT [hadoop@bigdatamaster webapps]$ [hadoop@bigdatamaster oozie-server]$ pwd /home/hadoop/app/oozie-4.1.0-cdh5.5.4/oozie-server [hadoop@bigdatamaster oozie-server]$ ls bin conf lib LICENSE logs NOTICE RELEASE-NOTES RUNNING.txt temp webapps work [hadoop@bigdatamaster oozie-server]$ 然后,我们配置好$OOZIE_HOME/conf/oozie-site.xml 和 配置$OOZIE_HOME/conf/oozie-default.xml (这个配置文件,一般是不需动的,它里面是最全的) 注意,我们的oozie-site.xml配置文件里面默认是如下 <?xml version="1.0"?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <configuration> <!-- Refer to the oozie-default.xml file for the complete list of Oozie configuration properties and their default values. --> <!-- Proxyuser Configuration --> <!-- <property> <name>oozie.service.ProxyUserService.proxyuser.#USER#.hosts</name> <value>*</value> <description> List of hosts the '#USER#' user is allowed to perform 'doAs' operations. The '#USER#' must be replaced with the username o the user who is allowed to perform 'doAs' operations. The value can be the '*' wildcard or a list of hostnames. For multiple users copy this property and replace the user name in the property name. </description> </property> <property> <name>oozie.service.ProxyUserService.proxyuser.#USER#.groups</name> <value>*</value> <description> List of groups the '#USER#' user is allowed to impersonate users from to perform 'doAs' operations. The '#USER#' must be replaced with the username o the user who is allowed to perform 'doAs' operations. The value can be the '*' wildcard or a list of groups. For multiple users copy this property and replace the user name in the property name. </description> </property> --> <!-- Default proxyuser configuration for Hue --> <property> <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name> <value>*</value> </property> <property> <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name> <value>*</value> </property> </configuration> 然后,去网上找到如下的配置信息,复制粘贴进去。 Oozie配置说明 最后oozie-site.xml配置文件,如下 <?xml version="1.0"?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
Then merge in the settings below, collected from an online reference (Oozie配置说明). The final oozie-site.xml looks like this:

<?xml version="1.0"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>

  <!-- Refer to the oozie-default.xml file for the complete list of Oozie
       configuration properties and their default values. -->

  <!-- Proxyuser Configuration -->
  <!--
  <property>
    <name>oozie.service.ProxyUserService.proxyuser.#USER#.hosts</name>
    <value>*</value>
    <description>
      List of hosts the '#USER#' user is allowed to perform 'doAs' operations.
      The '#USER#' must be replaced with the username of the user who is
      allowed to perform 'doAs' operations.
      The value can be the '*' wildcard or a list of hostnames.
      For multiple users copy this property and replace the user name
      in the property name.
    </description>
  </property>

  <property>
    <name>oozie.service.ProxyUserService.proxyuser.#USER#.groups</name>
    <value>*</value>
    <description>
      List of groups the '#USER#' user is allowed to impersonate users
      from to perform 'doAs' operations.
      The '#USER#' must be replaced with the username of the user who is
      allowed to perform 'doAs' operations.
      The value can be the '*' wildcard or a list of groups.
      For multiple users copy this property and replace the user name
      in the property name.
    </description>
  </property>
  -->

  <!-- Default proxyuser configuration for Hue -->
  <property>
    <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
    <value>*</value>
  </property>

  <property>
    <name>oozie.db.schema.name</name>
    <value>oozie</value>
    <description>Oozie DataBase Name</description>
  </property>
  <property>
    <name>oozie.service.JPAService.create.db.schema</name>
    <value>false</value>
    <description>
      Creates Oozie DB.
      If set to true, it creates the DB schema if it does not exist. If the DB schema exists it is a NOP.
      If set to false, it does not create the DB schema. If the DB schema does not exist it fails start up.
    </description>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>JDBC driver class.</description>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:mysql://bigdatamaster:3306/oozie?createDatabaseIfNotExist=true</value>
    <description>JDBC URL.</description>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>oozie</value>
    <description>DB user name.</description>
  </property>
  <property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>oozie</value>
    <description>
      DB user password. IMPORTANT: if password is empty leave a 1 space string,
      the service trims the value, if empty Configuration assumes it is NULL.
    </description>
  </property>
  <property>
    <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
    <value>*=/home/hadoop/app/hadoop-2.6.0-cdh5.5.4/etc/hadoop</value>
    <description>
      Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
      the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is used
      when there is no exact match for an authority. The HADOOP_CONF_DIR contains
      the relevant Hadoop *-site.xml files. If the path is relative, it is looked up
      within the Oozie configuration directory; the path can also be absolute, i.e.
      to point to Hadoop client conf/ directories in the local filesystem.
    </description>
  </property>

</configuration>
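Because oozie.service.JPAService.create.db.schema is left at false, the MySQL database and the Oozie schema have to be created once before the first startup. A minimal sketch, assuming MySQL runs on bigdatamaster with the oozie/oozie credentials configured above; the connector jar version is an assumption:

## the MySQL JDBC driver must be on Oozie's classpath (version here is an assumption)
cp mysql-connector-java-5.1.38-bin.jar /home/hadoop/app/oozie/libext/
## create the database and the user referenced in oozie-site.xml
mysql -uroot -p -e "CREATE DATABASE IF NOT EXISTS oozie DEFAULT CHARACTER SET utf8; GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie'; FLUSH PRIVILEGES;"
## generate the schema DDL and apply it
cd /home/hadoop/app/oozie
bin/ooziedb.sh create -sqlfile oozie.sql -run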
Leave oozie-default.xml at its defaults: the newer the release, the more of it already ships sensibly configured. For reference, it begins like this:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>

  <!-- ************************** VERY IMPORTANT ************************** -->
  <!-- This file is in the Oozie configuration directory only for reference. -->
  <!-- It is not loaded by Oozie, Oozie uses its own private copy. -->
  <!-- ************************** VERY IMPORTANT ************************** -->

  <property>
    <name>oozie.output.compression.codec</name>
    <value>gz</value>
    <description>
      The name of the compression codec to use.
      The implementation class for the codec needs to be specified through another property oozie.compression.codecs.
      You can specify a comma separated list of 'Codec_name'='Codec_class' for oozie.compression.codecs
      where codec class implements the interface org.apache.oozie.compression.CompressionCodec.
      If oozie.compression.codecs is not specified, gz codec implementation is used by default.
    </description>
  </property>

  <property>
    <name>oozie.action.mapreduce.uber.jar.enable</name>
    <value>false</value>
    <description>
      If true, enables the oozie.mapreduce.uber.jar mapreduce workflow configuration property, which is used to
      specify an uber jar in HDFS. Submitting a workflow with an uber jar requires at least Hadoop 2.2.0 or 1.2.0.
      If false, workflows which specify the oozie.mapreduce.uber.jar configuration property will fail.
    </description>
  </property>

  <property>
    <name>oozie.processing.timezone</name>
    <value>UTC</value>
    <description>
      Oozie server timezone. Valid values are UTC and GMT(+/-)####, for example 'GMT+0530' would be India timezone.
      All dates parsed and generated by Oozie Coordinator/Bundle will be done in the specified timezone. The default
      value of 'UTC' should not be changed under normal circumstances. If for any reason it is changed, note that
      GMT(+/-)#### timezones do not observe DST changes.
    </description>
  </property>

  <!-- Base Oozie URL: <SCHEME>://<HOST>:<PORT>/<CONTEXT> -->
  <property>
    <name>oozie.base.url</name>
    <value>http://localhost:8080/oozie</value>
    <description>Base Oozie URL.</description>
  </property>

  <!-- Services -->
  <property>
    <name>oozie.system.id</name>
    <value>oozie-${user.name}</value>
    <description>The Oozie system ID.</description>
  </property>

  <property>
    <name>oozie.systemmode</name>
    <value>NORMAL</value>
    <description>System mode for Oozie at startup.</description>
  </property>

  <property>
    <name>oozie.delete.runtime.dir.on.shutdown</name>
    <value>true</value>
    <description>
      If the runtime directory should be kept after Oozie shuts down.
</description> </property> <property> <name>oozie.services</name> <value> org.apache.oozie.service.SchedulerService, org.apache.oozie.service.InstrumentationService, org.apache.oozie.service.MemoryLocksService, org.apache.oozie.service.UUIDService, org.apache.oozie.service.ELService, org.apache.oozie.service.AuthorizationService, org.apache.oozie.service.UserGroupInformationService, org.apache.oozie.service.HadoopAccessorService, org.apache.oozie.service.JobsConcurrencyService, org.apache.oozie.service.URIHandlerService, org.apache.oozie.service.DagXLogInfoService, org.apache.oozie.service.SchemaService, org.apache.oozie.service.LiteWorkflowAppService, org.apache.oozie.service.JPAService, org.apache.oozie.service.StoreService, org.apache.oozie.service.SLAStoreService, org.apache.oozie.service.DBLiteWorkflowStoreService, org.apache.oozie.service.CallbackService, org.apache.oozie.service.ActionService, org.apache.oozie.service.ShareLibService, org.apache.oozie.service.CallableQueueService, org.apache.oozie.service.ActionCheckerService, org.apache.oozie.service.RecoveryService, org.apache.oozie.service.PurgeService, org.apache.oozie.service.CoordinatorEngineService, org.apache.oozie.service.BundleEngineService, org.apache.oozie.service.DagEngineService, org.apache.oozie.service.CoordMaterializeTriggerService, org.apache.oozie.service.StatusTransitService, org.apache.oozie.service.PauseTransitService, org.apache.oozie.service.GroupsService, org.apache.oozie.service.ProxyUserService, org.apache.oozie.service.XLogStreamingService, org.apache.oozie.service.JvmPauseMonitorService, org.apache.oozie.service.SparkConfigurationService </value> <description> All services to be created and managed by Oozie Services singleton. Class names must be separated by commas. </description> </property> <property> <name>oozie.services.ext</name> <value> </value> <description> To add/replace services defined in 'oozie.services' with custom implementations. Class names must be separated by commas. </description> </property> <property> <name>oozie.service.XLogStreamingService.buffer.len</name> <value>4096</value> <description>4K buffer for streaming the logs progressively</description> </property> <!-- HCatAccessorService --> <property> <name>oozie.service.HCatAccessorService.jmsconnections</name> <value> default=java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory </value> <description> Specify the map of endpoints to JMS configuration properties. In general, endpoint identifies the HCatalog server URL. "default" is used if no endpoint is mentioned in the query. If some JMS property is not defined, the system will use the property defined jndi.properties. jndi.properties files is retrieved from the application classpath. Mapping rules can also be provided for mapping Hcatalog servers to corresponding JMS providers. hcat://${1}.${2}.server.com:8020=java.naming.factory.initial#Dummy.Factory;java.naming.provider.url#tcp://broker.${2}:61616 </description> </property> <!-- TopicService --> <property> <name>oozie.service.JMSTopicService.topic.name</name> <value> default=${username} </value> <description> Topic options are ${username} or ${jobId} or a fixed string which can be specified as default or for a particular job type. 
For e.g To have a fixed string topic for workflows, coordinators and bundles, specify in the following comma-separated format: {jobtype1}={some_string1}, {jobtype2}={some_string2} where job type can be WORKFLOW, COORDINATOR or BUNDLE. e.g. Following defines topic for workflow job, workflow action, coordinator job, coordinator action, bundle job and bundle action WORKFLOW=workflow, COORDINATOR=coordinator, BUNDLE=bundle For jobs with no defined topic, default topic will be ${username} </description> </property> <!-- JMS Producer connection --> <property> <name>oozie.jms.producer.connection.properties</name> <value>java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory</value> </property> <!-- JMSAccessorService --> <property> <name>oozie.service.JMSAccessorService.connectioncontext.impl</name> <value> org.apache.oozie.jms.DefaultConnectionContext </value> <description> Specifies the Connection Context implementation </description> </property> <!-- ConfigurationService --> <property> <name>oozie.service.ConfigurationService.ignore.system.properties</name> <value> oozie.service.AuthorizationService.security.enabled </value> <description> Specifies "oozie.*" properties to cannot be overriden via Java system properties. Property names must be separted by commas. </description> </property> <property> <name>oozie.service.ConfigurationService.verify.available.properties</name> <value>true</value> <description> Specifies whether the available configurations check is enabled or not. </description> </property> <!-- SchedulerService --> <property> <name>oozie.service.SchedulerService.threads</name> <value>10</value> <description> The number of threads to be used by the SchedulerService to run deamon tasks. If maxed out, scheduled daemon tasks will be queued up and delayed until threads become available. </description> </property> <!-- AuthorizationService --> <property> <name>oozie.service.AuthorizationService.authorization.enabled</name> <value>false</value> <description> Specifies whether security (user name/admin role) is enabled or not. If disabled any user can manage Oozie system and manage any job. </description> </property> <property> <name>oozie.service.AuthorizationService.default.group.as.acl</name> <value>false</value> <description> Enables old behavior where the User's default group is the job's ACL. </description> </property> <!-- InstrumentationService --> <property> <name>oozie.service.InstrumentationService.logging.interval</name> <value>60</value> <description> Interval, in seconds, at which instrumentation should be logged by the InstrumentationService. If set to 0 it will not log instrumentation data. </description> </property> <!-- PurgeService --> <property> <name>oozie.service.PurgeService.older.than</name> <value>30</value> <description> Completed workflow jobs older than this value, in days, will be purged by the PurgeService. </description> </property> <property> <name>oozie.service.PurgeService.coord.older.than</name> <value>7</value> <description> Completed coordinator jobs older than this value, in days, will be purged by the PurgeService. </description> </property> <property> <name>oozie.service.PurgeService.bundle.older.than</name> <value>7</value> <description> Completed bundle jobs older than this value, in days, will be purged by the PurgeService. 
</description> </property> <property> <name>oozie.service.PurgeService.purge.old.coord.action</name> <value>false</value> <description> Whether to purge completed workflows and their corresponding coordinator actions of long running coordinator jobs if the completed workflow jobs are older than the value specified in oozie.service.PurgeService.older.than. </description> </property> <property> <name>oozie.service.PurgeService.purge.limit</name> <value>100</value> <description> Completed Actions purge - limit each purge to this value </description> </property> <property> <name>oozie.service.PurgeService.purge.interval</name> <value>3600</value> <description> Interval at which the purge service will run, in seconds. </description> </property> <!-- RecoveryService --> <property> <name>oozie.service.RecoveryService.wf.actions.older.than</name> <value>120</value> <description> Age of the actions which are eligible to be queued for recovery, in seconds. </description> </property> <property> <name>oozie.service.RecoveryService.wf.actions.created.time.interval</name> <value>7</value> <description> Created time period of the actions which are eligible to be queued for recovery in days. </description> </property> <property> <name>oozie.service.RecoveryService.callable.batch.size</name> <value>10</value> <description> This value determines the number of callable which will be batched together to be executed by a single thread. </description> </property> <property> <name>oozie.service.RecoveryService.push.dependency.interval</name> <value>200</value> <description> This value determines the delay for push missing dependency command queueing in Recovery Service </description> </property> <property> <name>oozie.service.RecoveryService.interval</name> <value>60</value> <description> Interval at which the RecoverService will run, in seconds. </description> </property> <property> <name>oozie.service.RecoveryService.coord.older.than</name> <value>600</value> <description> Age of the Coordinator jobs or actions which are eligible to be queued for recovery, in seconds. </description> </property> <property> <name>oozie.service.RecoveryService.bundle.older.than</name> <value>600</value> <description> Age of the Bundle jobs which are eligible to be queued for recovery, in seconds. </description> </property> <!-- CallableQueueService --> <property> <name>oozie.service.CallableQueueService.queue.size</name> <value>10000</value> <description>Max callable queue size</description> </property> <property> <name>oozie.service.CallableQueueService.threads</name> <value>10</value> <description>Number of threads used for executing callables</description> </property> <property> <name>oozie.service.CallableQueueService.callable.concurrency</name> <value>3</value> <description> Maximum concurrency for a given callable type. Each command is a callable type (submit, start, run, signal, job, jobs, suspend,resume, etc). Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc). All commands that use action executors (action-start, action-end, action-kill and action-check) use the action type as the callable type. </description> </property> <property> <name>oozie.service.CallableQueueService.callable.next.eligible</name> <value>true</value> <description> If true, when a callable in the queue has already reached max concurrency, Oozie continuously find next one which has not yet reach max concurrency. 
</description> </property> <property> <name>oozie.service.CallableQueueService.InterruptMapMaxSize</name> <value>500</value> <description> Maximum Size of the Interrupt Map, the interrupt element will not be inserted in the map if exceeded the size. </description> </property> <property> <name>oozie.service.CallableQueueService.InterruptTypes</name> <value>kill,resume,suspend,bundle_kill,bundle_resume,bundle_suspend,coord_kill,coord_change,coord_resume,coord_suspend</value> <description> Getting the types of XCommands that are considered to be of Interrupt type </description> </property> <!-- CoordMaterializeTriggerService --> <property> <name>oozie.service.CoordMaterializeTriggerService.lookup.interval </name> <value>300</value> <description> Coordinator Job Lookup interval.(in seconds). </description> </property> <!-- Enable this if you want different scheduling interval for CoordMaterializeTriggerService. By default it will use lookup interval as scheduling interval <property> <name>oozie.service.CoordMaterializeTriggerService.scheduling.interval </name> <value>300</value> <description> The frequency at which the CoordMaterializeTriggerService will run.</description> </property> --> <property> <name>oozie.service.CoordMaterializeTriggerService.materialization.window </name> <value>3600</value> <description> Coordinator Job Lookup command materialized each job for this next "window" duration </description> </property> <property> <name>oozie.service.CoordMaterializeTriggerService.callable.batch.size</name> <value>10</value> <description> This value determines the number of callable which will be batched together to be executed by a single thread. </description> </property> <property> <name>oozie.service.CoordMaterializeTriggerService.materialization.system.limit</name> <value>50</value> <description> This value determines the number of coordinator jobs to be materialized at a given time. </description> </property> <property> <name>oozie.service.coord.normal.default.timeout </name> <value>120</value> <description>Default timeout for a coordinator action input check (in minutes) for normal job. -1 means infinite timeout</description> </property> <property> <name>oozie.service.coord.default.max.timeout </name> <value>86400</value> <description>Default maximum timeout for a coordinator action input check (in minutes). 86400= 60days </description> </property> <property> <name>oozie.service.coord.input.check.requeue.interval </name> <value>60000</value> <description>Command re-queue interval for coordinator data input check (in millisecond). </description> </property> <property> <name>oozie.service.coord.push.check.requeue.interval </name> <value>600000</value> <description>Command re-queue interval for push dependencies (in millisecond). </description> </property> <property> <name>oozie.service.coord.default.concurrency </name> <value>1</value> <description>Default concurrency for a coordinator job to determine how many maximum action should be executed at the same time. -1 means infinite concurrency.</description> </property> <property> <name>oozie.service.coord.default.throttle </name> <value>12</value> <description>Default throttle for a coordinator job to determine how many maximum action should be in WAITING state at the same time.</description> </property> <property> <name>oozie.service.coord.materialization.throttling.factor </name> <value>0.05</value> <description>Determine how many maximum actions should be in WAITING state for a single job at any time. 
The value is calculated by this factor X the total queue size.</description> </property> <property> <name>oozie.service.coord.check.maximum.frequency</name> <value>true</value> <description> When true, Oozie will reject any coordinators with a frequency faster than 5 minutes. It is not recommended to disable this check or submit coordinators with frequencies faster than 5 minutes: doing so can cause unintended behavior and additional system stress. </description> </property> <!-- ELService --> <!-- List of supported groups for ELService --> <property> <name>oozie.service.ELService.groups</name> <value>job-submit,workflow,wf-sla-submit,coord-job-submit-freq,coord-job-submit-nofuncs,coord-job-submit-data,coord-job-submit-instances,coord-sla-submit,coord-action-create,coord-action-create-inst,coord-sla-create,coord-action-start,coord-job-wait-timeout</value> <description>List of groups for different ELServices</description> </property> <property> <name>oozie.service.ELService.constants.job-submit</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.functions.job-submit</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.constants.job-submit</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions without having to include all the built in ones. </description> </property> <property> <name>oozie.service.ELService.ext.functions.job-submit</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. </description> </property> <!-- Workflow specifics --> <property> <name>oozie.service.ELService.constants.workflow</name> <value> KB=org.apache.oozie.util.ELConstantsFunctions#KB, MB=org.apache.oozie.util.ELConstantsFunctions#MB, GB=org.apache.oozie.util.ELConstantsFunctions#GB, TB=org.apache.oozie.util.ELConstantsFunctions#TB, PB=org.apache.oozie.util.ELConstantsFunctions#PB, RECORDS=org.apache.oozie.action.hadoop.HadoopELFunctions#RECORDS, MAP_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_IN, MAP_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_OUT, REDUCE_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_IN, REDUCE_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_OUT, GROUPS=org.apache.oozie.action.hadoop.HadoopELFunctions#GROUPS </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.workflow</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.workflow</name> <value> firstNotNull=org.apache.oozie.util.ELConstantsFunctions#firstNotNull, concat=org.apache.oozie.util.ELConstantsFunctions#concat, replaceAll=org.apache.oozie.util.ELConstantsFunctions#replaceAll, appendAll=org.apache.oozie.util.ELConstantsFunctions#appendAll, trim=org.apache.oozie.util.ELConstantsFunctions#trim, timestamp=org.apache.oozie.util.ELConstantsFunctions#timestamp, urlEncode=org.apache.oozie.util.ELConstantsFunctions#urlEncode, toJsonStr=org.apache.oozie.util.ELConstantsFunctions#toJsonStr, toPropertiesStr=org.apache.oozie.util.ELConstantsFunctions#toPropertiesStr, toConfigurationStr=org.apache.oozie.util.ELConstantsFunctions#toConfigurationStr, wf:id=org.apache.oozie.DagELFunctions#wf_id, wf:name=org.apache.oozie.DagELFunctions#wf_name, wf:appPath=org.apache.oozie.DagELFunctions#wf_appPath, wf:conf=org.apache.oozie.DagELFunctions#wf_conf, wf:user=org.apache.oozie.DagELFunctions#wf_user, wf:group=org.apache.oozie.DagELFunctions#wf_group, wf:callback=org.apache.oozie.DagELFunctions#wf_callback, wf:transition=org.apache.oozie.DagELFunctions#wf_transition, wf:lastErrorNode=org.apache.oozie.DagELFunctions#wf_lastErrorNode, wf:errorCode=org.apache.oozie.DagELFunctions#wf_errorCode, wf:errorMessage=org.apache.oozie.DagELFunctions#wf_errorMessage, wf:run=org.apache.oozie.DagELFunctions#wf_run, wf:actionData=org.apache.oozie.DagELFunctions#wf_actionData, wf:actionExternalId=org.apache.oozie.DagELFunctions#wf_actionExternalId, wf:actionTrackerUri=org.apache.oozie.DagELFunctions#wf_actionTrackerUri, wf:actionExternalStatus=org.apache.oozie.DagELFunctions#wf_actionExternalStatus, hadoop:counters=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_counters, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf, fs:exists=org.apache.oozie.action.hadoop.FsELFunctions#fs_exists, fs:isDir=org.apache.oozie.action.hadoop.FsELFunctions#fs_isDir, fs:dirSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_dirSize, fs:fileSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_fileSize, fs:blockSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_blockSize, hcat:exists=org.apache.oozie.coord.HCatELFunctions#hcat_exists </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength</name> <value>100000</value> <description> The maximum length of the workflow definition in bytes An error will be reported if the length exceeds the given maximum </description> </property> <property> <name>oozie.service.ELService.ext.functions.workflow</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- Resolve SLA information during Workflow job submission --> <property> <name>oozie.service.ELService.constants.wf-sla-submit</name> <value> MINUTES=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_MINUTES, HOURS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_HOURS, DAYS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_DAYS </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. 
</description> </property> <property> <name>oozie.service.ELService.ext.constants.wf-sla-submit</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <property> <name>oozie.service.ELService.functions.wf-sla-submit</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.wf-sla-submit</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- Coordinator specifics -->l <!-- Phase 1 resolution during job submission --> <!-- EL Evalautor setup to resolve mainly frequency tags --> <property> <name>oozie.service.ELService.constants.coord-job-submit-freq</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-job-submit-freq</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <property> <name>oozie.service.ELService.functions.coord-job-submit-freq</name> <value> coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days, coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months, coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours, coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes, coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays, coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-job-submit-freq</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <property> <name>oozie.service.ELService.constants.coord-job-wait-timeout</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-job-wait-timeout</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.coord-job-wait-timeout</name> <value> coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days, coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months, coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours, coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-job-wait-timeout</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions without having to include all the built in ones. </description> </property> <!-- EL Evalautor setup to resolve mainly all constants/variables - no EL functions is resolved --> <property> <name>oozie.service.ELService.constants.coord-job-submit-nofuncs</name> <value> MINUTE=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTE, HOUR=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOUR, DAY=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAY, MONTH=org.apache.oozie.coord.CoordELConstants#SUBMIT_MONTH, YEAR=org.apache.oozie.coord.CoordELConstants#SUBMIT_YEAR </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-job-submit-nofuncs</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <property> <name>oozie.service.ELService.functions.coord-job-submit-nofuncs</name> <value> coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-job-submit-nofuncs</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- EL Evalautor setup to **check** whether instances/start-instance/end-instances are valid no EL functions will be resolved --> <property> <name>oozie.service.ELService.constants.coord-job-submit-instances</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-job-submit-instances</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.coord-job-submit-instances</name> <value> coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hoursInDay_echo, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph1_coord_daysInMonth_echo, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_tzOffset_echo, coord:current=org.apache.oozie.coord.CoordELFunctions#ph1_coord_current_echo, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_currentRange_echo, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_offset_echo, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph1_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_futureRange_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph1_coord_absolute_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-job-submit-instances</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- EL Evalautor setup to **check** whether dataIn and dataOut are valid no EL functions will be resolved --> <property> <name>oozie.service.ELService.constants.coord-job-submit-data</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-job-submit-data</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.coord-job-submit-data</name> <value> coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo, coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo, coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo, coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo, coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo, coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo, coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo, coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitions_echo, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-job-submit-data</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- Resolve SLA information during Coordinator job submission --> <property> <name>oozie.service.ELService.constants.coord-sla-submit</name> <value> MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES, HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS, DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-sla-submit</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.coord-sla-submit</name> <value> coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo, coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-sla-submit</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- Action creation for coordinator --> <property> <name>oozie.service.ELService.constants.coord-action-create</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-action-create</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.coord-action-create</name> <value> coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset, coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo, coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-action-create</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- Action creation for coordinator used to only evaluate instance number like ${current (daysInMonth())}. current will be echo-ed --> <property> <name>oozie.service.ELService.constants.coord-action-create-inst</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-action-create-inst</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.coord-action-create-inst</name> <value> coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset, coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current_echo, coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange_echo, coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset_echo, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo, coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo, coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-action-create-inst</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- Resolve SLA information during Action creation/materialization --> <property> <name>oozie.service.ELService.constants.coord-sla-create</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-sla-create</name> <value> MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES, HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS, DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS</value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.coord-sla-create</name> <value> coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actualTime, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-sla-create</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- Action start for coordinator --> <property> <name>oozie.service.ELService.constants.coord-action-start</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. </description> </property> <property> <name>oozie.service.ELService.ext.constants.coord-action-start</name> <value> </value> <description> EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. 
</description> </property> <property> <name>oozie.service.ELService.functions.coord-action-start</name> <value> coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph3_coord_hoursInDay, coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph3_coord_daysInMonth, coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_tzOffset, coord:latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest, coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latestRange, coord:future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future, coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_futureRange, coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataIn, coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut, coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime, coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actualTime, coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset, coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateTzOffset, coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime, coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actionId, coord:name=org.apache.oozie.coord.CoordELFunctions#ph3_coord_name, coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf, coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user, coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseIn, coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut, coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableIn, coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut, coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionFilter, coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMin, coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMax, coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitions, coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions, coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue, hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. </description> </property> <property> <name>oozie.service.ELService.ext.functions.coord-action-start</name> <value> </value> <description> EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <property> <name>oozie.service.ELService.latest-el.use-current-time</name> <value>false</value> <description> Determine whether to use the current time to determine the latest dependency or the action creation time. This is for backward compatibility with older oozie behaviour. </description> </property> <!-- UUIDService --> <property> <name>oozie.service.UUIDService.generator</name> <value>counter</value> <description> random : generated UUIDs will be random strings. counter: generated UUIDs generated will be a counter postfixed with the system startup time. 
</description> </property> <!-- DBLiteWorkflowStoreService --> <property> <name>oozie.service.DBLiteWorkflowStoreService.status.metrics.collection.interval</name> <value>5</value> <description> Workflow Status metrics collection interval in minutes.</description> </property> <property> <name>oozie.service.DBLiteWorkflowStoreService.status.metrics.window</name> <value>3600</value> <description> Workflow Status metrics collection window in seconds. Workflow status will be instrumented for the window. </description> </property> <!-- DB Schema Info, used by DBLiteWorkflowStoreService --> <property> <name>oozie.db.schema.name</name> <value>oozie</value> <description> Oozie DataBase Name </description> </property> <!-- StoreService --> <property> <name>oozie.service.JPAService.create.db.schema</name> <value>false</value> <description> Creates Oozie DB. If set to true, it creates the DB schema if it does not exist. If the DB schema exists is a NOP. If set to false, it does not create the DB schema. If the DB schema does not exist it fails start up. </description> </property> <property> <name>oozie.service.JPAService.validate.db.connection</name> <value>true</value> <description> Validates DB connections from the DB connection pool. If the 'oozie.service.JPAService.create.db.schema' property is set to true, this property is ignored. </description> </property> <property> <name>oozie.service.JPAService.validate.db.connection.eviction.interval</name> <value>300000</value> <description> Validates DB connections from the DB connection pool. When validate db connection 'TestWhileIdle' is true, the number of milliseconds to sleep between runs of the idle object evictor thread. </description> </property> <property> <name>oozie.service.JPAService.validate.db.connection.eviction.num</name> <value>10</value> <description> Validates DB connections from the DB connection pool. When validate db connection 'TestWhileIdle' is true, the number of objects to examine during each run of the idle object evictor thread. </description> </property> <property> <name>oozie.service.JPAService.connection.data.source</name> <value>org.apache.commons.dbcp.BasicDataSource</value> <description> DataSource to be used for connection pooling. </description> </property> <property> <name>oozie.service.JPAService.connection.properties</name> <value> </value> <description> DataSource connection properties. </description> </property> <property> <name>oozie.service.JPAService.jdbc.driver</name> <value>org.apache.derby.jdbc.EmbeddedDriver</value> <description> JDBC driver class. </description> </property> <property> <name>oozie.service.JPAService.jdbc.url</name> <value>jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true</value> <description> JDBC URL. </description> </property> <property> <name>oozie.service.JPAService.jdbc.username</name> <value>sa</value> <description> DB user name. </description> </property> <property> <name>oozie.service.JPAService.jdbc.password</name> <value> </value> <description> DB user password. IMPORTANT: if password is emtpy leave a 1 space string, the service trims the value, if empty Configuration assumes it is NULL. IMPORTANT: if the StoreServicePasswordService is active, it will reset this value with the value given in the console. </description> </property> <property> <name>oozie.service.JPAService.pool.max.active.conn</name> <value>10</value> <description> Max number of connections. 
</description> </property> <!-- SchemaService --> <property> <name>oozie.service.SchemaService.wf.schemas</name> <value> oozie-workflow-0.1.xsd,oozie-workflow-0.2.xsd,oozie-workflow-0.2.5.xsd,oozie-workflow-0.3.xsd,oozie-workflow-0.4.xsd, oozie-workflow-0.4.5.xsd,oozie-workflow-0.5.xsd, shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd, email-action-0.1.xsd,email-action-0.2.xsd, hive-action-0.2.xsd,hive-action-0.3.xsd,hive-action-0.4.xsd,hive-action-0.5.xsd,hive-action-0.6.xsd, sqoop-action-0.2.xsd,sqoop-action-0.3.xsd,sqoop-action-0.4.xsd, ssh-action-0.1.xsd,ssh-action-0.2.xsd, distcp-action-0.1.xsd,distcp-action-0.2.xsd, oozie-sla-0.1.xsd,oozie-sla-0.2.xsd, hive2-action-0.1.xsd, hive2-action-0.2.xsd, spark-action-0.1.xsd </value> <description> List of schemas for workflows (separated by commas). </description> </property> <property> <name>oozie.service.SchemaService.wf.ext.schemas</name> <value> </value> <description> List of additional schemas for workflows (separated by commas). </description> </property> <property> <name>oozie.service.SchemaService.coord.schemas</name> <value> oozie-coordinator-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-coordinator-0.3.xsd,oozie-coordinator-0.4.xsd, oozie-sla-0.1.xsd,oozie-sla-0.2.xsd </value> <description> List of schemas for coordinators (separated by commas). </description> </property> <property> <name>oozie.service.SchemaService.coord.ext.schemas</name> <value> </value> <description> List of additional schemas for coordinators (separated by commas). </description> </property> <property> <name>oozie.service.SchemaService.bundle.schemas</name> <value> oozie-bundle-0.1.xsd,oozie-bundle-0.2.xsd </value> <description> List of schemas for bundles (separated by commas). </description> </property> <property> <name>oozie.service.SchemaService.bundle.ext.schemas</name> <value> </value> <description> List of additional schemas for bundles (separated by commas). </description> </property> <property> <name>oozie.service.SchemaService.sla.schemas</name> <value> gms-oozie-sla-0.1.xsd,oozie-sla-0.2.xsd </value> <description> List of schemas for semantic validation for GMS SLA (separated by commas). </description> </property> <property> <name>oozie.service.SchemaService.sla.ext.schemas</name> <value> </value> <description> List of additional schemas for semantic validation for GMS SLA (separated by commas). </description> </property> <!-- CallbackService --> <property> <name>oozie.service.CallbackService.base.url</name> <value>${oozie.base.url}/callback</value> <description> Base callback URL used by ActionExecutors. </description> </property> <property> <name>oozie.service.CallbackService.early.requeue.max.retries</name> <value>5</value> <description> If Oozie receives a callback too early (while the action is in PREP state), it will requeue the command this many times to give the action time to transition to RUNNING. </description> </property> <!-- CallbackServlet --> <property> <name>oozie.servlet.CallbackServlet.max.data.len</name> <value>2048</value> <description> Max size in characters for the action completion data output. </description> </property> <!-- External stats--> <property> <name>oozie.external.stats.max.size</name> <value>-1</value> <description> Max size in bytes for action stats. -1 means infinite value. </description> </property> <!-- JobCommand --> <property> <name>oozie.JobCommand.job.console.url</name> <value>${oozie.base.url}?job=</value> <description> Base console URL for a workflow job. 
</description> </property> <!-- ActionService --> <property> <name>oozie.service.ActionService.executor.classes</name> <value> org.apache.oozie.action.decision.DecisionActionExecutor, org.apache.oozie.action.hadoop.JavaActionExecutor, org.apache.oozie.action.hadoop.FsActionExecutor, org.apache.oozie.action.hadoop.MapReduceActionExecutor, org.apache.oozie.action.hadoop.PigActionExecutor, org.apache.oozie.action.hadoop.HiveActionExecutor, org.apache.oozie.action.hadoop.ShellActionExecutor, org.apache.oozie.action.hadoop.SqoopActionExecutor, org.apache.oozie.action.hadoop.DistcpActionExecutor, org.apache.oozie.action.hadoop.Hive2ActionExecutor, org.apache.oozie.action.ssh.SshActionExecutor, org.apache.oozie.action.oozie.SubWorkflowActionExecutor, org.apache.oozie.action.email.EmailActionExecutor, org.apache.oozie.action.hadoop.SparkActionExecutor </value> <description> List of ActionExecutors classes (separated by commas). Only action types with associated executors can be used in workflows. </description> </property> <property> <name>oozie.service.ActionService.executor.ext.classes</name> <value> </value> <description> List of ActionExecutors extension classes (separated by commas). Only action types with associated executors can be used in workflows. This property is a convenience property to add extensions to the built in executors without having to include all the built in ones. </description> </property> <!-- ActionCheckerService --> <property> <name>oozie.service.ActionCheckerService.action.check.interval</name> <value>60</value> <description> The frequency at which the ActionCheckService will run. </description> </property> <property> <name>oozie.service.ActionCheckerService.action.check.delay</name> <value>600</value> <description> The time, in seconds, between an ActionCheck for the same action. </description> </property> <property> <name>oozie.service.ActionCheckerService.callable.batch.size</name> <value>10</value> <description> This value determines the number of actions which will be batched together to be executed by a single thread. </description> </property> <!-- StatusTransitService --> <property> <name>oozie.service.StatusTransitService.statusTransit.interval</name> <value>60</value> <description> The frequency in seconds at which the StatusTransitService will run. </description> </property> <property> <name>oozie.service.StatusTransitService.backward.support.for.coord.status</name> <value>false</value> <description> true, if coordinator job submits using 'uri:oozie:coordinator:0.1' and wants to keep Oozie 2.x status transit. if set true, 1. SUCCEEDED state in coordinator job means materialization done. 2. No DONEWITHERROR state in coordinator job 3. No PAUSED or PREPPAUSED state in coordinator job 4. PREPSUSPENDED becomes SUSPENDED in coordinator job </description> </property> <property> <name>oozie.service.StatusTransitService.backward.support.for.states.without.error</name> <value>true</value> <description> true, if you want to keep Oozie 3.2 status transit. Change it to false for Oozie 4.x releases. if set true, No states like RUNNINGWITHERROR, SUSPENDEDWITHERROR and PAUSEDWITHERROR for coordinator and bundle </description> </property> <!-- PauseTransitService --> <property> <name>oozie.service.PauseTransitService.PauseTransit.interval</name> <value>60</value> <description> The frequency in seconds at which the PauseTransitService will run. 
</description> </property> <!-- LauncherMapper --> <property> <name>oozie.action.max.output.data</name> <value>2048</value> <description> Max size in characters for output data. </description> </property> <property> <name>oozie.action.fs.glob.max</name> <value>50000</value> <description> Maximum number of globbed files. </description> </property> <!-- JavaActionExecutor --> <!-- This is common to the subclasses of action executors for Java (e.g. map-reduce, pig, hive, java, etc) --> <property> <name>oozie.action.launcher.mapreduce.job.ubertask.enable</name> <value>true</value> <description> Enables Uber Mode for the launcher job in YARN/Hadoop 2 (no effect in Hadoop 1) for all action types by default. This can be overridden on a per-action-type basis by setting oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site.xml (where #action-type# is the action type; for example, "pig"). And that can be overridden on a per-action basis by setting oozie.launcher.mapreduce.job.ubertask.enable in an action's configuration section in a workflow. In summary, the priority is this: 1. action's configuration section in a workflow 2. oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site 3. oozie.action.launcher.mapreduce.job.ubertask.enable in oozie-site </description> </property> <property> <name>oozie.action.shell.launcher.mapreduce.job.ubertask.enable</name> <value>false</value> <description> The Shell action may have issues with the $PATH environment when using Uber Mode, and so Uber Mode is disabled by default for it. See oozie.action.launcher.mapreduce.job.ubertask.enable </description> </property> <property> <name>oozie.action.shell.setup.hadoop.conf.dir</name> <value>true</value> <description> The Shell action is commonly used to run programs that rely on HADOOP_CONF_DIR (e.g. hive, beeline, sqoop, etc). With YARN, HADOO_CONF_DIR is set to the NodeManager's copies of Hadoop's *-site.xml files, which can be problematic because (a) they are for meant for the NM, not necessarily clients, and (b) they won't have any of the configs that Oozie, or the user through Oozie, sets. When this property is set to true, The Shell action will prepare the *-site.xml files based on the correct config and set HADOOP_CONF_DIR to point to it. Setting it to false will make Oozie leave HADOOP_CONF_DIR alone. This can also be set at the Action level by putting it in the Shell Action's configuration section, which also has priorty. That all said, it's recommended to use the appropriate action type when possible. </description> </property> <!-- HadoopActionExecutor --> <!-- This is common to the subclasses action executors for map-reduce and pig --> <property> <name>oozie.action.retries.max</name> <value>3</value> <description> The number of retries for executing an action in case of failure </description> </property> <property> <name>oozie.action.retry.interval</name> <value>10</value> <description> The interval between retries of an action in case of failure </description> </property> <property> <name>oozie.action.retry.policy</name> <value>periodic</value> <description> Retry policy of an action in case of failure. Possible values are periodic/exponential </description> </property> <!-- SshActionExecutor --> <property> <name>oozie.action.ssh.delete.remote.tmp.dir</name> <value>true</value> <description> If set to true, it will delete temporary directory at the end of execution of ssh action. 
</description> </property> <property> <name>oozie.action.ssh.http.command</name> <value>curl</value> <description> Command to use for callback to oozie, normally is 'curl' or 'wget'. The command must available in PATH environment variable of the USER@HOST box shell. </description> </property> <property> <name>oozie.action.ssh.http.command.post.options</name> <value>--data-binary @#stdout --request POST --header "content-type:text/plain"</value> <description> The callback command POST options. Used when the ouptut of the ssh action is captured. </description> </property> <property> <name>oozie.action.ssh.allow.user.at.host</name> <value>true</value> <description> Specifies whether the user specified by the ssh action is allowed or is to be replaced by the Job user </description> </property> <!-- SubworkflowActionExecutor --> <property> <name>oozie.action.subworkflow.max.depth</name> <value>50</value> <description> The maximum depth for subworkflows. For example, if set to 3, then a workflow can start subwf1, which can start subwf2, which can start subwf3; but if subwf3 tries to start subwf4, then the action will fail. This is helpful in preventing errant workflows from starting infintely recursive subworkflows. </description> </property> <!-- HadoopAccessorService --> <property> <name>oozie.service.HadoopAccessorService.kerberos.enabled</name> <value>false</value> <description> Indicates if Oozie is configured to use Kerberos. </description> </property> <property> <name>local.realm</name> <value>LOCALHOST</value> <description> Kerberos Realm used by Oozie and Hadoop. Using 'local.realm' to be aligned with Hadoop configuration </description> </property> <property> <name>oozie.service.HadoopAccessorService.keytab.file</name> <value>${user.home}/oozie.keytab</value> <description> Location of the Oozie user keytab file. </description> </property> <property> <name>oozie.service.HadoopAccessorService.kerberos.principal</name> <value>${user.name}/localhost@${local.realm}</value> <description> Kerberos principal for Oozie service. </description> </property> <property> <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name> <value> </value> <description> Whitelisted job tracker for Oozie service. </description> </property> <property> <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name> <value> </value> <description> Whitelisted job tracker for Oozie service. </description> </property> <property> <name>oozie.service.HadoopAccessorService.hadoop.configurations</name> <value>*=hadoop-conf</value> <description> Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop service (JobTracker, YARN, HDFS). The wildcard '*' configuration is used when there is no exact match for an authority. The HADOOP_CONF_DIR contains the relevant Hadoop *-site.xml files. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute (i.e. to point to Hadoop client conf/ directories in the local filesystem. </description> </property> <property> <name>oozie.service.HadoopAccessorService.action.configurations</name> <value>*=action-conf</value> <description> Comma separated AUTHORITY=ACTION_CONF_DIR, where AUTHORITY is the HOST:PORT of the Hadoop MapReduce service (JobTracker, YARN). The wildcard '*' configuration is used when there is no exact match for an authority. The ACTION_CONF_DIR may contain ACTION.xml files where ACTION is the action type ('java', 'map-reduce', 'pig', 'hive', 'sqoop', etc.). 
If the ACTION.xml file exists, its properties will be used as defaults properties for the action. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute (i.e. to point to Hadoop client conf/ directories in the local filesystem. </description> </property> <!-- Credentials --> <property> <name>oozie.credentials.credentialclasses</name> <value> </value> <description> A list of credential class mapping for CredentialsProvider </description> </property> <property> <name>oozie.credentials.skip</name> <value>false</value> <description> This determines if Oozie should skip getting credentials from the credential providers. This can be overwritten at a job-level or action-level. </description> </property> <property> <name>oozie.actions.main.classnames</name> <value>distcp=org.apache.hadoop.tools.DistCp</value> <description> A list of class name mapping for Action classes </description> </property> <property> <name>oozie.service.WorkflowAppService.system.libpath</name> <value>/user/${user.name}/share/lib</value> <description> System library path to use for workflow applications. This path is added to workflow application if their job properties sets the property 'oozie.use.system.libpath' to true. </description> </property> <property> <name>oozie.command.default.lock.timeout</name> <value>5000</value> <description> Default timeout (in milliseconds) for commands for acquiring an exclusive lock on an entity. </description> </property> <property> <name>oozie.command.default.requeue.delay</name> <value>10000</value> <description> Default time (in milliseconds) for commands that are requeued for delayed execution. </description> </property> <!-- LiteWorkflowStoreService, Workflow Action Automatic Retry --> <property> <name>oozie.service.LiteWorkflowStoreService.user.retry.max</name> <value>3</value> <description> Automatic retry max count for workflow action is 3 in default. </description> </property> <property> <name>oozie.service.LiteWorkflowStoreService.user.retry.inteval</name> <value>10</value> <description> Automatic retry interval for workflow action is in minutes and the default value is 10 minutes. </description> </property> <property> <name>oozie.service.LiteWorkflowStoreService.user.retry.error.code</name> <value>JA008,JA009,JA017,JA018,JA019,FS009,FS008,FS014</value> <description> Automatic retry interval for workflow action is handled for these specified error code: FS009, FS008 is file exists error when using chmod in fs action. FS014 is permission error in fs action JA018 is output directory exists error in workflow map-reduce action. JA019 is error while executing distcp action. JA017 is job not exists error in action executor. JA008 is FileNotFoundException in action executor. JA009 is IOException in action executor. ALL is the any kind of error in action executor. </description> </property> <property> <name>oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext</name> <value> </value> <description> Automatic retry interval for workflow action is handled for these specified extra error code: ALL is the any kind of error in action executor. 
</description> </property> <property> <name>oozie.service.LiteWorkflowStoreService.node.def.version</name> <value>_oozie_inst_v_1</value> <description> NodeDef default version, _oozie_inst_v_0 or _oozie_inst_v_1 </description> </property> <!-- Oozie Authentication --> <property> <name>oozie.authentication.type</name> <value>simple</value> <description> Defines authentication used for Oozie HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME# </description> </property> <property> <name>oozie.server.authentication.type</name> <value>${oozie.authentication.type}</value> <description> Defines authentication used for Oozie server communicating to other Oozie server over HTTP(s). Supported values are: simple | kerberos | #AUTHENTICATOR_CLASSNAME# </description> </property> <property> <name>oozie.authentication.token.validity</name> <value>36000</value> <description> Indicates how long (in seconds) an authentication token is valid before it has to be renewed. </description> </property> <property> <name>oozie.authentication.cookie.domain</name> <value></value> <description> The domain to use for the HTTP cookie that stores the authentication token. In order to authentiation to work correctly across multiple hosts the domain must be correctly set. </description> </property> <property> <name>oozie.authentication.simple.anonymous.allowed</name> <value>true</value> <description> Indicates if anonymous requests are allowed when using 'simple' authentication. </description> </property> <property> <name>oozie.authentication.kerberos.principal</name> <value>HTTP/localhost@${local.realm}</value> <description> Indicates the Kerberos principal to be used for HTTP endpoint. The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification. </description> </property> <property> <name>oozie.authentication.kerberos.keytab</name> <value>${oozie.service.HadoopAccessorService.keytab.file}</value> <description> Location of the keytab file with the credentials for the principal. Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop. </description> </property> <property> <name>oozie.authentication.kerberos.name.rules</name> <value>DEFAULT</value> <description> The kerberos names rules is to resolve kerberos principal names, refer to Hadoop's KerberosName for more details. </description> </property> <!-- Coordinator "NONE" execution order default time tolerance --> <property> <name>oozie.coord.execution.none.tolerance</name> <value>1</value> <description> Default time tolerance in minutes after action nominal time for an action to be skipped when execution order is "NONE" </description> </property> <!-- Coordinator Actions default length --> <property> <name>oozie.coord.actions.default.length</name> <value>1000</value> <description> Default number of coordinator actions to be retrieved by the info command </description> </property> <!-- ForkJoin validation --> <property> <name>oozie.validate.ForkJoin</name> <value>true</value> <description> If true, fork and join should be validated at wf submission time. </description> </property> <property> <name>oozie.coord.action.get.all.attributes</name> <value>false</value> <description> Setting to true is not recommended as coord job/action info will bring all columns of the action in memory. Set it true only if backward compatibility for action/job info is required. 
</description> </property> <property> <name>oozie.service.HadoopAccessorService.supported.filesystems</name> <value>hdfs,hftp,webhdfs</value> <description> Enlist the different filesystems supported for federation. If wildcard "*" is specified, then ALL file schemes will be allowed. </description> </property> <property> <name>oozie.service.URIHandlerService.uri.handlers</name> <value>org.apache.oozie.dependency.FSURIHandler</value> <description> Enlist the different uri handlers supported for data availability checks. </description> </property> <!-- Oozie HTTP Notifications --> <property> <name>oozie.notification.url.connection.timeout</name> <value>10000</value> <description> Defines the timeout, in milliseconds, for Oozie HTTP notification callbacks. Oozie does HTTP notifications for workflow jobs which set the 'oozie.wf.action.notification.url', 'oozie.wf.worklfow.notification.url' and/or 'oozie.coord.action.notification.url' properties in their job.properties. Refer to section '5 Oozie Notifications' in the Workflow specification for details. </description> </property> <!-- Enable Distributed Cache workaround for Hadoop 2.0.2-alpha (MAPREDUCE-4820) --> <property> <name>oozie.hadoop-2.0.2-alpha.workaround.for.distributed.cache</name> <value>false</value> <description> Due to a bug in Hadoop 2.0.2-alpha, MAPREDUCE-4820, launcher jobs fail to set the distributed cache for the action job because the local JARs are implicitly included triggering a duplicate check. This flag removes the distributed cache files for the action as they'll be included from the local JARs of the JobClient (MRApps) submitting the action job from the launcher. </description> </property> <property> <name>oozie.service.EventHandlerService.filter.app.types</name> <value>workflow_job, coordinator_action</value> <description> The app-types among workflow/coordinator/bundle job/action for which for which events system is enabled. </description> </property> <property> <name>oozie.service.EventHandlerService.event.queue</name> <value>org.apache.oozie.event.MemoryEventQueue</value> <description> The implementation for EventQueue in use by the EventHandlerService. </description> </property> <property> <name>oozie.service.EventHandlerService.event.listeners</name> <value>org.apache.oozie.jms.JMSJobEventListener</value> </property> <property> <name>oozie.service.EventHandlerService.queue.size</name> <value>10000</value> <description> Maximum number of events to be contained in the event queue. </description> </property> <property> <name>oozie.service.EventHandlerService.worker.interval</name> <value>30</value> <description> The default interval (seconds) at which the worker threads will be scheduled to run and process events. </description> </property> <property> <name>oozie.service.EventHandlerService.batch.size</name> <value>10</value> <description> The batch size for batched draining per thread from the event queue. </description> </property> <property> <name>oozie.service.EventHandlerService.worker.threads</name> <value>3</value> <description> Number of worker threads to be scheduled to run and process events. </description> </property> <property> <name>oozie.sla.service.SLAService.capacity</name> <value>5000</value> <description> Maximum number of sla records to be contained in the memory structure. </description> </property> <property> <name>oozie.sla.service.SLAService.alert.events</name> <value>END_MISS</value> <description> Default types of SLA events for being alerted of. 
</description> </property> <property> <name>oozie.sla.service.SLAService.calculator.impl</name> <value>org.apache.oozie.sla.SLACalculatorMemory</value> <description> The implementation for SLACalculator in use by the SLAService. </description> </property> <property> <name>oozie.sla.service.SLAService.job.event.latency</name> <value>90000</value> <description> Time in milliseconds to account of latency of getting the job status event to compare against and decide sla miss/met </description> </property> <property> <name>oozie.sla.service.SLAService.check.interval</name> <value>30</value> <description> Time interval, in seconds, at which SLA Worker will be scheduled to run </description> </property> <!-- ZooKeeper configuration --> <property> <name>oozie.zookeeper.connection.string</name> <value>localhost:2181</value> <description> Comma-separated values of host:port pairs of the ZooKeeper servers. </description> </property> <property> <name>oozie.zookeeper.namespace</name> <value>oozie</value> <description> The namespace to use. All of the Oozie Servers that are planning on talking to each other should have the same namespace. </description> </property> <property> <name>oozie.zookeeper.connection.timeout</name> <value>180</value> <description> Default ZK connection timeout (in sec). If connection is lost for more than timeout, then Oozie server will shutdown itself if oozie.zookeeper.server.shutdown.ontimeout is true. </description> </property> <property> <name>oozie.zookeeper.server.shutdown.ontimeout</name> <value>true</value> <description> If true, Oozie server will shutdown itself on ZK connection timeout. </description> </property> <property> <name>oozie.http.hostname</name> <value>localhost</value> <description> Oozie server host name. </description> </property> <property> <name>oozie.http.port</name> <value>11000</value> <description> Oozie server port. </description> </property> <property> <name>oozie.instance.id</name> <value>${oozie.http.hostname}</value> <description> Each Oozie server should have its own unique instance id. The default is system property =${OOZIE_HTTP_HOSTNAME}= (i.e. the hostname). </description> </property> <!-- Sharelib Configuration --> <property> <name>oozie.service.ShareLibService.mapping.file</name> <value> </value> <description> Sharelib mapping files contains list of key=value, where key will be the sharelib name for the action and value is a comma separated list of DFS directories or jar files. Example. oozie.pig_10=hdfs:///share/lib/pig/pig-0.10.1/lib/ oozie.pig=hdfs:///share/lib/pig/pig-0.11.1/lib/ oozie.distcp=hdfs:///share/lib/hadoop-2.2.0/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar </description> </property> <property> <name>oozie.service.ShareLibService.fail.fast.on.startup</name> <value>false</value> <description> Fails server starup if sharelib initilzation fails. </description> </property> <property> <name>oozie.service.ShareLibService.purge.interval</name> <value>1</value> <description> How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS. </description> </property> <property> <name>oozie.service.ShareLibService.temp.sharelib.retention.days</name> <value>7</value> <description> ShareLib retention time in days. </description> </property> <property> <name>oozie.action.ship.launcher.jar</name> <value>false</value> <description> Specifies whether launcher jar is shipped or not. 
</description> </property> <property> <name>oozie.action.jobinfo.enable</name> <value>false</value> <description> JobInfo will contain information of bundle, coordinator, workflow and actions. If enabled, hadoop job will have property(oozie.job.info) which value is multiple key/value pair separated by ",". This information can be used for analytics like how many oozie jobs are submitted for a particular period, what is the total number of failed pig jobs, etc from mapreduce job history logs and configuration. User can also add custom workflow property to jobinfo by adding property which prefix with "oozie.job.info." Eg. oozie.job.info="bundle.id=,bundle.name=,coord.name=,coord.nominal.time=,coord.name=,wf.id=, wf.name=,action.name=,action.type=,launcher=true" </description> </property> <property> <name>oozie.service.XLogStreamingService.max.log.scan.duration</name> <value>-1</value> <description> Max log scan duration in hours. If log scan request end_date - start_date > value, then exception is thrown to reduce the scan duration. -1 indicate no limit. </description> </property> <property> <name>oozie.service.XLogStreamingService.actionlist.max.log.scan.duration</name> <value>-1</value> <description> Max log scan duration in hours for coordinator job when list of actions are specified. If log streaming request end_date - start_date > value, then exception is thrown to reduce the scan duration. -1 indicate no limit. This setting is separate from max.log.scan.duration as we want to allow higher durations when actions are specified. </description> </property> <!-- JvmPauseMonitorService Configuration --> <property> <name>oozie.service.JvmPauseMonitorService.warn-threshold.ms</name> <value>10000</value> <description> The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate that the JVM or host machine is overloaded or other problems. This thread sleeps for 500ms; if it sleeps for significantly longer, then there is likely a problem. This property specifies the threadshold for when Oozie should log a WARN level message; there is also a counter named "jvm.pause.warn-threshold". </description> </property> <property> <name>oozie.service.JvmPauseMonitorService.info-threshold.ms</name> <value>1000</value> <description> The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate that the JVM or host machine is overloaded or other problems. This thread sleeps for 500ms; if it sleeps for significantly longer, then there is likely a problem. This property specifies the threadshold for when Oozie should log an INFO level message; there is also a counter named "jvm.pause.info-threshold". </description> </property> <property> <name>oozie.service.ZKLocksService.locks.reaper.threshold</name> <value>300</value> <description> The frequency at which the ChildReaper will run. Duration should be in sec. Default is 5 min. </description> </property> <property> <name>oozie.service.ZKLocksService.locks.reaper.threads</name> <value>2</value> <description> Number of fixed threads used by ChildReaper to delete empty locks. </description> </property> <property> <name>oozie.service.AbandonedCoordCheckerService.check.interval </name> <value>1440</value> <description> Interval, in minutes, at which AbandonedCoordCheckerService should run. 
</description> </property> <property> <name>oozie.service.AbandonedCoordCheckerService.check.delay </name> <value>60</value> <description> Delay, in minutes, at which AbandonedCoordCheckerService should run. </description> </property> <property> <name>oozie.service.AbandonedCoordCheckerService.failure.limit </name> <value>25</value> <description> Failure limit. A job is considered to be abandoned/faulty if total number of actions in failed/timedout/suspended >= "Failure limit" and there are no succeeded action. </description> </property> <property> <name>oozie.service.AbandonedCoordCheckerService.kill.jobs </name> <value>false</value> <description> If true, AbandonedCoordCheckerService will kill abandoned coords. </description> </property> <property> <name>oozie.service.AbandonedCoordCheckerService.job.older.than</name> <value>2880</value> <description> In minutes, job will be considered as abandoned/faulty if job is older than this value. </description> </property> <property> <name>oozie.notification.proxy</name> <value></value> <description> System level proxy setting for job notifications. </description> </property> <property> <name>oozie.wf.rerun.disablechild</name> <value>false</value> <description> By setting this option, workflow rerun will be disabled if parent workflow or coordinator exist and it will only rerun through parent. </description> </property> <property> <name>oozie.service.PauseTransitService.callable.batch.size </name> <value>10</value> <description> This value determines the number of callable which will be batched together to be executed by a single thread. </description> </property> <!-- XConfiguration --> <property> <name>oozie.configuration.substitute.depth</name> <value>20</value> <description> This value determines the depth of substitution in configurations. If set -1, No limitation on substitution. </description> </property> <property> <name>oozie.service.SparkConfigurationService.spark.configurations</name> <value>*=spark-conf</value> <description> Comma separated AUTHORITY=SPARK_CONF_DIR, where AUTHORITY is the HOST:PORT of the ResourceManager of a YARN cluster. The wildcard '*' configuration is used when there is no exact match for an authority. The SPARK_CONF_DIR contains the relevant spark-defaults.conf properties file. If the path is relative is looked within the Oozie configuration directory; though the path can be absolute. This is only used when the Spark master is set to either "yarn-client" or "yarn-cluster". </description> </property> <property> <name>oozie.service.SparkConfigurationService.spark.configurations.ignore.spark.yarn.jar</name> <value>true</value> <description> If true, Oozie will ignore the "spark.yarn.jar" property from any Spark configurations specified in oozie.service.SparkConfigurationService.spark.configurations. If false, Oozie will not ignore it. It is recommended to leave this as true because it can interfere with the jars in the Spark sharelib. </description> </property> <property> <name>oozie.email.attachment.enabled</name> <value>true</value> <description> This value determines whether to support email attachment of a file on HDFS. Set it false if there is any security concern. </description> </property> <property> <name>oozie.actions.default.name-node</name> <value> </value> <description> The default value to use for the &lt;name-node&gt; element in applicable action types. This value will be used when neither the action itself nor the global section specifies a &lt;name-node&gt;. 
As expected, it should be of the form "hdfs://HOST:PORT". </description> </property> <property> <name>oozie.actions.default.job-tracker</name> <value> </value> <description> The default value to use for the &lt;job-tracker&gt; element in applicable action types. This value will be used when neither the action itself nor the global section specifies a &lt;job-tracker&gt;. As expected, it should be of the form "HOST:PORT". </description> </property> </configuration>

At this point the Hadoop cluster must already be running; I won't go over starting it here. Before the sharelib is created, /user/hadoop/ holds nothing related to Oozie.

$ bin/oozie-setup.sh sharelib create -fs <FS_URI> [-locallib <PATH>]

Note: use oozie-sharelib-4.1.0-cdh5.5.4-yarn.tar.gz, not oozie-sharelib-4.1.0-cdh5.5.4.tar.gz, since we are running on YARN.

[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ pwd
/home/hadoop/app/oozie-4.1.0-cdh5.5.4
[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ ls
bin   lib       LICENSE.txt           oozie-core                              oozie-server                               oozie.war
conf  libext    NOTICE.txt            oozie-examples.tar.gz                   oozie-sharelib-4.1.0-cdh5.5.4.tar.gz       release-log.txt
docs  libtools  oozie-4.1.0-cdh5.5.4  oozie-hadooplibs-4.1.0-cdh5.5.4.tar.gz  oozie-sharelib-4.1.0-cdh5.5.4-yarn.tar.gz  src
[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ bin/oozie-setup.sh sharelib create -fs hdfs://bigdatamaster:9000 -locallib oozie-sharelib-4.1.0-cdh5.5.4-yarn.tar.gz
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/oozie-4.1.0-cdh5.5.4/libtools/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/oozie-4.1.0-cdh5.5.4/libtools/slf4j-simple-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/oozie-4.1.0-cdh5.5.4/libext/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
the destination path for sharelib is: /user/hadoop/share/lib/lib_20170508192944
[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$

Now /user/hadoop/ contains /user/hadoop/share/lib/lib_20170508192944 (note that the timestamp in the directory name is the moment the command was executed).
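If you want to confirm the upload before moving on, listing the sharelib path on HDFS should show the freshly created lib_<timestamp> directory. A minimal sketch, assuming the Hadoop client is on the PATH of the hadoop user:

## Sanity check (sketch): list the sharelib that oozie-setup.sh just uploaded;
## expect one lib_<timestamp> directory containing per-action-type jar folders.
hadoop fs -ls /user/hadoop/share/lib
hadoop fs -ls /user/hadoop/share/lib/lib_20170508192944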
Next, create the Oozie database. Note the typo in the official docs here:

bin/ooziedb.sh create -sqlfile oozie.sql -runValidate DB Connection     (not this one -- a bug in the official docs)
bin/ooziedb.sh create -sqlfile oozie.sql -run                           (use this one)

[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ bin/ooziedb.sh create -sqlfile oozie.sql -run
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
Validate DB Connection
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/mysql/jdbc/Driver : Unsupported major.minor version 52.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:191)
        at org.apache.oozie.tools.OozieDBCLI.createConnection(OozieDBCLI.java:894)
        at org.apache.oozie.tools.OozieDBCLI.validateConnection(OozieDBCLI.java:901)
        at org.apache.oozie.tools.OozieDBCLI.createDB(OozieDBCLI.java:185)
        at org.apache.oozie.tools.OozieDBCLI.run(OozieDBCLI.java:129)
        at org.apache.oozie.tools.OozieDBCLI.main(OozieDBCLI.java:80)

Two errors can show up at this step; each has its own companion post:

- Exception in thread "main" java.lang.UnsupportedClassVersionError: com/mysql/jdbc/Driver : Unsupported major.minor version 52.0 -- see the companion post on fixing this error when setting up Oozie.
- Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure -- see the companion post on fixing this error when setting up Oozie.

After applying the fix, re-run the same command:

[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ bin/ooziedb.sh create -sqlfile oozie.sql -run
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
Validate DB Connection
DONE
Check DB schema does not exist
DONE
Check OOZIE_SYS table does not exist
DONE
Create SQL schema
DONE
Create OOZIE_SYS table
DONE
Oozie DB has been created for Oozie version '4.1.0-cdh5.5.4'
The SQL commands have been written to: oozie.sql

[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ pwd
/home/hadoop/app/oozie-4.1.0-cdh5.5.4
[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ ls
bin   docs  libext    LICENSE.txt  NOTICE.txt            oozie-core             oozie-hadooplibs-4.1.0-cdh5.5.4.tar.gz  oozie-sharelib-4.1.0-cdh5.5.4.tar.gz       oozie.sql  release-log.txt
conf  lib   libtools  logs         oozie-4.1.0-cdh5.5.4  oozie-examples.tar.gz  oozie-server                            oozie-sharelib-4.1.0-cdh5.5.4-yarn.tar.gz  oozie.war  src
[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$

Success!
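For context, the UnsupportedClassVersionError above means the MySQL Connector/J JAR was compiled for a newer JVM (class file version 52.0 is Java 8) than the one running ooziedb.sh, so swapping in a driver build that matches your JDK resolves it. For reference, here is a minimal sketch of the JPAService properties in conf/oozie-site.xml for a MySQL-backed Oozie, using the property names from the configuration dump above; the host, database name, and credentials are placeholders, adjust them to your environment:

<!-- Sketch only: point JPAService at MySQL instead of the default Derby.
     Assumes a database named "oozie" and a MySQL user "oozie" already exist. -->
<property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:mysql://bigdatamaster:3306/oozie?useSSL=false</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>oozie</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>oozie</value>
</property>

The driver JAR itself goes where Oozie can load it (see the companion post on placing the MySQL driver, referenced at the end of this article).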
To start Oozie as a daemon process run:

$ bin/oozied.sh start

To start Oozie as a foreground process run:

$ bin/oozied.sh run

Check the Oozie log file logs/oozie.log to ensure Oozie started properly.

[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ pwd
/home/hadoop/app/oozie-4.1.0-cdh5.5.4
[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ ls
bin   docs  libext    LICENSE.txt  NOTICE.txt            oozie-core             oozie-hadooplibs-4.1.0-cdh5.5.4.tar.gz  oozie-sharelib-4.1.0-cdh5.5.4.tar.gz       oozie.sql  release-log.txt
conf  lib   libtools  logs         oozie-4.1.0-cdh5.5.4  oozie-examples.tar.gz  oozie-server                            oozie-sharelib-4.1.0-cdh5.5.4-yarn.tar.gz  oozie.war  src
[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ bin/oozied.sh start
Setting OOZIE_HOME: /home/hadoop/app/oozie-4.1.0-cdh5.5.4
Setting OOZIE_CONFIG: /home/hadoop/app/oozie-4.1.0-cdh5.5.4/conf
Sourcing: /home/hadoop/app/oozie-4.1.0-cdh5.5.4/conf/oozie-env.sh
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
Setting OOZIE_CONFIG_FILE: oozie-site.xml
Setting OOZIE_DATA: /home/hadoop/app/oozie-4.1.0-cdh5.5.4/data
Setting OOZIE_LOG: /home/hadoop/app/oozie-4.1.0-cdh5.5.4/logs
Setting OOZIE_LOG4J_FILE: oozie-log4j.properties
Setting OOZIE_LOG4J_RELOAD: 10
Setting OOZIE_HTTP_HOSTNAME: bigdatamaster
Setting OOZIE_HTTP_PORT: 11000
Setting OOZIE_ADMIN_PORT: 11001
Setting OOZIE_HTTPS_PORT: 11443
Setting OOZIE_BASE_URL: http://bigdatamaster:11000/oozie
Using CATALINA_BASE: /home/hadoop/app/sqoop/server
Setting OOZIE_HTTPS_KEYSTORE_FILE: /home/hadoop/.keystore
Setting OOZIE_HTTPS_KEYSTORE_PASS: password
Setting OOZIE_INSTANCE_ID: bigdatamaster
Setting CATALINA_OUT: /home/hadoop/app/oozie-4.1.0-cdh5.5.4/logs/catalina.out
Setting CATALINA_PID: /home/hadoop/app/oozie-4.1.0-cdh5.5.4/oozie-server/temp/oozie.pid

It then printed the following and hung with no further response:

Setting OOZIE_INSTANCE_ID: bigdatamaster
Setting CATALINA_OUT: /home/hadoop/app/oozie-4.1.0-cdh5.5.4/logs/catalina.out
Setting CATALINA_PID: /home/hadoop/app/oozie-4.1.0-cdh5.5.4/oozie-server/temp/oozie.pid
Using CATALINA_OPTS: -Xmx1024m -Dderby.stream.error.file=/home/hadoop/app/oozie-4.1.0-cdh5.5.4/logs/derby.log
Adding to CATALINA_OPTS: -Doozie.home.dir=/home/hadoop/app/oozie-4.1.0-cdh5.5.4 -Doozie.config.dir=/home/hadoop/app/oozie-4.1.0-cdh5.5.4/conf -Doozie.log.dir=/home/hadoop/app/oozie-4.1.0-cdh5.5.4/logs -Doozie.data.dir=/home/hadoop/app/oozie-4.1.0-cdh5.5.4/data -Doozie.instance.id=bigdatamaster -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=bigdatamaster -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://bigdatamaster:11000/oozie -Doozie.https.keystore.file=/home/hadoop/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=
WARN: Oozie WAR has not been set up at ''/home/hadoop/app/sqoop/server/webapps'', doing default set up
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
no arguments given

Usage  : oozie-setup.sh <Command and OPTIONS>
         prepare-war [-d directory] [-secure] (-d identifies an alternative directory for processing jars
                                               -secure will configure the war file to use HTTPS (SSL))
         sharelib create -fs FS_URI [-locallib SHARED_LIBRARY] (create sharelib for oozie,
                         FS_URI is the fs.default.name for hdfs uri;
                         SHARED_LIBRARY, path to the Oozie sharelib to install,
                         it can be a tarball or an expanded version of it.
                         If ommited, the Oozie sharelib tarball from the Oozie installation directory will be used)
                         (action failes if sharelib is already installed in HDFS)
         sharelib upgrade -fs FS_URI [-locallib SHARED_LIBRARY] (upgrade existing sharelib, fails if there
                          is no existing sharelib installed in HDFS)
         db create|upgrade|postupgrade -run [-sqlfile <FILE>] (create, upgrade or postupgrade oozie db with
                          an optional sql File)
         (without options prints this usage information)

EXTJS can be downloaded from http://www.extjs.com/learn/Ext_Version_Archives

[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$ jps
3543 ThriftServer
3101 QuorumPeerMain
3281 HMaster
8257 ResourceManager
14271 Jps
7918 NameNode
8075 SecondaryNameNode
[hadoop@bigdatamaster oozie-4.1.0-cdh5.5.4]$
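Note that jps shows no Bootstrap process, and both "Using CATALINA_BASE: /home/hadoop/app/sqoop/server" and the WARN about ''/home/hadoop/app/sqoop/server/webapps'' point the same way: oozied.sh picked up a CATALINA_BASE left over from a Sqoop installation, so it tried to set up the Oozie WAR in the wrong Tomcat. A plausible fix, sketched under the assumption that your paths match this walkthrough (verify against your own environment):

## Let oozied.sh fall back to Oozie's bundled Tomcat under oozie-server/
unset CATALINA_BASE
## or, equivalently (hypothetical path, adjust to your install):
## export CATALINA_BASE=/home/hadoop/app/oozie-4.1.0-cdh5.5.4/oozie-server
bin/oozied.sh start

## jps should now list a Bootstrap process, and the admin call below
## should print "System mode: NORMAL"
bin/oozie admin -oozie http://bigdatamaster:11000/oozie -status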
See the fix above. Beyond that, you may also run into the following problems, each covered in a companion post:

- Where to place the MySQL driver JAR when installing Oozie (a comprehensive summary)
- bin/oozied.sh start or bin/oozied.sh run fails: the Bootstrap process won't start and http://bigdatamaster:11000/oozie won't open, with E0103: Could not load service classes, java.lang.ClassNotFoundException: Class org.apache.oozie.ser
- bin/oozied.sh start or bin/oozied.sh run fails: the Bootstrap process won't start and http://bigdatamaster:11000/oozie won't open
- Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
- Exception in thread "main" java.lang.UnsupportedClassVersionError: com/mysql/jdbc/Driver : Unsupported major.minor version 52.0

This article was reposted from the 大数据躺过的坑 blog on cnblogs; original link: http://www.cnblogs.com/zlslch/p/6118431.html. Please contact the original author before republishing.
