Deploying MediaWiki with Docker

0 Introduction

MediaWiki is the open-source edition of the software that runs Wikipedia. Personally I find a wiki feels a little dated nowadays: it has no support for newer markup languages such as Markdown, and pages are organized with MediaWiki's own markup language, so there is some learning curve. Still, a classic is a classic. MediaWiki also provides an official Docker image, which saves a good deal of environment setup, so let's walk through how a private MediaWiki site is put together.

1 Installing MediaWiki with Docker

Unless noted otherwise, the commands in this part require root privileges.

1.1 Install Docker

The first step is naturally to install Docker itself. We use the official install script to sidestep the differences in install commands between operating systems; once the script finishes, Docker is installed. If your environment does not have the wget command yet, install it with yum install -y wget on CentOS and RedHat, or apt install -y wget on Debian and Ubuntu.

# wget -qO- https://get.docker.com/ | sh

Next, point Docker at a registry mirror:

# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com", "http://hub.c.163.com"]
}
# systemctl restart docker.service

1.2 Pull the required Docker images

MediaWiki needs MySQL, and the MediaWiki image does not bundle it, so the MySQL image has to be pulled as well.

# docker pull wikimedia/mediawiki:1.30.0-wmf4
# docker pull mysql/mysql-server:5.7

1.3 Start MediaWiki and MySQL and link them

MediaWiki depends on MySQL, so start MySQL first and MediaWiki second, or the latter will fail to start. MySQL's remote connection privilege must also be enabled.

# docker run -d --name mediawiki-mysql -e MYSQL_ROOT_PASSWORD=<mysql-root-password> mysql/mysql-server:5.7
# docker exec -it mediawiki-mysql /bin/bash
bash-4.2# mysql -uroot -p<mysql-root-password>
......
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '<mysql-root-password>' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

Then start MediaWiki:

# docker run --name facethink-mediawiki --link mediawiki-mysql:mysql -p 80:80 -e MEDIAWIKI_DB_PASSWORD=<mysql-root-password> -d wikimedia/mediawiki:1.30.0-wmf4

Note that MediaWiki must be started with the --link flag so that it is connected to the MySQL container started earlier. The -p flag binds port 80 of the MediaWiki container to port 80 of the Docker host: visit the host's IP in a browser and the freshly built MediaWiki site appears. Make sure port 80 on the host is not occupied by another program, or the docker run command will fail.

Below is the newly built wiki site's page; the process is not complicated, and if you run into problems, feel free to leave a comment:

MediaWiki main page
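For reference, the two docker run commands above can also be captured in a Compose file. A minimal sketch follows; the compose version and service names here are my own choices, not part of the original walkthrough:

version: '2'
services:
  mediawiki-mysql:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_ROOT_PASSWORD: <mysql-root-password>
  mediawiki:
    image: wikimedia/mediawiki:1.30.0-wmf4
    depends_on:
      - mediawiki-mysql
    links:
      - mediawiki-mysql:mysql
    ports:
      - "80:80"
    environment:
      MEDIAWIKI_DB_PASSWORD: <mysql-root-password>

With docker-compose up -d this starts MySQL before MediaWiki, matching the manual ordering described above.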
2. MediaWiki configuration

2.1 MediaWiki's default administrator

The wiki exists now, but with such a plain look you will of course want to configure a few things, and that requires administrator rights; the installation never told us those credentials. We can, however, find them in MySQL:

# docker exec -it mediawiki-mysql mysql -uroot -p<mysql-root-password>
......
mysql> use mediawiki;
mysql> select * from user;

The user table is too wide to reproduce whole; the single row it contains boils down to:

user_id:                  1
user_name:                Admin
user_real_name:           (empty)
user_password:            :pbkdf2:sha512:30000:64:CyuznKx44JuAClGG7avxow==:V9MLp3r/obJIjv+BR2Bs0eCvyWkyDK0eveqEE+9HiUgxvMjzu26kGBz+BcZSmlRssLswzq1j3a+PVuh6AFEaxQ==
user_newpassword:         (empty)
user_newpass_time:        NULL
user_email:               NULL
user_touched:             20180622063649
user_token:               490898f83d4ad9d1ec1c0276a740209b
user_email_authenticated: NULL
user_email_token:         d3bcbd107e31220c891334c8e1ba0440
user_email_token_expires: 20180629063644
user_registration:        20180622050411
user_editcount:           0
user_password_expires:    NULL

As you can see, the default account is Admin, but the password is stored hashed. Fortunately the root user has the privilege to overwrite it:

mysql> UPDATE user SET user_password = MD5( CONCAT( user_id, '-', MD5( 'NEWPASS' ) ) ) WHERE user_id = 1;

At last we can log in with administrator rights.

login page

2.2 Configuring the site address with a PHP variable

Suppose the domain wiki.example.com has been registered for the site; how do we make the wiki recognize it? This requires logging into the MediaWiki container and editing the configuration file. MediaWiki is written in PHP, so the configuration file ends in .php.

$ sudo docker exec -it facethink-mediawiki /bin/bash
root@1a0f3692a08d:/# vi /var/www/html/LocalSettings.php
...
$wgServer = "http://wiki.example.com";
...

PHP reads the configuration file dynamically, so the change takes effect without a restart.

2.3 Changing the logo

The default logo is the golden sunflower; how do we swap in a logo of our own? First enable the wiki's file-upload feature:

$ sudo docker exec -it facethink-mediawiki /bin/bash
root@1a0f3692a08d:/# vi /var/www/html/LocalSettings.php
...
$wgEnableUploads = true;
...

Then open up full permissions for all users on the /var/www/html/images directory:

$ sudo docker exec -it facethink-mediawiki /bin/bash
root@1a0f3692a08d:/# chmod 777 /var/www/html/images

Then upload the file on the Upload File page:

upload file page

Find the directory the file landed in:

# ll /var/www/html/images/thumb/6/64/example.png/120px-example.png

And edit the PHP configuration (note the leading slash added to the path, since $wgScriptPath carries no trailing one):

$ sudo docker exec -it facethink-mediawiki /bin/bash
root@1a0f3692a08d:/# vi /var/www/html/LocalSettings.php
...
$wgLogo = $wgScriptPath . "/images/thumb/6/64/example.png/120px-example.png";
...

Done. Refresh the page and see how the new logo looks.

2.4 Mail configuration

MediaWiki's mail configuration is a pain: it is awkward to debug, and with the defaults the messages are easily judged to be spam, or invalid and rejected outright, so the sender address has to be adjusted to work around that. I use Elastic Email's mail service here; you will need to register your own account. The configuration again goes into /var/www/html/LocalSettings.php:

# cat /var/www/html/LocalSettings.php
...
$wgServerName = "example.com";
$wgPasswordSender = "";
$wgSMTP = array(
    'host'     => 'smtp.elasticemail.com',
    'port'     => 2525,
    'IDHost'   => 'wiki.example.com',
    'username' => <user-id>,
    'password' => <password>,
    'auth'     => true
);
...

PHP's mail-sending extensions must be installed as well. One more thing worth spelling out: although the MediaWiki container runs PHP code, PHP itself is not actually installed in it, because Apache is what parses and runs the PHP. LAMP really is tightly integrated. Installing pear, however, does depend on a PHP environment, so PHP has to be installed first:

# apt install php php-pear
# pear install Mail Net_SMTP

2.5 Restricting user permissions

If you want to close the wiki to public registration and limit which pages are visible before login, that too is done by customizing LocalSettings.php:

# cat /var/www/html/LocalSettings.php
...
# Prevent new user registrations
$wgWhitelistAccount = array("user" => 0, "sysop" => 1, "developer" => 1);
$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['*']['read'] = true;
$wgGroupPermissions['*']['edit'] = false;
$wgWhitelistRead = array("Main Page", "Special:Userlogin", "Wikipedia:Help");
...

2.6 Adding users

With registration closed, users can only be added by hand. MediaWiki ships a script for exactly that:

# apt install php-mbstring php-mysql
# php /usr/src/mediawiki/maintenance/createAndPromote.php --conf=/var/www/html/LocalSettings.php --force <user-id> <password>

3. References

MySQL Docker
MediaWiki docker file
MediaWiki default admin user and password
MediaWiki configuration settings
Trouble uploading after installation


Basic Saltstack Installation and Deployment

Environment

master node: 172.16.100.10
minion node: 172.16.100.20

On the master node

##### Install the packages #####
# curl -o /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum -y install salt-master salt-minion
# systemctl start salt-master.service
# ^start^enable        (re-runs the previous command with "start" replaced by "enable")

##### Edit the minion configuration file #####
# vim /etc/salt/minion
master: 172.16.100.10        # address of the master node
id: FQDN                     # defaults to the hostname if unset; stored in /etc/salt/minion_id
# systemctl start salt-minion
# ^start^enable
# tree /etc/salt/pki/
/etc/salt/pki/
├── master
│   ├── master.pem
│   ├── master.pub
│   ├── minions
│   ├── minions_autosign
│   ├── minions_denied
│   ├── minions_pre        # holds the public keys of pending minions
│   │   ├── compute
│   │   └── controller
│   └── minions_rejected
└── minion
    ├── minion.pem
    └── minion.pub

On the minion node

##### Install the packages #####
# curl -o /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
# yum -y install salt-minion

##### Edit the configuration file #####
# vim /etc/salt/minion
master: 172.16.100.10
# systemctl start salt-minion
# ^start^enable
# tree /etc/salt/pki/minion
/etc/salt/pki/minion
├── minion.pem
└── minion.pub

The public key generated when the service starts is sent to the master's /etc/salt/pki/master/minions_pre directory.

On the master node

##### Accept the minion keys #####
# salt-key -a compute,controller
# salt-key        # list the minions allowed to communicate
Accepted Keys:
compute
controller
Denied Keys:
Unaccepted Keys:
Rejected Keys:
# tree /etc/salt/pki
/etc/salt/pki
├── master
│   ├── master.pem
│   ├── master.pub
│   ├── minions            # the keys move from minions_pre to minions
│   │   ├── compute
│   │   └── controller
│   ├── minions_autosign
│   ├── minions_denied
│   ├── minions_pre
│   └── minions_rejected
└── minion
    ├── minion_master.pub
    ├── minion.pem
    └── minion.pub

# netstat -lpta | grep 4505        # publish port
tcp 0 0 0.0.0.0:4505 0.0.0.0:* LISTEN 67903/python
tcp 0 0 172.16.100.10:52424 172.16.100.10:4505 ESTABLISHED 69995/python
tcp 0 0 172.16.100.10:4505 172.16.100.20:60225 ESTABLISHED 67903/python
tcp 0 0 172.16.100.10:4505 172.16.100.10:52424 ESTABLISHED 67903/python
# netstat -lpta | grep 4506        # return port
tcp 0 0 0.0.0.0:4506 0.0.0.0:* LISTEN 67925/python
tcp 0 0 172.16.100.10:4506 172.16.100.10:51547 ESTABLISHED 67925/python
tcp 0 0 172.16.100.10:51547 172.16.100.10:4506 ESTABLISHED 69995/python
tcp 0 0 172.16.100.10:4506 172.16.100.20:44469 ESTABLISHED 67925/python
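With the keys accepted, connectivity is worth verifying right away. The standard first smoke test is test.ping (my addition, not in the original; the output below is what you would expect with the two minions above):

# salt '*' test.ping
compute:
    True
controller:
    True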


Quickly Deploying GitLab with Docker

Installing GitLab with Docker: https://docs.gitlab.com/omnibus/docker/
User documentation: https://docs.gitlab.com.cn/ce/gitlab-basics/README.html
Git usage: http://blog.jobbole.com/25775/

Pull the image:

docker pull gitlab/gitlab-ce

Run GitLab:

sudo docker run --detach \
  --hostname 10.39.10.223 \
  --publish 443:443 --publish 80:80 --publish 2222:22 \
  --name gitlab \
  --restart always \
  --volume /data0/gitlab/config:/etc/gitlab \
  --volume /data0/gitlab/logs:/var/log/gitlab \
  --volume /data0/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest

Once it is running, the web UI is reachable on port 80, and 2222 is the SSH port.

Configure the address GitLab advertises by editing its configuration file:

vi /data0/gitlab/config/gitlab.rb

Set the address used for HTTP access, here the host's IP:

external_url 'http://10.39.10.223'

After changing gitlab.rb, restart the container, or run gitlab-ctl reconfigure inside it.

The new project page (here at http://10.39.3.23) then shows the usual command line instructions:

Git global setup:

git config --global user.name "dataqa"
git config --global user.email "249016681@qq.com"

Create a new repository:

git clone git@10.39.10.223:dataqa/te.git
cd te
touch README.md
git add README.md
git commit -m "add README"
git push -u origin master

Existing folder:

cd existing_folder
git init
git remote add origin git@10.39.3.23:dataqa/te.git
git add .
git commit -m "Initial commit"
git push -u origin master

Existing Git repository:

cd existing_repo
git remote add origin git@10.39.3.23:dataqa/te.git
git push -u origin --all
git push -u origin --tags
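One caveat worth adding (my own note, not from the original article): because host port 2222 is mapped to the container's SSH port 22, the git@host:... clone URLs shown above will not work over SSH as-is. Either put the port into the URL explicitly:

git clone ssh://git@10.39.10.223:2222/dataqa/te.git

or have GitLab advertise the port in the URLs it displays, by adding this to gitlab.rb and running gitlab-ctl reconfigure:

gitlab_rails['gitlab_shell_ssh_port'] = 2222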


Confluence Platform Deployment Notes

1.1 About Confluence

Confluence is a professional enterprise knowledge-management and collaboration tool that can also be used to build a corporate wiki. It is simple to use, yet its powerful editing and site-administration features help team members share information, collaborate on documents, hold group discussions, and push information to one another. Confluence gives a team a shared collaboration environment in which members pool their strengths to write documents and manage projects together, breaking the deadlock of information silos between teams, departments, and individuals: with Confluence, organizational resources really are shared.

1.1.1 Adoption

Confluence has been successfully used for intranets, knowledge management, and document management in more than 13,500 organizations across over 100 countries, including Fortune 1000 companies, government bodies, educational institutions, financial institutions, and technology research organizations. Well-known companies such as IBM, Sun Microsystems, and SAP use Confluence to build corporate wikis, some of them open to the public.

1.2 Preparing the environment

Confluence depends on a Java runtime; in other words a JDK, version 1.7 or later, must be installed.

1.2.1 System environment

[root@conflunce ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@conflunce ~]# uname -a
Linux conflunce 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@conflunce ~]# getenforce
Disabled
[root@conflunce ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

1.2.2 Software environment

[root@conflunce tools]# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

# Install the JDK
wget http://10.0.0.1/apache/tomcat/jdk-8u60-linux-x64.tar.gz
tar xf jdk-8u60-linux-x64.tar.gz -C /application/
ln -s /application/jdk1.8.0_60 /application/jdk
sed -i.ori '$a export JAVA_HOME=/application/jdk\nexport PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH\nexport CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar' /etc/profile
source /etc/profile

Create a database for Confluence:

# Install the database
[root@conflunce ~]# yum install -y mariadb-server
[root@conflunce ~]# systemctl start mariadb.service

MySQL configuration:

create database confluence default character set utf8 collate utf8_bin;
grant all on confluence.* to 'confluence'@'localhost' identified by 'confluence';

1.3 Download Confluence

cd /server/tools
wget https://www.atlassian.com/software/confluence/downloads/binary/atlassian-confluence-5.6.6-x64.bin

1.4 Install Confluence

1.4.1 Installation

Make the installer executable:

[root@conflunce tools]# chmod 755 atlassian-confluence-5.6.6-x64.bin

Run the installer:

[root@conflunce tools]# ./atlassian-confluence-5.6.6-x64.bin
Unpacking JRE ...
Starting Installer ...
十一月 24, 2017 4:56:41 下午 java.util.prefs.FileSystemPreferences$
INFO: Created user preferences directory.
十一月 24, 2017 4:56:41 下午 java.util.prefs.FileSystemPreferences$
INFO: Created system preferences directory in java.home.
This will install Confluence 5.6.6 on your computer.
OK [o, Enter], Cancel [c]
o
Choose the appropriate installation or upgrade option.
Please choose one of the following:
Express Install (uses default settings) [1],
Custom Install (recommended for advanced users) [2, Enter],
Upgrade an existing Confluence installation [3]
1
See where Confluence will be installed and the settings that will be used.
Installation Directory: /opt/atlassian/confluence
Home Directory: /var/atlassian/application-data/confluence
HTTP Port: 8090
RMI Port: 8000
Install as service: Yes
Install [i, Enter], Exit [e]
i
Extracting files ...
……
Please wait a few moments while Confluence starts up.
Launching Confluence ...
Installation of Confluence 5.6.6 is complete
Your installation of Confluence 5.6.6 is now ready and can be accessed via your browser.
Confluence 5.6.6 can be accessed at http://localhost:8090
Finishing installation ...
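Before moving to the browser, a small check of my own (not in the original): confirm that Confluence is listening on its HTTP port.

[root@conflunce ~]# netstat -lntp | grep 8090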
Visit http://10.0.0.211:8090/setup/ in a browser. Note: adjust this address to your actual server address.

1.4.2 Patching the program

As the screen above shows, Confluence now asks us for a license; next we crack it.

# First download the patch package
http://down.51cto.com/data/2236416
https://page00.ctfile.com/fs/15323800-217465309

# Stop the Confluence service first
[root@conflunce tools]# /etc/init.d/confluence stop
executing using dedicated user
If you encounter issues starting up Confluence, please see the Installation guide at
http://confluence.atlassian.com/display/DOC/Confluence+Installation+Guide
Server startup logs are located in /opt/atlassian/confluence/logs/catalina.out
Using CATALINA_BASE:   /opt/atlassian/confluence
Using CATALINA_HOME:   /opt/atlassian/confluence
Using CATALINA_TMPDIR: /opt/atlassian/confluence/temp
Using JRE_HOME:        /opt/atlassian/confluence/jre/
Using CLASSPATH:       /opt/atlassian/confluence/bin/bootstrap.jar:/opt/atlassian/confluence/bin/tomcat-juli.jar
Using CATALINA_PID:    /opt/atlassian/confluence/work/catalina.pid
Tomcat stopped.

# Remove the original jar files
[root@conflunce ~]# cd /opt/atlassian/confluence/confluence/WEB-INF/lib
[root@conflunce lib]# ll | grep atlassian-extra | wc -l
6
[root@conflunce lib]# ll | grep atlassian-extra
-rw-r--r-- 1 root root 14935 12月 1 2014 atlassian-extras-api-3.2.jar
-rw-r--r-- 1 root root 21788 12月 1 2014 atlassian-extras-common-3.2.jar
-rw-r--r-- 1 root root 38244 12月 1 2014 atlassian-extras-core-3.2.jar
-rw-r--r-- 1 root root 5171 12月 1 2014 atlassian-extras-decoder-api-3.2.jar
-rw-r--r-- 1 root root 6668 12月 1 2014 atlassian-extras-decoder-v2-3.2.jar
-rw-r--r-- 1 root root 68438 12月 1 2014 atlassian-extras-legacy-3.2.jar
[root@conflunce lib]# rm -fr atlassian-extra*

Unpack the patch package and copy the three jar files inside it, atlassian-extras-3.2.jar, Confluence-5.6.6-language-pack-zh_CN.jar, and mysql-connector-java-5.1.39-bin.jar, into /opt/atlassian/confluence/confluence/WEB-INF/lib:

wget http://15323800.144.unicom.data.tv002.com:443/down/36077cbf0624ef69db7b6416be45dbcf-1924995/confluence5.6.6%20crack.zip?cts=ot-f-D116A117A134A73Fc448e&ctp=116A117A134A73&ctt=1511507163&limit=1&spd=100000&ctk=0c1f445e181194c024eaeaa2a268a3c2&chk=36077cbf0624ef69db7b6416be45dbcf-1924995
unzip confluence5.6.6\ crack.zip
cd confluence5.6.6-crack/jar
cp ./* /opt/atlassian/confluence/confluence/WEB-INF/lib/

Of the three, atlassian-extras-3.2.jar is the license-related file, Confluence-5.6.6-language-pack-zh_CN.jar is the Chinese language pack for Confluence, and mysql-connector-java-5.1.39-bin.jar is the jar Confluence uses to connect to MySQL.

Two more pointers. Chinese language packs for all Atlassian products can be downloaded from:
https://translations.atlassian.com/dashboard/download?lang=zh_CN#/Confluence/5.6.6
And mysql-connector-java-5.1.39-bin.jar can connect to MySQL 5.7 and below; see:
http://www.w3resource.com/mysql/mysql-java-connection.php

Finally, start Confluence again:

[root@conflunce ~]# /etc/init.d/confluence start

1.4.3 Run confluence_keygen.jar on Windows

Note that a Java runtime (JDK) must be installed on the Windows machine. Fill in the Server ID shown in the web UI, then copy the generated key back into the web page.

1.5 Configure the database

Choose "direct JDBC" and enter the database username and password. Once the database has been initialized, you are taken to the next screen, where you configure Confluence's administrator account and password and enter the administrator details. The installation then completes; after the final screen, Confluence is fully installed.

1.6 References

https://www.ilanni.com/?p=11989#
https://baike.baidu.com/item/confluence/452961?fr=aladdin

Author: 惨绿少年. Source: https://www.nmtui.com. Copyright belongs to the author; reposting is welcome, but this notice must be kept unless the author agrees otherwise, and a prominent link to the original must be given on the page; the author reserves the right to pursue legal liability.


Deploying and Maintaining a Docker Environment

1. Environment

System: CentOS 6.6. Services used: haproxy, confd, etcd, docker.

Hostname         IP              Role
dockerha-152     192.168.36.152  haproxy, confd
dockerEtcd-153   192.168.36.153  etcd
dockermain-154   192.168.36.154  docker

2. Install dependencies and disable conflicting services

cd /etc/yum.repos.d
wget http://www.hop5.in/yum/el6/hop5.repo

Install the newer kernel from this repo, then edit grub's main configuration so that the first title entry is the default kernel (a freshly installed kernel normally lands in the first slot). Reboot the system, and the kernel is upgraded; the version should now be 3.8 or higher:

[root@dockermain-154 shell]# uname -r
3.10.5-3.el6.x86_64

sed -i '/^SELINUX=/c\SELINUX=disabled' /etc/selinux/config
setenforce 0

The Fedora EPEL repository already ships a docker-io package; download and install EPEL:

rpm -ivh http://mirrors.sohu.com/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm
sed -i 's/^mirrorlist=https/mirrorlist=http/' /etc/yum.repos.d/epel.repo

3. Install the components

(1) haproxy and confd, on dockerha-152

1. haproxy
#
2. confd
# wget https://github.com/kelseyhightower/confd/releases/download/v0.6.3/confd-0.6.3-linux-amd64
# mv confd /usr/local/bin/confd
# chmod +x /usr/local/bin/confd
# /usr/local/bin/confd -version

(2) etcd, on dockerEtcd-153

# mkdir -p /home/install && cd /home/install
# wget https://github.com/coreos/etcd/releases/download/v0.4.6/etcd-v0.4.6-linux-amd64.tar.gz
# tar -zxvf etcd-v0.4.6-linux-amd64.tar.gz
# cd etcd-v0.4.6-linux-amd64
# cp etcd* /bin/
# /bin/etcd -version
etcd version 0.4.6

(3) docker

# yum -y install docker-io
# service docker start
# chkconfig docker on

4. Using docker commands

(1) Get help:

docker COMMAND --help

(2) Search for available images:

docker search name

Example:

[root@dockermain-154 src]# docker search centos
NAME                               DESCRIPTION                                      STARS  OFFICIAL  AUTOMATED
centos                             The official build of CentOS.                    817    [OK]
ansible/centos7-ansible            Ansible on Centos7                               30               [OK]
tutum/centos                       Centos image with SSH access. For the root...    13               [OK]
jdeathe/centos-ssh-apache-php      CentOS-6 6.5 x86_64 / Apache / PHP / PHP m...    8                [OK]
blalor/centos                      Bare-bones base CentOS 6.5 image                 8                [OK]
jprjr/centos-php-fpm                                                                6                [OK]
steeef/graphite-centos             CentOS 6.x with Graphite and Carbon via ng...    6                [OK]
tutum/centos-6.4                   DEPRECATED. Use tutum/centos:6.4 instead....     5                [OK]
layerworx/centos                   A general CentOS 6 image with the EPEL 6 an...   2                [OK]
jr42/chef-solo-centos              Official CentOS base images with current c...    1                [OK]
million12/centos-supervisor        Base CentOS-7 with supervisord launcher, h...    1                [OK]
internavenue/centos-percona        Centos-based Percona image.                      1                [OK]
jdeathe/centos-ssh                 CentOS-6 6.5 x86_64 / EPEL Repo. / OpenSSH...    1                [OK]
jdeathe/centos-ssh-mysql           CentOS-6 6.5 x86_64 / MySQL. Image include...    1                [OK]
yajo/centos-epel                   CentOS with EPEL and fully updated               1                [OK]
nimmis/java-centos                 This is docker images of CentOS 7 with dif...    0                [OK]
lighthopper/orientdb-centos        A Dockerfile for creating an OrientDB imag...    0                [OK]
bbrietzke/centos-starter           CentOS 7 with EPEL and SupervisorD               0                [OK]
tcnksm/centos-node                 Dockerfile for CentOS packaging node             0                [OK]
insaneworks/centos                 CentOS 6.5 x86_64 + @update                      0                [OK]
snowyday/centos                    Provide for emacs and Ricty font on X11 en...    0                [OK]
dmglab/centos                      CentOS with superpowers!                         0                [OK]
akroh/centos                       Centos 6 container that has been updated w...    0                [OK]
timhughes/centos                   Centos with systemd installed and running        0                [OK]
solict/provisionous-puppet-centos  CentOS provisions with Puppet included           0                [OK]

(3) Pull an image. Note that the image name must be written in full, exactly as it appears in the NAME column of docker search name:

docker pull name

Example:

[root@dockermain-154 src]# docker pull jdeathe/centos-ssh-apache-php
Pulling repository jdeathe/centos-ssh-apache-php
........
62203f428b1f: Download complete
e1812755a4ca: Download complete
0910edda3736: Download complete
Status: Downloaded newer image for jdeathe/centos-ssh-apache-php:latest

(4) List installed images:

[root@dockermain-154 src]# docker images
REPOSITORY                     TAG     IMAGE ID      CREATED       VIRTUAL SIZE
centos                         latest  dade6cb4530a  10 days ago   210.1 MB
jdeathe/centos-ssh-apache-php  latest  f1a489312a4a  3 months ago  297.7 MB

(5) Run a command in a container. docker run takes two arguments: an image name and the command to run inside that image. Note: IMAGE = REPOSITORY[:TAG]; if the IMAGE argument gives no TAG, it defaults to latest.

[root@dockermain-154 run]# sudo docker run jdeathe/centos-ssh-apache-php echo 'hello world!'
hello world!
[root@dockermain-154 run]# sudo docker run jdeathe/centos-ssh-apache-php hostname
db7e1d2269fb

(6) List containers. Most recently created: docker ps -l; currently running: docker ps

[root@dockermain-154 run]# docker ps -l
CONTAINER ID  IMAGE                                 COMMAND     CREATED         STATUS                     PORTS  NAMES
db7e1d2269fb  jdeathe/centos-ssh-apache-php:latest  "hostname"  27 seconds ago  Exited (0) 26 seconds ago         adoring_babbage

(7) Show a container's standard output:

[root@dockermain-154 run]# docker logs db7e1d2269fb
db7e1d2269fb

(8) Install programs or services inside a container:

[root@dockermain-154 run]# sudo docker run centos yum install -y httpd
Loaded plugins: fastestmirror
..........
Dependency Installed:
  apr.x86_64 0:1.4.8-3.el7
  apr-util.x86_64 0:1.5.2-6.el7
  centos-logos.noarch 0:70.0.6-1.el7.centos
  httpd-tools.x86_64 0:2.4.6-19.el7.centos
  mailcap.noarch 0:2.1.41-2.el7
Failed:
  httpd.x86_64 0:2.4.6-19.el7.centos
Complete!

(9) Save the changes made in a container as a new image:

docker commit CONTAINERID [REPOSITORY[:TAG]]

The REPOSITORY argument can be a new image name or an existing one; if both the name and the TAG match an existing image, the old image is overwritten.

[root@dockermain-154 ~]# docker commit bd7cc4f4ac92 centos:httpd
1e0915f3247b86414ebc11fd994fc6abfb590ff3b1ab890949c845ee88b2d9f4
[root@dockermain-154 ~]# docker images
REPOSITORY                     TAG     IMAGE ID      CREATED        VIRTUAL SIZE
centos                         httpd   1e0915f3247b  9 seconds ago  320.8 MB
centos                         latest  dade6cb4530a  10 days ago    210.1 MB
jdeathe/centos-ssh-apache-php  latest  f1a489312a4a  3 months ago   297.7 MB
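An aside of my own (not from the original article): docker commit captures a container's state by hand, which is hard to reproduce later. The same centos:httpd image can be described declaratively with a Dockerfile; a rough sketch:

# Dockerfile
FROM centos
# bake Apache into the image instead of committing a modified container
RUN yum install -y httpd

Build it with docker build -t centos:httpd . from the directory containing the Dockerfile.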
(10) Stop a running container:

docker stop CONTAINERID

(11) Inspect the details of a container or image. Below is the container just committed:

[root@dockermain-154 ~]# docker inspect 943e45b6e5f3
[{
  "AppArmorProfile": "",
  "Args": [],
  "Config": {
    "AttachStderr": true,
    "AttachStdin": true,
    "AttachStdout": true,
    "Cmd": ["/bin/bash"],
    "CpuShares": 0,
    "Cpuset": "",
    "Domainname": "",
    "Entrypoint": null,
    "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],
    "ExposedPorts": null,
    "Hostname": "943e45b6e5f3",
    "Image": "centos:httpd",
    "MacAddress": "",
    "Memory": 0,
    "MemorySwap": 0,
    "NetworkDisabled": false,
    "OnBuild": null,
    "OpenStdin": true,
    "PortSpecs": null,
    "StdinOnce": true,
    "Tty": true,
    "User": "",
    "Volumes": null,
    "WorkingDir": ""
  },

(12) Remove containers:

docker rm CONTAINERID
List all container IDs: docker ps -a -q
Remove all containers: docker rm $(docker ps -a -q)

(13) Remove images:

docker rmi IMAGE

(14) Show docker information, including the number of Containers and Images, the kernel version, and so on:

[root@dockermain-154 ~]# docker info
Containers: 14
Images: 56
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 89
Execution Driver: native-0.2
Kernel Version: 3.10.5-3.el6.x86_64
Operating System: <unknown>
CPUs: 1
Total Memory: 989.6 MiB
Name: dockermain-154
ID: W4PW:W3XR:FQZE:SBAA:2DS2:BM6N:DV5B:ARF2:3SZM:XGST:5ZF7:DFZV
WARNING: No swap limit support

(15) Create a container and use it like an ordinary SSH login:

[root@dockermain-154 ~]# docker run -i -t centos /bin/bash
[root@7c0414d03fe7 /]# ls
bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin selinux srv sys tmp usr var

Reposted from chengxuyonghu's 51CTO blog; original: http://blog.51cto.com/6226001001/1896112. Please contact the original author for reprints.


Deploying a Swarm Mode Cluster

Environment

Hostname (role)   IP
swarm-manager     172.16.100.20
swarm-node1       172.16.100.22
swarm-node2       172.16.100.22

Hostnames must not be changed after a node joins a swarm mode cluster.

Prerequisites

Docker Engine 1.12 or newer installed
TCP port 2377 open for cluster management traffic
TCP/UDP port 7946 open for node-to-node communication (container network discovery)
UDP port 4789 open for the overlay network type

Create the swarm mode cluster

[root@swarm-manager ~]# docker swarm init --advertise-addr 172.16.100.20
Swarm initialized: current node (sc21k9597zasfjaf6cfpuyvy6) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-10rnutvx6cpja7wv88k7ydywpvjjz1xsj88on00s43te740xca-1pwd9juzgpwnxlxne7p2g93va 172.16.100.20:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@swarm-manager ~]# docker info | grep -A5 Swarm
Swarm: active
 NodeID: pg6fteetxsezu2ygyd3b0joye
 Is Manager: true
 ClusterID: yb5c85p7o054sxp1hb8ieqw43
 Managers: 1
 Nodes: 1

[root@swarm-manager ~]# docker node ls
ID                          HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
pg6fteetxsezu2ygyd3b0joye * swarm-manager  Ready   Active        Leader

[root@swarm-manager ~]# netstat -lntp | grep docker
tcp6 0 0 :::2377 :::* LISTEN 1249/dockerd
tcp6 0 0 :::7946 :::* LISTEN 1249/dockerd

Add nodes to the swarm mode cluster

A swarm mode cluster has manager and worker nodes; the join command for each role can be obtained with docker swarm join-token [manager|worker]:

[root@swarm-manager ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-5vp5axn28a2cbrtzxlirktbhpnluayacuw81zqacooe3ooe2o3-6ys543fe4zkeagkoaacgaqe3e 172.16.100.20:2377
[root@swarm-manager ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-5vp5axn28a2cbrtzxlirktbhpnluayacuw81zqacooe3ooe2o3-64gphy50682jszwc19nn0onpc 172.16.100.20:2377

Run the docker swarm join command on node1 and node2 to add them as workers:

[root@swarm-node1 ~]# docker swarm join --token SWMTKN-1-5vp5axn28a2cbrtzxlirktbhpnluayacuw81zqacooe3ooe2o3-64gphy50682jszwc19nn0onpc 172.16.100.20:2377

Manage the swarm mode cluster nodes

[root@swarm-manager ~]# docker node ls
ID                          HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
lati2179dcwgkvvkc0qcieoim   swarm-node2    Ready   Active
pg6fteetxsezu2ygyd3b0joye * swarm-manager  Ready   Active        Leader
y83k6khc3vxmch1qd3j8kl4ak   swarm-node1    Ready   Active

Promote / demote nodes

Promote worker nodes to managers:

[root@swarm-manager ~]# docker node promote swarm-node1 swarm-node2
Node swarm-node1 promoted to a manager in the swarm.
Node swarm-node2 promoted to a manager in the swarm.
# docker node ls
ID                          HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
lati2179dcwgkvvkc0qcieoim   swarm-node2    Ready   Active        Reachable
pg6fteetxsezu2ygyd3b0joye * swarm-manager  Ready   Active        Leader
y83k6khc3vxmch1qd3j8kl4ak   swarm-node1    Ready   Active        Reachable

Demote manager nodes to workers:

[root@swarm-manager ~]# docker node demote swarm-node1 swarm-node2
Manager swarm-node1 demoted in the swarm.
Manager swarm-node2 demoted in the swarm.
[root@swarm-manager ~]# docker node ls
ID                          HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
lati2179dcwgkvvkc0qcieoim   swarm-node2    Ready   Active
pg6fteetxsezu2ygyd3b0joye * swarm-manager  Ready   Active        Leader
y83k6khc3vxmch1qd3j8kl4ak   swarm-node1    Ready   Active
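A related command that fits here (my addition, not in the original article): a node can also be taken out of scheduling for maintenance without demoting or removing it, by draining it and later reactivating it:

[root@swarm-manager ~]# docker node update --availability drain swarm-node1
[root@swarm-manager ~]# docker node update --availability active swarm-node1

Service tasks running on a drained node are rescheduled onto the remaining active nodes.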
Remove nodes

To remove a node, first run docker swarm leave on the worker to set its status to Down, then run docker node rm <node-name> on a manager. To remove a manager node, do not force it out with --force; demote it first, then remove it.

[root@swarm-manager ~]# docker swarm leave
Error response from daemon: You are attempting to leave the swarm on a node that is participating as a manager. Removing the last manager erases all current state of the swarm. Use `--force` to ignore this message.
[root@swarm-manager ~]# docker node rm swarm-node1
Error response from daemon: rpc error: code = 9 desc = node y83k6khc3vxmch1qd3j8kl4ak is not down and can't be removed
[root@swarm-node2 ~]# docker swarm leave
Node left the swarm.
[root@swarm-manager ~]# docker node ls
ID                          HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
lati2179dcwgkvvkc0qcieoim   swarm-node2    Down    Active
pg6fteetxsezu2ygyd3b0joye * swarm-manager  Ready   Active        Leader
y83k6khc3vxmch1qd3j8kl4ak   swarm-node1    Ready   Active
[root@swarm-manager ~]# docker node rm swarm-node2
swarm-node2
[root@swarm-manager ~]# docker node ls
ID                          HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
pg6fteetxsezu2ygyd3b0joye * swarm-manager  Ready   Active        Leader
y83k6khc3vxmch1qd3j8kl4ak   swarm-node1    Ready   Active

Reposted from Vnimos' 51CTO blog; original: http://blog.51cto.com/vnimos/2053237. Please contact the original author for reprints.


Kubernetes Cluster Deployment, Part 2

1. Configure and enable the etcd cluster

A. Create the systemd unit and distribute it to the other nodes

# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# ansible node -m copy -a 'src=/usr/lib/systemd/system/etcd.service dest=/usr/lib/systemd/system/'

B. Create etcd's working and data directories

# mkdir -p /var/lib/etcd/ && mkdir -p /etc/etcd
# ansible node -m file -a 'path=/var/lib/etcd state=directory'
# ansible node -m file -a 'path=/etc/etcd state=directory'

C. Write etcd.conf and distribute it to the other nodes

# export ETCD_NAME=etcd1
# export INTERNAL_IP=192.168.100.102
# cat <<EOF > etcd.conf
name: '${ETCD_NAME}'
data-dir: "/var/lib/etcd/"
listen-peer-urls: https://${INTERNAL_IP}:2380
listen-client-urls: https://${INTERNAL_IP}:2379,https://127.0.0.1:2379
initial-advertise-peer-urls: https://${INTERNAL_IP}:2380
advertise-client-urls: https://${INTERNAL_IP}:2379
initial-cluster: "etcd1=https://192.168.100.102:2380,etcd2=https://192.168.100.103:2380,etcd3=https://192.168.100.104:2380"
initial-cluster-token: 'etcd-cluster'
# initial cluster state ('new' or 'existing')
initial-cluster-state: 'new'
client-transport-security:
  cert-file: /etc/kubernetes/ssl/etcd.pem
  key-file: /etc/kubernetes/ssl/etcd-key.pem
  trusted-ca-file: /etc/kubernetes/ssl/ca.pem
peer-transport-security:
  cert-file: /etc/kubernetes/ssl/etcd.pem
  key-file: /etc/kubernetes/ssl/etcd-key.pem
  trusted-ca-file: /etc/kubernetes/ssl/ca.pem
EOF
# mv etcd.conf /etc/etcd/
## change the etcd name and IP, then distribute to the matching node
# ansible 192.168.100.103 -m copy -a 'src=etcd.conf dest=/etc/etcd/etcd.conf'
# ansible 192.168.100.104 -m copy -a 'src=etcd.conf dest=/etc/etcd/etcd.conf'

D. Start the etcd cluster

# systemctl start etcd
# systemctl status etcd
# systemctl enable etcd
# ansible node -a 'systemctl start etcd'
# ansible node -a 'systemctl status etcd'
# ansible node -a 'systemctl enable etcd'

Note: the first etcd node will report a failed start; that is only because it cannot yet detect any other live member.

E. List the cluster members

# etcdctl --endpoints=https://192.168.100.102:2379 member list
32293bbc65784dda: name=etcd1 peerURLs=https://192.168.100.102:2380 clientURLs=https://192.168.100.102:2379 isLeader=true
703725a0e421bc44: name=etcd2 peerURLs=https://192.168.100.103:2380 clientURLs=https://192.168.100.103:2379 isLeader=false
78ac8de330c5272a: name=etcd3 peerURLs=https://192.168.200.104:2380 clientURLs=https://192.168.100.104:2379 isLeader=false

F. Check the cluster's health

# etcdctl --endpoints=https://192.168.100.102:2379 cluster-health
member 32293bbc65784dda is healthy: got healthy result from https://192.168.100.102:2379
member 703725a0e421bc44 is healthy: got healthy result from https://192.168.100.103:2379
member 78ac8de330c5272a is healthy: got healthy result from https://192.168.100.104:2379
cluster is healthy
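As a small extra sanity check of my own (not part of the original walkthrough), a test key can be written and read back with the same v2 etcdctl syntax used above:

# etcdctl --endpoints=https://192.168.100.102:2379 set /test ok
ok
# etcdctl --endpoints=https://192.168.100.102:2379 get /test
ok
# etcdctl --endpoints=https://192.168.100.102:2379 rm /test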
2. Configure and enable flanneld

A. Create the systemd unit and distribute it to the other nodes

# vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

# ansible node -m copy -a 'src=/usr/lib/systemd/system/flanneld.service dest=/usr/lib/systemd/system/'

# cat <<'EOF' > /usr/bin/flanneld-start
#!/bin/sh
exec /usr/bin/flanneld \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \
  "$@"
EOF
# chmod 755 /usr/bin/flanneld-start
# ansible node -m copy -a 'src=/usr/bin/flanneld-start dest=/usr/bin/ mode=755'

B. Write the flannel configuration and distribute it

# cat <<EOF > /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="https://192.168.100.102:2379,https://192.168.100.103:2379,https://192.168.100.104:2379"
FLANNEL_ETCD_PREFIX="/kube/network"
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/etcd.pem -etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem"
EOF
# ansible node -m copy -a 'src=/etc/sysconfig/flanneld dest=/etc/sysconfig/'

C. Create flannel's directory in etcd and add the network configuration

# etcdctl --endpoints=https://192.168.100.102:2379 mkdir /kube/network
# etcdctl --endpoints=https://192.168.100.102:2379 set /kube/network/config '{"Network":"10.254.0.0/16"}'
{"Network":"10.254.0.0/16"}

D. Start flanneld

# systemctl start flanneld
# systemctl status flanneld
# systemctl enable flanneld
# ansible node -a 'systemctl start flanneld'
# ansible node -a 'systemctl status flanneld'
# ansible node -a 'systemctl enable flanneld'

E. Check the subnet assigned to each node

# cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.80.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
# ansible node -a "cat /var/run/flannel/subnet.env"
192.168.100.104 | SUCCESS | rc=0 >>
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.95.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false

192.168.100.103 | SUCCESS | rc=0 >>
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.59.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false

F. Switch docker's network to the subnet flannel assigned

# export FLANNEL_SUBNET=10.254.80.1/24
# cat <<EOF > daemon.json
{
  "bip": "$FLANNEL_SUBNET"
}
EOF
# mkdir -p /etc/docker/ && mv daemon.json /etc/docker/
## change the subnet for each node, then distribute to the matching node
# ansible node -m file -a 'path=/etc/docker/ state=directory'
# ansible 192.168.100.103 -m copy -a 'src=daemon.json dest=/etc/docker/daemon.json'
# ansible 192.168.100.104 -m copy -a 'src=daemon.json dest=/etc/docker/daemon.json'
# systemctl daemon-reload
# systemctl restart docker
# ansible node -a "systemctl daemon-reload"
# ansible node -a "systemctl restart docker"
G. Verify that the expected subnets are in the routing tables

# route -n
Kernel IP routing table
Destination     Gateway        Genmask        Flags Metric Ref Use Iface
0.0.0.0         192.168.100.2  0.0.0.0        UG    100    0   0   ens33
10.254.0.0      0.0.0.0        255.255.0.0    U     0      0   0   flannel0
10.254.80.0     0.0.0.0        255.255.255.0  U     0      0   0   docker0
192.168.100.0   0.0.0.0        255.255.255.0  U     100    0   0   ens33
# ansible node -a 'route -n'
192.168.100.103 | SUCCESS | rc=0 >>
Kernel IP routing table
Destination     Gateway        Genmask        Flags Metric Ref Use Iface
0.0.0.0         192.168.100.2  0.0.0.0        UG    100    0   0   ens33
10.254.0.0      0.0.0.0        255.255.0.0    U     0      0   0   flannel0
10.254.59.0     0.0.0.0        255.255.255.0  U     0      0   0   docker0
192.168.100.0   0.0.0.0        255.255.255.0  U     100    0   0   ens33

192.168.100.104 | SUCCESS | rc=0 >>
Kernel IP routing table
Destination     Gateway        Genmask        Flags Metric Ref Use Iface
0.0.0.0         192.168.100.2  0.0.0.0        UG    100    0   0   ens33
10.254.0.0      0.0.0.0        255.255.0.0    U     0      0   0   flannel0
10.254.95.0     0.0.0.0        255.255.255.0  U     0      0   0   docker0
192.168.100.0   0.0.0.0        255.255.255.0  U     100    0   0   ens33

H. Inspect flannel's data with etcdctl

# etcdctl --endpoints=https://192.168.100.102:2379 ls /kube/network/subnets
/kube/network/subnets/10.254.80.0-24
/kube/network/subnets/10.254.59.0-24
/kube/network/subnets/10.254.95.0-24
# etcdctl --endpoints=https://192.168.100.102:2379 -o extended get /kube/network/subnets/10.254.80.0-24
Key: /kube/network/subnets/10.254.80.0-24
Created-Index: 10
Modified-Index: 10
TTL: 85486
Index: 12
{"PublicIP":"192.168.100.102"}
# etcdctl --endpoints=https://192.168.100.102:2379 -o extended get /kube/network/subnets/10.254.59.0-24
Key: /kube/network/subnets/10.254.59.0-24
Created-Index: 11
Modified-Index: 11
TTL: 85449
Index: 12
{"PublicIP":"192.168.100.103"}
# etcdctl --endpoints=https://192.168.100.102:2379 -o extended get /kube/network/subnets/10.254.95.0-24
Key: /kube/network/subnets/10.254.95.0-24
Created-Index: 12
Modified-Index: 12
TTL: 85399
Index: 12
{"PublicIP":"192.168.100.104"}

I. Test that the network works

# ping -c 4 10.254.59.1
# ping -c 4 10.254.95.1
3. Configure and enable the Kubernetes Master node

The Kubernetes Master node runs these components:

kube-apiserver
kube-controller-manager
kube-scheduler

A. Write the shared config file and distribute it to all nodes

# grep ^[A-Z] /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBE_MASTER="--master=http://192.168.100.102:8080"
# ansible node -m copy -a 'src=/etc/kubernetes/config dest=/etc/kubernetes/'

B. Create the kube-apiserver systemd unit

# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

C. Write the apiserver config file

# grep ^[A-Z] /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--advertise-address=192.168.100.102 --bind-address=192.168.100.102 --insecure-bind-address=192.168.100.102"
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.100.102:2379,https://192.168.100.103:2379,https://192.168.100.104:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota"
KUBE_API_ARGS="--authorization-mode=RBAC,Node --kubelet-https=true --service-node-port-range=30000-42767 --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/etcd.pem --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem --enable-swagger-ui=true --event-ttl=1h --basic-auth-file=/etc/kubernetes/basic_auth_file"

D. Configure a basic-auth user

# echo admin,admin,1 > /etc/kubernetes/basic_auth_file

Format: username, password, and UID.

E. Start kube-apiserver

# systemctl start kube-apiserver
# systemctl status kube-apiserver
# systemctl enable kube-apiserver

F. Bind the admin user to the cluster-admin ClusterRole and verify

# kubectl get clusterrole/cluster-admin -o yaml
# kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
clusterrolebinding "login-on-dashboard-with-cluster-admin" created
# kubectl get clusterrolebinding/login-on-dashboard-with-cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2017-10-31T10:35:06Z
  name: login-on-dashboard-with-cluster-admin
  resourceVersion: "116"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/login-on-dashboard-with-cluster-admin
  uid: 292ae19a-be27-11e7-853b-000c297aff5d
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin

G. Create the kube-controller-manager systemd unit

# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

H. Write the controller-manager config file

# grep ^[A-Z] /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem"

I. Start kube-controller-manager

# systemctl start kube-controller-manager
# systemctl status kube-controller-manager
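Before continuing, a quick check of my own (not in the original) that the API server answers on the insecure port configured in KUBE_API_ADDRESS above:

# curl http://192.168.100.102:8080/healthz
ok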
J. Create the kube-scheduler systemd unit

# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

K. Write the scheduler config file

# grep ^[A-Z] /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=127.0.0.1"

L. Start kube-scheduler

# systemctl start kube-scheduler
# systemctl status kube-scheduler

M. Verify the Master node

# kubectl get cs
# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

4. Configure and enable the Kubernetes Node nodes

A Kubernetes Node runs these components:

kubelet
kube-proxy

A. Grant kubelet its bootstrap permissions and create its data directory

When kubelet first starts, it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests.

On the Master:

# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
# mkdir -p /var/lib/kubelet
# ansible node -m file -a 'path=/var/lib/kubelet state=directory'

B. Create the kubelet systemd unit and distribute it to the node machines

# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

## distribute to the other Node machines
# ansible node -m copy -a 'src=/usr/lib/systemd/system/kubelet.service dest=/usr/lib/systemd/system/'

C. Write the kubelet config file

# export KUBELET_ADDRESS=192.168.100.102
# export KUBELET_HOSTNAME=Master
# cat <<EOF > kubelet
KUBELET_ADDRESS="--address=$KUBELET_ADDRESS"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=$KUBELET_HOSTNAME"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=hub.c.163.com/k8s163/pause-amd64:3.0"
KUBELET_ARGS="--cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --fail-swap-on=false --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --serialize-image-pulls=false"
EOF
# mv kubelet /etc/kubernetes/
## change the kubelet IP and hostname, then distribute to the matching node
# ansible 192.168.100.103 -m copy -a 'src=kubelet dest=/etc/kubernetes/'
# ansible 192.168.100.104 -m copy -a 'src=kubelet dest=/etc/kubernetes/'

D. Start kubelet

# systemctl start kubelet
# systemctl status kubelet
# ansible node -a 'systemctl start kubelet'
# ansible node -a 'systemctl status kubelet'
E. Join the Nodes to the Kubernetes cluster

On first start, kubelet sends a certificate signing request to kube-apiserver; a Node is only added to the cluster after the Master approves the request.

# kubectl get nodes
No resources found.
# kubectl get csr        ### list the pending CSR requests
NAME                           AGE  REQUESTOR          CONDITION
node-csr-ZU39iUu-E9FuadeTq58   9s   kubelet-bootstrap  Pending
node-csr-sJXwbG8c9UUGS8iTrV81  5s   kubelet-bootstrap  Pending
node-csr-zpol8cIJZfrcU8fd7l4   15s  kubelet-bootstrap  Pending
# kubectl certificate approve node-csr-ZU39iphJAYDQfsLssAwMViUu-E9Fua2pKhELMdeTq58   ### approve the CSR (approve the other two nodes with the same command)
certificatesigningrequest "node-csr-ZU39iphJAYDQfsLssAwMViUu-E9Fua2pKhELMdeTq58" approved
# kubectl get csr
NAME                           AGE  REQUESTOR          CONDITION
node-csr-ZU39iUu-E9FuadeTq58   50s  kubelet-bootstrap  Approved,Issued
node-csr-sJXwbG8c9UUGS8iTrV81  1m   kubelet-bootstrap  Approved,Issued
node-csr-zpol8cIJZfrcU8fd7l4   1m   kubelet-bootstrap  Approved,Issued
# kubectl get nodes   ## list the nodes
NAME    STATUS  AGE  VERSION
master  Ready   1m   v1.8.2
node1   Ready   2m   v1.8.2
node2   Ready   2m   v1.8.2

Note: after a CSR is approved, the kubelet kubeconfig file and key pair are generated automatically on the Node:

# ls /etc/kubernetes/kubelet.kubeconfig
/etc/kubernetes/kubelet.kubeconfig
# ls /etc/kubernetes/ssl/kubelet*
/etc/kubernetes/ssl/kubelet-client.crt /etc/kubernetes/ssl/kubelet-client.key /etc/kubernetes/ssl/kubelet.crt /etc/kubernetes/ssl/kubelet.key

F. Create the kube-proxy systemd unit and distribute it to the other node machines

# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

## distribute to the other Node machines
# ansible node -m copy -a 'src=/usr/lib/systemd/system/kube-proxy.service dest=/usr/lib/systemd/system/'

G. Adjust kernel parameters

# grep -v ^# /etc/sysctl.conf     ### settings needed by kube-proxy's proxy mode
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# sysctl -p                        ### load the parameters
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# ansible node -m copy -a 'src=/etc/sysctl.conf dest=/etc/'
# ansible node -a 'sysctl -p'
192.168.100.104 | SUCCESS | rc=0 >>
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

192.168.100.103 | SUCCESS | rc=0 >>
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

Explanation: these parameters are added because kube-proxy forwards traffic with iptables, while Linux disables packet forwarding by default; I only noticed this when NodePort services turned out to be unreachable.

H. Write the kube-proxy config file and distribute it to the other node machines

# export KUBE_PROXY=192.168.100.102
# cat <<EOF > proxy
KUBE_PROXY_ARGS="--bind-address=$KUBE_PROXY --hostname-override=$KUBE_PROXY --cluster-cidr=10.254.0.0/16 --proxy-mode=iptables --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"
EOF
# mv proxy /etc/kubernetes/proxy
## change the kube-proxy IP, then distribute to the matching node
# ansible 192.168.100.103 -m copy -a 'src=proxy dest=/etc/kubernetes/'
# ansible 192.168.100.104 -m copy -a 'src=proxy dest=/etc/kubernetes/'

I. Start kube-proxy

# systemctl start kube-proxy
# systemctl status kube-proxy
# ansible node -a 'systemctl start kube-proxy'
# ansible node -a 'systemctl status kube-proxy'
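To see what kube-proxy actually programmed (my own check, not in the original), list the NAT chain it maintains in iptables mode:

# iptables -t nat -L KUBE-SERVICES -n

The ClusterIP of the default kubernetes service (10.254.0.1) should appear among the rules once services exist.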
J. Inspect the nodes

# kubectl get nodes -o wide   ### detailed node information (the container runtime shows Unknown here because the version is relatively new)
NAME    STATUS  ROLES   AGE  VERSION  EXTERNAL-IP  OS-IMAGE               KERNEL-VERSION               CONTAINER-RUNTIME
master  Ready   <none>  3m   v1.8.2   <none>       CentOS Linux 7 (Core)  3.10.0-693.2.2.el7.x86_64    docker://Unknown
node1   Ready   <none>  3m   v1.8.2   <none>       CentOS Linux 7 (Core)  3.10.0-693.2.2.el7.x86_64    docker://Unknown
node2   Ready   <none>  3m   v1.8.2   <none>       CentOS Linux 7 (Core)  3.10.0-693.2.2.el7.x86_64    docker://Unknown
# kubectl get node --show-labels=true
# kubectl get nodes --show-labels   ### node labels
NAME    STATUS  ROLES   AGE  VERSION  LABELS
master  Ready   <none>  4m   v1.8.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master
node1   Ready   <none>  5m   v1.8.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node1
node2   Ready   <none>  5m   v1.8.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2
# kubectl version --short   ### kubectl version information
Client Version: v1.8.2
Server Version: v1.8.2
# curl   ### health check
ok
# kubectl cluster-info   ### cluster information
Kubernetes master is running at https://192.168.100.102:6443
# kubectl get ns   ### list all namespaces
# kubectl get namespace
NAME         STATUS  AGE
default      Active  29m
kube-public  Active  29m
kube-system  Active  29m
# kubectl get services   ### the default service
NAME        TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
kubernetes  ClusterIP  10.254.0.1  <none>       443/TCP  36m
# kubectl get services --all-namespaces   ### all services
NAMESPACE  NAME        TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
default    kubernetes  ClusterIP  10.254.0.1  <none>       443/TCP  37m
# kubectl get ep   ### endpoints
# kubectl get endpoints
NAME        ENDPOINTS             AGE
kubernetes  192.168.100.102:6443  38m
# kubectl get sa   ### service accounts
# kubectl get serviceaccount
NAME     SECRETS  AGE
default  1        38m

Explanation: other commands can be found with kubectl --help, and some long commands can be abbreviated (e.g. kubectl get namespace shortens to kubectl get ns). A kubectl command reference is at:
http://docs.kubernetes.org.cn/683.html

Reposted from 结束的伤感's 51CTO blog; original: http://blog.51cto.com/wangzhijian/2046124


Deploying LNMP with Separate Docker Containers

All of the files, images, and software needed below are shared on my Baidu Cloud if you need them:
Link: http://pan.baidu.com/s/1kUVNdsj   Password: an9l

Project requirements: build an LNMP stack, with nginx, PHP, and MySQL deployed separately, and nginx supporting PHP dynamic pages via FastCGI.

The complete layout of the experiment is as follows:

Note: we use single-process containers, i.e. each container runs exactly one service, rather than a design that puts every service in one container; the Nginx, PHP, and MySQL components the LNMP project needs each run in a separate container created from its own image.

The steps are as follows:

1. Install docker 1.12 and start the service

1) Install docker 1.12
2) Start the docker service and enable it at boot
3) Disable SELinux (this must be done)
4) Enable IP forwarding
5) Pull the centos6 image (I had already downloaded it and packed it into an archive, so it only needs to be unpacked)

2. Create the folders and files used in the experiment

1) Create the working directories
2) Then create the files and subdirectories under each of them

3. Write the dockerfiles and supervisord.conf files for nginx, php, and mysql

1) nginx

① The nginx dockerfile:

#images of nginx
FROM centos:centos6
MAINTAINER from zhengpengfei@example.com
#install supervisor
RUN yum -y install python-setuptools
RUN /usr/bin/easy_install supervisor
#install nginx
RUN yum -y install pcre-devel zlib-devel gcc make
ADD ./software/nginx-1.6.2.tar.gz /usr/src
RUN useradd -M -s /sbin/nologin nginx
RUN cd /usr/src/nginx-1.6.2/ && ./configure --prefix=/usr/local/nginx --with-http_stub_status_module --user=nginx --group=nginx && make && make install
#Modify nginx configuration file
COPY nginx.conf /usr/local/nginx/conf/
RUN ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin
RUN mkdir /usr/local/nginx/html/web
#Open nginx service
COPY supervisord.conf /etc/supervisor/supervisord.conf
EXPOSE 80
CMD ["/usr/bin/supervisord"]

② Write nginx's supervisord.conf configuration file
③ Build the nginx image with docker build -t
④ Image built

2) php

① The php dockerfile:

#images of php
FROM centos:centos6
MAINTAINER from zhengpengfei@example.com
#install supervisor
RUN yum -y install python-setuptools
RUN /usr/bin/easy_install supervisor
#install php
RUN yum -y install gd libxml2-devel libjpeg-devel libpng-devel mysql-devel gcc make
RUN useradd -M -s /sbin/nologin php
ADD ./software/php-5.3.28.tar.gz /usr/src
RUN cd /usr/src/php-5.3.28/
RUN cp /usr/lib64/mysql/libmysqlclient.so.16.0.0 /usr/lib/libmysqlclient.so
RUN cd /usr/src/php-5.3.28/ && ./configure --prefix=/usr/local/php --with-gd --with-zlib --with-mysql --with-mysqli --with-mysql-sock --with-config-file-path=/usr/local/php --enable-mbstring --enable-fpm --with-jpeg-dir=/usr/lib && make && make install
#Modify PHP configuration file
RUN cp /usr/local/php/etc/php-fpm.conf.default /usr/local/php/etc/php-fpm.conf
COPY php-fpm.conf /usr/local/php/etc/
RUN mkdir -p /var/www/html/web
#Open php-fpm service
ADD ./supervisord.conf /etc/supervisor/supervisord.conf
EXPOSE 9000
CMD ["/usr/bin/supervisord"]

② Write php's supervisord.conf configuration file
③ Build the php image with docker build -t
④ Image built

3) mysql

① The mysql dockerfile:

#image of mysql
FROM centos:centos6
MAINTAINER from zhengpengfei@example.com
#install supervisor
RUN yum -y install python-setuptools
RUN /usr/bin/easy_install supervisor
#install mysql
RUN yum -y install ncurses-devel make gcc gcc-c++
ADD ./software/cmake-2.8.12.tar.gz /usr/src
ADD ./software/mysql-5.5.38.tar.gz /usr/src
RUN cd /usr/src/cmake-2.8.12 && ./configure && gmake && gmake install
RUN cd /usr/src/mysql-5.5.38 && cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DSYSCONFDIR=/etc/ -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all && make && make install
#Optimal adjustment mysql
WORKDIR /usr/src/mysql-5.5.38/
RUN cp -rf ./support-files/my-medium.cnf /etc/my.cnf
RUN cp -rf ./support-files/mysql.server /etc/rc.d/init.d/mysqld
RUN chmod +x /etc/rc.d/init.d/mysqld
RUN echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
RUN source /etc/profile
#Initialize mysql
RUN groupadd mysql
RUN useradd -M -s /sbin/nologin mysql -g mysql
RUN chown -R mysql:mysql /usr/local/mysql
RUN /usr/local/mysql/scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data/
#service mysql start
ADD ./supervisord.conf /etc/supervisor/supervisord.conf
EXPOSE 3306
CMD ["/usr/bin/supervisord"]

② Write mysql's supervisord.conf configuration file
③ Build the mysql image with docker build -t
④ Image built

4. Write the docker-compose.yml file
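The compose file itself appeared only as a screenshot in the original, so here is a minimal sketch of what such a file could look like; the service names, image names, and the compose version below are my own assumptions:

version: '2'
services:
  mysql:
    image: lnmp/mysql        # image built in step 3 (name assumed)
    expose:
      - "3306"
  php:
    image: lnmp/php          # name assumed
    links:
      - mysql
    expose:
      - "9000"
  nginx:
    image: lnmp/nginx        # name assumed
    links:
      - php
    ports:
      - "80:80"

Only nginx publishes a host port; php and mysql are reached container-to-container over the links, which matches the separated single-process design described above.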
5. Install docker-compose

1) Install pip first
2) Then install compose

6. Start the project with docker-compose

7. Enter the mysql container to change the database root password and create a database and an authorized user

1) Change the database root password
2) Create the database
3) Create an authorized user
4) Grant the root user all privileges

8. Test the html and php pages and database access

1) Create html and php test pages
2) Test that nginx and php handle requests
3) Create a database test page
4) Test the database connection

At this point nginx, php, and mysql are confirmed to be working together without problems.

9. Install a movie site as the final test of the LNMP stack working together

1) Unpack SKYUC
2) Set permissions, granting them in the php and nginx containers respectively:
   php:
   nginx:
3) Open an exception for port 80 in the host firewall
4) Install SKYUC from a client machine

Reposted from Mr大表哥's blog; original: http://blog.51cto.com/zpf666/1905555. Please contact the original author for reprints.


Deploying Hive UDF Functions

1. Adding a UDF temporarily

1. Upload the jar to the hive server.
2. In the hive shell, run:

add jar /home/hive/hivejar/billing-on-hive-1.0.jar;
create temporary function strip as 'com.tsingzone.bigdata.billing.GetOperator';

Notes: strip is the user-chosen function name, and com.tsingzone.bigdata.billing.GetOperator is the class name. The function only exists for the current shell session.

3. Usage:

select strip(dest_termi_id) from huadan201601 limit 10;

2. Adding a UDF permanently

1. Upload the jar into HDFS:

hdfs dfs -put hivejar/billing-on-hive-1.0.jar /user/hive/hive_jar

2. Create the function:

create function billing as 'com.tsingzone.bigdata.billing.GetOperator' using jar 'hdfs:///user/hive/hive_jar/billing-on-hive-1.0.jar';

3. Usage:

hive -S -e "select billing(dest_termi_id) from huadan201601 limit 10;"

Reference: http://blog.csdn.net/liam08/article/details/51311772

Reposted from 穿越防火墙's 51CTO blog; original: http://blog.51cto.com/sjitwant/1932990
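To double-check that the permanent function is registered (my own step, not from the original), Hive can describe it:

hive> describe function billing;
hive> describe function extended billing;

describe function prints the class that the function name resolves to, which quickly catches a typo in the class name or the jar path.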
