k8s Practice 6: Getting Started with RBAC by Fixing Errors
1.
While running a k8s cluster you constantly hit RBAC permission problems.
I recorded a few of the errors; see below:
Error 1:
"message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
"message": "pservices is forbidden: User \"kubernetes\" cannot list resource \"pservices\" in API group \"\" at the cluster scope"
Error 2:
[root@k8s-master2 ~]# curl https://192.168.32.127:8443/logs --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/logs\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}
curl https://192.168.32.127:8443/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}
So it is well worth digging into the fundamentals of RBAC.
2.
Let's start by analyzing the errors.
Error 1:
"message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
First, look at the command that produced this error:
[root@k8s-master1 ~]# curl https://192.168.32.127:8443/api/v1/pods --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "pods is forbidden: User \"kubernetes\" cannot list resource \"pods\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
What does this error mean?
Taken literally: the user kubernetes has no permission in the core API group ("") and therefore cannot list the pods resource.
Fixing this error is where our RBAC tutorial begins.
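As a preview of the kind of fix we will work toward in the rest of this post: granting list on pods takes a ClusterRole plus a ClusterRoleBinding along the following lines. This is only a sketch — the names pod-lister and pod-lister-binding are invented for illustration, not objects that exist in the cluster.

```yaml
# Hypothetical example: grant the user "kubernetes" permission to list pods
# cluster-wide. The role and binding names are invented for this sketch.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-lister
rules:
- apiGroups: [""]          # "" is the core API group — the empty group in the error message
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-lister-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-lister
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
```

Applied with kubectl apply -f, this should make the curl against /api/v1/pods above return the pod list instead of a 403.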
3.
Where did the user kubernetes come from?
This user was generated when we deployed the apiserver; it is the user the apiserver uses to access etcd.
Search for the permissions and group bindings of the user kubernetes:
[root@k8s-master1 ~]# kubectl describe clusterrolebindings |grep -B 9 "User kubernetes "
Name:         discover-base-url
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"discover-base-url","namespace":""},"roleR...
Role:
  Kind:  ClusterRole
  Name:  discover_base_url
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes
--
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"roleRef"...
Role:
  Kind:  ClusterRole
  Name:  kube-apiserver
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes
Their permissions:
[root@k8s-master1 ~]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/]                []              [get]
[root@k8s-master1 ~]#
##Note: this rule is the one we added in the previous post when setting up the apiserver.
[root@k8s-master1 ~]# kubectl describe clusterroles kube-apiserver
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGr...
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/metrics  []                 []              [get create]
  nodes/proxy    []                 []              [get create]
[root@k8s-master1 ~]#
##One of these roles uses Resources
##The other uses Non-Resource URLs
4.
This raises question 1:
What is a non-resource URL?
I googled for quite a while and only found scraps. The following is my own understanding.
Look back at what the apiserver returned when we queried it in the previous post:
[root@k8s-master1 ~]# curl https://192.168.32.127:8443/ --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "paths": [
    "/api", "/api/v1", "/apis", "/apis/",
    "/apis/admissionregistration.k8s.io", "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io", "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io", "/apis/apiregistration.k8s.io/v1", "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps", "/apis/apps/v1", "/apis/apps/v1beta1", "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io", "/apis/authentication.k8s.io/v1", "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io", "/apis/authorization.k8s.io/v1", "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling", "/apis/autoscaling/v1", "/apis/autoscaling/v2beta1", "/apis/autoscaling/v2beta2",
    "/apis/batch", "/apis/batch/v1", "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io", "/apis/certificates.k8s.io/v1beta1",
    "/apis/coordination.k8s.io", "/apis/coordination.k8s.io/v1beta1",
    "/apis/events.k8s.io", "/apis/events.k8s.io/v1beta1",
    "/apis/extensions", "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io", "/apis/networking.k8s.io/v1",
    "/apis/policy", "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io", "/apis/rbac.authorization.k8s.io/v1", "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/scheduling.k8s.io", "/apis/scheduling.k8s.io/v1beta1",
    "/apis/storage.k8s.io", "/apis/storage.k8s.io/v1", "/apis/storage.k8s.io/v1beta1",
    "/healthz", "/healthz/autoregister-completion", "/healthz/etcd", "/healthz/log", "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-admission-initializer",
    "/healthz/poststarthook/start-kube-apiserver-informers",
    "/logs", "/metrics", "/openapi/v2",
    "/swagger-2.0.0.json", "/swagger-2.0.0.pb-v1", "/swagger-2.0.0.pb-v1.gz",
    "/swagger-ui/", "/swagger.json", "/swaggerapi", "/version"
  ]
}[root@k8s-master1 ~]#
Are all the paths from /healthz onward non-resource URLs? Let's modify the ClusterRole and test:
[root@k8s-master1 roles]# cat clusterroles1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
#  - /
  - /healthz/*
  verbs:
  - get
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl apply -f clusterroles1.yaml
clusterrole.rbac.authorization.k8s.io "discover_base_url" configured
[root@k8s-master1 roles]# kubectl apply -f clusterrolebindings1.yaml
clusterrolebinding.rbac.authorization.k8s.io "discover-base-url" configured
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/healthz/*]       []              [get]
##The role now has get permission on the non-resource URL /healthz/* only
[root@k8s-master1 roles]# curl https://192.168.32.127:8443/logs --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/logs\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}[root@k8s-master1 roles]#
[root@k8s-master1 roles]# curl https://192.168.32.127:8443/metrics --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"kubernetes\" cannot get path \"/metrics\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}[root@k8s-master1 roles]#
As you can see, requests under /healthz succeed while everything else still fails.
Modify the ClusterRole again and retest:
[root@k8s-master1 roles]# kubectl describe clusterroles discover_base_url
Name:         discover_base_url
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{"rbac.authorization.kubernetes.io/autoupdate":"true"},"lab...
              rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
             [/healthz/*]       []              [get]
             [/logs]            []              [get]
             [/metrics]         []              [get]
             [/version]         []              [get]
[root@k8s-master1 roles]#
Re-run the commands that failed above — they all succeed now.
So the non-resource URLs include paths such as /healthz/*, /logs, /metrics, and so on.
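The YAML that produced the four rules above was not shown. Reconstructed from the describe output, the updated clusterroles1.yaml presumably looks like this (a sketch inferred from the output, not the captured file):

```yaml
# Presumed contents of clusterroles1.yaml after the second edit —
# reconstructed from the `kubectl describe clusterroles discover_base_url` output above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
  - /healthz/*
  - /logs
  - /metrics
  - /version
  verbs:
  - get
```

Note that nonResourceURLs rules are only valid in a ClusterRole, since these paths exist outside any namespace.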
5.
This raises question 2:
How are Resource permissions configured?
Start with a command that fails:
[root@k8s-master1 roles]#curl https://192.168.32.127:8443/api/v1/nodes/proxy --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "nodes \"proxy\" is forbidden: User \"kubernetes\" cannot get resource \"nodes\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "name": "proxy",
    "kind": "nodes"
  },
  "code": 403
}[root@k8s-master1 roles]#
Odd — according to the permissions we retrieved above:
--
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"roleRef"...
Role:
  Kind:  ClusterRole
  Name:  kube-apiserver
Subjects:
  Kind  Name        Namespace
  ----  ----        ---------
  User  kubernetes
Name:         kube-apiserver
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGr...
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/metrics  []                 []              [get create]
  nodes/proxy    []                 []              [get create]
[root@k8s-master1 ~]#
In theory the query should work. Why does it fail? Set that aside for now; let's add a permission and see what happens.
First dump the kube-apiserver ClusterRole:
[root@k8s-master1 roles]# kubectl get clusterroles kube-apiserver -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGroups":[""],"resources":["nodes/proxy","nodes/metrics"],"verbs":["get","create"]}]}
  creationTimestamp: 2019-02-28T06:51:53Z
  name: kube-apiserver
  resourceVersion: "35075"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kube-apiserver
  uid: 5519ea8d-3b25-11e9-95a3-000c29383c89
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - create
[root@k8s-master1 roles]#
Modify it:
[root@k8s-master1 roles]# cat clusterroles2.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-apiserver
rules:
- apiGroups: [""]
  resources: ["nodes", "nodes/proxy", "nodes/metrics"]
  verbs: ["get", "list", "create"]
[root@k8s-master1 roles]#
[root@k8s-master1 roles]# kubectl apply -f clusterroles2.yaml
clusterrole.rbac.authorization.k8s.io "kube-apiserver" configured
[root@k8s-master1 roles]# kubectl get clusterroles kube-apiserver -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kube-apiserver","namespace":""},"rules":[{"apiGroups":[""],"resources":["nodes","nodes/proxy","nodes/metrics"],"verbs":["get","list","create"]}]}
  creationTimestamp: 2019-02-28T06:51:53Z
  name: kube-apiserver
  resourceVersion: "476880"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/kube-apiserver
  uid: 5519ea8d-3b25-11e9-95a3-000c29383c89
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - list
  - create
Now query the nodes resource again:
[root@k8s-master1 roles]# curl https://192.168.32.127:8443/api/v1/nodes/k8s-master1 --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "k8s-master1",
    "selfLink": "/api/v1/nodes/k8s-master1",
    "uid": "46a353d3-3b07-11e9-95a3-000c29383c89",
    "resourceVersion": "477158",
    "creationTimestamp": "2019-02-28T03:16:44Z",
    "labels": { "beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", "kubernetes.io/hostname": "k8s-master1" },
    "annotations": { "node.alpha.kubernetes.io/ttl": "0", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }
  },
  "spec": { },
  "status": {
    "capacity": { "cpu": "1", "ephemeral-storage": "17394Mi", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "1867264Ki", "pods": "110" },
    "allocatable": { "cpu": "1", "ephemeral-storage": "16415037823", "hugepages-1Gi": "0", "hugepages-2Mi": "0", "memory": "1764864Ki", "pods": "110" },
    "conditions": [
      { "type": "OutOfDisk", "status": "False", "lastHeartbeatTime": "2019-03-18T06:36:47Z", "lastTransitionTime": "2019-03-13T08:07:21Z", "reason": "KubeletHasSufficientDisk", "message": "kubelet has sufficient disk space available" },
      { "type": "MemoryPressure", "status": "False", "lastHeartbeatTime": "2019-03-18T06:36:47Z", "lastTransitionTime": "2019-03-13T08:07:21Z", "reason": "KubeletHasSufficientMemory", "message": "kubelet has sufficient memory available" },
      { "type": "DiskPressure", "status": "False", "lastHeartbeatTime": "2019-03-18T06:36:47Z", "lastTransitionTime": "2019-03-13T08:07:21Z", "reason": "KubeletHasNoDiskPressure", "message": "kubelet has no disk pressure" },
      { "type": "PIDPressure", "status": "False", "lastHeartbeatTime": "2019-03-18T06:36:47Z", "lastTransitionTime": "2019-02-28T03:16:45Z", "reason": "KubeletHasSufficientPID", "message": "kubelet has sufficient PID available" },
      { "type": "Ready", "status": "True", "lastHeartbeatTime": "2019-03-18T06:36:47Z", "lastTransitionTime": "2019-03-13T08:07:31Z", "reason": "KubeletReady", "message": "kubelet is posting ready status" }
    ],
    "addresses": [
      { "type": "InternalIP", "address": "192.168.32.128" },
      { "type": "Hostname", "address": "k8s-master1" }
    ],
    "daemonEndpoints": { "kubeletEndpoint": { "Port": 10250 } },
    "nodeInfo": {
      "machineID": "d1471d605c074c43bf44cd5581364aea",
      "systemUUID": "84F64D56-0428-2BBD-7F9E-26CE9C1D7023",
      "bootID": "c49804b6-0645-49d3-902f-e66b74fed805",
      "kernelVersion": "3.10.0-514.el7.x86_64",
      "osImage": "CentOS Linux 7 (Core)",
      "containerRuntimeVersion": "docker://17.3.1",
      "kubeletVersion": "v1.12.3",
      "kubeProxyVersion": "v1.12.3",
      "operatingSystem": "linux",
      "architecture": "amd64"
    },
    "images": [
      { "names": [ "registry.access.redhat.com/rhel7/pod-infrastructure@sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931", "registry.access.redhat.com/rhel7/pod-infrastructure:latest" ], "sizeBytes": 208612920 },
      { "names": [ "tutum/dnsutils@sha256:d2244ad47219529f1003bd1513f5c99e71655353a3a63624ea9cb19f8393d5fe", "tutum/dnsutils:latest" ], "sizeBytes": 199896828 },
      { "names": [ "httpd@sha256:5e7992fcdaa214d5e88c4dfde274befe60d5d5b232717862856012bf5ce31086" ], "sizeBytes": 131692150 },
      { "names": [ "httpd@sha256:20ead958907f15b638177071afea60faa61d2b6747c216027b8679b5fa58794b", "httpd@sha256:e76e7e1d4d853249e9460577d335154877452937c303ba5abde69785e65723f2", "httpd:latest" ], "sizeBytes": 131679770 }
    ]
  }
}[root@k8s-master1 roles]#
The entire node object is returned.
6.
Following up on the question above, compare the role before and after the change:
Before:
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - create
After:
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/proxy
  - nodes/metrics
  verbs:
  - get
  - list
  - create
The change is simply adding the nodes resource to resources. Permissions for other resources such as pods and services can be opened up the same way.
My understanding: only once you have permission on a resource itself can you access its subresources.
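Applying that understanding to another resource — a hedged sketch, not tested here: to reach a subresource such as services/proxy, grant the parent resource services alongside it. The role name service-access is invented for this example.

```yaml
# Hypothetical example following the nodes pattern above: to access the
# services/proxy subresource, the parent resource "services" is granted too.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: service-access   # name invented for this sketch
rules:
- apiGroups: [""]
  resources: ["services", "services/proxy"]
  verbs: ["get", "list"]
```

A matching ClusterRoleBinding for the user would then be needed, exactly as with the nodes role above.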
7.
Open question
I ran into one more error, see below:
[root@k8s-master1 roles]#curl https://192.168.32.127:8443/api/v1/nodes/proxy --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kubernetes.pem --key /etc/kubernetes/cert/kubernetes-key.pem
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "nodes \"proxy\" not found",
  "reason": "NotFound",
  "details": {
    "name": "proxy",
    "kind": "nodes"
  },
  "code": 404
}[root@k8s-master1 roles]#
The 404 (nodes "proxy" not found) suggests the path is being parsed as a node named "proxy" — a subresource like proxy is addressed under a specific node, e.g. /api/v1/nodes/<node-name>/proxy. I'll test this later.
