KubeSphere 3.0 Installation and Deployment: Notes on Pitfalls
1: System Environment
1.1 System environment initialization
OS: CentOS 7.8 x64
cat /etc/hosts
-----
192.168.100.11 node01.flyfish.cn
192.168.100.12 node02.flyfish.cn
192.168.100.13 node03.flyfish.cn
192.168.100.14 node04.flyfish.cn
192.168.100.15 node05.flyfish.cn
192.168.100.16 node06.flyfish.cn
192.168.100.17 node07.flyfish.cn
192.168.100.18 node08.flyfish.cn
-----
This deployment uses only the first three nodes (node01 to node03) for the Kubernetes cluster.
1.2 System configuration initialization
Install the basic tools:
yum install -y wget vim lsof net-tools
Disable the firewall (or, on Alibaba Cloud, open the required ports in the security group instead):
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service    # confirm it is inactive and disabled at boot
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
cat /etc/selinux/config
Disable swap:
swapoff -a    # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
free -l -h
Pass bridged IPv4 traffic to the iptables chains.
If /etc/sysctl.conf does not exist or does not yet contain these entries, append them directly:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1" >> /etc/sysctl.conf
1.3 Deploy Docker
Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
Perform the following on all nodes. A binary installation is used here; installing with yum works just as well.
Install on node01.flyfish.cn, node02.flyfish.cn and node03.flyfish.cn.
1.3.1 Unpack the binary package
tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin
1.3.2 Manage Docker with systemd
cat > /usr/lib/systemd/system/docker.service << 'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
1.3.3 Create the configuration file
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
registry-mirrors points to an Alibaba Cloud registry mirror (image pull accelerator).
1.3.4 Start Docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
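As a quick optional check, confirm the daemon is up and the mirror was picked up:
systemctl status docker
docker info | grep -A1 "Registry Mirrors"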
2: Install the Kubernetes Cluster
Install kubelet, kubeadm and kubectl (on all nodes).
Configure the Kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm and kubectl:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
systemctl enable kubelet && systemctl start kubelet
Pre-pull the required images on all nodes.
Image pull script:
vim image.sh
----
#!/bin/bash
images=(
kube-apiserver:v1.17.3
kube-proxy:v1.17.3
kube-controller-manager:v1.17.3
kube-scheduler:v1.17.3
coredns:1.6.5
etcd:3.4.3-0
pause:3.1
)
for imageName in "${images[@]}" ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
-----
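Run the script on each node; nothing in it is node-specific:
bash image.sh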
Initialize the master node:
Note: this step is performed only on the master node (node01.flyfish.cn).
kubeadm init \
--apiserver-advertise-address=192.168.100.11 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
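kubectl should now work on the master. The node will report NotReady until the network plugin below is installed:
kubectl get nodes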
Deploy the network plugin (Calico):
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Join the other nodes (node02 and node03); use the exact kubeadm join command printed at the end of your own kubeadm init output:
kubeadm join 192.168.100.11:6443 --token y28jw9.gxstbcar3m4n5p1a \
--discovery-token-ca-cert-hash sha256:769528577607a4024ead671ae01b694744dba16e0806e57ed1b099eb6c6c9350
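The token above is specific to this cluster. If it has expired (tokens last 24 hours by default), generate a fresh join command on the master, then confirm all three nodes register:
kubeadm token create --print-join-command
kubectl get nodes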
3: Deploy the NFS Server (here on the master, 192.168.100.11)
yum install -y nfs-utils
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server
exportfs -r
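Confirm the export is actually active before testing from a Pod:
exportfs -v
showmount -e localhost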
Test mounting NFS directly in a Pod (on the master node).
Create a file named nginx.yaml under /opt:
vim nginx.yaml
----
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
  - name: html
    nfs:
      path: /nfs/data          # 1000G
      server: 192.168.100.11   # your own NFS server address
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
----
kubectl apply -f nginx.yaml
cd /nfs/data/
echo " 11111" >>> index.html
Install the client tools (on the worker nodes). On node02.flyfish.cn:
yum install -y nfs-utils
showmount -e 192.168.100.11
Create the local mount directory:
mkdir /root/nfsmount
Mount the server's /nfs/data/ onto the client's /root/nfsmount (on the worker node):
mount -t nfs 192.168.100.11:/nfs/data/ /root/nfsmount
4: Set Up Dynamic Provisioning with a StorageClass
vim nfs-rbac.yaml
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: storage.pri/nfs
            - name: NFS_SERVER
              value: 192.168.100.11
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.100.11
            path: /nfs/data
----
kubectl apply -f nfs-rbac.yaml
kubectl get pod
Create the StorageClass:
vi storageclass-nfs.yaml
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs
reclaimPolicy: Delete
----
kubectl apply -f storageclass-nfs.yaml
#扩展"reclaim policy"有三种方式:Retain、Recycle、Deleted。
Retain
#保护被PVC释放的PV及其上数据,并将PV状态改成"released",不将被其它PVC绑定。集群管理员手动通过如下步骤释放存储资源:
手动删除PV,但与其相关的后端存储资源如(AWS EBS, GCE PD, Azure Disk, or Cinder volume)仍然存在。
手动清空后端存储volume上的数据。
手动删除后端存储volume,或者重复使用后端volume,为其创建新的PV。
Delete
删除被PVC释放的PV及其后端存储volume。对于动态PV其"reclaim policy"继承自其"storage class",
默认是Delete。集群管理员负责将"storage class"的"reclaim policy"设置成用户期望的形式,否则需要用
户手动为创建后的动态PV编辑"reclaim policy"
Recycle
保留PV,但清空其上数据,已废弃
kubectl get storageclass
Set the default StorageClass (see the Kubernetes documentation):
https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class
kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
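The class should now carry the (default) marker:
kubectl get storageclass
# storage-nfs (default)   storage.pri/nfs   ...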
Verify NFS dynamic provisioning.
Create a PVC:
vim pvc.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
  #annotations:
  #  volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs   # must match the StorageClass name exactly
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
-----
kubectl apply -f pvc.yaml
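If dynamic provisioning works, the claim binds within a few seconds and a matching PV is created automatically:
kubectl get pvc pvc-claim-01
kubectl get pv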
Use the PVC in a Pod:
vi testpod.yaml
----
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: pvc-claim-01
-----
kubectl apply -f testpod.yaml
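The Pod should finish with status Completed, and the SUCCESS file should appear on the NFS share inside the subdirectory the provisioner created for the claim (typically named after the namespace, PVC and PV):
kubectl get pod test-pod
ls -R /nfs/data/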
5: Install metrics-server
First install metrics-server (the YAML below already has the image and settings adjusted and can be applied directly). It provides resource metrics for Pods and nodes (by default only CPU and memory; a more complete setup with Prometheus comes later).
vim 2222.yaml
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
----
kubectl apply -f 2222.yaml
kubectl top nodes
6: Install KubeSphere
https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/
wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml
vim cluster-configuration.yaml
----
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  persistence:
    storageClass: ""        # If there is not a default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the host cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the host cluster.
  etcd:
    monitoring: true        # Whether to enable etcd monitoring dashboard installation. You have to create a secret for etcd before you enable it.
    endpointIps: 192.168.100.11  # etcd cluster EndpointIps, it can be a bunch of IPs here.
    port: 2379              # etcd port
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi       # MySQL PVC size.
    minioVolumeSize: 20Gi       # Minio PVC size.
    etcdVolumeSize: 20Gi        # etcd PVC size.
    openldapVolumeSize: 2Gi     # openldap PVC size.
    redisVolumSize: 2Gi         # Redis PVC size.
    es:                         # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # total number of master nodes, it's not allowed to use even number
      # elasticsearchDataReplicas: 1     # total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # Volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # Volume size of Elasticsearch data nodes.
      logMaxAge: 7              # Log retention time in built-in Elasticsearch, it is 7 days by default.
      elkPrefix: logstash       # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  console:
    enableMultiLogin: true      # enable/disable multiple sign on, it allows an account can be used by different users at the same time.
    port: 30880
  alerting:                     # (CPU: 0.3 Core, Memory: 300 MiB) Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true
  auditing:                     # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities happened in platform, initiated by different tenants.
    enabled: true
  devops:                       # (CPU: 0.47 Core, Memory: 8.6 G) Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true
    jenkinsMemoryLim: 2Gi       # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi    # Jenkins memory request.
    jenkinsVolumeSize: 8Gi      # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m   # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                       # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:                      # (CPU: 57 m, Memory: 2.76 G) Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true
    logsidecarReplicas: 2
  metrics_server:               # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
    enabled: false
  monitoring:
    # prometheusReplicas: 1     # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1   # AlertManager Replicas.
  multicluster:
    clusterRole: none           # host | member | none  # You can install a solo cluster, or specify it as the role of host or member cluster.
  networkpolicy:                # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
    # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
    enabled: true
  notification:                 # Email Notification support for the legacy alerting system, should be enabled/disabled together with the above alerting option.
    enabled: true
  openpitrix:                   # (2 Core, 3.6 G) Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications, and offer application lifecycle management.
    enabled: true
  servicemesh:                  # (0.3 Core, 300 MiB) Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology.
    enabled: true
----
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
Check the installation progress:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
kubectl get pod -A
kubesphere-monitoring-system prometheus-k8s-0 0/3 ContainerCreating 0 7m20s
kubesphere-monitoring-system prometheus-k8s-1 0/3 ContainerCreating 0 7m20s
The prometheus-k8s pods stay stuck in the ContainerCreating state. Describe one of them to find the reason:
kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system
The events show that the kube-etcd-client-certs secret is not found, so create it from the etcd client certificates generated by kubeadm:
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
kubectl get secret -A |grep etcd
kubectl get pod -n kubesphere-monitoring-system
After the secret is created, the prometheus-k8s pods move to the Running state.
Finally, open the KubeSphere web console using the address and credentials printed in the installer log:
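With the default settings used here, the console is exposed as a NodePort on port 30880 (the port set in cluster-configuration.yaml), and the initial account is admin / P@88w0rd, which must be changed at first login:
Console:  http://192.168.100.11:30880   (any node IP works for a NodePort service)
Account:  admin
Password: P@88w0rd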
