
Kubernetes Installation Notes

Date: 2019-08-06

Environment Preparation

Machines:

10.90.14.125  esb-edi-test  master
10.90.15.45   edi1          node1
10.90.15.43   edi2          node2
10.90.15.44   edi3          node3

HTTP proxy environment variables:

vi /etc/profile
export http_proxy=http://username:password@proxy02.h3c.com:8080/
export no_proxy="10.90.14.125,10.90.14.124,10.90.14.123,10.72.66.37,10.72.66.36,10.96.0.0/12,10.244.0.0/16"
source /etc/profile
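One caveat (my addition, not in the original): some clients only consult https_proxy for HTTPS URLs, so it may be worth exporting that as well, pointing at the same proxy:

export https_proxy=http://username:password@proxy02.h3c.com:8080/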

HTTP proxy settings for yum:

vi /etc/yum.conf
proxy=http://proxy02.h3c.com:8080/
proxy_username=username
proxy_password=password

Use the NetEase yum mirror: download the repo file for your release, place it in the /etc/yum.repos.d directory (a sketch follows), then rebuild the cache:
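A minimal sketch, assuming CentOS 7; the exact repo file name on mirrors.163.com is an assumption and may differ for your release:

# fetch the NetEase repo file for CentOS 7 into yum's repo directory
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo -O /etc/yum.repos.d/CentOS7-Base-163.repo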

yum clean all
yum makecache

Install Docker. Note: install a version that Kubernetes supports; do not pick one that is too new.

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce.x86_64 --showduplicates | sort -r
yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl start docker
systemctl enable docker

Configure kubernetes.repo to use the Aliyun mirror:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet.service
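The same version caveat that applies to Docker applies here. To pin a specific release rather than taking whatever is latest in the repo, yum accepts a versioned package name; the version string below is an assumption matching the cluster built later in this article:

# pin kubelet/kubeadm/kubectl to the target cluster version
yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1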

Initializing the cluster with kubeadm

Easily bootstrap a secure Kubernetes cluster

kubeadm --help

Usage:
  kubeadm [command]

Available Commands:
  alpha       Kubeadm experimental sub-commands
  completion  Output shell completion code for the specified shell (bash or zsh)
  config      Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  help        Help about any command
  init        Run this command in order to set up the Kubernetes control plane
  join        Run this on any machine you wish to join an existing cluster
  reset       Run this to revert any changes made to this host by 'kubeadm init' or 'kubeadm join'
  token       Manage bootstrap tokens
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     Print the version of kubeadm

Flags:
  -h, --help                     help for kubeadm
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm [command] --help" for more information about a command.

kubeadm init

When you are just getting started, reading the built-in help is very useful.

[root@esb-edi-test ~]# kubeadm init --help
Run this command in order to set up the Kubernetes control plane

The "init" command executes the following phases:

preflight                      Run pre-flight checks
kubelet-start                  Write kubelet settings and (re)start the kubelet
certs                          Certificate generation
  /ca                          Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                   Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client    Generate the certificate for the API server to connect to kubelet
  /etcd-ca                     Generate the self-signed CA to provision identities for etcd
  /etcd-server                 Generate the certificate for serving etcd
  /etcd-peer                   Generate the certificate for etcd nodes to communicate with each other
  /apiserver-etcd-client       Generate the certificate the apiserver uses to access etcd
  /etcd-healthcheck-client     Generate the certificate for liveness probes to healtcheck etcd
  /front-proxy-ca              Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client          Generate the certificate for the front proxy client
  /sa                          Generate a private key for signing service account tokens along with its public key
kubeconfig                     Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                       Generate a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                     Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager          Generate a kubeconfig file for the controller manager to use
  /scheduler                   Generate a kubeconfig file for the scheduler to use
control-plane                  Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                   Generates the kube-apiserver static Pod manifest
  /controller-manager          Generates the kube-controller-manager static Pod manifest
  /scheduler                   Generates the kube-scheduler static Pod manifest
etcd                           Generate static Pod manifest file for local etcd
  /local                       Generate the static Pod manifest file for a local, single-node local etcd instance
upload-config                  Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                     Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                     Upload the kubelet component config to a ConfigMap
upload-certs                   Upload certificates to kubeadm-certs
mark-control-plane             Mark a node as a control-plane
bootstrap-token                Generates bootstrap tokens used to join a node to a cluster
addon                          Install required addons for passing Conformance tests
  /coredns                     Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                  Install the kube-proxy addon to a Kubernetes cluster

Usage:
  kubeadm init [flags]
  kubeadm init [command]

Available Commands:
  phase       Use this command to invoke single phase of the init workflow

Flags:
      --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
      --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
      --apiserver-cert-extra-sans strings    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
      --cert-dir string                      The path where to save and store the certificates. (default "/etc/kubernetes/pki")
      --certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to a kubeadm configuration file.
      --cri-socket string                    Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
      --dry-run                              Don't apply any changes; just output what would be done.
      --feature-gates string                 A set of key=value pairs that describe feature gates for various features. No feature gates are available in this release.
  -h, --help                                 help for init
      --ignore-preflight-errors strings      A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Choose a container registry to pull control plane images from (default "k8s.gcr.io")
      --kubernetes-version string            Choose a specific Kubernetes version for the control plane. (default "stable-1")
      --node-name string                     Specify the node name.
      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
      --service-dns-domain string            Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
      --skip-certificate-key-print           Don't print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  List of phases to be skipped
      --skip-token-print                     Skip printing of the default bootstrap token generated by 'kubeadm init'.
      --token string                         The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
      --upload-certs                         Upload control-plane certificates to the kubeadm-certs Secret.

Global Flags:
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm init [command] --help" for more information about a command.

The pitfalls begin

[root@esb-edi-test ~]# kubeadm init
W0801 13:32:36.665845  100602 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0801 13:32:36.666067  100602 version.go:99] falling back to the local client version: v1.15.1

Since this was my first install, even a warning was alarming, so I made it go away by adding the --kubernetes-version v1.15.1 flag:

[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING HTTPProxy]: Connection to "https://10.90.14.125" uses proxy "http://username:password@proxy02.h3c.com:8080/". If that is not intended, adjust your proxy settings
	[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://username:password@proxy02.h3c.com:8080/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration

My VMs sit on an internal network behind an HTTP proxy, and with no whitelist configured, internal IPs were also being routed through the proxy. Add the relevant addresses to the whitelist: all node IPs, the service CIDR, the pod CIDR, and the nameservers. Try again:

vi /etc/profile
export http_proxy=http://username:password@proxy02.h3c.com:8080/
export no_proxy="10.90.14.125,10.90.14.124,10.90.14.123,10.72.66.37,10.72.66.36,10.96.0.0/12,10.244.0.0/16"
source /etc/profile

kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09

The Docker version is too new. Uninstall Docker and reinstall a suitable version:

yum -y remove docker-ce.x86_64
yum -y remove docker-ce-cli.x86_64
yum -y remove containerd.io.x86_64
rm -rf /var/lib/docker
yum list docker-ce.x86_64 --showduplicates | sort -r
yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl start docker
systemctl enable docker

Try again:

[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Disable all swap: swapoff -a.
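Note (my addition): swapoff -a only lasts until the next reboot. To keep swap off permanently, the swap entries in /etc/fstab also need to be commented out. A minimal sketch:

swapoff -a                                   # disable swap now (lost on reboot)
sed -i.bak '/\bswap\b/ s/^/#/' /etc/fstab    # comment out swap entries so it stays off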
Try again:

[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

kubeadm failed to pull the required images from k8s.gcr.io, which is blocked by the firewall. Possible solutions:

  • Configure an HTTP proxy for Docker, one that can reach k8s.gcr.io
  • Use --image-repository to point kubeadm at a reachable registry
  • Pull all the required images with docker and retag them as the Google images (a sketch of options 1 and 3 follows this list)
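For reference, a minimal sketch of options 1 and 3. The proxy credentials are placeholders, and the mirror repository name is an assumption based on the Aliyun mirror used below:

# Option 1: give the Docker daemon a proxy via a systemd drop-in
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://username:password@proxy02.h3c.com:8080/"
Environment="NO_PROXY=10.90.14.125,10.96.0.0/12,10.244.0.0/16"
EOF
systemctl daemon-reload && systemctl restart docker

# Option 3: list the required images, pull each from a mirror, and retag
kubeadm config images list --kubernetes-version v1.15.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.1 \
    k8s.gcr.io/kube-apiserver:v1.15.1
# ...repeat for kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, coredns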

Trying again with the second approach:

[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [esb-edi-test kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.90.14.125]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [esb-edi-test localhost] and IPs [10.90.14.125 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [esb-edi-test localhost] and IPs [10.90.14.125 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.004059 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node esb-edi-test as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node esb-edi-test as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zszy1a.8zcd3a5ah6p7zb19
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.90.14.125:6443 --token zszy1a.8zcd3a5ah6p7zb19 \
    --discovery-token-ca-cert-hash sha256:956a63dcf70eb07068f7d9bd676602a4195ae8bee07a8337a206a8eb3447aba8

Success! Follow its instructions:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

Check the components:

[root@esb-edi-test ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@esb-edi-test ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
esb-edi-test   NotReady   master   8m28s   v1.15.1

The master is NotReady because no network plugin is configured yet; a popular choice is flannel:

[root@esb-edi-test ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

With only this much output, something probably went wrong. Check the pod status:

[root@esb-edi-test ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS              RESTARTS   AGE
coredns-6967fb4995-btkt6               0/1     ContainerCreating   0          19m
coredns-6967fb4995-s85q6               0/1     ContainerCreating   0          19m
etcd-esb-edi-test                      1/1     Running             0          17m
kube-apiserver-esb-edi-test            1/1     Running             0          18m
kube-controller-manager-esb-edi-test   1/1     Running             0          18m
kube-flannel-ds-amd64-ph85z            0/1     CrashLoopBackOff    4          3m34s
kube-proxy-8rsft                       1/1     Running             0          19m
kube-scheduler-esb-edi-test            1/1     Running             0          18m

Sure enough, neither coredns nor kube-flannel is ready. What now? Check the logs first:

[root@esb-edi-test kube-flannel]# kubectl --namespace kube-system logs kube-flannel-ds-amd64-ph85z
I0801 07:57:57.784315       1 main.go:514] Determining IP address of default interface
I0801 07:57:57.876204       1 main.go:527] Using interface with name eth0 and address 10.90.14.125
I0801 07:57:57.876270       1 main.go:544] Defaulting external address to interface address (10.90.14.125)
I0801 07:57:57.890683       1 kube.go:126] Waiting 10m0s for node controller to sync
I0801 07:57:57.890888       1 kube.go:309] Starting kube subnet manager
I0801 07:57:58.890987       1 kube.go:133] Node controller sync successful
I0801 07:57:58.891043       1 main.go:244] Created subnet manager: Kubernetes Subnet Manager - esb-edi-test
I0801 07:57:58.891054       1 main.go:247] Installing signal handlers
I0801 07:57:58.891379       1 main.go:386] Found network config - Backend type: vxlan
I0801 07:57:58.891499       1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0801 07:57:58.892231       1 main.go:289] Error registering network: failed to acquire lease: node "esb-edi-test" pod cidr not assigned
I0801 07:57:58.892332       1 main.go:366] Stopping shutdownHandler...

"pod cidr not assigned" — no subnet was allocated for the pods? After some Googling, I re-ran init, this time with the pod CIDR, and added the service CIDR for good measure (see the aside after the command for a way to avoid the re-init):

kubeadm init \
  --kubernetes-version v1.15.1 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=0.0.0.0
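As an aside (my addition), a full re-init may not be strictly necessary: when a node is missing its podCIDR, it can also be assigned after the fact. A hedged sketch, assuming 10.244.0.0/24 is the slice you want for this node:

# assign a pod CIDR to the existing node instead of re-initializing the cluster
kubectl patch node esb-edi-test -p '{"spec":{"podCIDR":"10.244.0.0/24"}}'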

The status after reinstalling with the earlier steps:

[root@esb-edi-test ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-6967fb4995-8mh8x               1/1     Running   0          15m
coredns-6967fb4995-9mp9d               1/1     Running   0          15m
etcd-esb-edi-test                      1/1     Running   0          14m
kube-apiserver-esb-edi-test            1/1     Running   0          14m
kube-controller-manager-esb-edi-test   1/1     Running   0          14m
kube-flannel-ds-amd64-xpsdj            1/1     Running   0          6m24s
kube-proxy-fwwl5                       1/1     Running   0          15m
kube-scheduler-esb-edi-test            1/1     Running   0          14m

Joining nodes

Nodes are added with the kubeadm join command, which needs two pieces of information: a bootstrap token and the CA certificate hash.

Print the tokens on the master node:

[root@esb-edi-test ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
ydlp5v.hefzcti5tlx8ls1u   52m   2019-08-02T16:40:13+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Print the SHA-256 hash of the CA certificate on the master node:

openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1
a8b4b7fc4965aa8779b1e8caf22949f123e1d13d87f0307e506a2e0a34c68a9f
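If the token has already expired (the default TTL is 24 hours), kubeadm can mint a fresh one and print the complete join command in a single step, skipping the openssl incantation:

# create a fresh bootstrap token and print the full kubeadm join command
kubeadm token create --print-join-command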

Join from node1 with kubeadm join:

[root@edi1 ~]# kubeadm join 10.90.14.125:6443 \
>     --token ydlp5v.hefzcti5tlx8ls1u \
>     --discovery-token-ca-cert-hash sha256:a8b4b7fc4965aa8779b1e8caf22949f123e1d13d87f0307e506a2e0a34c68a9f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the node list on the master:

[root@esb-edi-test log]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
edi1           Ready    <none>   7m19s   v1.15.1
esb-edi-test   Ready    master   23h     v1.15.1

Add the remaining nodes the same way:

[root@esb-edi-test log]# kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
edi1           Ready    <none>   11m    v1.15.1
edi2           Ready    <none>   61s    v1.15.1
edi3           Ready    <none>   112s   v1.15.1
esb-edi-test   Ready    master   23h    v1.15.1

Summary

Installing a cluster with kubeadm is quite convenient, but there are plenty of pitfalls along the way. Ultimately they come down to gaps in my grasp of the fundamentals, especially container networking, which I need to study properly.

Original article: https://yq.aliyun.com/articles/713088