Kubernetes Installation Notes
Environment Preparation
Machines:
10.90.14.125  esb-edi-test  master
10.90.15.45   edi1          node1
10.90.15.43   edi2          node2
10.90.15.44   edi3          node3
HTTP proxy environment variables:
vi /etc/profile
export http_proxy=http://user:password@proxy02.h3c.com:8080/
export no_proxy="10.90.14.125,10.90.14.124,10.90.14.123,10.72.66.37,10.72.66.36,10.96.0.0/12,10.244.0.0/16"
source /etc/profile
HTTP proxy settings for yum:
vi /etc/yum.conf
proxy=http://proxy02.h3c.com:8080/
proxy_username=user
proxy_password=password
Use the NetEase yum mirror: download the repo file for the desired version, place it in /etc/yum.repos.d, then refresh the cache:
yum clean all
yum makecache
Install Docker. Note: install a version that Kubernetes supports; do not pick one that is too new.
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce.x86_64 --showduplicates | sort -r
yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl start docker
systemctl enable docker
Configure kubernetes.repo to use the Aliyun mirror:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl:
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet.service
Initializing the Cluster with kubeadm
Easily bootstrap a secure Kubernetes cluster
kubeadm --help
Usage:
  kubeadm [command]

Available Commands:
  alpha       Kubeadm experimental sub-commands
  completion  Output shell completion code for the specified shell (bash or zsh)
  config      Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  help        Help about any command
  init        Run this command in order to set up the Kubernetes control plane
  join        Run this on any machine you wish to join an existing cluster
  reset       Run this to revert any changes made to this host by 'kubeadm init' or 'kubeadm join'
  token       Manage bootstrap tokens
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     Print the version of kubeadm

Flags:
  -h, --help                     help for kubeadm
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm [command] --help" for more information about a command.
kubeadm init
When you are just getting started, reading the help/man documentation is well worth the time.
[root@esb-edi-test ~]# kubeadm init --help
Run this command in order to set up the Kubernetes control plane

The "init" command executes the following phases:

preflight                      Run pre-flight checks
kubelet-start                  Write kubelet settings and (re)start the kubelet
certs                          Certificate generation
  /ca                            Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                     Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client      Generate the certificate for the API server to connect to kubelet
  /etcd-ca                       Generate the self-signed CA to provision identities for etcd
  /etcd-server                   Generate the certificate for serving etcd
  /etcd-peer                     Generate the certificate for etcd nodes to communicate with each other
  /apiserver-etcd-client         Generate the certificate the apiserver uses to access etcd
  /etcd-healthcheck-client       Generate the certificate for liveness probes to healtcheck etcd
  /front-proxy-ca                Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client            Generate the certificate for the front proxy client
  /sa                            Generate a private key for signing service account tokens along with its public key
kubeconfig                     Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                         Generate a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                       Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager            Generate a kubeconfig file for the controller manager to use
  /scheduler                     Generate a kubeconfig file for the scheduler to use
control-plane                  Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                     Generates the kube-apiserver static Pod manifest
  /controller-manager            Generates the kube-controller-manager static Pod manifest
  /scheduler                     Generates the kube-scheduler static Pod manifest
etcd                           Generate static Pod manifest file for local etcd
  /local                         Generate the static Pod manifest file for a local, single-node local etcd instance
upload-config                  Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                       Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                       Upload the kubelet component config to a ConfigMap
upload-certs                   Upload certificates to kubeadm-certs
mark-control-plane             Mark a node as a control-plane
bootstrap-token                Generates bootstrap tokens used to join a node to a cluster
addon                          Install required addons for passing Conformance tests
  /coredns                       Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                    Install the kube-proxy addon to a Kubernetes cluster

Usage:
  kubeadm init [flags]
  kubeadm init [command]

Available Commands:
  phase       Use this command to invoke single phase of the init workflow

Flags:
      --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
      --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
      --apiserver-cert-extra-sans strings    Optional extra Subject Alternative Names (SANs) to use for the API Server serving certificate. Can be both IP addresses and DNS names.
      --cert-dir string                      The path where to save and store the certificates. (default "/etc/kubernetes/pki")
      --certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
      --config string                        Path to a kubeadm configuration file.
      --cri-socket string                    Path to the CRI socket to connect. If empty kubeadm will try to auto-detect this value; use this option only if you have more than one CRI installed or if you have non-standard CRI socket.
      --dry-run                              Don't apply any changes; just output what would be done.
      --feature-gates string                 A set of key=value pairs that describe feature gates for various features. No feature gates are available in this release.
  -h, --help                                 help for init
      --ignore-preflight-errors strings      A list of checks whose errors will be shown as warnings. Example: 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
      --image-repository string              Choose a container registry to pull control plane images from (default "k8s.gcr.io")
      --kubernetes-version string            Choose a specific Kubernetes version for the control plane. (default "stable-1")
      --node-name string                     Specify the node name.
      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")
      --service-dns-domain string            Use alternative domain for services, e.g. "myorg.internal". (default "cluster.local")
      --skip-certificate-key-print           Don't print the key used to encrypt the control-plane certificates.
      --skip-phases strings                  List of phases to be skipped
      --skip-token-print                     Skip printing of the default bootstrap token generated by 'kubeadm init'.
      --token string                         The token to use for establishing bidirectional trust between nodes and control-plane nodes. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
      --token-ttl duration                   The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
      --upload-certs                         Upload control-plane certificates to the kubeadm-certs Secret.

Global Flags:
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm init [command] --help" for more information about a command.
Where the Pitfalls Begin
[root@esb-edi-test ~]# kubeadm init
W0801 13:32:36.665845  100602 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0801 13:32:36.666067  100602 version.go:99] falling back to the local client version: v1.15.1
Since this was my first install, even a warning looked alarming, so I got rid of it by adding the --kubernetes-version v1.15.1 flag:
[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING HTTPProxy]: Connection to "https://10.90.14.125" uses proxy "http://z15075:Woyizhiaih3c@proxy02.h3c.com:8080/". If that is not intended, adjust your proxy settings
	[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://z15075:Woyizhiaih3c@proxy02.h3c.com:8080/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
My VMs sit on an internal network behind an HTTP proxy, and I had not configured a whitelist, so traffic to internal IPs was also going through the proxy. Add the relevant addresses to the no_proxy whitelist: all node IPs, the service CIDR, the pod CIDR, and the nameserver. Then try again:
vi /etc/profile
export http_proxy=http://z15075:Woyizhiaih3c@proxy02.h3c.com:8080/
export no_proxy="10.90.14.125,10.90.14.124,10.90.14.123,10.72.66.37,10.72.66.36,10.96.0.0/12,10.244.0.0/16"
source /etc/profile
kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
The Docker version was too new, so uninstall it and reinstall a validated version:
yum -y remove docker-ce.x86_64
yum -y remove docker-ce-cli.x86_64
yum -y remove containerd.io.x86_64
rm -rf /var/lib/docker
yum list docker-ce.x86_64 --showduplicates | sort -r
yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7
systemctl start docker
systemctl enable docker
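One optional follow-up after reinstalling: the Kubernetes documentation of this era recommends running Docker with the systemd cgroup driver so that it matches the kubelet. The sketch below writes the config to /tmp for review first (the log settings are illustrative); the real target is /etc/docker/daemon.json, followed by a Docker restart.

```shell
# Sketch: Docker daemon config using the systemd cgroup driver, written to
# /tmp for inspection. Apply for real by copying to /etc/docker/daemon.json
# and running `systemctl restart docker`.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
cat /tmp/daemon.json
```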
Try again:
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Disable all swap: swapoff -a.
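Note that swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entries in /etc/fstab also need to be commented out. A sketch of the sed expression, demonstrated on a sample file first (the device paths below are made up) so it can be checked before touching the real /etc/fstab:

```shell
# Build a sample fstab to try the expression on (paths are illustrative)
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Comment out every non-comment line that mounts a swap area
sed 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' /tmp/fstab.sample > /tmp/fstab.new
cat /tmp/fstab.new
```

Once the result looks right, the same expression can be applied in place with sed -i on /etc/fstab (back it up first).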
Try again:
[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: unexpected EOF, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
kubeadm failed to pull the required images from k8s.gcr.io, which is blocked by the firewall. Possible solutions:
- Configure an HTTP proxy for Docker that can reach k8s.gcr.io
- Use --image-repository to point kubeadm at a reachable registry
- Pull all the required images with docker from a mirror and re-tag them with the k8s.gcr.io names
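For reference, the third option can be sketched as a script that only prints the pull/tag commands, so they can be reviewed before being run against a live Docker daemon (the assumption that the Aliyun mirror carries all of these images should be verified):

```shell
#!/bin/sh
# Sketch: map each k8s.gcr.io image to a mirror and print the commands
# that would pull it and re-tag it under its original name.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers

map_image() {
  # k8s.gcr.io/<name>:<tag>  ->  $MIRROR/<name>:<tag>
  echo "$1" | sed "s#^k8s.gcr.io#${MIRROR}#"
}

for img in kube-apiserver:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-scheduler:v1.15.1 kube-proxy:v1.15.1 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  src=$(map_image "k8s.gcr.io/${img}")
  echo "docker pull ${src} && docker tag ${src} k8s.gcr.io/${img}"
done
```

Piping the output to sh would execute it, but inspecting it first is safer.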
Try again using the second approach:
[root@esb-edi-test ~]# kubeadm init --kubernetes-version v1.15.1 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [esb-edi-test kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.90.14.125]
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [esb-edi-test localhost] and IPs [10.90.14.125 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [esb-edi-test localhost] and IPs [10.90.14.125 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.004059 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node esb-edi-test as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node esb-edi-test as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zszy1a.8zcd3a5ah6p7zb19
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.90.14.125:6443 --token zszy1a.8zcd3a5ah6p7zb19 \
    --discovery-token-ca-cert-hash sha256:956a63dcf70eb07068f7d9bd676602a4195ae8bee07a8337a206a8eb3447aba8
Success! Now follow its instructions:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
Check the components:
[root@esb-edi-test ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@esb-edi-test ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
esb-edi-test   NotReady   master   8m28s   v1.15.1
The master is NotReady because no network plugin has been deployed yet. A popular choice is flannel:
[root@esb-edi-test ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
That output alone doesn't tell you much; I suspected something had gone wrong, so I checked the pod status:
[root@esb-edi-test ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS              RESTARTS   AGE
coredns-6967fb4995-btkt6               0/1     ContainerCreating   0          19m
coredns-6967fb4995-s85q6               0/1     ContainerCreating   0          19m
etcd-esb-edi-test                      1/1     Running             0          17m
kube-apiserver-esb-edi-test            1/1     Running             0          18m
kube-controller-manager-esb-edi-test   1/1     Running             0          18m
kube-flannel-ds-amd64-ph85z            0/1     CrashLoopBackOff    4          3m34s
kube-proxy-8rsft                       1/1     Running             0          19m
kube-scheduler-esb-edi-test            1/1     Running             0          18m
Sure enough, coredns and kube-flannel are not ready. What now? Look at the logs first:
[root@esb-edi-test kube-flannel]# kubectl --namespace kube-system logs kube-flannel-ds-amd64-ph85z
I0801 07:57:57.784315       1 main.go:514] Determining IP address of default interface
I0801 07:57:57.876204       1 main.go:527] Using interface with name eth0 and address 10.90.14.125
I0801 07:57:57.876270       1 main.go:544] Defaulting external address to interface address (10.90.14.125)
I0801 07:57:57.890683       1 kube.go:126] Waiting 10m0s for node controller to sync
I0801 07:57:57.890888       1 kube.go:309] Starting kube subnet manager
I0801 07:57:58.890987       1 kube.go:133] Node controller sync successful
I0801 07:57:58.891043       1 main.go:244] Created subnet manager: Kubernetes Subnet Manager - esb-edi-test
I0801 07:57:58.891054       1 main.go:247] Installing signal handlers
I0801 07:57:58.891379       1 main.go:386] Found network config - Backend type: vxlan
I0801 07:57:58.891499       1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0801 07:57:58.892231       1 main.go:289] Error registering network: failed to acquire lease: node "esb-edi-test" pod cidr not assigned
I0801 07:57:58.892332       1 main.go:366] Stopping shutdownHandler...
"pod cidr not assigned": no pod subnet was allocated to the node? After some Googling, I re-ran init (tearing down the previous attempt first with kubeadm reset), this time with the pod CIDR, which flannel expects to be 10.244.0.0/16 by default, and, while at it, the service CIDR too:
kubeadm init \
  --kubernetes-version v1.15.1 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=0.0.0.0
The status after reinstalling per the earlier steps:
[root@esb-edi-test ~]# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-6967fb4995-8mh8x               1/1     Running   0          15m
coredns-6967fb4995-9mp9d               1/1     Running   0          15m
etcd-esb-edi-test                      1/1     Running   0          14m
kube-apiserver-esb-edi-test            1/1     Running   0          14m
kube-controller-manager-esb-edi-test   1/1     Running   0          14m
kube-flannel-ds-amd64-xpsdj            1/1     Running   0          6m24s
kube-proxy-fwwl5                       1/1     Running   0          15m
kube-scheduler-esb-edi-test            1/1     Running   0          14m
Joining Nodes
Nodes are added with the kubeadm join command, which needs two pieces of information: a bootstrap token and the CA certificate hash.
Print the token on the master node:
[root@esb-edi-test ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
ydlp5v.hefzcti5tlx8ls1u   52m   2019-08-02T16:40:13+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
Print the sha256 of the CA certificate on the master node:
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1
a8b4b7fc4965aa8779b1e8caf22949f123e1d13d87f0307e506a2e0a34c68a9f
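The same pipeline can be exercised against a throwaway self-signed certificate (a stand-in for /etc/kubernetes/pki/ca.crt, so this runs on any machine with openssl), which also shows the sha256:&lt;64 hex chars&gt; shape that kubeadm join expects:

```shell
# Sketch: generate a throwaway CA cert, then hash its DER-encoded public
# key exactly as done above for the real cluster CA.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
hash=$(openssl x509 -in "$dir/ca.crt" -noout -pubkey \
  | openssl rsa -pubin -outform DER 2>/dev/null \
  | sha256sum | cut -d' ' -f1)
echo "sha256:${hash}"
```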
Join node1 with kubeadm join:
[root@edi1 ~]# kubeadm join 10.90.14.125:6443 \
> --token ydlp5v.hefzcti5tlx8ls1u \
> --discovery-token-ca-cert-hash sha256:a8b4b7fc4965aa8779b1e8caf22949f123e1d13d87f0307e506a2e0a34c68a9f
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the nodes on the master:
[root@esb-edi-test log]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
edi1           Ready    <none>   7m19s   v1.15.1
esb-edi-test   Ready    master   23h     v1.15.1
Add the remaining nodes the same way:
[root@esb-edi-test log]# kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
edi1           Ready    <none>   11m    v1.15.1
edi2           Ready    <none>   61s    v1.15.1
edi3           Ready    <none>   112s   v1.15.1
esb-edi-test   Ready    master   23h    v1.15.1
Summary
Installing a cluster with kubeadm is genuinely convenient, but there are still plenty of pitfalls along the way. In the end it comes down to my own weak grasp of the fundamentals, especially container networking, which I need to study properly.