

A Hands-On Guide to Setting Up a Kubernetes Cluster on CentOS

Author: ChamPly

Prepare CentOS

1. Install net-tools

[root@localhost ~]# yum install -y net-tools

2. Stop firewalld and disable SELinux

[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Install Docker

Docker now comes in two editions, Docker-CE and Docker-EE: CE is the free community edition, EE is the commercial enterprise edition. We use the CE edition.

1. Install the yum repository tools

[root@localhost ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

2. Add the official docker-ce yum repository

[root@localhost ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3. Disable the docker-ce-edge repository; edge is the development channel and is not stable, so install from stable

yum-config-manager --disable docker-ce-edge

4. Refresh the local yum cache

yum makecache fast

5. Install the matching version of docker-ce

yum -y install docker-ce

6. Run hello-world

[root@localhost ~]# systemctl start docker
[root@localhost ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9a0669468bf7: Pull complete
Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Install the kubelet and kubeadm packages

Before initializing the cluster with kubeadm init, download the required Docker images to all hosts: initialization pulls the images kubeadm depends on and sets up etcd, kube-dns and kube-proxy. Because of the GFW we cannot reach their registries directly, so first download the images in the list below by other means, import them into the system, and then run kubeadm init to initialize the cluster.

1. Use the DaoCloud registry mirror (this step can be skipped)

[root@localhost ~]# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://0d236e3f.m.daocloud.io
docker version >= 1.12
{"registry-mirrors": ["http://0d236e3f.m.daocloud.io"]}
Success.
You need to restart docker to take effect: sudo systemctl restart docker
[root@localhost ~]# systemctl restart docker

2. Download the images. You can build them yourself from a Dockerfile and push them to Docker Hub, or pull my copies:

images=(kube-controller-manager-amd64 etcd-amd64 k8s-dns-sidecar-amd64 kube-proxy-amd64 kube-apiserver-amd64 kube-scheduler-amd64 pause-amd64 k8s-dns-dnsmasq-nanny-amd64 k8s-dns-kube-dns-amd64)
for imageName in ${images[@]} ; do
  docker pull champly/$imageName
  docker tag champly/$imageName gcr.io/google_containers/$imageName
  docker rmi champly/$imageName
done

3. Fix the version tags

docker tag gcr.io/google_containers/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17 && \
docker rmi gcr.io/google_containers/etcd-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5 && \
docker rmi gcr.io/google_containers/k8s-dns-kube-dns-amd64 && \
docker tag gcr.io/google_containers/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.2 && \
docker rmi gcr.io/google_containers/k8s-dns-sidecar-amd64 && \
docker tag gcr.io/google_containers/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-apiserver-amd64 && \
docker tag gcr.io/google_containers/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-controller-manager-amd64 && \
docker tag gcr.io/google_containers/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.0 && \
docker rmi gcr.io/google_containers/kube-proxy-amd64 && \
docker tag gcr.io/google_containers/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5 && \
docker rmi gcr.io/google_containers/kube-scheduler-amd64 && \
docker tag gcr.io/google_containers/pause-amd64 gcr.io/google_containers/pause-amd64:3.0 && \
docker rmi gcr.io/google_containers/pause-amd64

4. Add the Alibaba Cloud yum repository

[root@localhost ~]# cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

5. List the available kubectl, kubelet, kubeadm and kubernetes-cni packages

[root@localhost ~]# yum list kubectl kubelet kubeadm kubernetes-cni
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 base: mirrors.tuna.tsinghua.edu.cn
 extras: mirrors.sohu.com
 updates: mirrors.sohu.com
可安装的软件包
kubeadm.x86_64 1.7.5-0 kubernetes
kubectl.x86_64 1.7.5-0 kubernetes
kubelet.x86_64 1.7.5-0 kubernetes
kubernetes-cni.x86_64 0.5.1-0 kubernetes
[root@localhost ~]#

6. Install kubectl, kubelet, kubeadm and kubernetes-cni

[root@localhost ~]# yum install -y kubectl kubelet kubeadm kubernetes-cni

Change the kubelet cgroup driver

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs.

Also change the cAdvisor port used by the kubelet from the default 0 to 4194, so the cAdvisor monitoring web page served by the kubelet can be opened in a browser:

[root@kub-master ~]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=4194"

Start the kubelet service on all hosts

[root@master ~]# systemctl enable kubelet && systemctl start kubelet

Initialize the master

On the master node:

[root@master ~]# kubeadm reset && kubeadm init --apiserver-advertise-address=192.168.0.100 --kubernetes-version=v1.7.5 --pod-network-cidr=10.200.0.0/16
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.100]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 34.002949 seconds
[token] Using token: 0696ed.7cd261f787453bd9
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

 kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443

[root@master ~]#

Make a note of the line "kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443": it is not shown again afterwards, and you need it to add nodes.

Add nodes

[root@node1 ~]# kubeadm join --token 0696ed.7cd261f787453bd9 192.168.0.100:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 1.12
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.0.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.100:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.0.100:6443"
[discovery] Successfully established connection with API Server "192.168.0.100:6443"
[bootstrap] Detected server version: v1.7.10
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
Certificate signing request sent to master and response received.
Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Configure the kubeconfig file for kubectl on the master

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Install flannel on the master

docker pull quay.io/coreos/flannel:v0.8.0-amd64
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml

Check the cluster

[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@master ~]# kubectl get nodes
NAME      STATUS     AGE   VERSION
master    Ready      24m   v1.7.5
node1     NotReady   45s   v1.7.5
node2     NotReady   7s    v1.7.5
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS              RESTARTS   AGE
kube-system   etcd-master                       1/1     Running             0          24m
kube-system   kube-apiserver-master             1/1     Running             0          24m
kube-system   kube-controller-manager-master    1/1     Running             0          24m
kube-system   kube-dns-2425271678-h48rw         0/3     ImagePullBackOff    0          25m
kube-system   kube-flannel-ds-28n3w             1/2     CrashLoopBackOff    13         24m
kube-system   kube-flannel-ds-ndspr             0/2     ContainerCreating   0          41s
kube-system   kube-flannel-ds-zvx9j             0/2     ContainerCreating   0          1m
kube-system   kube-proxy-qxxzr                  0/1     ImagePullBackOff    0          41s
kube-system   kube-proxy-shkmx                  0/1     ImagePullBackOff    0          25m
kube-system   kube-proxy-vtk52                  0/1     ContainerCreating   0          1m
kube-system   kube-scheduler-master             1/1     Running             0          24m
[root@master ~]#

If you see "The connection to the server localhost:8080 was refused - did you specify the right host or port?", append the following environment variable to ~/.bash_profile so kubectl can reach the apiserver, source the file, and run kubectl again:

export KUBECONFIG=/etc/kubernetes/admin.conf
source ~/.bash_profile

Author: ChamPly
Source: https://my.oschina.net/ChamPly/blog/1575888
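A practical note on the image preparation above: the pull, retag and fix-versions sequence has to be repeated on every host, and copying the long command chains around is error-prone. The sketch below is not part of the original walkthrough; it simply folds the same champly/* mirror images and the same version map from the steps above into one loop, so adjust the map if your Kubernetes version or image source differs.

#!/bin/bash
# Convenience sketch: pull the mirrored images and retag them in one pass with
# the versions used in this article (Kubernetes v1.7.5). Run it on every host.
declare -A versions=(
  [kube-apiserver-amd64]=v1.7.5
  [kube-controller-manager-amd64]=v1.7.5
  [kube-scheduler-amd64]=v1.7.5
  [kube-proxy-amd64]=v1.6.0
  [etcd-amd64]=3.0.17
  [k8s-dns-kube-dns-amd64]=1.14.5
  [k8s-dns-dnsmasq-nanny-amd64]=1.14.5
  [k8s-dns-sidecar-amd64]=1.14.2
  [pause-amd64]=3.0
)
for name in "${!versions[@]}"; do
  ver=${versions[$name]}
  docker pull "champly/$name"                                      # mirror copy used by this article
  docker tag "champly/$name" "gcr.io/google_containers/$name:$ver" # tag it the way kubeadm expects
  docker rmi "champly/$name"                                       # drop the mirror tag once retagged
done

Running the same loop on node1 and node2 before kubeadm join is one way to avoid the ImagePullBackOff status shown for kube-proxy and kube-dns in the kubectl get pods output above, since the worker nodes cannot reach gcr.io either.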
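The two edits to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (cgroup driver and cAdvisor port) are described above as interactive vi changes. When preparing several hosts, a scripted version may be more convenient; this is a minimal sketch that assumes the drop-in still contains the stock --cgroup-driver=systemd and --cadvisor-port=0 arguments shipped with the 1.7.5 packages:

f=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# switch the kubelet cgroup driver so it matches the Docker setup used here
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' "$f"
# expose the cAdvisor web page on port 4194 instead of disabling it
sed -i 's/--cadvisor-port=0/--cadvisor-port=4194/' "$f"
systemctl daemon-reload && systemctl restart kubelet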


Public-Opinion Analysis over Massive Data: How to Build It

Editor's note from Alimei: The rapid growth of the internet has fueled many kinds of new media. Well-known influencers, celebrities and ordinary onlookers alike can post updates from their phones to Weibo, WeChat Moments or review sites and share what they see and think, so that "everyone now has a microphone". Whether it is breaking news or entertainment gossip, information spreads faster than we imagine: a single post can be reposted tens of thousands of times and read by millions within minutes. With massive information spreading explosively, how do you keep track of it in real time and respond appropriately? Is it really impossible to cope? Today, Yuheng (宇珩) from the Alibaba Cloud Intelligence group discusses what a big-data public-opinion system demands of its storage and compute systems, and how to design such a system around those demands.

In the big-data era, beyond media information, order volumes on e-commerce platforms and users' purchase reviews also strongly influence later consumers. Product designers need to aggregate, tally and analyze data from many platforms as the basis for deciding where a product goes next, and PR and marketing departments need to respond to public opinion promptly. All of this means the traditional opinion-monitoring system has to grow into a big-data opinion collection and analysis system. Looking at such a system in detail, it places the following requirements on storage and compute:

- Real-time ingestion of massive raw data: a complete opinion system needs an upstream collector of raw output, i.e. a crawler system. The crawler must collect page content from portals and self-media sites, deduplicate before crawling, and analyze and extract after crawling, for example following sub-pages.
- Processing of raw page data: whether the pages come from major portals or self-media, after crawling we need to extract data and turn the raw page content into structured data such as the article title and abstract; for product-review messages we also need to extract the meaningful reviews.
- Opinion analysis on structured data: once the raw output has become structured data, we need a real-time compute product that classifies it sensibly and then tags the classified content with sentiment. Depending on the business, the outputs differ: whether a brand currently has a trending topic, opinion impact analysis, propagation-path analysis, statistics and profiles of participating users, sentiment analysis, or major-event alerts.
- Storage and interactive query of intermediate and result data: many kinds of data are produced between cleansing the raw pages and the final opinion reports. Some of it goes to analysts for tuning the opinion analysis system, and some goes to business teams for decision-making. These queries can be very flexible, so the storage system needs full-text retrieval and flexible multi-field interactive analysis.
- Real-time alerting on major opinion events: besides the normal search and display of results, when a major event occurs we need real-time alerts.

This article focuses on architecture design. It first introduces today's mainstream big-data compute architectures and analyzes their trade-offs, then derives the big-data architecture for opinion analysis.

System design

Requirements analysis

Following the description at the beginning of the article, the overall flow of a massive-data opinion analysis system looks roughly like this:

Figure 1: Business flow of the opinion system

The raw page store must support massive data with low cost and low write latency. Once page data is written, structured extraction has to happen in real time, followed by denoising, word segmentation, image OCR and so on. Sentiment recognition on the segmented text and on images then produces the opinion result set. Traditional offline full recomputation cannot meet the timeliness requirements of an opinion system. While processing data, the compute engine may also need some metadata from the store, such as user information or sentiment-lexicon metadata. Besides the real-time path, the stored data needs periodic clustering to refine the sentiment lexicon, or the upstream business may trigger an update of the sentiment-tagging rules, after which the historical data is re-scored against the new lexicon. The opinion result set serves different needs: major events require real-time alerts, while the full result display layer must support full-text retrieval and flexible combinations of attribute filters; the business may analyze by confidence score, opinion time, or keyword combinations.

So the opinion big-data analysis system needs two kinds of compute. One is real-time compute: extracting structure from massive page content in real time, analyzing sentiment words, and storing the page-level opinion results. The other is offline compute: the system must be able to backtrack over historical data, refine the sentiment lexicon with the help of manual labeling, and correct some of the real-time results. The system design therefore has to pick a stack that can do both real-time compute and batch offline compute. Among open-source big-data solutions, the Lambda architecture fits these needs well, so let us look at it first.

The Lambda architecture (wiki)

Figure 2: The Lambda architecture

Lambda is arguably the most popular big-data architecture in the Hadoop/Spark ecosystem. Its biggest strength is that it supports batch processing of massive data (offline processing) and streaming processing (hot-data processing) at the same time. How does it work? The entry point is usually a queue service such as Kafka, which receives the data writes in real time. The queue has two subscribers. The first is the full data set (the upper half of the figure), which is persisted on a storage layer such as HDFS; when an offline job arrives, compute resources (Hadoop, for example) read the full data from storage and run the full batch logic, and after the map/reduce stages the full results are written into a structured store such as HBase for the business to query. The other subscriber is the stream engine, which consumes the queue in real time, for example Spark Streaming subscribing to Kafka; the streaming results are also written to a structured store. The structured store that receives both batch and streaming results is the "Serving Layer" (label 3 in the figure), which mainly serves result display and queries.

In this architecture, batch compute has to handle massive data and can join in other business metrics as the business requires. Its advantage is that the logic can be adjusted flexibly and the results can be recomputed repeatedly, with the same logic always producing the same results. Its drawback is a relatively long computation cycle, so it cannot deliver results in real time; as big-data computing evolved, this gave rise to the demand for real-time compute. In Lambda, real-time compute is implemented with a data stream: compared with batch processing, incremental stream processing means the data is the most recently produced, i.e. hot data. Precisely because of this, stream compute can meet the business's low-latency needs; in an opinion analysis system, for example, we usually want opinion results within minutes after a page is crawled, so the business has enough time to respond. Next, let us see how to build a complete opinion big-data architecture on the Lambda idea.

An open-source opinion big-data solution

The flow chart showed that building the opinion system involves several storage and compute systems, with different requirements on how data is organized and queried. With open-source big-data components combined with the Lambda architecture, the whole system can be designed as follows:

Figure 3: An open-source opinion architecture

At the very top sits a distributed crawler engine that fetches the subscribed pages according to crawl tasks and writes the page content into a Kafka queue in real time. Based on the compute requirements described earlier, data entering Kafka flows into the stream engine (Spark or Flink, for example) in real time and is also persisted into HBase as the full data store. Storing all pages supports crawl deduplication and batch offline compute. The stream job performs structured extraction on the raw pages, turning unstructured page content into structured data and segmenting it, for example extracting the title, author and abstract and segmenting the body and abstract; the extraction and segmentation results are written back to HBase. After extraction and segmentation, the stream engine combines the results with the sentiment lexicon to analyze page sentiment and decide whether an opinion item is produced. The opinion results from the stream engine are stored in MySQL or HBase; to make the result set easy to search and browse, the data is also synchronized into a search engine such as Elasticsearch for combined attribute queries. If a major opinion event occurs, it is written to a Kafka queue to trigger an alert. The full structured data is periodically processed offline by Spark, either to update the sentiment lexicon or to recompute historical data with new strategies and correct the real-time results.

Analysis of the open-source architecture

The opinion big-data architecture above uses Kafka to feed stream compute and HBase to feed batch compute, realizing the "batch view" and "real-time view" of Lambda. The architecture itself is quite clear and serves both online and offline compute well. But running it in production is not easy, mainly for the following reasons:

- The architecture involves many storage and compute systems: Kafka, HBase, Spark, Flink, Elasticsearch. Data flows between them, and operating every one of these open-source products well is a major challenge; a failure in any product, or in any channel between products, affects the timeliness of the opinion analysis results.
- To support both batch and stream compute, the raw pages have to be stored in both Kafka and HBase: offline compute consumes the HBase data while stream compute consumes Kafka. This duplicates storage and also means two sets of compute logic to maintain, driving up the cost of developing and maintaining the code.
- The opinion results live in MySQL or HBase, and to enrich the combined queries the data has to be synchronized into Elasticsearch; at query time the MySQL and Elasticsearch results may have to be combined. The database is not skipped in favor of writing results straight into a search system because search systems fall short of databases in real-time write capability and data reliability; the common industry practice is to integrate a database with a search system so the combined system has the strengths of both, but synchronizing data between two engines and querying across them adds a lot of extra operational and development cost.

A new big-data architecture: Lambda plus

After the analysis above, everyone will have the same question: is there a simplified big-data architecture that still satisfies Lambda's compute requirements while reducing the storage, compute and number of components involved?

Jay Kreps of LinkedIn proposed the Kappa architecture; see the references at the end for a Lambda-versus-Kappa comparison, which is not expanded here. In short, Kappa removes the full data store to avoid the duplicated storage: it keeps a longer log in Kafka, and when a recomputation or backfill is needed it re-subscribes from the head of the queue and processes everything retained in Kafka once more as a stream. The benefit of this design is that the pain of maintaining two stores and two sets of compute logic disappears; the weakness is that a queue can only retain a limited amount of history, so unbounded backtracking is hard.

Following Kappa's line of improvement on Lambda a step further: if there were a storage engine that supports efficient writes and random queries like a database, and at the same time behaves like a queue with first-in-first-out semantics, could we blend Lambda and Kappa together into a "Lambda plus" architecture? On top of Lambda, the new architecture should improve the following:

- Support stream and batch compute while letting the compute logic be reused: "one code base, two kinds of compute".
- Unify the storage of full historical data and real-time incremental data: "one store, two kinds of compute".
- To serve opinion-result queries, store the "batch view" and "real-time view" in a system that supports both high-throughput real-time writes and multi-field combined search plus full-text retrieval.

In summary, the core of the new architecture is solving storage and attaching compute to it flexibly. We want the overall solution to look like the following:

Figure 4: The Lambda plus architecture

- Data streams are written in real time into a distributed database; thanks to the database's query capability, the full data set can easily feed a batch compute system for offline processing.
- The database exposes its log through an interface that supports incremental reads, so a stream compute engine can attach for real-time compute.
- Batch and stream results are written back into the distributed database, whose rich query semantics allow interactive queries over the results.

In this architecture, the storage layer replaces the queue service of the big-data stack with the database's main table plus the database log, and the compute layer picks an engine that natively supports both batch and streaming, such as Flink or Spark. This way we can backtrack over unlimited history like Lambda, while keeping one set of logic and one store for both kinds of compute like Kappa. We call this architecture "Lambda plus". The rest of the article shows in detail how to build it on Alibaba Cloud.

The opinion system architecture on the cloud

Among Alibaba Cloud's many storage and compute products, two fit the requirements of the big-data architecture above, and we use them to build the whole opinion big-data system: for storage, Tablestore, Alibaba Cloud's self-developed distributed multi-model database; for compute, Blink, which unifies stream and batch processing.

Figure 5: The cloud opinion big-data architecture

At the storage layer this architecture is based entirely on Tablestore: one database covers the different storage needs. As described earlier, crawler data goes through four stages as it flows through the system: raw page content, structured page data, analysis-rule metadata plus opinion results, and the opinion-result index. We use Tablestore's wide-row and schema-free features to merge raw pages and structured page data into a single page-data table. The page table is connected to the compute systems through Tablestore's new Tunnel Service. The Tunnel Service is built on the database log: data is organized in write order, and it is exactly this property that gives the database queue-like streaming consumption. The storage engine thus supports both database-style random access and queue-style in-order access, which satisfies the requirement above of blending the Lambda and Kappa architectures. The analysis-rule metadata table consists of the analysis rules and the sentiment lexicon, corresponding to the dimension tables in real-time compute.

For compute we choose Blink, Alibaba Cloud's real-time compute product that supports both stream and batch processing. Like Tablestore, it can easily scale out horizontally, so compute resources grow elastically with the business data. Using Tablestore together with Blink has the following advantages:

- Tablestore is already deeply integrated with Blink and supports source tables, dimension tables and sink tables, so the business does not have to write code to move data around.
- The number of components drops sharply, from six or seven in the open-source solution to two. Tablestore and Blink are both fully managed, zero-ops products with good horizontal elasticity, so scaling for business peaks is painless and the operational cost of the big-data architecture falls dramatically.
- The business only needs to focus on its data-processing logic; the interaction with Tablestore is already built into Blink.
- In the open-source solution, if a database source wants to feed real-time compute, it also has to double-write a queue for the stream engine to consume. In this architecture the database is both the data table and the queue channel, supporting real-time consumption of incremental data, which greatly simplifies the development and usage cost of the architecture.
- Stream and batch are unified. Real-time behavior is critical in an opinion system, so we need a real-time compute engine, but beyond real-time compute Blink can also batch-process data in Tablestore. During off-peak hours we often need to batch-process some data and write the feedback results back to Tablestore, for example sentiment-analysis feedback. An architecture that supports both stream and batch processing is ideal, and its benefit is that one set of analysis code serves both real-time stream compute and offline batch processing.

The whole pipeline produces opinion results in real time. Alerting on major opinion events is implemented by connecting Tablestore to Function Compute triggers: the two have seamless integration for incremental data, so write events on the result table can easily trigger an SMS or email notification through Function Compute. The complete opinion analysis results are displayed and searched using Tablestore's new multi-element index (多元索引) feature, which completely removes the pain points of the open-source HBase+Solr dual-engine approach:

- Operations are complex: you need the skills to operate both HBase and Solr, and you also have to maintain the data-synchronization pipeline between them.
- Solr's data consistency is not as strong as HBase's; the data semantics of HBase and Solr are not exactly the same, and Solr/Elasticsearch can hardly be as strict about data consistency as a database. In some extreme cases the data becomes inconsistent, and open-source solutions have no easy way to compare consistency across the two systems.
- Two query APIs have to be maintained, the HBase client and the Solr client; fields not present in the index require an extra lookup back into HBase, so usability is poor.

References

Lambda architecture: https://mapr.com/tech-briefs/stream-processing-mapr/
Kappa architecture: https://www.oreilly.com/ideas/questioning-the-lambda-architecture
Lambda vs. Kappa: https://www.ericsson.com/en/blog/2015/11/data-processing-architectures--lambda-and-kappa

Originally published on 2019-07-12. Author: 宇珩. The article comes from Yunqi Community partner "阿里技术"; follow "阿里技术" for related information.


K8S Do-It-Yourself Series - 1.1 - Cluster Setup

Preparation

As a record of learning and hands-on practice, I plan to write a series of practical articles that pick the most commonly used Kubernetes features based on real usage scenarios, and install the cluster with kubeadm, currently the most popular installer. All of the lab environments in this article are VMs virtualized on my personal workstation; if you have a reasonably capable desktop workstation, I recommend following the steps in this article once yourself, as it helps in understanding the concepts and features of Kubernetes.

Prerequisites:

Two VMs; the OS I installed is Ubuntu 16.04.
Make sure the two VMs can reach each other on the network. To keep the topology as simple as possible I use VirtualBox as the hypervisor, the host runs Ubuntu 19.04, and the network mode is Bridge.

Solve the mirror problem first

Ubuntu APT: search for "ubuntu" at https://opsx.alibaba.com/mirror
Kubernetes APT repo: search for "Kubernetes" at https://opsx.alibaba.com/mirror
Docker image repo: # this step requires the docker daemon to be installed on the host first; see the note in the cluster installation section

1. Install/upgrade the Docker client

A Docker client newer than 1.10.0 is recommended; see the docker-ce reference documentation.

2. Configure the registry mirror

For Docker clients newer than 1.10.0, you can enable the accelerator by editing the daemon configuration file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ft3ykfyc.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Cluster installation

kubelet kubeadm kubectl

apt-get update
apt-get install -y kubelet kubeadm kubectl

docker

# See https://kubernetes.io/docs/setup/cri/
# Install Docker CE
## Set up the repository:
### Install packages to allow apt to use a repository over HTTPS
apt-get update && apt-get install apt-transport-https ca-certificates curl software-properties-common
### Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
### Add Docker apt repository.
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
## Install Docker CE.
apt-get update && apt-get install docker-ce=18.06.2~ce~3-0~ubuntu
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart docker.
systemctl daemon-reload
systemctl restart docker

Initialize the cluster

Make sure swap is off:

swapoff -a
vim /etc/fstab
...
# comment this
#UUID=2746cf1b-d1ab-41e2-8a31-8c1ed2cca910 none swap sw 0 0

kubeadm init

~ kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=stable
I0608 11:05:15.863459 9577 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0608 11:05:15.863537 9577 version.go:97] falling back to the local client version: v1.14.3
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

Solve the image pull problem

The pull above is extremely slow, so pull the images manually from a mirror registry and tag them to stand in for the official ones.

# list the images that will be used
~ kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.14.3
k8s.gcr.io/kube-controller-manager:v1.14.3
k8s.gcr.io/kube-scheduler:v1.14.3
k8s.gcr.io/kube-proxy:v1.14.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
# pull the images manually
docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.3
docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.3
docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.3
docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.14.3
docker pull docker.io/mirrorgooglecontainers/pause:3.1
docker pull docker.io/mirrorgooglecontainers/etcd:3.3.10
docker pull docker.io/coredns/coredns:1.3.1
# tag them manually
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

Run kubeadm init again; this time it succeeds, with the following output:

~ kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=stable
[init] Using Kubernetes version: v1.14.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [worker01 localhost] and IPs [192.168.101.113 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [worker01 localhost] and IPs [192.168.101.113 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [worker01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.101.113]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.005322 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node worker01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node worker01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ss6flg.csw4u0ok134n2fy1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.101.113:6443 --token ss6flg.csw4u0ok134n2fy1 \
    --discovery-token-ca-cert-hash sha256:bac9a150228342b7cdedf39124ef2108653db1f083e9f547d251e08f03c41945

Install the network plugin

For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
Set /proc/sys/net/bridge/bridge-nf-call-iptables to 1 by running sysctl net.bridge.bridge-nf-call-iptables=1 to pass bridged IPv4 traffic to iptables' chains. This is a requirement for some CNI plugins to work; for more information see the flannel documentation.
Make sure that your firewall rules allow UDP ports 8285 and 8472 traffic for all hosts participating in the overlay network.
Note that flannel works on amd64, arm, arm64, ppc64le and s390x under Linux. Windows (amd64) is claimed as supported in v0.11.0 but the usage is undocumented.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

For more information about flannel, see the CoreOS flannel repository on GitHub.
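A small aside on the bridge-nf-call-iptables requirement quoted above: setting it on the command line with sysctl does not survive a reboot. A minimal sketch of making it persistent; the file name under /etc/sysctl.d is arbitrary, and the modprobe line is only needed if the br_netfilter module is not already loaded:

modprobe br_netfilter            # load the bridge netfilter module if it is missing
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system                  # reload all sysctl configuration files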
After installation completes, check that all components are running successfully:

~ kubectl get all --all-namespaces
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-fb8b8dccf-vmdsj            1/1     Running   0          24m
kube-system   pod/coredns-fb8b8dccf-xrhrs            1/1     Running   0          24m
kube-system   pod/etcd-worker01                      1/1     Running   0          23m
kube-system   pod/kube-apiserver-worker01            1/1     Running   0          23m
kube-system   pod/kube-controller-manager-worker01   1/1     Running   0          23m
kube-system   pod/kube-flannel-ds-amd64-cgnnz        1/1     Running   0          4m18s
kube-system   pod/kube-proxy-vfvkp                   1/1     Running   0          24m
kube-system   pod/kube-scheduler-worker01            1/1     Running   0          23m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  24m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   24m

NAMESPACE     NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-system   daemonset.apps/kube-flannel-ds-amd64     1         1         1       1            1           beta.kubernetes.io/arch=amd64     4m18s
kube-system   daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       4m18s
kube-system   daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     4m18s
kube-system   daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   4m18s
kube-system   daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     4m18s
kube-system   daemonset.apps/kube-proxy                1         1         1       1            1           <none>                            24m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           24m

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-fb8b8dccf   2         2         2       24m

Run a demo pod

~ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
~ kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   0/1     1            0           9s
~ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65f88748fd-95gkh   0/1     Pending   0          21s
~ kubectl describe pod/nginx-65f88748fd-95gkh
Name:               nginx-65f88748fd-95gkh
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=nginx
                    pod-template-hash=65f88748fd
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      ReplicaSet/nginx-65f88748fd
Containers:
  nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5kf45 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-5kf45:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5kf45
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  30s   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

The error message is quite clear: no node is available. That is because our only node, worker01, is the master, and a master node carries a taint by default and does not schedule workload pods. Let us remove the taint so the nginx pod can be scheduled onto it.

~ kubectl describe node worker01
Name:               worker01
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=worker01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"86:f6:8f:29:d7:c7"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.101.113
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 08 Jun 2019 11:56:28 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
...
~ kubectl taint nodes --all node-role.kubernetes.io/master-
node/worker01 untainted
~ kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
nginx-65f88748fd-95gkh   1/1     Running   0          4m11s   10.244.0.4   worker01   <none>           <none>

The pod is now in the Running state; test it:

~ curl 10.244.0.4
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Success!

Join a node

On worker02, run:

~ kubeadm join 192.168.101.113:6443 --token ss6flg.csw4u0ok134n2fy1 \
    --discovery-token-ca-cert-hash sha256:bac9a150228342b7cdedf39124ef2108653db1f083e9f547d251e08f03c41945
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check:

~ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
worker01   Ready    master   52m     v1.14.3
worker02   Ready    <none>   7m12s   v1.14.3

Scale the demo deployment to 2 replicas:

~ kubectl scale deployment.v1.apps/nginx --replicas=2
deployment.apps/nginx scaled
~ kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
nginx-7cffb9df96-8n884   1/1     Running   0          5m2s   10.244.0.6    worker01   <none>           <none>
nginx-7cffb9df96-rbvsr   1/1     Running   0          3s     10.244.1.10   worker02   <none>           <none>
~ http 10.244.1.10
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: keep-alive
Content-Length: 612
Content-Type: text/html
Date: Sat, 08 Jun 2019 05:03:57 GMT
ETag: "5ce409fd-264"
Last-Modified: Tue, 21 May 2019 14:23:57 GMT
Server: nginx/1.17.0

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Success! At this point we have a two-node cluster up and running, built on the Flannel network plugin with VXLAN as the network mode.
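As a small follow-up experiment (not part of the original article): the pod IPs used in the curl tests above change whenever a pod is rescheduled, so a more realistic check is to put a Service in front of the deployment and hit its ClusterIP instead. A minimal sketch; the port values simply mirror nginx's default, and <cluster-ip> is a placeholder for whatever IP the Service is assigned:

~ kubectl expose deployment nginx --port=80 --target-port=80
~ kubectl get svc nginx        # note the CLUSTER-IP column
~ curl <cluster-ip>            # should return the same nginx welcome page from either node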
