
Featured Articles


Apple executives respond to the "personalized Siri" delay: architectural limitations kept it below the expected quality bar

Siri, Apple's AI assistant, was conspicuously absent from the spotlight at this year's WWDC25 keynote. Apple mentioned it only briefly, reiterating that development has gone slower than expected, that integrating Apple Intelligence will take more time, and that a release is now expected "next year."

After the keynote, Craig Federighi, Apple's senior vice president of software engineering, and Greg Joswiak, vice president of worldwide marketing, sat down for an in-depth conversation about Apple's strategic thinking on Apple Intelligence, Siri, and AI.

Apple had promised a Siri update integrated with Apple Intelligence by the end of 2024, but failed to ship it on schedule, and in spring 2025 admitted the feature needed more time. The reasons were never clear to outsiders, and Apple does not normally show off technology or products it cannot deliver on time. In the interview, Federighi explained in detail what went wrong and how Apple plans to keep the project moving.

Federighi said the team realized during development that it could make Siri smarter by combining on-device large language models, the Private Cloud Compute infrastructure, and an on-device semantic index, and envisioned a V1 architecture that orchestrates App Intents to trigger more on-device actions so Siri could do more. For example, drawing on personal knowledge in the semantic index, Siri could find relevant content in Messages or Mail when a user asks a specific question, then act on it through App Intents. Under the V1 architecture, however, those capabilities were never fully delivered.

While Apple worked on the V1 Siri architecture, it was also building a V2 architecture, which Federighi called "a deeper, end-to-end architecture that we knew was ultimately what we wanted to create — the architecture that gives Siri its full capabilities."

"We spent months improving how the V1 architecture handled more App Intents and search," Federighi said. "But fundamentally we found that the limitations of that architecture could not reach the level of quality our customers expect. So we decided to move to the V2 architecture. By the time we realized this, it was already spring, so we told the world we could not ship on schedule and would continue the transition to the new architecture."

Federighi added that even on the second-generation architecture, Apple is still refining Siri to make sure it reaches its best form. Greg Joswiak, Apple's marketing chief, confirmed in the interview that "next year" means 2026; MacRumors speculates the personalized Siri features will most likely arrive with iOS 26.4 in spring 2026.

Related reading: Apple delays Siri's AI features


GOTC 2024 summit forums: 20+ executives on "Open-Source Ecosystems and Commercialization" and "The AIGC Industry Frontier"

The Global Open-source Technology Conference (GOTC) 2024 opens on August 15-16, 2024 at the Zhangjiang Science Hall in Shanghai. Co-hosted with Shanghai Pudong Software Park, GOTC 2024 combines two event brands: GOTC (Global Open-source Technology Conference) and GOGC (Global Open-source Geek Carnival).

The conference features one main forum, two summit forums, and six parallel topic tracks, digging into AI, databases, cloud native, and other frontier areas. It will bring together developers, community members, entrepreneurs, business leaders, and media from around the world, along with industry practitioners, cross-disciplinary talent, and younger voices from open-source project ecosystems.

The summit forums take place on August 16. The morning forum centers on "Open-Source Ecosystems and Commercialization": a healthy ecosystem is the soil in which open source and its technology grow, and commercialization is the necessary path to open source's sustainability. The forum focuses on building and developing open-source ecosystems, featuring partnerships among ecosystem players as well as business insights into sustainable open source.

The afternoon forum turns to "The AIGC Industry Frontier." Experts from Unity, SiliconFlow, SenseTime, and other well-known companies will discuss innovation trends in AIGC, its commercial potential, and its broader social impact, covering technical innovation and integration, industry deployments, and AIGC content creation and distribution.

Registration for GOTC 2024 is now open, and open-source enthusiasts worldwide are invited to Shanghai. Fifty professional tickets (worth 499 RMB) are available free for a limited time: https://qaxb95n3g50.feishu.cn/share/base/form/shrcntXjImLZ2L4HsDtd976XXmh


Apple executive admits iCloud photo-scanning features confused the public, pledges the data will not be misused

On August 13, Craig Federighi, Apple's senior vice president of software engineering, responded to public concern over features that scan iCloud photo libraries, admitting that Apple handled the rollout poorly and that the features had confused the public.

Federighi was addressing the child-safety features Apple announced on August 5: a new photo-screening system for CSAM (Child Sexual Abuse Material), a communication-safety feature in iMessage (which can notify parents about sensitive content their children are viewing), and updates to Siri and Search.

Over the previous two weeks, controversy had centered on the CSAM detection and iMessage communication-safety features, because CSAM detection scans the user's iCloud photo library and the iMessage feature inspects images in children's chats.

Federighi told The Wall Street Journal that Apple built "multiple levels of auditability" into the CSAM system to protect it and to ensure the data cannot be exploited by governments or other third parties. The system matches a user's photos against hashed images of known CSAM before the photos are uploaded to iCloud. Flagged accounts undergo human review at Apple and may be reported to the National Center for Missing and Exploited Children (NCMEC).

The iMessage communication-safety feature applies when children send or receive sexually explicit images: the image is blurred, the child is warned before viewing it, and parents can opt in to be notified.

Federighi also revealed new details about the system's safeguards. Apple only learns about an account — and only about the matching images — once 30 images in the user's library match the known-CSAM database; it learns nothing about the user's other photos. Apple checks iCloud photos against the NCMEC list, looking for exact matches with known CSAM images, and the scanning happens directly on the user's device, not remotely on iCloud servers.

Apple has said the CSAM system will launch only in the United States at first and that it will consider other countries case by case. It also told the media it will ship the hash database of known CSAM in the operating system in all countries, but the scanning feature will initially be enabled only on US devices. The Wall Street Journal further clarified that an independent auditor will verify the images involved.
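To make the threshold logic concrete: the toy shell sketch below is emphatically not Apple's system (which uses perceptual NeuralHash matching and cryptographic threshold secret sharing, not plain file hashes); it only illustrates the reported "count exact matches against a known hash list, and act only past a threshold" behavior. known_hashes.txt and ~/photos are hypothetical placeholders.

# Toy illustration only -- not Apple's actual mechanism.
THRESHOLD=30
matches=0
for f in ~/photos/*; do
    h=$(sha256sum "$f" | awk '{print $1}')
    # count exact matches against the known-hash list
    grep -qx "$h" known_hashes.txt && matches=$((matches + 1))
done
# nothing at all happens below the threshold
if [ "$matches" -ge "$THRESHOLD" ]; then
    echo "account flagged for human review ($matches matches)"
fi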


Building an enterprise-grade, highly available Kubernetes cluster (pitfall-free edition — if you do hit one, get in touch)

Contents
1. Server planning
2. Preparation (system config, kernel tuning, yum repos) — all nodes
3. keepalived setup — master nodes
4. haproxy setup — master nodes
5. Docker installation — all nodes
6. Installing kubeadm, kubelet, and kubectl — all nodes
7. Installing the Kubernetes cluster — on the master holding the VIP
8. Installing the cluster network — master node
9. Joining the remaining nodes
10. Deploying the dashboard (k8s-master-01)
11. Deploying the ingress controller
12. Deploying metrics
13. Deploying a Kubernetes DNS cache
14. Scaling the cluster up and down, maintenance
15. Common Kubernetes operations
16. References
A follow-up post will cover real-world application deployment cases.

1. Server planning (on Tencent Cloud servers)

k8s-master-01   10.206.16.14   master
k8s-master-02   10.206.16.15   master
k8s-master-03   10.206.16.16   master
k8s-node-01     10.206.16.8    node
k8s-node-02     10.206.16.9    node
k8s-harbor      10.206.16.4    harbor
vip             10.206.16.18   (how to request a VIP on Tencent Cloud: https://cloud.tencent.com/document/product/215/36694)

OS: CentOS 8. Kubernetes version: 1.16. Machine spec: 4 cores, 16 GB RAM.

2. Preparation

2.1 Set the hostname (run the matching command on each host per the plan):
hostnamectl set-hostname k8s-master-01

2.2 Edit the hosts file (on all hosts):
cat <<EOF >>/etc/hosts
10.206.16.18 master.k8s.io k8s-vip
10.206.16.14 master01.k8s.io k8s-master-01
10.206.16.15 master02.k8s.io k8s-master-02
10.206.16.16 master03.k8s.io k8s-master-03
10.206.16.8 node01.k8s.io k8s-node-01
10.206.16.9 node02.k8s.io k8s-node-02
10.206.16.4 k8s-harbor
EOF

2.3 Disable the firewall, SELinux, and swap (on all hosts):
systemctl stop firewalld && systemctl disable firewalld && setenforce 0 && swapoff -a
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

2.4 Configure kernel parameters (on all hosts):
cat >/etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

2.5 Set resource limits (on all hosts):
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

2.6 Configure the yum repos (on all hosts):
yum install -y wget
mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos8_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
yum clean all && yum makecache

2.7 Configure the Kubernetes repo (on all hosts):
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.8 Configure the Docker repo (on all hosts):
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2.9 Install supporting packages (on all hosts):
yum install -y conntrack-tools libseccomp libtool-ltdl
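Before moving on, it is worth confirming the prep steps actually took effect; a quick sanity check (same hosts and files as above):

# All of these should come back clean on every host.
swapon --show                               # no output means swap is off
getenforce                                  # expect Permissive (Disabled after reboot)
sysctl net.ipv4.ip_forward                  # expect = 1
sysctl net.bridge.bridge-nf-call-iptables   # expect = 1 (requires br_netfilter loaded)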
3. Deploying keepalived (on the three masters)

3.1 Install keepalived (on the three masters):
yum install -y keepalived

3.2 Configure. The other two masters are configured almost identically: only state becomes BACKUP, the priority differs, and the unicast source/peer IPs change; the remaining fields are not explained here.

k8s-master-01:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 250
    nopreempt                    # non-preemptive mode
    preempt_delay 10             # preemption delay
    advert_int 1                 # advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab11
    }
    unicast_src_ip 10.206.16.14  # this node's private IP
    unicast_peer {               # the other two masters
        10.206.16.15
        10.206.16.16
    }
    virtual_ipaddress {
        10.206.16.18
    }
    track_script {
        check_haproxy
    }
}
EOF

k8s-master-02:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    nopreempt
    preempt_delay 10
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab11
    }
    unicast_src_ip 10.206.16.15  # this node's private IP
    unicast_peer {
        10.206.16.14
        10.206.16.16
    }
    virtual_ipaddress {
        10.206.16.18
    }
    track_script {
        check_haproxy
    }
}
EOF

k8s-master-03:
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id k8s
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 150
    nopreempt
    preempt_delay 10
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab11
    }
    unicast_src_ip 10.206.16.16
    unicast_peer {
        10.206.16.14
        10.206.16.15
    }
    virtual_ipaddress {
        10.206.16.18
    }
    track_script {
        check_haproxy
    }
}
EOF

3.3 Start and check:
# enable at boot
systemctl enable keepalived.service
# start
systemctl start keepalived.service
# check status
systemctl status keepalived.service

After starting, the eth0 info on k8s-master-01 shows the VIP:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:f2:57:46 brd ff:ff:ff:ff:ff:ff
    inet 10.206.16.14/20 brd 10.206.31.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.206.16.18/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fef2:5746/64 scope link
       valid_lft forever preferred_lft forever

Try stopping the keepalived service on k8s-master-01 and check that the VIP fails over to another master; then restart keepalived on k8s-master-01 and check that the VIP can float back. That proves the configuration works.
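The failover drill described above can be scripted; a rough sketch, using the IPs from the plan and assuming passwordless SSH to the peers:

# Rough VIP failover drill (run from k8s-master-01).
ip addr show eth0 | grep 10.206.16.18       # VIP should be here initially
systemctl stop keepalived.service           # simulate a failure
sleep 5
ssh root@10.206.16.15 'ip addr show eth0 | grep 10.206.16.18'   # VIP should surface on a BACKUP
systemctl start keepalived.service          # recover
sleep 5
ip addr show eth0 | grep 10.206.16.18       # check whether the VIP has returned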
4. Deploying haproxy (on the three masters)

4.1 Install haproxy (on the three masters):
yum install -y haproxy

4.2 Configure (on the three masters):
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode            tcp
    bind            *:16443
    option          tcplog
    default_backend kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode    tcp
    balance roundrobin
    server master01.k8s.io 10.206.16.14:6443 check
    server master02.k8s.io 10.206.16.15:6443 check
    server master03.k8s.io 10.206.16.16:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind          *:1080
    stats auth    admin:awesomePassword
    stats refresh 5s
    stats realm   HAProxy\ Statistics
    stats uri     /admin?stats
EOF

4.3 Start and check (on the three masters):
# enable at boot
systemctl enable haproxy
# start haproxy
systemctl start haproxy
# check status
systemctl status haproxy

4.4 Confirm the service ports are listening:
[root@VM-16-14-centos yum.repos.d]# netstat -lntup|grep haproxy
tcp   0   0 0.0.0.0:1080    0.0.0.0:*   LISTEN   37567/haproxy
tcp   0   0 0.0.0.0:16443   0.0.0.0:*   LISTEN   37567/haproxy
udp   0   0 0.0.0.0:48413   0.0.0.0:*            37565/haproxy
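Two quick reachability checks against the VIP, using the stats credentials from the config above; a sketch, not part of the original walkthrough:

# The stats page should answer immediately.
curl -s -o /dev/null -w '%{http_code}\n' -u admin:awesomePassword 'http://10.206.16.18:1080/admin?stats'   # expect 200
# The apiserver frontend will only answer once the control plane is up (section 7).
curl -k https://10.206.16.18:16443/version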
5. Installing Docker (on all nodes)

5.1 Install:
# Step 1: install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: list the available Docker CE versions
yum list docker-ce.x86_64 --showduplicates | sort -r
# Step 4: install Docker CE
yum makecache
yum install -y docker-ce

5.2 Configure. Kubernetes currently recommends the systemd cgroup driver for Docker:
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "insecure-registries" : ["10.206.16.4"]
}
EOF

Edit Docker's service file to point its data directory at the mounted disk (--graph /data/docker):
mkdir /data/docker
sed -i "s#containerd.sock#containerd.sock --graph /data/docker#g" /lib/systemd/system/docker.service

If pods cannot reach each other, add the following drop-in:
mkdir -p /etc/systemd/system/docker.service.d/
cat>/etc/systemd/system/docker.service.d/10-docker.conf<<EOF
[Service]
ExecStartPost=/sbin/iptables --wait -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecStopPost=/bin/bash -c '/sbin/iptables --wait -D FORWARD -s 0.0.0.0/0 -j ACCEPT &> /dev/null || :'
ExecStartPost=/sbin/iptables --wait -I INPUT -i cni0 -j ACCEPT
ExecStopPost=/bin/bash -c '/sbin/iptables --wait -D INPUT -i cni0 -j ACCEPT &> /dev/null || :'
EOF

5.3 Start Docker:
systemctl daemon-reload
systemctl start docker.service
systemctl enable docker.service
systemctl status docker.service

6. Installing kubeadm, kubelet, and kubectl (on all nodes)

6.1 Install:
yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
systemctl enable kubelet

6.2 Enable kubectl auto-completion:
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

7. Installing the Kubernetes cluster (on k8s-master-01, which holds the VIP)

7.1 Create the config file:
mkdir /usr/local/kubernetes/manifests -p
cd /usr/local/kubernetes/manifests/
cat > kubeadm-config.yaml <<EOF
apiServer:
  certSANs:
    - k8s-master-01
    - k8s-master-02
    - k8s-master-03
    - master.k8s.io
    - 10.206.16.14
    - 10.206.16.15
    - 10.206.16.16
    - 10.206.16.18
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
EOF
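Before running kubeadm init, a quick check that Docker actually picked up the systemd cgroup driver and the relocated data directory configured in section 5:

docker info --format '{{.CgroupDriver}}'    # expect: systemd
docker info --format '{{.DockerRootDir}}'   # expect: /data/docker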
7.2 Initialize the master node:

[root@VM-16-14-centos manifests]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
    [WARNING FileExisting-tc]: tc not found in system path
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.k8s.io k8s-master-01 k8s-master-02 k8s-master-03 master.k8s.io] and IPs [10.1.0.1 10.206.16.14 10.206.16.14 10.206.16.15 10.206.16.16 10.206.16.18 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.206.16.14 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-01 localhost] and IPs [10.206.16.14 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.002615 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ee3zom.l4xeahsfqcj9uvvz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root: kubeadm join master.k8s.io:16443 --token ee3zom.l4xeahsfqcj9uvvz \ --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b \ --control-plane Then you can join any number of worker nodes by running the following on each as root: kubeadm join master.k8s.io:16443 --token ee3zom.l4xeahsfqcj9uvvz \ --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b 3.按照提示配置环境变量 mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config 4.查看集群状态 [root@VM-16-14-centos manifests]# kubectl get cs NAME AGE scheduler <unknown> controller-manager <unknown> etcd-0 <unknown> [root@VM-16-14-centos manifests]# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-58cc8c89f4-ch8pn 0/1 Pending 0 2m45s coredns-58cc8c89f4-qdz7t 0/1 Pending 0 2m45s etcd-k8s-master-01 1/1 Running 0 113s kube-apiserver-k8s-master-01 1/1 Running 0 99s kube-controller-manager-k8s-master-01 1/1 Running 0 98s kube-proxy-wvp9b 1/1 Running 0 2m45s kube-scheduler-k8s-master-01 1/1 Running 0 2m6s 里处于pending状态的原因是因为还没有安装网络组件 九.安装集群网络(master操作) 1.安装flannel插件 [root@VM-16-14-centos flannel]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml podsecuritypolicy.policy/psp.flannel.unprivileged created clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.apps/kube-flannel-ds created 2.检查 [root@VM-16-14-centos flannel]# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-58cc8c89f4-ch8pn 1/1 Running 0 32m coredns-58cc8c89f4-qdz7t 1/1 Running 0 32m etcd-k8s-master-01 1/1 Running 0 31m kube-apiserver-k8s-master-01 1/1 Running 0 31m kube-controller-manager-k8s-master-01 1/1 Running 0 31m kube-flannel-ds-qljzc 1/1 Running 0 63s kube-proxy-wvp9b 1/1 Running 0 32m kube-scheduler-k8s-master-01 1/1 Running 0 32m 十.其他节点加入集群(master,node都需要加入) 1.master加入集群 1.1复制密钥和相关文件(k8s-master-01执行) 建立免登录 ssh-keygen -t rsa ssh-copy-id root@10.206.16.15 ssh-copy-id root@10.206.16.16 复制文件到k8s-master-02 ssh root@10.206.16.15 mkdir -p /etc/kubernetes/pki/etcd scp /etc/kubernetes/admin.conf root@10.206.16.15:/etc/kubernetes scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.206.16.15:/etc/kubernetes/pki scp /etc/kubernetes/pki/etcd/ca.* root@10.206.16.15:/etc/kubernetes/pki/etcd 复制文件到k8s-master-03 ssh root@10.206.16.16 mkdir -p /etc/kubernetes/pki/etcd scp /etc/kubernetes/admin.conf root@10.206.16.16:/etc/kubernetes scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.206.16.16:/etc/kubernetes/pki scp /etc/kubernetes/pki/etcd/ca.* root@10.206.16.16:/etc/kubernetes/pki/etcd 1.2 master加入集群 分别在其他两个master ,k8s-master-02,k8s-master-03上执行k8s-master-01 init后输出的join命令,如果找不到可以在k8s-master-01执行 kubeadm token create --print-join-command 在k8s-master-02上执行操作,需要带上参数--control-plane表示把master控制节点加入到集群 kubeadm join master.k8s.io:16443 --token 13dqfw.8vteayxksdn03mve --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b --control-plane mkdir -p $HOME/.kube 
9. Joining the remaining nodes (both masters and workers join here)

9.1 Copy the keys and related files (run on k8s-master-01):

Set up passwordless SSH:
ssh-keygen -t rsa
ssh-copy-id root@10.206.16.15
ssh-copy-id root@10.206.16.16

Copy the files to k8s-master-02:
ssh root@10.206.16.15 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@10.206.16.15:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.206.16.15:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@10.206.16.15:/etc/kubernetes/pki/etcd

Copy the files to k8s-master-03:
ssh root@10.206.16.16 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@10.206.16.16:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@10.206.16.16:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@10.206.16.16:/etc/kubernetes/pki/etcd

9.2 Join the masters. On k8s-master-02 and k8s-master-03, run the join command that kubeadm init printed on k8s-master-01; if you have lost it, regenerate it on k8s-master-01 with kubeadm token create --print-join-command. On masters, add the --control-plane flag so the node joins as a control-plane node.

On k8s-master-02:
kubeadm join master.k8s.io:16443 --token 13dqfw.8vteayxksdn03mve --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

On k8s-master-03:
kubeadm join master.k8s.io:16443 --token 13dqfw.8vteayxksdn03mve --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b --control-plane
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

9.3 Verify the masters joined:
[root@VM-16-14-centos flannel]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-01   Ready    master   74m   v1.16.3
k8s-master-02   Ready    master   18m   v1.16.3
k8s-master-03   Ready    master   87s   v1.16.3
[root@VM-16-14-centos flannel]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-58cc8c89f4-ch8pn                1/1     Running   0          74m
kube-system   coredns-58cc8c89f4-qdz7t                1/1     Running   0          74m
kube-system   etcd-k8s-master-01                      1/1     Running   0          73m
kube-system   etcd-k8s-master-02                      1/1     Running   0          19m
kube-system   etcd-k8s-master-03                      1/1     Running   0          93s
kube-system   kube-apiserver-k8s-master-01            1/1     Running   0          73m
kube-system   kube-apiserver-k8s-master-02            1/1     Running   0          19m
kube-system   kube-apiserver-k8s-master-03            1/1     Running   0          94s
kube-system   kube-controller-manager-k8s-master-01   1/1     Running   1          73m
kube-system   kube-controller-manager-k8s-master-02   1/1     Running   0          19m
kube-system   kube-controller-manager-k8s-master-03   1/1     Running   0          94s
kube-system   kube-flannel-ds-965w9                   1/1     Running   0          94s
kube-system   kube-flannel-ds-qljzc                   1/1     Running   0          42m
kube-system   kube-flannel-ds-vjn8d                   1/1     Running   1          19m
kube-system   kube-proxy-6w9ch                        1/1     Running   0          19m
kube-system   kube-proxy-p4mt8                        1/1     Running   0          94s
kube-system   kube-proxy-wvp9b                        1/1     Running   0          74m
kube-system   kube-scheduler-k8s-master-01            1/1     Running   1          73m
kube-system   kube-scheduler-k8s-master-02            1/1     Running   0          19m
kube-system   kube-scheduler-k8s-master-03            1/1     Running   0          94s

9.4 Join the worker nodes (run on each of the two nodes):
kubeadm join master.k8s.io:16443 --token hx67nu.7nlxcsvcsa8uy46o --discovery-token-ca-cert-hash sha256:4c0389dec1204d86c9721a08e2bbdb8503e0ff511b7a9584e747425d71a8f99b

9.5 Check:
[root@VM-16-14-centos flannel]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   80m     v1.16.3
k8s-master-02   Ready    master   24m     v1.16.3
k8s-master-03   Ready    master   7m11s   v1.16.3
k8s-node-01     Ready    <none>   2m58s   v1.16.3
k8s-node-02     Ready    <none>   101s    v1.16.3
[root@VM-16-14-centos flannel]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-ch8pn                1/1     Running   0          80m
coredns-58cc8c89f4-qdz7t                1/1     Running   0          80m
etcd-k8s-master-01                      1/1     Running   0          79m
etcd-k8s-master-02                      1/1     Running   0          24m
etcd-k8s-master-03                      1/1     Running   0          7m20s
kube-apiserver-k8s-master-01            1/1     Running   0          79m
kube-apiserver-k8s-master-02            1/1     Running   0          24m
kube-apiserver-k8s-master-03            1/1     Running   0          7m21s
kube-controller-manager-k8s-master-01   1/1     Running   1          79m
kube-controller-manager-k8s-master-02   1/1     Running   0          24m
kube-controller-manager-k8s-master-03   1/1     Running   0          7m21s
kube-flannel-ds-965w9                   1/1     Running   0          7m21s
kube-flannel-ds-nvdhl                   1/1     Running   0          111s
kube-flannel-ds-qljzc                   1/1     Running   0          48m
kube-flannel-ds-vjn8d                   1/1     Running   1          24m
kube-flannel-ds-z9zc2                   1/1     Running   0          3m8s
kube-proxy-6w9ch                        1/1     Running   0          24m
kube-proxy-fswvz                        1/1     Running   0          111s
kube-proxy-p4mt8                        1/1     Running   0          7m21s
kube-proxy-wvp9b                        1/1     Running   0          80m
kube-proxy-z27lw                        1/1     Running   0          3m8s
kube-scheduler-k8s-master-01            1/1     Running   1          79m
kube-scheduler-k8s-master-02            1/1     Running   0          24m
kube-scheduler-k8s-master-03            1/1     Running   0          7m21s
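With all five nodes Ready, it is worth confirming that flannel actually carries pod-to-pod traffic across nodes (the drop-in from section 5.2 exists precisely because this can fail). A throwaway check; the pod names are arbitrary and <pod-ip-of-net-b> is a placeholder you read from the get pods output:

# Two busybox pods; the scheduler will usually spread them across nodes.
kubectl run net-a --image=busybox --restart=Never -- sleep 3600
kubectl run net-b --image=busybox --restart=Never -- sleep 3600
kubectl get pods -o wide                       # note net-b's pod IP and node
kubectl exec net-a -- ping -c 3 <pod-ip-of-net-b>
kubectl delete pod net-a net-b                 # clean up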
10. Deploying the dashboard (run on k8s-master-01)

10.1 Deploy the latest version, v2.0.0-beta6. Download the yaml:
cd /usr/local/kubernetes/manifests/
mkdir dashboard && cd dashboard
wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta6/aio/deploy/recommended.yaml

# change the Service type to NodePort
vim recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
...

[root@k8s-master-01 dashboard]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master-01 dashboard]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-62vp9   1/1     Running   0          6m47s
kubernetes-dashboard-b65488c4-5t57x          1/1     Running   0          6m48s
[root@k8s-master-01 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.1.207.27    <none>        8000/TCP        7m6s
kubernetes-dashboard        NodePort    10.1.207.168   <none>        443:30001/TCP   7m7s

# Check that https://<node-ip>:30001 loads from a node. Note: use Firefox and proceed past the security warning (the certificate is self-signed).

10.2 Create a service account and bind it to the default cluster-admin role:
vim dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@VM-16-14-centos dashboard]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
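Incidentally, if you would rather not hand-edit recommended.yaml as in 10.1, the same NodePort change can be applied after the fact; a sketch using the service name and namespace above:

# Equivalent to the manual edit: switch the dashboard Service to NodePort 30001.
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard --type merge \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'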
Retrieve the token the dashboard login requires:
[root@VM-16-14-centos dashboard]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-p7wgc
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 0e9f5406-3c26-4141-a233-ff4eaa841401
Type:  kubernetes.io/service-account-token
Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InBDdkJ1VWZFZENYbEd3ZGVrc3FldlhXWG94QU0ySjN1M1Y4ZVRJOUZPd1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXA3d2djIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwZTlmNTQwNi0zYzI2LTQxNDEtYTIzMy1mZjRlYWE4NDE0MDEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.jcCnl8hHWrZcFtd5H17gJdoaDiFsPUos_4oNYQexiXSjFDgy972Bk1qgYV-zHZhu7o_UZyESMRLTlzRFl3W5Eqbhq9fouD0j0DH_qnTGewNTEuByQj5n6uPLloPG5VNCOs1y3TINVj8LdG5q_n6DWfozfn76eNhU9eAnSJZVZ97dGKy_LDykpM9QtJQQkpaF9jSnDPCeoSnSd_1ud1FoQlNS3PAenB54khOmL5gbD6Pf4uJOVUjzxoHk_--gKDW7juVAsaDPbbGftuiM1mIfQ3K02VoNMiG1VB2hlzJ5kWeUn7wpqZpmngzrqBtVj5DJWSpnHAZZef_FFCakKMp5TA
ca.crt:     1025 bytes

11. Deploying the ingress controller

mandatory.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  # so pods receive the real client IP
  compute-full-forwarded-for: 'true'
  use-forwarded-headers: 'true'
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: lizhenliang/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
12. Deploying metrics-server (v0.3.6)

mandatory.yaml:
## ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
## ClusterRole aggregated-metrics-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
## ClusterRole metrics-server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "nodes/stats", "namespaces", "configmaps"]
    verbs: ["get", "list", "watch"]
---
## ClusterRoleBinding auth-delegator
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
## RoleBinding metrics-server-auth-reader
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
## ClusterRoleBinding system:metrics-server
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
## APIService
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
## Service
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      targetPort: 4443
---
## Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true
      serviceAccountName: metrics-server
      containers:
        - name: metrics-server
          ## mirrored image source
          image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
          imagePullPolicy: IfNotPresent
          args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-insecure-tls    ## added
            - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname    ## added
          ports:
            - name: main-port
              containerPort: 4443
              protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          resources:
            limits:
              memory: 1Gi
              cpu: 1000m
            requests:
              memory: 1Gi
              cpu: 1000m
          volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
            - name: localtime
              readOnly: true
              mountPath: /etc/localtime
      volumes:
        - name: tmp-dir
          emptyDir: {}
        - name: localtime
          hostPath:
            type: File
            path: /etc/localtime
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"

13. Installing a Kubernetes DNS cache to avoid lookup-latency problems

kubectl apply -f https://github.com/feiskyer/kubernetes-handbook/raw/master/examples/nodelocaldns/nodelocaldns-kubenet.yaml
(Reference: https://mp.weixin.qq.com/s/t7nt87JPJnWEVCNBS-sBpw)
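Once the metrics-server pod from section 12 is Running, the metrics API should start answering within a minute or so; a quick verification:

kubectl top nodes                 # per-node CPU/memory, served by metrics-server
kubectl top pods -n kube-system   # per-pod usage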
14. Scaling the cluster up and down

14.1 Scaling up
By default, a join token expires after 24 hours; to add a new node to the cluster after that, generate a new token:
# list the existing tokens
kubeadm token list
# generate a new token
kubeadm token create
Besides the token, the join command needs a sha256 value, computed as follows:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Assemble the join command from the token and sha256 value above, or simply run kubeadm token create --print-join-command.

14.2 Scaling down
kubectl cordon <node name>    # mark the node unschedulable
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets    # evict the node's pods
kubectl delete node <node name>

14.3 Resetting a node before it rejoins
Wipe the old configuration first:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker
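For maintenance that keeps the node in the cluster, stop after the drain step in 14.2 (skip the delete), and bring the node back afterwards:

kubectl uncordon <node name>   # make the node schedulable again after maintenance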


Another executive's Twitter accounts hijacked, this time to push fake PS5 promotions

Two Twitter accounts with a combined following of about 140,000, belonging to Helen Bevan, a director at the NHS (the UK's National Health Service), were hijacked and used to promote fake PS5 deals. One was her professional account, where 97,000 followers discussed work with her; the other was her "pet account" for sharing her cats, with 36,000 followers. After taking over, the hackers deleted her old tweets and renamed the accounts.

Ms. Bevan has since recovered the accounts, but received dozens of messages from people who had been scammed; she had to explain to them that she too was a victim. On top of that, she paid someone who claimed they could recover her account, only to discover they were scammers as well — one twist after another.

The episode, Bevan says, forces her to stress the importance of extra security measures. She had assumed she had activated two-factor authentication (2FA), which requires account holders to log in using two methods, the second typically a verification code sent by text message or email. Having supplied her phone number and email address, she believed 2FA had been switched on automatically, but it had not. So once the hackers cracked her password, all they had to do was change the phone number and email bound to the account.

Security professionals commenting on the incident say it is absolutely necessary to enable every security setting on all social media: use complex passwords, turn on two-factor authentication, and on Twitter, enable password-reset protection. They add that attackers run a startlingly well-developed criminal ecosystem that can quickly find soft targets, take over accounts and lock out the owners, then demand a ransom to return them.

Source: BBC


Google Cloud executives predict the five data trends that will drive business in 2021

This article is reproduced from Leiphone; to republish it, apply for authorization on Leiphone's official site.

As Forbes reports, 2020 was a year like no other, and its volatility pushed data quality, speed, and insight to the forefront for enterprises. Amid unexpected, often unimaginable events and constant change, common themes emerged. On the technology side, it became clear that what we considered advanced a few years ago has become as essential and unremarkable as electricity. Smooth video conferencing and uninterrupted Wi-Fi are no longer nice-to-haves but the norm, and the same goes for the data platforms that underpin businesses. Understanding, and knowing immediately, what employees and customers needed in 2020 made collecting and using available data critically important.

Forbes notes that some of Google Cloud's offerings for the public, such as free public datasets, caught on quickly this year to help track and understand the pandemic. In 2020, the idea of self-serve business built on data also gained ground: anyone can obtain the data insights they need. So did the understanding that data quality can make or break a company, and adapting to fast-changing customer needs became a near-imperative for retailers and other businesses. According to Gartner's 2020 Magic Quadrant for Data Quality Solutions, poor data quality costs organizations an average of $12.9 million a year, a figure likely to rise as business environments grow more digital and complex.

Forbes therefore spoke with Google Cloud's data leaders about what enterprises should expect in the new year. Here are the five data trends they believe are most worth watching in 2021.

1. Real-time analytics will help you see the future
Debanjan Saha, VP, Google Cloud
The massive shift to the cloud has been accompanied by a shift toward stronger data assets and better analytics. Future-facing platforms are being built around data analytics, and 2020 proved the importance of business agility. One major leap is real-time analytics, which will only become more widespread in 2021. Historical data informs, but many use cases demand data immediately, especially when reacting to unexpected events, and that can have a huge effect on the bottom line. For example, identifying and blocking cybersecurity breaches based on real-time data availability can completely change risk mitigation. While real-time data has transformed how fast we collect data, the most surprising yet useful area of analytics has been predictive analytics. Traditionally, data could only be collected from the physical world, so the only way to plan for what might happen was to look at what could be physically tested. With predictive models and AI/ML tools such as BigQuery ML, organizations can run simulations grounded in real scenarios and information, producing data about environments that would be hard, costly, or impossible to test physically.

2. In 2021 you will need more from your databases
Andi Gutmans, VP, Google Cloud
Digital transformation accelerated rapidly in a challenging year, and enterprises are picking up the pace to deliver for customers with data at the center. Enterprises have been building databases for more than 40 years, but over the next 18 months or so we will keep seeing a dramatic acceleration of database deployment and migration to the cloud, reaching 75% by 2022. That means not just lift-and-shift migrations but rethinking what the business needs in order to transform, which may include building cloud-native databases and integrating them more tightly with analytics and ML capabilities. Databases have always been a vital part of every enterprise, but accelerating innovation and growth matters now more than ever. Analytical and operational data are converging to support real-time business needs. Breaking down silos between teams and systems will help enterprises make decisions faster, uncover new revenue opportunities, meet changing compliance requirements more easily, and reduce overall operating costs.

3. Analytics will stop being dashboard-driven and come to you as AI-powered data experiences
Colin Zima, Director of Product Management, Looker
We have begun moving away from static dashboards, which serve business teams a fixed set of data. They were once the common face of business intelligence, but they involve trade-offs and lack the intelligence and visibility the modern workforce needs. Next come data experiences, in which employees get the data they need inside their existing workflows. The key is that these experiences are not one-size-fits-all; they are tailored to users' needs. For many enterprises this means moving away from dashboards and pivot tables toward building data products for internal use. This year brought striking examples, such as touch interfaces designed specifically for employees to quickly check metrics on streaming titles. The approach yields product experiences that solve employees' problems faster and raise productivity. Eventually the technology will reach the broader enterprise market, giving whole teams (business analysts, sales teams, and others without specialist knowledge or training) analytics capabilities. Easy-to-use data and AI/ML solutions will combine with these new data experiences to enable real-time, data-driven decisions.

4. Where your data is also matters: geospatial data will be key to unlocking enterprise transformation
Jen Bennett, CTO, Google Cloud
There has been intense focus on big data and ever-growing data volumes, but in 2021 don't forget data variety, which will continue to be a key driver of business transformation. Digital transformation often means looking at your business from a brand-new vantage point, literally. Data from satellites and drones, and data with geolocation attributes, is becoming a key differentiator in understanding your business. In supply chains, knowing where raw materials, products, and assets are, and being better able to predict global logistics disruptions, is critical to business resilience. In sales and marketing, reading demand signals better through geotagged information helps you optimize limited resources and extend market reach effectively. Mobility data played an important role in managing COVID-19 and in slowing the wider spread of the pandemic. As cities and governments turned to geospatial data in response to COVID-19, we also saw growing demand and creative thinking about what becomes possible when geospatial data is combined with other data, such as retail data. With the rising emphasis on sustainability, geospatial data is proving able to unlock many sustainability initiatives, such as sourcing. Historically, geospatial data was reserved for specialists, but its democratization, together with globe-scale computing, makes this once-specialized data accessible across the enterprise. In 2021, the ability to fuse geospatial data with other data and collaborate globally across the value chain will prove to be a key differentiator.

5. Data lakes will get smarter and support open, multi-cloud infrastructure
Debanjan Saha, VP, Google Cloud
Data now comes from more sources than ever, and data types that long lived apart can now be stored and analyzed in one place. Business data meets log data, and structured, semi-structured, and unstructured data all come together, spanning cloud providers and long-standing boundaries. Cloud scale makes advanced analytics over all these data types possible. As open and multi-cloud computing advances further, stronger data lakes and warehouses become even more important: they are not mere storage but should be the backbone of an enterprise data strategy. In the cloud they take the shape of a data warehouse (which stores mostly structured data so everything can be searched easily) or a data lake (which pools all business data regardless of structure), and the line between "lake" and "warehouse" keeps blurring. Warehouses can integrate that unstructured data, and AI/ML solutions make data lakes easier to navigate, ultimately delivering faster insight and collaboration.

If 2021 is anything like 2020, we will see curveballs and things we never expected. Hopefully 2021 will not be as surreal as 2020 — but just in case, you can prepare for the unexpected. That means being able to use real-time data, expecting more from your enterprise databases, and letting everyone in the organization obtain the data insights and reports they need on their own.
