Container Orchestration: Kubernetes Network Isolation with NetworkPolicy
A key feature of Kubernetes is that it connects Pods (containers) across different nodes, regardless of physical-host boundaries. In some environments, however, such as a public cloud, Pods belonging to different tenants should not be able to reach each other, and that calls for network isolation. Fortunately, Kubernetes provides NetworkPolicy, which supports network isolation at the Namespace level. This article walks through how to use it.
Note that NetworkPolicy only takes effect with a network solution that enforces it; without such a plugin, a configured NetworkPolicy does nothing. Here we use Calico to enforce the isolation.
Connectivity Test
Before using NetworkPolicy, let's verify that Pods can reach each other without it. Our test environment looks like this:
Namespaces: ns-calico1, ns-calico2
Deployment: ns-calico1/calico1-nginx
Pod: ns-calico2/calico2-busybox
Service: ns-calico1/calico1-nginx
First, create the Namespaces:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2
# kubectl create -f namespace.yaml
namespace "ns-calico1" created
namespace "ns-calico2" created
# kubectl get ns
NAME          STATUS    AGE
default       Active    9d
kube-public   Active    9d
kube-system   Active    9d
ns-calico1    Active    12s
ns-calico2    Active    8s
Next, create ns-calico1/calico1-nginx:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico1-nginx
  namespace: ns-calico1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        user: calico1
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: calico1-nginx
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  selector:
    app: nginx
  ports:
  - port: 80
# kubectl create -f calico1-nginx.yaml
deployment "calico1-nginx" created
service "calico1-nginx" created
# kubectl get svc -n ns-calico1
NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
calico1-nginx   192.168.3.141   <none>        80/TCP    26s
# kubectl get deploy -n ns-calico1
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico1-nginx   1         1         1            1           34s
Finally, create ns-calico2/calico2-busybox:
apiVersion: v1
kind: Pod
metadata:
  name: calico2-busybox
  namespace: ns-calico2
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
# kubectl create -f calico2-busybox.yaml
pod "calico2-busybox" created
# kubectl get pod -n ns-calico2
NAME              READY     STATUS    RESTARTS   AGE
calico2-busybox   1/1       Running   0          40s
The test services are now in place. Let's exec into calico2-busybox and see whether it can reach calico1-nginx (the address calico1-nginx.ns-calico1 is the cluster-DNS name of the Service, in <service>.<namespace> form):
# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.141:80)
The connection goes through: without network isolation, Pods in two different Namespaces can communicate. Next we use Calico to isolate them.
Network Isolation
Prerequisites
To use Calico for network isolation in a Kubernetes cluster, the following conditions must be met:
- kube-apiserver must enable the NetworkPolicy runtime config, i.e. start with --runtime-config=extensions/v1beta1/networkpolicies=true
- kubelet must use the CNI network plugin, i.e. start with --network-plugin=cni
- kube-proxy must run in iptables proxy mode; this is the default, so no extra flag is needed
- kube-proxy must not run with --masquerade-all, which conflicts with Calico
Note: after Calico is configured, Pods that were already running in the cluster must be recreated so that they join the Calico network.
Install Calico
First install the Calico network plugin. We install it inside the Kubernetes cluster itself, which makes it easier to manage.
# Calico Version v2.1.4
# http://docs.projectcalico.org/v2.1/releases#v2.1.4
# This manifest includes the following component versions:
#   calico/node:v1.1.3
#   calico/cni:v1.7.0
#   calico/kube-policy-controller:v0.5.4

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.1.2.154:2379,https://10.1.2.147:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names. The values
  # should be base64 encoded strings of the entire contents of each file.
  etcd-key: <base64 of key.pem>
  etcd-cert: <base64 of cert.pem>
  etcd-ca: <base64 of ca.pem>

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          #resources:
          #  requests:
          #    cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.7.0
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.5.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # The location of the Kubernetes API. Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
# kubectl create -f calico.yaml
configmap "calico-config" created
secret "calico-etcd-secrets" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
calico-node   1         1         1         1            1           <none>          52s
# kubectl get deploy -n kube-system
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico-policy-controller   1         1         1            1           6m
With the Calico network set up, we can now configure NetworkPolicy.
Configure NetworkPolicy
First, update the ns-calico1 Namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }
# kubectl apply -f ns-calico1.yaml
namespace "ns-calico1" configured
If we repeat the connectivity test between the two Pods now, it fails:
# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)
wget: download timed out
This is exactly the effect we wanted: the Pods in the two Namespaces can no longer communicate. Of course, this is only the simplest case, and the isolation is one-way at this point: if a Pod in ns-calico1 connects to a Pod in ns-calico2, the traffic still gets through, because ns-calico2 carries no isolation annotation on its Namespace.
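For symmetric isolation, the same DefaultDeny annotation could also be applied to ns-calico2. A sketch, mirroring the ns-calico1 manifest above (this step is not part of the walkthrough, and ns-calico2 would then deny all ingress until a NetworkPolicy opens it up):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }
```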
Furthermore, ns-calico1 now rejects connection requests from all Pods, even those inside the same Namespace: the Namespace annotation only declares that every ingress request is denied by default, and nothing yet says when to accept traffic from other Pods. Here we specify that only Pods labeled user=calico1 (or Pods from Namespaces labeled user=calico1) may connect:
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: calico1-network-policy
  namespace: ns-calico1
spec:
  podSelector:
    matchLabels:
      user: calico1
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: calico1
    - podSelector:
        matchLabels:
          user: calico1
---
apiVersion: v1
kind: Pod
metadata:
  name: calico1-busybox
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
# kubectl create -f calico1-network-policy.yaml
networkpolicy "calico1-network-policy" created
# kubectl create -f calico1-busybox.yaml
pod "calico1-busybox" created
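The matching rules in this policy are worth spelling out: separate entries under `from` are OR-ed, and an entry admits the source if all of its `matchLabels` are present on the source Pod's labels (for `podSelector`) or on the source Namespace's labels (for `namespaceSelector`). A minimal Python model of just this matching logic (my own illustration, not Kubernetes or Calico code):

```python
def match_labels(selector: dict, labels: dict) -> bool:
    """True when every matchLabels key/value pair is present in labels."""
    return all(labels.get(k) == v for k, v in selector.get("matchLabels", {}).items())

def ingress_allowed(from_rules: list, pod_labels: dict, ns_labels: dict) -> bool:
    """Entries under `from` are OR-ed: any single matching entry admits the traffic."""
    for rule in from_rules:
        if "namespaceSelector" in rule and match_labels(rule["namespaceSelector"], ns_labels):
            return True
        if "podSelector" in rule and match_labels(rule["podSelector"], pod_labels):
            return True
    return False

# The `from` block of calico1-network-policy above:
from_rules = [
    {"namespaceSelector": {"matchLabels": {"user": "calico1"}}},
    {"podSelector": {"matchLabels": {"user": "calico1"}}},
]

# calico1-busybox: labeled user=calico1, in a Namespace labeled user=calico1 -> admitted
print(ingress_allowed(from_rules, {"user": "calico1"}, {"user": "calico1"}))  # True
# calico2-busybox: unlabeled, in a Namespace with no user label -> denied
print(ingress_allowed(from_rules, {}, {}))  # False
```

One consequence of the OR semantics: labeling a Pod in ns-calico2 with user=calico1 would also admit it, since the second `from` entry matches on Pod labels alone.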
Now, connecting from calico1-busybox to calico1-nginx succeeds:
# kubectl exec -it calico1-busybox -n ns-calico1 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)
With that, we have implemented network isolation in Kubernetes. Building on NetworkPolicy, you can implement public-cloud-style security-group policies.
This article is reproduced from the Chinese Kubernetes community, originally titled "容器编排之Kubernetes网络隔离NetworkPolicy".