
Getting Started with the Nginx Ingress Controller


What is an Ingress?

In a Kubernetes cluster, an Ingress is a collection of rules that allow inbound connections to reach cluster Services, providing layer-7 load balancing. An Ingress can be configured to give Services externally reachable URLs, load balancing, SSL, name-based virtual hosting, and so on. Simply put, an Ingress is a set of rules that route URLs to Services inside Kubernetes. Since an Ingress is only a set of rules, how are those rules actually implemented? They are implemented by an Ingress controller, and the most common and widely used one today is the Nginx Ingress Controller.

You can think of it this way: nginx-ingress-controller is itself an nginx application. What does it do? It proxies backend Services: it translates Ingress configuration into nginx configuration, and thereby provides layer-7 routing. Since nginx-ingress-controller acts as a gateway-like application, it must itself be reachable from outside the cluster, which means it has to be exposed. In Kubernetes this is done by creating a LoadBalancer Service, nginx-ingress-lb, that exposes the nginx-ingress-controller. It follows that external access to nginx-ingress-controller goes through the SLB (Alibaba Cloud's load-balancer product) bound to the nginx-ingress-lb Service. For the corresponding SLB configuration policies, see the earlier article on how Services are implemented.

The simplified request path looks like this:

Client --> SLB --> nginx-ingress-lb Service --> nginx-ingress-controller Pod --> app Service --> app Pod

To expose a service with Ingress, the corresponding resources have to be created, and for everything to work the nginx-ingress-controller Pods must be running, the nginx-ingress-lb Service and the SLB listeners must be configured correctly, and the backend application referenced by the Ingress must be correct as well: the application Pods running and the application Service configured properly.
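
As a quick sanity check of these prerequisites, something like the following can be used (a sketch assuming the controller lives in kube-system and the resource names used later in this article):

# Controller Pods should be Running and Ready
kubectl -n kube-system get pods -l app=ingress-nginx

# The LoadBalancer Service should show the SLB address under EXTERNAL-IP
kubectl -n kube-system get svc nginx-ingress-lb

# The backend application Pods and their Service
kubectl -n default get pods -l app=tomcat
kubectl -n default get svc tomcat-svc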

Next we create an Ingress that implements our requirement. Let's start with a simple Ingress to get a feel for what the functionality looks like. The goal of this Ingress: requests for the domain ingress.test.com should end up at a backend Tomcat application.

The related configuration:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: tomcat
  name: tomcat
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: tomcat
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - image: 'tomcat:latest'
          imagePullPolicy: Always
          name: tomcat
          resources:
            requests:
              cpu: 100m
              memory: 200Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: default
spec:
  clusterIP: 172.21.6.143
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080   # a common Service misconfiguration: targetPort must be a port the Pod actually exposes, nothing else
  selector:
    app: tomcat
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  namespace: default
spec:
  rules:
    - host: ingress.test.com
      http:
        paths:
          - backend:
              serviceName: tomcat-svc
              servicePort: 8080
            path: /

After the Ingress is created, an endpoint IP is generated for it automatically. We should add a DNS A record that resolves ingress.test.com to this endpoint IP. Then a request to ingress.test.com actually reaches our Tomcat application.
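
To find that endpoint IP, the ADDRESS column of the Ingress can be checked (a minimal sketch; the sample output is illustrative):

kubectl -n default get ingress tomcat
# NAME     HOSTS              ADDRESS         PORTS   AGE
# tomcat   ingress.test.com   <endpoint IP>   80      1m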

Test result:

# curl http://<endpoint IP> -H "host: ingress.test.com" -I
HTTP/1.1 200
Date: Thu, 26 Sep 2019 04:55:39 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
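
Once the A record exists (or, for a quick local test, a hosts-file entry), the same check can be run against the domain directly (a sketch; substitute the real endpoint IP):

echo "<endpoint IP> ingress.test.com" | sudo tee -a /etc/hosts   # local test only
curl -I http://ingress.test.com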

Analyzing the nginx-ingress-controller configuration

The YAML is as follows:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: ingress-nginx
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ingress-nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
      labels:
        app: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - ingress-nginx
                topologyKey: kubernetes.io/hostname
              weight: 100
      containers:
        - args:
            - /nginx-ingress-controller
            - '--configmap=$(POD_NAMESPACE)/nginx-configuration'
            - '--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services'
            - '--udp-services-configmap=$(POD_NAMESPACE)/udp-services'
            - '--annotations-prefix=nginx.ingress.kubernetes.io'
            - '--publish-service=$(POD_NAMESPACE)/nginx-ingress-lb'
            - '--v=2'
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: >-
            registry-vpc.cn-shenzhen.aliyuncs.com/acs/aliyun-ingress-controller:v0.22.0.5-552e0db-aliyun
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: {}
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            procMount: Default
            runAsUser: 33
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /etc/localtime
              name: localtime
              readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
        - command:
            - /bin/sh
            - '-c'
            - |
              sysctl -w net.core.somaxconn=65535
              sysctl -w net.ipv4.ip_local_port_range="1024 65535"
              sysctl -w fs.file-max=1048576
              sysctl -w fs.inotify.max_user_instances=16384
              sysctl -w fs.inotify.max_user_watches=524288
              sysctl -w fs.inotify.max_queued_events=16384
          image: 'registry-vpc.cn-shenzhen.aliyuncs.com/acs/busybox:latest'
          imagePullPolicy: Always
          name: init-sysctl
          resources: {}
          securityContext:
            privileged: true
            procMount: Default
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      nodeSelector:
        beta.kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-controller
      serviceAccountName: nginx-ingress-controller
      terminationGracePeriodSeconds: 30
      volumes:
        - hostPath:
            path: /etc/localtime
            type: File
          name: localtime
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress-lb
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  clusterIP: 172.21.11.181
  externalTrafficPolicy: Local
  healthCheckNodePort: 32435
  ports:
    - name: http
      nodePort: 31184
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      nodePort: 31972
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer

The settings that deserve special attention are the container's args:

  • --configmap=$(POD_NAMESPACE)/nginx-configuration specifies which ConfigMap (namespace/name) nginx-ingress-controller reads its nginx configuration from. The default is the kube-system/nginx-configuration ConfigMap.
  • --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb specifies which LoadBalancer Service's external IP is published as the endpoint address of the Ingresses handled by this nginx-ingress-controller. The default is the kube-system/nginx-ingress-lb Service.
  • --ingress-class=INGRESS_CLASS is an identifier for the nginx-ingress-controller itself, declaring "who I am"; if it is not set, the default is "nginx". What is it for? It lets an Ingress choose which ingress-controller should handle it: an Ingress selects its controller through the annotation kubernetes.io/ingress.class: ""; if the annotation is not set, the controller with --ingress-class=nginx handles it. A sketch of this annotation follows this list.
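
For illustration, the tomcat Ingress from earlier could be pinned explicitly to the default "nginx" class like this (a minimal sketch; for a second controller you would use whatever value its --ingress-class flag was started with):

kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: ingress.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-svc
          servicePort: 8080
EOF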

For deploying multiple Nginx Ingress Controllers in a single Alibaba Cloud Kubernetes cluster, see:

https://yq.aliyun.com/articles/645856

As mentioned earlier, an Ingress is just a set of rules that get pushed down to the ingress-controller, which implements the corresponding functionality. So let's look at what configuration is actually pushed down. We can go into a nginx-ingress-controller Pod and inspect the nginx configuration in /etc/nginx/nginx.conf. Apart from some common boilerplate, the Ingress above generated the following nginx.conf configuration:

## start server ingress.test.com
server {
    server_name ingress.test.com ;
    listen 80;
    set $proxy_upstream_name "-";

    location / {
        set $namespace      "default";
        set $ingress_name   "tomcat";
        set $service_name   "tomcat-svc";
        set $service_port   "8080";
        set $location_path  "/";

        rewrite_by_lua_block {
            balancer.rewrite()
        }

        access_by_lua_block {
            balancer.access()
        }

        header_filter_by_lua_block {
        }
        body_filter_by_lua_block {
        }

        log_by_lua_block {
            balancer.log()
            monitor.call()
        }

        port_in_redirect off;

        set $proxy_upstream_name "default-tomcat-svc-8080";
        set $proxy_host          $proxy_upstream_name;

        client_max_body_size 100m;

        proxy_set_header Host $best_http_host;

        # Pass the extracted client certificate to the backend

        # Allow websocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_set_header X-Request-ID $req_id;
        proxy_set_header X-Real-IP $the_real_ip;
        proxy_set_header X-Forwarded-For $the_real_ip;
        proxy_set_header X-Forwarded-Host $best_http_host;
        proxy_set_header X-Forwarded-Port $pass_port;
        proxy_set_header X-Forwarded-Proto $pass_access_scheme;
        proxy_set_header X-Original-URI $request_uri;
        proxy_set_header X-Scheme $pass_access_scheme;

        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy "";

        # Custom headers to proxied server

        proxy_connect_timeout 10s;
        proxy_send_timeout    60s;
        proxy_read_timeout    60s;

        proxy_buffering     off;
        proxy_buffer_size   4k;
        proxy_buffers       4 4k;
        proxy_request_buffering on;

        proxy_http_version 1.1;

        proxy_cookie_domain off;
        proxy_cookie_path   off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream       error timeout;
        proxy_next_upstream_tries 3;

        proxy_pass http://upstream_balancer;
        proxy_redirect off;
    }
}
## end server ingress.test.com
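
To reproduce this on a live cluster, one approach (a minimal sketch; the Pod name below is illustrative) is to exec into a controller Pod and print the server block for the host:

# Pick any controller Pod
kubectl -n kube-system get pods -l app=ingress-nginx

# Substitute a real Pod name for the placeholder
kubectl -n kube-system exec -it nginx-ingress-controller-xxxxx -- \
  sh -c "grep -A 40 'start server ingress.test.com' /etc/nginx/nginx.conf"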

nginx.conf contains quite a lot of configuration, and we will not go through it item by item here; later we will look at the configuration for some common features.

Also, recent versions of nginx-ingress-controller enable dynamic Upstream updates by default. Inside a nginx-ingress-controller Pod you can run curl http://127.0.0.1:18080/configuration/backends to inspect them. The output looks like this:

[{"name":"default-tomcat-svc-8080","service":{"metadata":{"creationTimestamp":null},"spec":{"ports":[{"protocol":"TCP","port":8080,"targetPort":8080}],"selector":{"app":"tomcat"},"clusterIP":"172.21.6.143","type":"ClusterIP","sessionAffinity":"None"},"status":{"loadBalancer":{}}},"port":8080,"secureCACert":{"secret":"","caFilename":"","pemSha":""},"sslPassthrough":false,"endpoints":[{"address":"172.20.2.141","port":"8080"}],"sessionAffinityConfig":{"name":"","cookieSessionAffinity":{"name":"","hash":""}},"upstreamHashByConfig":{"upstream-hash-by-subset-size":3},"noServer":false,"trafficShapingPolicy":{"weight":0,"header":"","cookie":""}},{"name":"upstream-default-backend","port":0,"secureCACert":{"secret":"","caFilename":"","pemSha":""},"sslPassthrough":false,"endpoints":[{"address":"127.0.0.1","port":"8181"}],"sessionAffinityConfig":{"name":"","cookieSessionAffinity":{"name":"","hash":""}},"upstreamHashByConfig":{},"noServer":false,"trafficShapingPolicy":{"weight":0,"header":"","cookie":""}}]

Here we can see the mapping between the Service referenced by the Ingress and its endpoints, which is how requests end up at the concrete application Pods.
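
The endpoint address in that output (172.20.2.141:8080 above) should match what Kubernetes itself reports for the Service; a quick cross-check (a sketch, sample output illustrative):

kubectl -n default get endpoints tomcat-svc
# NAME         ENDPOINTS           AGE
# tomcat-svc   172.20.2.141:8080   1m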

For more on dynamic updates of the routing configuration, see: https://yq.aliyun.com/articles/692732

In later articles we will cover some common usage scenarios in more detail.

Original article: https://yq.aliyun.com/articles/721569