
Part 4: Cross-language Microservice Framework - Official Istio Examples (Timeout Control, Circuit Breaking, Traffic Mirroring)

Date: 2018-11-23

With the basic Istio environment in place, it is time to explore the mechanisms Istio provides as a service mesh: the timeout control, circuit breaking, and traffic mirroring of this article's title. The official project helpfully ships sample applications, so there is no need to write your own demo for testing; let's run them.

Links:

The author's blog: w-blog.cn

Official Istio site: https://preliminary.istio.io/zh

Istio documentation (Chinese): https://preliminary.istio.io/zh/docs/

PS: This walkthrough uses the latest Istio release at the time of writing, 1.0.3.

I. Timeout Control

In real request flows we usually give each downstream service a timeout to keep the user experience acceptable. Hard-coding timeouts in application code is clearly not ideal; Istio provides a way to control them in the mesh:

1. First, restore all route rules to their defaults:

kubectl apply -n istio-test -f istio-1.0.3/samples/bookinfo/networking/virtual-service-all-v1.yaml 

A request timeout for http requests can be set in the timeout field of a route rule. By default the timeout is 15 seconds; in this task we will set the timeout for the reviews service to half a second. To observe the effect, we also need to inject a two-second delay into calls to the ratings service.

2. Route all traffic to reviews v2 (the version that calls the ratings service):

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
EOF

3. Inject a two-second delay into calls to the ratings service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percent: 100
        fixedDelay: 2s
    route:
    - destination:
        host: ratings
        subset: v1
EOF

4. Next, add a half-second request timeout for calls to the reviews:v2 service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    timeout: 0.5s
EOF

The page now returns after about 1 second (even though the timeout is configured as half a second, the response takes 1 second because of a hard-coded retry in the productpage service, which calls the timed-out reviews service twice before returning).
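Rather than relying on retries hard-coded in application code the way productpage does, retry behavior can itself be declared in the route rule. A minimal sketch (not part of the official task) that would retry a failed call to reviews once, giving each attempt its own half-second budget:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2
    retries:
      attempts: 1          # retry a failed call once
      perTryTimeout: 0.5s  # each attempt gets its own half-second timeout
```

This keeps retry policy in the mesh configuration, where it can be changed without redeploying the application.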

II. Circuit Breaking

A microservice system has both critical and non-critical services. Kubernetes can cap CPU consumption, but that coarse control says nothing about the number of concurrent requests. If service A should allow 100 concurrent requests while service B should allow only 10, it is far more direct to limit concurrency itself than to approximate it through inaccurate CPU limits.
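Such per-service concurrency caps can be expressed as DestinationRules. A sketch for the hypothetical services A and B above (the names service-a and service-b are placeholders, not part of the sample project):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-a
spec:
  host: service-a
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 100   # at most 100 concurrent requests to A
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-b
spec:
  host: service-b
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 10    # at most 10 concurrent requests to B
```

Requests beyond the cap are rejected by the sidecar with 503 instead of piling up on the backend.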

1. Deploy the test application:

> kubectl apply -n istio-test -f istio-1.0.3/samples/httpbin/httpbin.yaml 

2. Create a destination rule that applies circuit-breaking settings to the httpbin service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
EOF

3. We will use a simple load-testing client called fortio. It lets you control the number of connections, the concurrency, and the delay of outgoing HTTP requests, which makes it easy to trigger the circuit-breaking policy set in the destination rule above.

> kubectl apply -n istio-test -f istio-1.0.3/samples/httpbin/sample-client/fortio-deploy.yaml
> FORTIO_POD=$(kubectl get -n istio-test pod | grep fortio | awk '{ print $1 }')
> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -curl http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Wed, 07 Nov 2018 06:52:32 GMT
content-type: application/json
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 365
x-envoy-upstream-service-time: 113

{
  "args": {},
  "headers": {
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "istio/fortio-1.0.1",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "a708e175c6a077d1",
    "X-B3-Traceid": "a708e175c6a077d1",
    "X-Request-Id": "62d09db5-550a-9b81-80d9-6d8f60956386"
  },
  "origin": "127.0.0.1",
  "url": "http://httpbin:8000/get"
}

4. The circuit-breaker settings above specify maxConnections: 1 and http1MaxPendingRequests: 1, meaning that if more than one connection or pending request is in flight at once, Istio should trip the breaker and block subsequent requests and connections. Let's try to trigger it:

> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
06:54:16 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
Ended after 96.058168ms : 20 calls. qps=208.21
Aggregated Function Time : count 20 avg 0.0084172288 +/- 0.004876 min 0.000583248 max 0.016515793 sum 0.168344576
# range, mid point, percentile, count
>= 0.000583248 <= 0.001 , 0.000791624 , 5.00, 1
> 0.001 <= 0.002 , 0.0015 , 25.00, 4
> 0.006 <= 0.007 , 0.0065 , 30.00, 1
> 0.007 <= 0.008 , 0.0075 , 35.00, 1
> 0.008 <= 0.009 , 0.0085 , 55.00, 4
> 0.009 <= 0.01 , 0.0095 , 65.00, 2
> 0.01 <= 0.011 , 0.0105 , 75.00, 2
> 0.011 <= 0.012 , 0.0115 , 80.00, 1
> 0.012 <= 0.014 , 0.013 , 85.00, 1
> 0.014 <= 0.016 , 0.015 , 95.00, 2
> 0.016 <= 0.0165158 , 0.0162579 , 100.00, 1
# target 50% 0.00875
# target 75% 0.011
# target 90% 0.015
# target 99% 0.0164126
# target 99.9% 0.0165055
Sockets used: 7 (for perfect keepalive, would be 2)
Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)
Response Header Sizes : count 20 avg 172.7 +/- 99.71 min 0 max 231 sum 3454
Response Body/Total Sizes : count 20 avg 500.7 +/- 163.8 min 217 max 596 sum 10014
All done 20 calls (plus 0 warmup) 8.417 ms avg, 208.2 qps

Here we see that almost all requests still made it through; istio-proxy does allow for some leeway:

Code 200 : 15 (75.0 %)
Code 503 : 5 (25.0 %)

5. Next, raise the number of concurrent connections to 3:

> kubectl exec -n istio-test -it $FORTIO_POD -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
06:55:28 I logger.go:97> Log level is now 3 Warning (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 4->4 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 4] for exactly 30 calls (10 per thread + 0)
Ended after 59.921126ms : 30 calls. qps=500.66
Aggregated Function Time : count 30 avg 0.0052897259 +/- 0.006496 min 0.000633091 max 0.024999538 sum 0.158691777
# range, mid point, percentile, count
>= 0.000633091 <= 0.001 , 0.000816546 , 16.67, 5
> 0.001 <= 0.002 , 0.0015 , 63.33, 14
> 0.002 <= 0.003 , 0.0025 , 66.67, 1
> 0.008 <= 0.009 , 0.0085 , 73.33, 2
> 0.009 <= 0.01 , 0.0095 , 80.00, 2
> 0.01 <= 0.011 , 0.0105 , 83.33, 1
> 0.011 <= 0.012 , 0.0115 , 86.67, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.014 <= 0.016 , 0.015 , 93.33, 1
> 0.02 <= 0.0249995 , 0.0224998 , 100.00, 2
# target 50% 0.00171429
# target 75% 0.00925
# target 90% 0.014
# target 99% 0.0242496
# target 99.9% 0.0249245
Sockets used: 22 (for perfect keepalive, would be 3)
Code 200 : 10 (33.3 %)
Code 503 : 20 (66.7 %)
Response Header Sizes : count 30 avg 76.833333 +/- 108.7 min 0 max 231 sum 2305
Response Body/Total Sizes : count 30 avg 343.16667 +/- 178.4 min 217 max 596 sum 10295
All done 30 calls (plus 0 warmup) 5.290 ms avg, 500.7 qps

This time the circuit-breaking behavior works as designed: only 33.3% of the requests got through, and the rest were trapped by the breaker.

We can query istio-proxy's stats for more detail; the upstream_rq_pending_overflow counter in the output shows how many calls were flagged for circuit breaking:

> kubectl exec -n istio-test -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep httpbin | grep pending 

Finally, clean up the rules and services:

kubectl delete -n istio-test destinationrule httpbin
kubectl delete -n istio-test deploy httpbin fortio-deploy
kubectl delete -n istio-test svc httpbin

III. Traffic Mirroring

The earlier traffic-control article covered splitting traffic, for example sending 50% each to v1 and v2. There is another scenario where Istio's traffic mirroring comes in: it is a powerful way to bring changes to production with as little risk as possible.

When we are about to release a service we are not fully confident in, we may want to let it run for a while to observe its stability without exposing users to it. This is where traffic mirroring helps: it keeps sending 100% of requests to v1 while also sending a copy of that traffic to v2, ignoring v2's responses entirely.

1. First, create two versions of the httpbin service for the experiment.

httpbin-v1:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

httpbin-v2:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v2
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        ports:
        - containerPort: 80
EOF

httpbin Kubernetes service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
EOF

Start a sleep service so we have a pod to run curl from:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ["/bin/sleep","infinity"]
        imagePullPolicy: IfNotPresent
EOF

By default, Kubernetes load-balances across both versions of the httpbin service. In this step we change that behavior and route all traffic to v1.

Create a default route rule that routes all traffic to v1 of the service:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
EOF

Send some traffic to the service:

> export SLEEP_POD=$(kubectl get -n istio-test pod -l app=sleep -o jsonpath={.items..metadata.name})
> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8000/headers' | python -m json.tool
{
  "headers": {
    "Accept": "*/*",
    "Content-Length": "0",
    "Host": "httpbin:8000",
    "User-Agent": "curl/7.35.0",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "8e32159d042d8a75",
    "X-B3-Traceid": "8e32159d042d8a75"
  }
}

Check the logs of the v1 and v2 httpbin pods. You should see access log entries for v1 and none (<none>) for v2:

> export V1_POD=$(kubectl get -n istio-test pod -l app=httpbin,version=v1 -o jsonpath={.items..metadata.name})
> kubectl logs -n istio-test -f $V1_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:22:50 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"

> export V2_POD=$(kubectl get -n istio-test pod -l app=httpbin,version=v2 -o jsonpath={.items..metadata.name})
> kubectl logs -n istio-test -f $V2_POD -c httpbin
<none>

2. Apply a rule that mirrors traffic to v2:

> kubectl apply -n istio-test -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin
  http:
  - route:
    - destination:
        host: httpbin
        subset: v1
      weight: 100
    mirror:
      host: httpbin
      subset: v2
EOF

This route rule sends 100% of the traffic to v1, while the mirror clause mirrors the same traffic to the httpbin:v2 service. When traffic is mirrored, the requests are sent to the mirrored service with their Host/Authority headers suffixed with -shadow; for example, cluster-1 becomes cluster-1-shadow.

Also note that the mirrored requests are "fire and forget": the responses they trigger are discarded.

3. Send the traffic again:

> kubectl exec -n istio-test -it $SLEEP_POD -c sleep -- sh -c 'curl http://httpbin:8000/headers' | python -m json.tool 

4. Now both v1 and v2 should show access logs. The entries in v2's log were produced by the mirrored traffic; the actual target of those requests was v1:

> kubectl logs -n istio-test -f $V1_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:22:50 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"
127.0.0.1 - - [07/Nov/2018:07:26:58 +0000] "GET /headers HTTP/1.1" 200 241 "-" "curl/7.35.0"

> kubectl logs -n istio-test -f $V2_POD -c httpbin
127.0.0.1 - - [07/Nov/2018:07:28:37 +0000] "GET /headers HTTP/1.1" 200 281 "-" "curl/7.35.0"

5. Clean up:

istioctl delete -n istio-test virtualservice httpbin
istioctl delete -n istio-test destinationrule httpbin
kubectl delete -n istio-test deploy httpbin-v1 httpbin-v2 sleep
kubectl delete -n istio-test svc httpbin
Original article: https://my.oschina.net/wenzhenxi/blog/2933866