Using Amazon API Gateway with microservices deployed on Amazon ECS
One convenient way to run microservices is to deploy them as Docker containers. Docker containers are quick to provision, easily portable, and provide process isolation. Amazon EC2 Container Service (Amazon ECS) is a highly scalable, high-performance container management service that supports Docker containers and lets you easily run microservices on a managed cluster of Amazon EC2 instances.
Microservices usually expose REST APIs for use in front ends, third-party applications, and other microservices. A best practice is to manage these APIs with an API gateway, which provides a single entry point for all of your APIs and eliminates the need to implement API-specific code for security, caching, throttling, and monitoring in each of your microservices. You can implement this pattern in a few minutes using Amazon API Gateway, a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
In this post, we'll explain how to use Amazon API Gateway to expose APIs for microservices running on Amazon ECS by leveraging the HTTP proxy mode of Amazon API Gateway. Amazon API Gateway can make proxy calls to any publicly accessible endpoint; for example, an Elastic Load Balancing load balancer endpoint in front of a microservice deployed on Amazon ECS. The following diagram shows the high-level architecture described in this article:
You will see how you can benefit from stage variables to dynamically set the endpoint value depending on the stage of the API deployment.
In the first part of this post, we'll walk through the AWS Management Console to create the dev environment (ECS cluster, ELB load balancer, and API Gateway configuration). The second part explains how to automate the creation of the prod environment with AWS CloudFormation and the AWS CLI.
Creating a dev environment with the AWS Management Console
Let’s begin by provisioning a sample helloworld microservice using the Getting Started wizard.
Sign in to the Amazon ECS console. If this is the first time you're using the Amazon ECS console, you'll see a welcome page. Otherwise, you'll see the console home page and the Create Cluster button.
Step 1: Create a task definition
- In the Amazon ECS console, do one of the following:
- If Get Started Now is displayed, choose it.
- If it is not displayed, go to the Getting Started wizard.
- (Optional, depending on the AWS Region) Deselect the Store container images securely with Amazon ECR checkbox and choose Continue.
- For Task definition name, type `ecsconsole-helloworld`.
- For Container name, type `helloworld`.
- Choose Advanced options and type the following text in the Command field:

/bin/sh -c "echo '{ \"hello\" : \"world\" }' > /usr/local/apache2/htdocs/index.html && httpd-foreground"

- Choose Update, and then choose Next step.
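The container command above does two things: it writes a one-line JSON document into the Apache document root, then starts `httpd` in the foreground so the container keeps running. You can try the echo portion on any machine with a POSIX shell; this sketch writes to `/tmp` instead of the Apache docroot:

```shell
# Reproduce the echo part of the container command locally,
# writing to /tmp instead of /usr/local/apache2/htdocs.
/bin/sh -c "echo '{ \"hello\" : \"world\" }' > /tmp/index.html"

# Show the file the microservice will serve as index.html.
cat /tmp/index.html
# → { "hello" : "world" }
```

Note how the `\"` escapes survive the outer double quotes, so the file contains valid JSON with double-quoted keys.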
Step 2: Configure service
- For Service name, type `ecsconsole-service-helloworld`.
- For Desired number of tasks, type `2`.
- In the Elastic load balancing section, for Container name: host port, choose `helloworld:80`.
- For Select IAM role for service, choose Create new role, or use an existing `ecsServiceRole` if you already created the required role.
- Choose Next Step.
Step 3: Configure cluster
- For Cluster name, type `dev`.
- For Number of instances, type `2`.
- For Select IAM role for service, choose Create new role, or use an existing `ecsInstanceRole` if you already created the required role.
- Choose Review and Launch, and then choose Launch Instance & Run Service.
At this stage, after a few minutes of pending process, the helloworld microservice will be running in the dev ECS cluster with an ELB load balancer in front of it. Make note of the DNS Name of the ELB load balancer for later use; you can find it in the Load Balancers section of the EC2 console.
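If you prefer the command line, you can also look up the DNS name with the AWS CLI. This is a sketch using the classic ELB API; the load balancer name is whatever the ECS wizard generated, so pick yours out of the listing:

```shell
# List the name and DNS name of every classic load balancer in the
# current region; find the one created for the helloworld service.
aws elb describe-load-balancers \
  --output table \
  --query "LoadBalancerDescriptions[*].[LoadBalancerName,DNSName]"
```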
Configuring API Gateway
Now, let’s configure API Gateway to expose the APIs of this microservice. Sign in to the API Gateway console. If this is your first time using the API Gateway console, you’ll see a welcome page. Otherwise, you’ll see the API Gateway console home page and the Create API button.
Step 1: Create an API
- In the API Gateway console, do one of the following:
- If Get Started Now is displayed, choose it.
- If Create API is displayed, choose it.
- If neither is displayed, in the secondary navigation bar, choose the API Gateway console home button, and then choose Create API.
- For API name, type `EcsDemoAPI`.
- Choose Create API.
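The same empty API can be created from the AWS CLI. A sketch, capturing the API ID for later calls (the variable name `API_ID` is our choice, not part of the service):

```shell
# Create the REST API and keep its ID; every subsequent
# apigateway call is scoped by this ID.
API_ID=$(aws apigateway create-rest-api --name EcsDemoAPI \
  --output text --query "id")
echo $API_ID
```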
Step 2: Create Resources
- In the API Gateway console, choose the root resource (/), and then choose Create Resource.
- For Resource Name, type `HelloWorld`.
- For Resource Path, leave the default value of `/helloworld`.
- Choose Create Resource.
Step 3: Create GET Methods
- In the Resources pane, choose /helloworld, and then choose Create Method.
- For the HTTP method, choose GET, and then save your choice.
Step 4: Specify Method Settings
- In the Resources pane, in /helloworld, choose GET.
- In the Setup pane, for Integration type, choose HTTP Proxy.
- For HTTP method, choose GET.
- For Endpoint URL, type `http://${stageVariables.helloworldElb}`.
- Choose Save.
Step 5: Deploy the API
- In the Resources pane, choose Deploy API.
- For Deployment stage, choose New Stage.
- For Stage name, type `dev`.
- Choose Deploy.
- On the stage settings page, choose the Stage Variables tab.
- Choose Add Stage Variable, type `helloworldElb` for Name, type the DNS Name of the ELB load balancer in the Value field, and then save.
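Stage variables can also be set from the CLI after a deployment. A sketch, assuming the API ID is in `$API_ID` and the ELB DNS name is in `$ELB_DNS` (both placeholders for your own values):

```shell
# Set (or overwrite) the helloworldElb stage variable on the dev stage
# using a JSON-Patch-style operation.
aws apigateway update-stage \
  --rest-api-id $API_ID \
  --stage-name dev \
  --patch-operations op=replace,path=/variables/helloworldElb,value=$ELB_DNS
```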
Step 6: Test the API
- In the Stage Editor pane, next to Invoke URL, copy the URL to the clipboard. It should look something like this: `https://<api-id>.execute-api.<region>.amazonaws.com/dev`
- Paste this URL into the address box of a new browser tab.
- Append `/helloworld` to the URL and submit it. You should see the following JSON document:

{ "hello": "world" }
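The same check works from the command line. A sketch, assuming the invoke URL you copied from the stage editor is in `$INVOKE_URL`:

```shell
# Call the dev stage through API Gateway; -s silences progress output.
curl -s "$INVOKE_URL/helloworld"
# → { "hello" : "world" }
```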
Automating prod environment creation
Now we’ll improve this setup by automating the creation of the prod environment. We use AWS CloudFormation to set up the prod ECS cluster, deploy the helloworld service, and create an ELB in front of the service. You can use the template with your preferred method:
Using AWS CLI
aws cloudformation create-stack \
  --stack-name EcsHelloworldProd \
  --template-url https://s3.amazonaws.com/rko-public-bucket/ecs_cluster.template \
  --parameters ParameterKey=AsgMaxSize,ParameterValue=2 \
               ParameterKey=CreateElasticLoadBalancer,ParameterValue=true \
               ParameterKey=EcsInstanceType,ParameterValue=t2.micro
Using AWS console
Launch the AWS CloudFormation stack from the AWS CloudFormation console using the same template, with these parameter values:
- AsgMaxSize: `2`
- CreateElasticLoadBalancer: `true`
- EcsInstanceType: `t2.micro`
Configuring API Gateway with AWS CLI
We’ll use the API Gateway configuration that we created earlier and simply add the prod stage.
Here are the commands to create the prod stage and configure the stage variable to point to the ELB load balancer:
# Retrieve the API ID
API_ID=$(aws apigateway get-rest-apis --output text --query "items[?name=='EcsDemoAPI'].{ID:id}")

# Retrieve the ELB DNS name from the CloudFormation stack outputs
ELB_DNS=$(aws cloudformation describe-stacks --stack-name EcsHelloworldProd --output text --query "Stacks[0].Outputs[?OutputKey=='EcsElbDnsName'].{DNS:OutputValue}")

# Create the prod stage and set the helloworldElb stage variable
aws apigateway create-deployment --rest-api-id $API_ID --stage-name prod --variables helloworldElb=$ELB_DNS
You can then test the API on the prod stage using this simple cURL command:
AWS_REGION=$(aws configure get region)
curl https://$API_ID.execute-api.$AWS_REGION.amazonaws.com/prod/helloworld
You should see { "hello" : "world" } as the result of the cURL request. If the result is an error message like {"message": "Internal server error"}, verify that you have healthy instances behind your ELB load balancer. It can take some time for instances to pass the health checks, so wait a minute before trying again.
From the stage settings page, you can also export the API configuration to a Swagger file, including the API Gateway extensions. Exporting the API configuration as a Swagger file enables you to keep the definition in your source repository. You can then import it at any time, either by overwriting the existing API or by importing it as a brand new API. The API Gateway import tool helps you parse the Swagger definition and import it into the service.
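The export is also available from the CLI via `get-export`. A sketch for the dev stage (the output file name is arbitrary):

```shell
# Export the dev stage definition, including the API Gateway
# integration extensions, to a local Swagger (JSON) file.
aws apigateway get-export \
  --rest-api-id $API_ID \
  --stage-name dev \
  --export-type swagger \
  --parameters extensions='integrations' \
  swagger-dev.json
```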
Conclusion
In this post, we looked at how to use Amazon API Gateway to expose APIs for microservices deployed on Amazon ECS. The integration with the HTTP proxy mode pointing to ELB load balancers is a simple method to ensure the availability and scalability of your microservice architecture. With ELB load balancers, you don’t have to worry about how your containers are deployed on the cluster.
We also saw how stage variables help you connect your APIs to different ELB load balancers, depending on the stage where the API is deployed.
https://aws.amazon.com/cn/blogs/compute/using-amazon-api-gateway-with-microservices-deployed-on-amazon-ecs/