Application modernization with the cloud is enabling organizations to deliver software quickly while lowering administrative overhead. As companies continue their cloud journey, containerization is a modernization pattern worth adopting to increase scalability and agility in a cost-conscious and secure manner. In this blog, we list five guiding questions for organizations to consider when orchestrating containers in the cloud using Amazon Web Services (AWS).
1. Why Use Containers?
Containers allow organizations to streamline the process of building, testing and deploying applications across multiple environments. Key benefits include increased deployment speed, consistency and portability. Using containers improves deployment speed, as developers can take a black-box design approach to mask the complexity of components and remove application conflicts. Containers are lightweight and provide process isolation, so they start faster and carry less overhead than virtual machines (VMs). Containers also improve application consistency when moving code between environments such as development, test and production.
Additionally, application portability is improved between varying operating systems on premises or in the cloud, giving organizations confidence that containerized applications will run the same way regardless of where they are deployed. The ability to define containers as code makes them a natural fit for an organization's infrastructure-as-code and DevOps initiatives.
Further, containers often go hand in hand with microservices architecture to break down monolithic applications into loosely coupled services, giving businesses more scalability and teams more autonomy. The tradeoff is that orchestrating containers on a large scale is challenging. Tasks such as coordinating container communication and scaling compute resources across fleets to maintain high availability while ensuring security and minimizing costs can be daunting. Luckily, we can look to the cloud to reduce the effort and management overhead spent on infrastructure management.
2. Why Run Containers in the Cloud?
Running containers in the cloud enables organizations to focus more on application development by shifting additional infrastructure management responsibilities to cloud service providers such as AWS. AWS manages the underlying infrastructure of the container orchestrator, compute and registry services to provide highly available, fault-tolerant deployments. Coupled with effective cost optimization techniques such as right-sizing, auto-scaling and instance purchase plans, organizations can reduce management overhead in the cloud.
3. What Container Services Can Be Used on AWS, and What Are the Decision Factors?
Choosing the right container services is essential to deployment success. The two main components for container management services are orchestration and compute. Below, we explore the options for each service and common use patterns and outline a quick way to deploy containers on the cloud.
Container orchestrators manage the life cycle of containers. AWS offers ECS for Docker and EKS for Kubernetes. Deciding between the two depends on the organization’s familiarity with container management, level of maturity of DevOps resources, whether the cloud journey involves multi-cloud and hybrid cloud deployment, and cost differentiators.
Amazon Elastic Container Service (ECS) – ECS is a fully managed container orchestration service native to AWS. Organizations that are new to containers or have limited DevOps resources to rearchitect and navigate the complexities of Kubernetes may find ECS easier to adopt, as users can create tasks and run Docker containers directly in ECS. However, ECS lacks versatility for on-premises and multi-cloud deployment. For organizations with a large footprint in AWS, ECS is an attractive option. Since ECS is a native AWS solution, it is deeply integrated with other AWS services, giving teams a cohesive and familiar deployment experience. In terms of cost, users pay only for the AWS resources, such as EC2 instances, that ECS creates; there is no additional charge for ECS itself.
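To make the "create tasks and run Docker directly" workflow concrete, here is a minimal sketch of the JSON document an ECS task definition uses. The family name, image and sizes are illustrative placeholders, not a prescribed configuration.

```python
import json

# A minimal ECS task definition sketch (Fargate launch type).
# "my-web-app" and the nginx image are hypothetical examples.
task_definition = {
    "family": "my-web-app",                  # hypothetical task family
    "networkMode": "awsvpc",                 # required for Fargate tasks
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                            # 0.25 vCPU
    "memory": "512",                         # 512 MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",         # any Docker image
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

# A document like this is what gets registered with ECS.
print(json.dumps(task_definition, indent=2))
```

Because the whole deployment unit is declared as data like this, it fits naturally into version control alongside the rest of an infrastructure-as-code setup.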
Amazon Elastic Kubernetes Service (EKS) – EKS is a fully managed Kubernetes service. If an organization is already running Kubernetes on premises, EKS will likely be the easier option for migrating to the cloud. Since Kubernetes is open source, EKS provides flexibility to run container deployments across multiple infrastructure providers in the cloud and on premises. The EKS pricing model is similar to that of ECS, as users pay for the resources that are created, but there is an additional charge of $0.20 per hour for the EKS control plane.
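That per-hour control-plane charge is easy to translate into a monthly figure. A back-of-envelope calculation using the $0.20/hour rate cited above (rates change, so check current AWS pricing before budgeting):

```python
# Rough monthly cost of one EKS control plane at the rate cited above.
CONTROL_PLANE_RATE = 0.20    # USD per hour per cluster (per the text)
HOURS_PER_MONTH = 730        # average hours in a month

monthly_cost = CONTROL_PLANE_RATE * HOURS_PER_MONTH
print(f"EKS control plane: ${monthly_cost:.2f}/month per cluster")  # $146.00
```

For a small number of clusters this is usually a minor line item next to worker-node compute, but it is a fixed cost per cluster that ECS does not carry.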
AWS offers two compute options: AWS Fargate and EC2. Key decision factors include cost, need for host level control and customization, compliance, and governance.
AWS Fargate – Fargate enables users to run containers without managing servers. To run containers, users simply create tasks and leave Fargate to provision, auto-scale and manage the instances. AWS has encouraged Fargate adoption by steadily reducing its cost: in January 2019, Fargate prices dropped by up to 50%, and in December 2019, AWS launched Fargate Spot, which offers discounts of up to 70% and is ideal for fault-tolerant use cases such as big data and batch processing. Fargate Spot is supported only for applications orchestrated by Amazon ECS; EKS users can use EC2 Spot Fleet to take advantage of spot instance pricing.
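To illustrate what an "up to 70%" Spot discount means for a running task, here is a small sketch. The on-demand rate below is a placeholder, not a current AWS price, and real Spot discounts vary.

```python
# Illustrative effect of a 70% Fargate Spot discount on one task.
# The on-demand rate is a hypothetical placeholder, not an AWS price.
on_demand_per_hour = 0.04     # hypothetical Fargate on-demand rate (USD)
spot_discount = 0.70          # "up to 70%" per the Fargate Spot launch

spot_per_hour = on_demand_per_hour * (1 - spot_discount)
savings_per_hour = on_demand_per_hour - spot_per_hour
print(f"spot rate: ${spot_per_hour:.4f}/h, savings: ${savings_per_hour:.4f}/h")
```

The catch, as with any spot capacity, is that tasks can be interrupted, which is why the discount suits fault-tolerant batch workloads rather than latency-sensitive services.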
EC2 – Organizations that require control of their EC2 instances for host-level customization or for compliance and governance reasons may opt for EC2 as the compute resource. Another reason to use EC2 is cost optimization: the container cluster can run on a mix of on-demand, spot and reserved instances. We recommend that organizations perform a cost analysis based on workload size to determine whether EC2 or Fargate is more cost efficient.
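The shape of that cost analysis can be sketched simply: Fargate bills per task, while an EC2 instance bills whether or not it is fully utilized, so the comparison hinges on utilization. The rates below are hypothetical, and a real analysis would also weigh memory, purchase plans and operational effort.

```python
def cheaper_option(fargate_hourly: float, ec2_hourly: float,
                   ec2_utilization: float) -> str:
    """Compare effective hourly cost of Fargate vs. EC2 for a workload.

    ec2_utilization is the fraction of the instance the workload keeps
    busy; idle capacity still costs money, raising the effective rate.
    """
    if ec2_utilization <= 0:
        return "fargate"  # an entirely idle instance is pure waste
    effective_ec2 = ec2_hourly / ec2_utilization  # cost per utilized hour
    return "ec2" if effective_ec2 < fargate_hourly else "fargate"

# Hypothetical rates: a steadily busy instance favors EC2...
print(cheaper_option(fargate_hourly=0.05, ec2_hourly=0.04,
                     ec2_utilization=0.9))   # -> "ec2"
# ...while a mostly idle one favors Fargate's pay-per-task model.
print(cheaper_option(fargate_hourly=0.05, ec2_hourly=0.04,
                     ec2_utilization=0.3))   # -> "fargate"
```

This is the general pattern behind the recommendation: steady, well-packed workloads tend to favor EC2 (especially with reserved or spot pricing), while spiky or low-utilization workloads tend to favor Fargate.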
Score Quick Wins With AWS Elastic Beanstalk
For organizations looking to containerize web applications in the cloud, a quick way to deploy and scale is to use AWS Elastic Beanstalk, which uses Amazon ECS under the hood for multi-container deployment to automatically handle capacity provisioning, load balancing, autoscaling and application health monitoring. The trade-off for its ease of use and quick deployment speed is that Elastic Beanstalk lacks configuration customization compared to ECS and EKS. Elastic Beanstalk is a suitable option for web applications that have short life cycles and for organizations that lack DevOps resources well versed in container infrastructure management.
4. What About Monitoring Containers on AWS?
Choosing the right container monitoring and observability strategy is just as important as choosing the right orchestration and compute resources to ensure that the container environment meets availability requirements and is optimized for cost and performance. Because containers in the cloud are ephemeral, organizations must maintain visibility into the cluster and understand container performance and utilization in order to rightsize and autoscale for cost optimization. Increased automation, such as autohealing and scheduled starts, further reduces manual intervention while maintaining application uptime.
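As a sketch of how utilization data feeds rightsizing, the helper below suggests a CPU allocation that would bring average utilization near a target. It is purely illustrative: a real rightsizing exercise should also account for peaks, memory and scaling headroom.

```python
import math

def rightsize_cpu(current_vcpu: float, avg_utilization: float,
                  target: float = 0.6) -> float:
    """Suggest a vCPU allocation so average utilization lands near the
    target (60% by default). Illustrative only: real rightsizing must
    also consider peak load, memory and headroom for autoscaling."""
    needed = current_vcpu * avg_utilization / target
    # Round up to the nearest quarter vCPU, a common Fargate increment.
    return max(0.25, math.ceil(needed * 4) / 4)

# A 2-vCPU task averaging 15% utilization is heavily oversized:
print(rightsize_cpu(2.0, 0.15))  # -> 0.5 (vCPU)
```

The same utilization metrics that drive a calculation like this are what monitoring must surface continuously, since ephemeral containers leave no long-lived host to inspect after the fact.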
Integration with other tools and processes such as incident and event management is also essential to ensure that data is collected and shared across platforms, events are correlated, and subsequent workflows are triggered appropriately. In fall 2019, AWS introduced CloudWatch Container Insights, which collects metric data and monitors containers. Once collected, the data can also be streamed to data analysis and visualization tools such as Amazon QuickSight or other third-party tools for further analysis.
5. What Are the Security Considerations for Managing Containers in the Cloud?
A common challenge for organizations pursuing containers in the cloud is container security and governance. Compared to virtual machines, the architecture of containers is more complex, so organizations cannot simply apply the security practices used for VMs to containers. With VMs, the host OS, guest OS and guest application environment must be secured. Containerized environments add layers such as the container runtime, host, container images, orchestrators and registries – all of which need to be secured.
The key to securing containers in the cloud is to understand and apply the shared responsibility model. Although AWS offers secure services, organizations need to distinguish the security measures AWS takes from the actions they themselves are responsible for in order to secure all layers of their container environment. For example, Amazon Elastic Container Registry (ECR) provides secure registries by automatically encrypting container images stored in ECR. However, users are still responsible for actions such as ensuring image changes are appropriate and access to the service is restricted. We recommend organizations go through each layer of their container environment and identify the relevant controls. Once the control requirements are identified, next steps include exploring tools to implement the controls and continuously monitoring the container environment to ensure control adherence.
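The layer-by-layer review above can be sketched as a simple checklist walk. The layers mirror those named earlier in this section; the example controls are illustrative, not an exhaustive baseline.

```python
# A sketch of walking each container layer with its control checklist.
# The example controls are illustrative, not an exhaustive baseline.
layers = {
    "image":        ["scan for known CVEs", "sign and verify images"],
    "registry":     ["encrypt at rest (ECR does this)", "restrict IAM access"],
    "runtime":      ["run as non-root", "limit capabilities and syscalls"],
    "host":         ["patch the OS", "minimize installed packages"],
    "orchestrator": ["scope task/pod IAM roles", "isolate networks"],
}

for layer, controls in layers.items():
    for control in controls:
        print(f"[{layer}] verify: {control}")
```

Note how the registry entry captures the shared responsibility split: encryption at rest is handled by AWS, while restricting who can push and pull images remains the organization's job.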
Managing containers in the cloud can enable organizations to focus on application development and reduce management overhead. To maximize the benefits, we recommend organizations think about how a container strategy fits into their enterprise digital transformation and cloud journey. If AWS plays a major part in the cloud strategy, we recommend considering ECS with Fargate as the main container solution to scale Docker containers quickly while gaining cost savings.
Next steps include conducting a cost analysis to determine the optimal container service usage, leveraging spot instances and rightsizing techniques to further optimize costs, gaining observability by implementing monitoring, integrating container management with existing processes, and conducting a thorough security evaluation to secure each layer of the container environment. With these steps, any organization can adopt containers and run them in the cloud with confidence.