Should I Use ECS or EKS to Deploy Containers on AWS?
To ECS, or to EKS, that is the question.
There are two good options for running containerized applications in AWS: Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). EKS gives you a managed Kubernetes cluster, and ECS is AWS's own container orchestrator. They're both great options! And of course, if you ask any serious person which is best or which you should use, the answer is going to be "it depends".
In this article I'll try to answer what it depends on, and give you the information you need to be able to choose moderately wisely. We'll explore the key components of both ECS and EKS, compare their features and differences, and I'll give you my perspective and some tips on when to choose one over the other. Plus, I wrote an entire poem for you at the end.
Amazon ECS (Elastic Container Service)
Let's start with Amazon ECS, AWS's container orchestrator. ECS allows you to run and manage Docker containers on a cluster of EC2 instances or Fargate capacity, without the need to install and operate your own container orchestration software. It's tightly integrated with everything AWS, and runs as a fully managed service.
Key features of ECS include:
Fully managed container orchestration
Support for both EC2 and Fargate launch types
Integration with other AWS services (e.g., ALB, VPC, IAM)
Task definitions and services for defining and managing containers
Scheduling and placement strategies for optimal container placement
Let's dive a bit deeper into it. I already wrote two articles about ECS: ECS Basics and Tips and From EC2 to Scalable ECS, but here I'll try to frame things in a way which makes comparison to EKS easier.
ECS Components
These are the key components of ECS:
Cluster: An ECS cluster is a logical grouping of Amazon EC2 instances or AWS Fargate capacity that you can place tasks on. A cluster allows you to aggregate resources and enables you to run tasks and services. Everything in ECS that is running (Tasks, Services, etc) runs inside a cluster.
Task Definition: A task definition is a template for your application to be able to run in ECS. It specifies the Docker image to use, CPU and memory requirements, the networking mode, environment variables, and other configuration details. A Task Definition is then instantiated into Tasks, which are the actual instances of your application running in ECS.
Tasks: A task is the instantiation of a task definition within a cluster. In other words, it's a running container with the settings defined in the task definition. You can think of a Task as a single container, though in reality you can have a main container (this would be your app) and optional sidecar containers. A Task consumes resources (CPU, memory, storage) up to a certain limit, defined in the Task Definition.
Services: An ECS service allows you to run and maintain a specified number of tasks simultaneously in a cluster. It ensures that the desired count of tasks is running and automatically replaces any failed tasks. The Service can also expose that set of tasks as a single endpoint, via an Application Load Balancer (ALB), and handles service discovery, auto scaling, and rolling updates. Below is a sketch of how these pieces fit together in code.
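Here's a minimal boto3 sketch of that flow. Treat it as a hedged illustration, not a production setup: every name, ID, and ARN is a placeholder, and the ALB Target Group is assumed to already exist.

```python
import boto3

ecs = boto3.client("ecs")

# 1. A Task Definition: the template for your containers.
ecs.register_task_definition(
    family="orders",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",      # 0.25 vCPU
    memory="512",   # 512 MB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",
        "portMappings": [{"containerPort": 8080}],
    }],
)

# 2. A Cluster: the logical grouping everything runs in.
ecs.create_cluster(clusterName="my-cluster")

# 3. A Service: keeps 2 Tasks running and registers them with an
# existing ALB Target Group so the ALB can route traffic to them.
ecs.create_service(
    cluster="my-cluster",
    serviceName="orders",
    taskDefinition="orders",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-aaa111", "subnet-bbb222"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",
    }},
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders/abc123",
        "containerName": "app",
        "containerPort": 8080,
    }],
)
```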
That's it. You write a Task Definition with your Docker image and some configurations, create a Cluster, and launch a Service based on that Task Definition. That Service will take care of creating, maintaining, and scaling Tasks according to your specifications, and will register them with an Application Load Balancer to expose them. But where are those Tasks actually running? You pick the capacity when creating the cluster, and the following are your two options.
ECS Launch Types: EC2 vs Fargate
With the EC2 launch type, your Tasks run on a fleet of EC2 instances, which you need to create and manage. You can, of course, put an Auto Scaling Group to work to ensure capacity scales horizontally.
However, you still need to configure all networking and security settings for the instances, and keep them up to date with security patches. It is cheaper though, and it allows you to use Reserved Instances and Savings Plans. Plus, you can use any EC2 instance type you want. But it's more work.
Fargate, on the other hand, is a serverless compute engine for containers. With Fargate, you don't need to manage any EC2 instances! AWS takes care of provisioning and scaling the compute resources for you, you just tell it what to run (when creating the ECS Tasks and Services) and those containers are magically run in the magic cloud.
Fargate is much easier, and you can still use Savings Plans with it. Plus, you don't need to keep idle capacity, and you can scale much faster. Remember that your container might take anywhere from a few seconds to a minute to start, and with EC2 you're adding a couple more minutes for a new instance to boot, so scaling takes longer. To mitigate this, you typically target a utilization level below 100%, giving yourself room to launch a couple of containers in under a minute, and start launching new EC2 instances before you actually need them.
One of the disadvantages of Fargate is the price. It's hard to do a side-by-side comparison because you don't pay for idle capacity, but let's run some quick numbers:
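Here's a back-of-the-envelope sketch in Python. The prices are us-east-1 on-demand list prices at the time of writing, so treat them as assumptions and check the current pricing pages before deciding anything:

```python
# Rough monthly cost comparison: 170 always-on tasks, each needing
# 1 vCPU and 4 GB of memory (the shape of an m7g.medium).
HOURS_PER_MONTH = 730
TASKS = 170

# EC2: m7g.medium (1 vCPU, 4 GB, Graviton), on-demand
ec2_hourly = 0.0408
ec2_monthly = ec2_hourly * HOURS_PER_MONTH * TASKS

# Fargate (ARM): priced per vCPU-hour and per GB-hour
fargate_hourly = 1 * 0.03238 + 4 * 0.00356
fargate_monthly = fargate_hourly * HOURS_PER_MONTH * TASKS

print(f"EC2:     ${ec2_monthly:,.0f}/month")      # ~ $5,063/month
print(f"Fargate: ${fargate_monthly:,.0f}/month")  # ~ $5,785/month
print(f"Premium: {fargate_monthly / ec2_monthly - 1:.0%}")  # ~ 14%
```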
That premium sounds like a lot. But if you're paying $5k per month just for compute, Fargate will cost you on the order of $1k more. That's not a lot, considering how much effort you save versus managing 170 m7g.medium instances. Of course, at $5,000,000 per month you'll prefer EC2. My advice is to always start with Fargate, unless you already know you'll use a lot of capacity, or you're already managing EC2 instances well and adding these won't be a big problem for you.
Tip: For workloads that are suitable for EC2 Spot Instances, you can also use Fargate Spot, which is the same idea as Spot Instances but managed by Fargate.
Networking in Amazon ECS
There's a whole lot to say here, so I'll give you the abridged version. If you use the EC2 launch type, you need to configure everything in EC2, just like if you weren't using ECS. Remember to design your VPC well, and if your application needs access to AWS services do so via VPC Endpoints. Tip: You'll at least need access to Amazon Elastic Container Registry (Amazon ECR), to pull the Docker images from there.
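If you go the VPC Endpoints route, here's a hedged boto3 sketch of the endpoints a task in private subnets typically needs in order to pull from ECR. The region, IDs, and the extra CloudWatch Logs endpoint are assumptions; adjust them to your setup:

```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"  # placeholder

# Interface endpoints for the ECR API, the Docker registry, and
# (if you ship container logs there) CloudWatch Logs.
for service in ("com.amazonaws.us-east-1.ecr.api",
                "com.amazonaws.us-east-1.ecr.dkr",
                "com.amazonaws.us-east-1.logs"):
    ec2.create_vpc_endpoint(
        VpcId=VPC_ID,
        VpcEndpointType="Interface",
        ServiceName=service,
        SubnetIds=["subnet-aaa111", "subnet-bbb222"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )

# ECR stores image layers in S3, so you also need an S3 gateway endpoint.
ec2.create_vpc_endpoint(
    VpcId=VPC_ID,
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```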
If you're using Fargate, you also need to configure your VPC. Contrary to how AWS Lambda works, ECS tasks don't run on an AWS-managed VPC. Instead, when you create a Service or Task you need to select the VPC where it's going to run, which needs to already exist, and which you need to configure properly.
[Screenshot: configuring the networking settings of an ECS Service]
When you create an ECS service, you also choose how to expose it to the network. The most common options are:
Application Load Balancer (ALB): An ALB allows you to distribute traffic across the tasks in a service, and can handle path-based routing and health checks. You can use an internal or external ALB, depending on whether you want a service to be accessible from within the VPC or from the internet.
Network Load Balancer (NLB): An NLB is a layer 4 load balancer that can handle millions of requests per second with ultra-low latency. It doesn't have all the nice features of an ALB though, so I only recommend it when you truly need that level of performance.
AWS Cloud Map: Cloud Map is an AWS service that allows inter-service communication via DNS names, without hardcoding IP addresses or endpoints. It's not unique to ECS, but you can use it to register your Tasks and allow service discovery without needing an ALB for each ECS Service.
Amazon ECS Service Connect: Service Connect is a feature of Amazon ECS that creates a service mesh across ECS services, solving service discovery by letting you reference service deployments as configuration. It only works with ECS services.
AWS App Mesh: A more general solution that creates a service mesh of containerized applications using the Envoy proxy as a sidecar container. It's not unique to ECS, and the service mesh you create can include things that are not ECS services.
Another service mesh: The above service mesh solutions are managed by AWS, but you can always manually configure another solution, like Traefik.
I think at this point it would be helpful to define the term service mesh. It's a software layer that handles all communication between services in an application. It solves monitoring, logging, tracing, network discoverability, and traffic control on a different layer than the application code and the infrastructure. That way you segregate the distinct responsibilities across layers: doing what your application does, handling all of these features for each unit of deployment (ECS Task, container, Lambda function, etc.), and providing the underlying infrastructure (EC2 instances, Fargate, cluster-level network configurations like VPC Endpoints, etc.). It's pretty complex, and honestly deserves a separate article, which I hope to write some day.
My advice: For a single service, go with an ALB, or an NLB if you need the throughput. For multiple interconnected microservices, first of all question whether you really need microservices, then read a lot more about this topic, then go with whatever service mesh you have experience with. If you don't have experience with any, go with Service Connect if everything is in ECS, or with App Mesh if not everything is in ECS but everything is in AWS.
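To make Service Connect a bit more concrete, here's a hedged boto3 sketch. The cluster, names, Cloud Map namespace, and network IDs are all hypothetical, and the task definition is assumed to declare a port mapping named "http":

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="orders",
    taskDefinition="orders",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-aaa111", "subnet-bbb222"],
        "securityGroups": ["sg-0123456789abcdef0"],
    }},
    serviceConnectConfiguration={
        "enabled": True,
        "namespace": "internal",  # a Cloud Map namespace
        "services": [{
            "portName": "http",   # must match a named port in the task definition
            "clientAliases": [
                # Other services in the namespace reach this one at orders:8080
                {"port": 8080, "dnsName": "orders"},
            ],
        }],
    },
)
```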
Security in Amazon ECS
Aside from network security, the first thing we need to talk about is IAM permissions. You can assign an IAM Role to each ECS Task. It works just like it does for EC2 instances: you don't include IAM credentials in your code; instead, the AWS SDK automatically fetches short-lived credentials and signs your API requests with them. The only small difference is that an ECS Task tends to have an even narrower responsibility than an EC2 instance, so you can tighten those permissions a lot more, following the principle of least privilege.
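As an illustration of how tight those permissions can get, here's a sketch of creating a Task role that can only read from a single S3 bucket. The role name, bucket, and policy are hypothetical examples:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets ECS tasks assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="orders-task-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Least privilege: this task can only read one bucket, nothing else.
iam.put_role_policy(
    RoleName="orders-task-role",
    PolicyName="read-orders-bucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::orders-bucket/*",
        }],
    }),
)

# Then reference it in the task definition:
# ecs.register_task_definition(..., taskRoleArn=role["Role"]["Arn"], ...)
```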
Another big thing is secrets. It should go without saying that you don't include secrets like database credentials or API keys in your code, but instead use Secrets Manager. What I do want to mention is that you don't even need to pass around the secret name and write your own code to fetch the secret value. With ECS you can set the secret ARN as an environment variable, and ECS will automatically fetch the value for you, as if you had set the secret value as the value for the environment variable, but without the obvious bad practice of doing that. It's really simple to do, here's a guide for it.
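Here's what that looks like in a task definition, sketched with boto3. The ARNs and names are placeholders, and note that the execution role needs permission to read the secret (secretsmanager:GetSecretValue):

```python
import boto3

ecs = boto3.client("ecs")

# ECS injects the secret value at container start, so the plaintext
# never appears in your task definition or in your code.
ecs.register_task_definition(
    family="orders",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",
        "secrets": [{
            "name": "DB_PASSWORD",  # exposed to the container as an env var
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-password-AbCdEf",
        }],
    }],
)
```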
Amazon EKS (Elastic Kubernetes Service)
ECS is a container orchestrator by AWS. But there's another, more famous alternative: Kubernetes. Amazon Elastic Kubernetes Service (EKS) is a managed service that gives you a Kubernetes cluster whose control plane is run entirely by AWS. With EKS, all you need to do is set up the capacity (EC2 or Fargate) and everything you want to run inside the cluster. Which is no easy feat, let me tell you.
Of course, creating your own Kubernetes cluster is always possible. However, using EKS gives you the following benefits:
Fully managed Kubernetes control plane
Easier integration with other AWS services
A simpler way to manage and scale worker nodes
Security and compliance features built into the cluster
EKS Components
These are the key components of Kubernetes that EKS manages for you:
Control Plane: The EKS control plane is run and managed entirely by AWS. It provides the Kubernetes API server, the etcd database, and all the core components that manage the state and orchestration of your containers. It's essentially what you create when you create a Kubernetes cluster. AWS creates and manages it for you, and it's highly available (deployed across multiple Availability Zones).
Worker Nodes: Worker nodes are the EC2 instances that run your actual containers and are registered with the EKS cluster. Each worker node runs the Kubernetes node components, such as the kubelet and kube-proxy, and is responsible for running your application pods. EKS lets you create managed node groups, where EKS handles creating, destroying, and scaling the worker nodes for you.
Those are the things specific to EKS. But you'll also need to understand Kubernetes itself.
Kubernetes Components That Are Part of EKS
Let's talk pure Kubernetes. I already wrote an article called Kubernetes on AWS, and another one on how to migrate from EC2 to EKS, but let me give you the lay of the land here.
Pods are the smallest deployable unit in Kubernetes. A pod represents a single instance of a running application in your cluster, and can contain one or more containers that share the same network namespace and storage. They're comparable to ECS Tasks.
Pods are created and managed by Kubernetes Controllers, such as Deployments and StatefulSets. These controllers ensure that the desired number of pods are running and automatically replace failed pods to maintain the desired state. They include the necessary configurations to specify how many pods should be running, how that number scales, and what strategies should be used for scaling.
Pods are exposed via Services. A Kubernetes service is an abstraction that defines a logical set of pods and a policy for accessing them. Services provide a stable IP address and DNS name for a set of pods, allowing them to be accessed reliably by other pods or external clients.
There are different types of services, depending on how the pods are exposed (a code sketch follows the list):
ClusterIP: Exposes the service on a cluster-internal IP, making it only reachable from within the cluster.
NodePort: Exposes the service on each worker node's IP at a static port, making it accessible from outside the cluster by referencing a node's IP address and the port that corresponds to the service.
LoadBalancer: Creates an external load balancer (ELB) in AWS and exposes the service externally using the load balancer's DNS name.
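To make those abstractions concrete, here's a minimal sketch of a Deployment plus a ClusterIP Service, written as Python dicts and applied with the official Kubernetes Python client. The names and image are placeholders:

```python
from kubernetes import client, config, utils

config.load_kube_config()  # e.g. the kubeconfig from `aws eks update-kubeconfig`
api = client.ApiClient()

# A Deployment keeps 3 replicas of the pod running and replaces failures.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "orders"}},
        "template": {
            "metadata": {"labels": {"app": "orders"}},
            "spec": {"containers": [{
                "name": "app",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",
                "ports": [{"containerPort": 8080}],
            }]},
        },
    },
}

# A ClusterIP Service gives those pods one stable in-cluster address.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "orders"},
    "spec": {
        "type": "ClusterIP",
        "selector": {"app": "orders"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

for manifest in (deployment, service):
    utils.create_from_dict(api, manifest)
```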
I could go on for 50 more paragraphs, and everything I know would fall very short of everything there is to know about Kubernetes. I recommend you check out the official documentation, which is incredibly complete and amazingly boring to read, but seriously useful. And the best way I know to learn Kubernetes is with the EKS Workshop.
Kubernetes Networking in EKS
As I said, a serious treatise about Kubernetes is way out of scope for this article. However, I wanted to write a bit about how EKS implements the networking aspects of Kubernetes, especially so we can compare it to ECS.
When you create an EKS cluster, you need to specify the VPC and subnets where your worker nodes and pods will run. You can use public or private subnets for your worker nodes, depending on whether you want them to be accessible from the internet or only from within your VPC. You can (and should!) also use multiple subnets across different Availability Zones to enable high availability.
I mentioned Kubernetes services in the section above, and I guess the impression is that a service of type LoadBalancer is the best way to go. It is, if you're running a single service in your cluster. However, for multiple services you'll want an Ingress Controller and Ingresses.
An Ingress is a Kubernetes API object that manages external access to services in a cluster. It provides a way to define rules for routing external traffic to specific services based on the host or path of the request. A single Ingress can route to multiple Services, though a common pattern is to give each Service its own Ingress.
An Ingress is just the set of rules, and it's implemented by an Ingress Controller, which is responsible for fulfilling the Ingress rules and routing the traffic to the appropriate services. A single Ingress Controller can implement multiple Ingresses. EKS supports several options for Ingress Controllers, such as the AWS Load Balancer Controller (formerly the ALB Ingress Controller) and the NGINX Ingress Controller.
By default, the AWS Load Balancer Controller creates an Application Load Balancer for each Ingress; with its IngressGroup feature, multiple Ingresses can share a single ALB, each mapped to Listeners and Target Groups on it. Either way, the ALB routes traffic to the appropriate Kubernetes services based on the Ingress rules.
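Here's a hedged sketch of such an Ingress for the hypothetical orders Service from before, using the controller's documented annotations (the controller must be installed in the cluster):

```python
from kubernetes import client, config, utils

config.load_kube_config()
api = client.ApiClient()

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {
        "name": "orders",
        "annotations": {
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
            "alb.ingress.kubernetes.io/target-type": "ip",  # route straight to pod IPs
        },
    },
    "spec": {
        "ingressClassName": "alb",
        "rules": [{"http": {"paths": [{
            "path": "/orders",
            "pathType": "Prefix",
            "backend": {"service": {"name": "orders", "port": {"number": 80}}},
        }]}}],
    },
}
utils.create_from_dict(api, ingress)
```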
Of course, AWS Security Groups also play a role in networking. By default, EKS creates a security group for your worker nodes that allows all traffic between pods in the same cluster. In addition to that, you can assign security groups directly to pods, giving you a much more fine-grained control of network permissions.
You can also use the native Kubernetes option: Network Policies. They let you define rules for controlling inbound and outbound traffic to your pods, natively in Kubernetes, in a cloud-agnostic way. Network Policies are implemented by a network plugin, such as Calico or Weave Net, which enforces the policies at the network level.
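As a sketch, here's a NetworkPolicy that locks down the hypothetical orders pods so that only pods labeled app=web can reach them, and only on TCP 8080. Labels are placeholders, and enforcement requires a policy-capable network plugin like Calico:

```python
from kubernetes import client, config, utils

config.load_kube_config()
api = client.ApiClient()

# All inbound traffic to app=orders pods is denied except from app=web.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "orders-allow-web"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "orders"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "web"}}}],
            "ports": [{"protocol": "TCP", "port": 8080}],
        }],
    },
}
utils.create_from_dict(api, policy)
```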
Kubernetes Security in EKS
Properly configured network permissions are an important aspect of security in EKS. Let's talk about the other aspects.
In Kubernetes, you grant permissions to a pod by assigning it a Kubernetes Service Account. Typically that wouldn't mean anything to AWS, but you can use an EKS feature called IAM Roles for Service Accounts (IRSA) to associate an IAM Role with a Kubernetes Service Account, and indirectly with all pods using that Service Account. This enables pods running under that service account to access AWS resources and services using the permissions defined in the IAM Role.
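In practice, IRSA boils down to one annotation on the Service Account. A minimal sketch, assuming the IAM Role (placeholder ARN) already exists with a trust policy for the cluster's OIDC provider:

```python
from kubernetes import client, config, utils

config.load_kube_config()
api = client.ApiClient()

service_account = {
    "apiVersion": "v1",
    "kind": "ServiceAccount",
    "metadata": {
        "name": "orders",
        "annotations": {
            # EKS injects credentials for this role into pods using the account.
            "eks.amazonaws.com/role-arn":
                "arn:aws:iam::123456789012:role/orders-irsa-role",
        },
    },
}
utils.create_from_dict(api, service_account)
# Pods opt in by setting spec.serviceAccountName: orders
```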
There's also the secrets aspect, and in EKS you have two options: storing Secrets in Kubernetes, or using AWS Secrets Manager. The native Kubernetes Secrets object lets you store secrets as base64-encoded strings, which can be mounted into pods as environment variables or files. It's convenient, but base64 is encoding, not encryption: the values live in the etcd database, and unless you enable envelope encryption with AWS KMS they're only as protected as that database. Also, you don't get automatic rotation or versioning.
You can also use Secrets Manager to store your secrets and access them from your Kubernetes pods using the AWS Secrets Manager CSI driver. The CSI driver allows you to mount secrets stored in Secrets Manager as files or environment variables in your pods.
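Here's the native option as a sketch (the name and value are placeholders, and the comment bears repeating: base64 is not encryption):

```python
import base64
from kubernetes import client, config, utils

config.load_kube_config()
api = client.ApiClient()

# Anyone who can read this Secret object can decode the value.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "db-credentials"},
    "type": "Opaque",
    "data": {"password": base64.b64encode(b"hunter2").decode()},
}
utils.create_from_dict(api, secret)
# A pod consumes it via env[].valueFrom.secretKeyRef or a volume mount;
# with the Secrets Manager CSI driver, a SecretProviderClass plays this role instead.
```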
Another important topic is Namespaces. They're a way to divide cluster resources and isolate different teams, projects, or environments. From a security perspective, namespaces provide a way to enforce isolation and access control within your cluster. You can use namespaces to:
Limit the scope of user and service account permissions
Enforce network policies and segmentation
Manage resource quotas and limits
Implement multi-tenancy
Avoid naming conflicts
By deploying resources in separate namespaces, you can ensure that they are isolated from each other and can only communicate if explicitly allowed by a network policy or service. For that reason, it's a best practice to use namespaces to organize everything you create in a cluster, and enforce separation of concerns. For example, you might create separate namespaces for different environments (e.g., dev, staging, prod), teams (e.g., frontend, backend, data), or applications (e.g., web, api, database).
You can also use namespaces to implement role-based access control (RBAC) and limit the permissions of users and service accounts to specific namespaces. For example, you might create a developer role that has read-only access to the dev namespace, and an ops role that has admin access to the prod namespace.
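Here's roughly what that developer example looks like as RBAC objects, sketched with placeholder names and assuming the dev namespace already exists:

```python
from kubernetes import client, config, utils

config.load_kube_config()
api = client.ApiClient()

# Read-only access to the dev namespace for a hypothetical "developer" group
# (map the group via your cluster's authentication configuration).
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "read-only", "namespace": "dev"},
    "rules": [{
        "apiGroups": ["", "apps"],
        "resources": ["pods", "services", "deployments"],
        "verbs": ["get", "list", "watch"],
    }],
}

binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "developers-read-only", "namespace": "dev"},
    "subjects": [{
        "kind": "Group",
        "name": "developer",
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        "kind": "Role",
        "name": "read-only",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

for manifest in (role, binding):
    utils.create_from_dict(api, manifest)
```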
ECS vs EKS: Key Differences and Considerations
Now that we've gone over the key aspects of ECS and EKS, let's compare them side by side and highlight the main differences and considerations.
First, let me give you a side-by-side comparison of the components. Keep in mind that for EKS you define everything in YAML files.
| ECS | EKS |
| --- | --- |
| Cluster | Cluster |
| Task Definition | - |
| Task | Pod |
| Service | Deployment/Service |
| Container Instance | Worker Node |
| Task Placement | Pod Scheduling |
| Service Discovery | Service Discovery |
| Load Balancing | Load Balancing |
| IAM Roles for Tasks | IAM Roles for Service Accounts |
Now let's talk about the differences.
| Aspect | ECS | EKS |
| --- | --- | --- |
| Orchestration | Proprietary, AWS-specific orchestration | Kubernetes orchestration, industry standard |
| Architecture | Simpler, more tightly integrated with AWS services | More complex, leverages Kubernetes abstractions |
| Configuration | Task definitions, JSON format | Kubernetes YAML manifests |
| Service Exposure | Uses ALB/NLB, automatic service discovery and routing | Requires Kubernetes Services, Ingress for external access |
| Networking | Integrates with VPC, security groups | Kubernetes networking, network policies |
| Ecosystem | AWS-specific tools and integrations | Large Kubernetes ecosystem, third-party tools and add-ons |
| Portability | Specific to AWS, vendor lock-in | Portable across cloud providers and on-premises |
| Pricing | Pay for EC2 or Fargate capacity; no charge for the control plane | Pay for the EKS control plane plus EC2 or Fargate capacity |
ECS and EKS Learning Curve
Learning curve is an important factor to consider when choosing between ECS and EKS. This will impact the cost of adoption for your existing team, as well as the cost of hiring new team members.
ECS has a relatively gentle learning curve, especially if you're already familiar with Docker and AWS. ECS handles a lot of the necessary configurations for you, and integrates seamlessly with other AWS services like ALB, VPC, and IAM. I'm sure you noticed how much shorter the section about ECS was, compared to the one about EKS.
However, ECS is specific to AWS. If you're already locked in with AWS, this shouldn't be a problem. Heck, this shouldn't be a problem for 99% of companies! But if you need to keep your options open, including going bare metal as an option, or if you're considering a multi-cloud strategy, ECS is probably not the right choice. If you want to go hybrid with high AWS lock-in, keep in mind that you can run ECS with all of its features on AWS Outposts.
On the other hand we have EKS. It has a steeper learning curve, not because of EKS itself but because of Kubernetes. YAML is really simple, and every single resource you define in Kubernetes will be simple. But there are a lot of things to configure, a lot of options, a lot of ways in which different resources need to interact or need to not interact, and to get it right you need to understand everything.
Kubernetes is an industry-standard orchestration platform that is widely used across different cloud providers and on-premises environments. At a company level, this means you aren't tied to AWS, even if you're using EKS. You can easily go to Google Cloud and get a managed Kubernetes cluster with their Kubernetes Engine service, or to Azure and AKS, or to a manually created cluster on premises. On a personal level, this means that investing in Kubernetes skills and expertise can be valuable beyond just AWS, and can help you build a more portable career.
In addition to that, the Kubernetes community is large and active, with a wide range of third-party tools, add-ons, and integrations available. This rich ecosystem of tools and resources is fantastic, and I can tell you from experience that it's really nice when you can just deploy an application using Helm (a package manager for Kubernetes) instead of only getting the container images and having to configure everything else yourself. However, this adds even more complexity.
Overall, I'd say EKS is only worth it if you as a company already know Kubernetes, or decide that you definitely need to learn it.
Pricing for ECS and EKS
Pricing won't be a big deciding factor, but it's worth talking about a bit. For both ECS and EKS, by far the highest cost will be the compute capacity. In both cases it can come from EC2 instances or Fargate, and in both cases you can use Savings Plans and Reserved Instances. Do pay attention to that! But it won't help you decide between ECS and EKS.
In addition to the compute resources, you may also incur costs for other AWS services that your ECS or EKS resources use, such as:
Elastic Load Balancing (ALB/NLB)
Elastic Block Store (EBS) volumes
Elastic File System (EFS) storage
Data transfer between AZs or out of AWS
There may be some differences there, because ECS and Kubernetes manage those resources a bit differently, but they will be very specific to each application, so I can't even give you a rule of thumb.
The only difference that you can calculate beforehand is the cluster price. ECS clusters are free, while EKS clusters cost $0.10 per hour, about $72/month. However, that's only two engineering hours a month, so complexity will play a much bigger factor.
Vendor Lock-In and Being Cloud Agnostic
Vendor lock-in is a common concern when choosing a cloud provider and platform. It refers to the situation where you become dependent on a specific vendor's proprietary services and APIs, making it difficult or costly to switch to another vendor or platform in the future.
And it's not important for 99% of companies.
ECS is a proprietary service that is specific to AWS, so you'll definitely be locked in with AWS. On the other hand, EKS only solves for you the Kubernetes cluster and the integrations with other AWS services, but everything that you deploy inside the EKS cluster is 99% pure Kubernetes, and is vendor-agnostic. I say 99% pure because there will be a few specific things, like an Ingress being implemented by an ALB, which is obviously not available in other cloud providers. But the major cloud providers' managed Kubernetes offerings are similar enough that moving an EKS cluster to another cloud will be by far the easiest part of that migration.
The key point here is how much is being cloud agnostic really worth, and how much does vendor lock-in with a cloud provider really hurt you. I believe that's very opinion-based, but my opinion is that you shouldn't be scared to be locked in with any major cloud provider. They won't go out of business any time soon, they won't deploy non-backwards-compatible changes or deprecate services without a significant notice period (typically a year or more), and they won't change how they work.
Don't get me wrong, I do think vendor lock-in is something that you should make reasonable efforts to avoid. But understand that not every product or company that you're locked in with is the same. For example, there's a reason nobody arguing for EKS over ECS is mentioning that you'd be locked in with Kubernetes. Being vendor locked in means you can't migrate easily, so let's think about the reasons why you'd migrate away from a technology, product, or company.
The first reason should be deprecation. If the company ceases to exist, or ceases to offer the product or service, and you're highly dependent on it, you're in deep trouble. Any company and product can disappear, but what you need to analyze is the probability of it disappearing. Any particular npm library is likely to disappear or at least stop getting support in a few years, but major cloud providers are highly unlikely to disappear soon, and have a track record of not sunsetting services often.
The second reason is security. Maybe it doesn't disappear, but it introduces a big security risk for you. I'm talking about zero-day vulnerabilities, serious exploits that are hard to patch, and that sort of thing. Again, a random npm library maintained by a single person is likely to suffer from this, but a major cloud provider isn't.
The third reason is price, or more specifically price increases. It is definitely possible for cloud providers to increase their price, but I'd argue it's unlikely to happen, considering it hasn't happened in the past (well, except for IPv4 IP addresses).
If being cloud agnostic was free, I'd 100% recommend it! But it isn't. It comes with a lot more effort, and sometimes higher costs (other than the cost of engineering effort). What does that buy you? The ability to move away from a cloud provider much faster and much cheaper. Because keep in mind that even if you're very locked in, you can always move, it's just harder and more expensive. So my question is, how likely are you to move away from your cloud provider in the mid term? For 99% of companies, I'd say very unlikely.
When to Choose ECS
I'll keep this short:
If you know AWS and don't know Kubernetes, use ECS
If you are 100% sure you need Kubernetes, don't use ECS
If you need and/or want to be cloud agnostic, don't use ECS
If you know Kubernetes and use a lot of Helm charts, don't use ECS
Reasoning: ECS is very powerful, and it's simpler. So my advice is to default to ECS.
When to Choose EKS
Again, I'll keep it simple:
If you are a Kubernetes expert, use EKS
If you need to be cloud agnostic, use EKS
If you need some Kubernetes features, use EKS
If you are 100% sure you need Kubernetes, use EKS
If you can greatly benefit from using Helm charts, use EKS
EKS is more complex. That's not necessarily a bad thing, just make sure paying for that complexity is worth it. Also, keep in mind that if just one person knows Kubernetes, they become a single point of failure.
Conclusion
Choosing the right container orchestration platform requires a lot of knowledge and thought. I gave you the long version, with the basics for both platforms, some comparisons, and my opinion about when to choose each.
ECS is simpler, nearly as powerful, and specific to AWS. EKS is more complex (because Kubernetes is complex), a bit more powerful in the very fine grained details, and cloud agnostic. Being cloud agnostic isn't always worth it though.
So, my final piece of advice, and my answer to the title question is: Default to ECS, only use EKS if you already know Kubernetes, or absolutely need it.
And here's the poem I promised you:
To ECS, or to EKS, that is the question.
Whether 'tis nobler in the mind to suffer
The higher complexity of Kubernetes,
Or to take arms against a sea of YAML,
And with ECS end them: to wreck your brains
No more; and by AWS, to say we end
The headache, and the thousand YAML files
That Kubernetes is heir to? 'Tis a consummation
Devoutly to be wished. To rest, to dream
To dream, perchance of simplicity; aye, there's the rub,
For in that simplicity, what microservices may come,
When we have shuffled off this YAML hell,
Must give us pause. There's the respect
That makes architecture so troublesome:
For who would bear the whips and scorns of time,
The Product Owner's wrong, the PM's arrogance,
The pangs of changing requirements, Conway's Law's bane,
The insolence of the Chief Technology Office, and the spurns
That patient merit of th'unworthy takes,
When he himself might his deprecation make
With a bare pull request? Who would the EOL support bear
To grunt and sweat under a weary life,
But that the dread of something after deprecation,
The undiscovered legacy client, from whose account
No engineer returns, puzzles the will,
And makes us rather bear those ills we have,
Than fly to others that we know not of?
Thus architecture does make cowards of us all,
And thus the native hue of resolution
Is sicklied o'er, with the pale cast of thought,
And enterprises of great pitch and funding,
With this regard their market currents turn awry,
And lose the name of action. Soft you now,
The Simple AWS? Newsletter, in thy articles,
Be all my sins remember'd.