Simple Serverless Application with Lambda and DynamoDB
Understanding a simple serverless application with AWS Lambda and Amazon DynamoDB, with 25 best practices for it.
Imagine you're building a web application, and you're not sure whether you'll get 1 or 1,000 users in your first week. Traffic is not going to be consistent, because your app depends on trends you can't predict. You think serverless is a good choice (and you're right for this case!), you know the details about Lambda, S3, API Gateway and DynamoDB, but you're not sure how everything fits together.
We're going to use the following AWS services:
Lambda: Our serverless compute layer. You put your code there, and it automagically runs a new instance for every concurrent request, scaling really fast and really well.
S3: Very cheap and durable storage. We'll use it to store our frontend code in this case.
API Gateway: Exposes endpoints to the world, with security and lots of good engineering features. We'll use it to expose our Lambda functions.
DynamoDB: Managed NoSQL database. We'll use it to store our data.
Steps to Create a Simple Serverless Application
Step 1: Design your application
Before you start building, it's essential to have a clear understanding of how your application's different components will fit together. In a serverless architecture, your application can be split into multiple smaller functions, each with its own specific task, which can be executed independently.
So, the first step is to identify the things your app needs to do, group them into functions, and understand how they talk to each other. This process is called (micro)service design, and I could write a whole book on it (maybe some day I will!). But for now, let's leave it at that and focus on the AWS side.
In our example, we'll just have a single Lambda function that talks to DynamoDB.
Step 2: Design your DynamoDB tables
NoSQL doesn't mean no structure (it doesn't even mean non-relational, or non-ACID-compliant). Understand what data you'll need to store and how you're going to read it. Decide on a structure, a partition key and sort key, and Global and Local Secondary Indexes wherever they're needed.
In our example, we'll just have a single DynamoDB table with partition key id and no sort key, to keep things simple. However, if you want to design the data in DynamoDB for a more complex application, read DynamoDB Database Design.
Step 3: Host your static website on S3
The gist of it is that you're hosting your static website's files on an S3 bucket and using it as a web server. S3 website endpoints don't support HTTPS, so you'll use CloudFront for that and as a CDN. I won't bore you with the details, there's a tutorial for that.
In our example, the details don't matter much, but it'll look just like in the tutorial: S3 bucket with the files, domain in Route 53, CloudFront used to expose the site with an SSL certificate from ACM.
Step 4: Create your DynamoDB table
You designed the data part. Now create the table and configure the details such as capacity. In our example, we'll leave it as On Demand. It's basically full serverless mode: it costs roughly 7x more per request than Provisioned capacity, but it scales instantaneously. We're picking On Demand because we don't know our traffic patterns and because it's simpler; we can optimize later.
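If you want to script this step instead of clicking through the console (or as a stopgap before moving it into IaC, see the best practices below), a minimal sketch with boto3 could look like this. The table name Items and the region are assumptions for illustration:

```python
import boto3

# Assumed table name and region; adjust to your setup.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="Items",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],  # partition key only, no sort key
    BillingMode="PAY_PER_REQUEST",  # On Demand mode
)

# Wait until the table is ready before using it.
dynamodb.get_waiter("table_exists").wait(TableName="Items")
```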
Step 5: Create your Lambda function
Give it a name, give it an IAM Role with permissions to access your DynamoDB table, put the table name in an environment variable, and put your code in.
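A minimal sketch of what that function's code could look like in Python, assuming the table name lives in a TABLE_NAME environment variable (the variable name is an assumption) and the function sits behind an API Gateway HTTP API with payload format 2.0; the routes match the ones we'll create in Step 6:

```python
import json
import os
from decimal import Decimal

import boto3

# Created once per execution environment and reused across invocations.
TABLE_NAME = os.environ["TABLE_NAME"]  # assumed environment variable name
table = boto3.resource("dynamodb").Table(TABLE_NAME)


def handler(event, context):
    # HTTP APIs (payload format 2.0) put the matched route in event["routeKey"].
    route = event.get("routeKey", "")
    path_params = event.get("pathParameters") or {}

    if route == "GET /items/{id}":
        body = table.get_item(Key={"id": path_params["id"]}).get("Item", {})
    elif route == "GET /items":
        # A Scan is fine for a small table; see the database access tips below.
        body = table.scan().get("Items", [])
    elif route == "PUT /items":
        # parse_float=Decimal because DynamoDB doesn't accept Python floats.
        item = json.loads(event["body"], parse_float=Decimal)
        table.put_item(Item=item)
        body = item
    elif route == "DELETE /items/{id}":
        table.delete_item(Key={"id": path_params["id"]})
        body = {"deleted": path_params["id"]}
    else:
        return {"statusCode": 404, "body": json.dumps({"error": "route not found"})}

    # default=str handles Decimal values coming back from DynamoDB.
    return {"statusCode": 200, "body": json.dumps(body, default=str)}
```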
Step 6: Create your API Gateway API
First, create an HTTP API. Then create the routes. Then create an integration with your Lambda function. Finally, attach that integration to your routes.
In our example, you should create the routes GET /items/{id}, GET /items, PUT /items and DELETE /items/{id}.
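Here's a rough sketch of those steps with boto3. In practice you'd define all of this in IaC (see the best practices below); the API name and function name are assumptions for illustration:

```python
import boto3

apigw = boto3.client("apigatewayv2")
lambda_client = boto3.client("lambda")

# Assumed function name from Step 5.
function_arn = lambda_client.get_function(FunctionName="items-function")["Configuration"]["FunctionArn"]

# 1. Create the HTTP API.
api = apigw.create_api(Name="items-api", ProtocolType="HTTP")

# 2. Create a Lambda proxy integration.
integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="AWS_PROXY",
    IntegrationUri=function_arn,
    PayloadFormatVersion="2.0",
)

# 3. Attach the integration to each route.
for route_key in ("GET /items/{id}", "GET /items", "PUT /items", "DELETE /items/{id}"):
    apigw.create_route(
        ApiId=api["ApiId"],
        RouteKey=route_key,
        Target=f"integrations/{integration['IntegrationId']}",
    )

# 4. Deploy with an auto-deployed default stage.
apigw.create_stage(ApiId=api["ApiId"], StageName="$default", AutoDeploy=True)

# Allow API Gateway to invoke the function (ideally restrict it with SourceArn).
lambda_client.add_permission(
    FunctionName="items-function",
    StatementId="apigateway-invoke",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
)
```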
Read more about API Gateway on Monitoring and Securing Serverless Endpoints with API Gateway.
Key Points About Serverless vs Serverful
We're building a standard serverless app, so there's not much to discuss other than serverless vs serverful. Here are the key points on that:
Operations: In serverless you don't manage the servers, so there's way less ops work.
Scalability: It's pretty easy to make a serverful app scale with an Auto Scaling Group. But there's always a delay in scaling, even when using ECS or EKS on Fargate. In serverless there's also a delay (called cold start), but it's significantly smaller.
Cost: It's more expensive per request, period. The final bill can come out cheaper because you have zero unused capacity (unused capacity is what you waste when your EC2 instance is using 5% of the CPU and you're paying for 100%). Unused capacity tends to decrease a lot as apps grow, because we understand traffic patterns better and because traffic variations are not as proportionally big (it's easier to go from 10,000 to 11,000 than from 0 to 1,000, even though the increase is 1,000 in both cases).
Development speed: There's still servers, but AWS takes care of them for us. That removes a lot of work on our side, which means we can develop faster.
Developer experience: The infrastructure work that remains is typically pushed to developers. Serverless developers usually like this. Non-serverless developers either hate it and only want to write application code, or they want to become serverless developers; there's no middle ground. Keep this in mind when hiring.
Optimization: There's usually a lot to optimize, based on the fine implementation details. I try to give readers the tools to do this, but you should consider hiring a consultant for a couple of hours a week.
Best Practices for Serverless Applications in AWS
I'll try to keep the best practices focused on this solution. For best practices specific to each service, check 20 Advanced Tips for AWS Lambda, Handling Data at Scale with DynamoDB and Monitoring and Securing Serverless Endpoints with API Gateway.
Operational Excellence
Always use Infrastructure as Code: You knew it was coming. I'll never stop saying it until the AWS web Console is deprecated (ok, that might be a bit extreme). For serverless solutions you can check out AWS SAM, which builds on top of CloudFormation, or the Serverless Framework, a great declarative option. Or use your regular favorite tool.
Use asynchronous processing: To improve scalability and reduce costs, consider using asynchronous processing for things that don't need to happen in real-time. For example, you can use Lambda functions triggered by S3 events to process uploaded files in the background, or you can use SQS to queue up messages for later processing by Lambda functions.
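For instance, a sketch of the SQS hand-off pattern in Python; the queue name items-tasks and the existence of a separate "worker" Lambda subscribed to the queue are assumptions (the queue and event source mapping would be created separately, ideally in IaC):

```python
import json

import boto3

sqs = boto3.client("sqs")
# Assumed queue name; created separately from this code.
QUEUE_URL = sqs.get_queue_url(QueueName="items-tasks")["QueueUrl"]


def enqueue_task(task: dict) -> None:
    """Called from the API-facing Lambda: hand off work instead of doing it inline."""
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(task))


def worker_handler(event, context):
    """Handler for a separate Lambda triggered by the SQS queue."""
    for record in event["Records"]:
        task = json.loads(record["body"])
        # ... do the slow work here (resize images, send emails, etc.) ...
        print(f"Processed task: {task}")
```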
Monitor the app: Use CloudWatch to monitor your app's metrics, logs, and alarms. Set up X-Ray to trace requests through your application and identify bottlenecks and errors.
Automate deployment and testing: Basically, use a CI/CD pipeline. Since there are going to be multiple functions, you'll want a pipeline for each. Use the same CI/CD practices as for microservices, even if your Lambdas are not microservices.
Security
Set up Authentication: Use API Gateway to set up authentication, so your API endpoints are not public. You can use Amazon Cognito, or your own custom authorizer.
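If you roll your own, a minimal sketch of an HTTP API Lambda authorizer using simple responses could look like this; the header check against a static token is a placeholder for illustration, not a real auth scheme:

```python
import os

# Placeholder secret for illustration only; use Cognito, JWTs or a real
# identity provider in practice.
EXPECTED_TOKEN = os.environ.get("API_TOKEN", "change-me")


def authorizer_handler(event, context):
    # HTTP API Lambda authorizer with "simple responses" enabled: return whether
    # the request is allowed, plus optional context for the backend integration.
    token = event.get("headers", {}).get("authorization", "")
    return {
        "isAuthorized": token == EXPECTED_TOKEN,
        "context": {"caller": "example"},
    }
```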
Use Web Application Firewall: Set up AWS Web Application Firewall (WAF) in your API Gateway APIs.
Encrypt data at rest: Both S3 and DynamoDB encrypt data by default, so you're probably good with this. AWS manages the encryption key though; you can change the configuration to use a key you manage instead.
Encrypt data in transit: DynamoDB already encrypts data in transit. For your static website in S3, set up CloudFront with an SSL certificate.
Implement least privilege access: Give your Lambda functions an IAM Role that lets them access the resources they need, such as DynamoDB. Only give them the permissions they actually need; for example, give the role read permissions on table1 instead of giving it * permissions on all of DynamoDB (there's a sketch of such a policy below).
Control access to your AWS account: Here are 7 Must-Do Security Best Practices for your AWS Account.
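As a sketch, a least-privilege policy for a read-only function could look like this; the region, account ID, table name and policy name are placeholders, and in practice you'd attach it through your IaC tool rather than boto3:

```python
import json

import boto3

iam = boto3.client("iam")

# Placeholders: replace region, account ID and table name with your own.
table_arn = "arn:aws:dynamodb:us-east-1:123456789012:table/table1"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Only the read actions the function actually uses, only on this table.
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": table_arn,
        }
    ],
}

iam.create_policy(
    PolicyName="lambda-table1-read",
    PolicyDocument=json.dumps(policy),
)
```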
Reliability
Implement retries and circuit breakers: There's a lot that can (and will) go wrong, such as network errors or a service throttling you. To recover from these failures, implement retries in your code (use randomized exponential backoff). To prevent these failures from cascading, implement circuit breakers.
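Note that boto3 already retries throttled AWS calls for you; this pattern matters most for calls to your own or third-party services. A sketch of randomized exponential backoff, where the helper name and parameters are made up for illustration:

```python
import random
import time


def call_with_retries(operation, max_attempts=5, base_delay=0.2):
    """Retry a flaky call with randomized (jittered) exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts, let the error propagate
            # Full jitter: sleep somewhere between 0 and base_delay * 2^attempt.
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))


# Usage: wrap a call to a downstream service that might throttle or time out.
# result = call_with_retries(lambda: table.get_item(Key={"id": "123"}))
```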
Use Lambda versions: You can create versions of your Lambda functions, and point API Gateway routes to a specific version. That way, you can deploy and test a new version without disrupting your prod environment, and make the switch once you're confident it works. You can also roll back.
Use canary releases: Versions also allow you to have API Gateway send a small part of the traffic to the new version, so you can test it with real data without impacting all of your users (it's better if it fails for 1% of the users than for 100%).
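One common way to implement this is a weighted Lambda alias that your API Gateway integration points to. A sketch with boto3, where the function name, alias name, current stable version and the 5% weight are all assumptions:

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish the code currently on $LATEST as an immutable version.
new_version = lambda_client.publish_version(FunctionName="items-function")["Version"]

# Keep the alias mostly on the stable version, sending 5% of traffic to the
# new one. The API Gateway integration would target the alias ARN.
lambda_client.update_alias(
    FunctionName="items-function",
    Name="live",              # assumed alias name
    FunctionVersion="1",      # assumed current stable version
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.05}},
)

# Once you're confident, shift all traffic and clear the routing config:
# lambda_client.update_alias(FunctionName="items-function", Name="live",
#                            FunctionVersion=new_version, RoutingConfig={})
```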
Back up DynamoDB: Stuff happens, and data can get lost. To protect from that, use DynamoDB backups. If it does happen, use point-in-time recovery to restore the data.
Performance Efficiency
Use caching: To reduce latency and improve performance, consider adding caches. For example, you can use CloudFront to cache your static website, and even for API Gateway responses. DynamoDB usually has a really fast response time, but if it's not fast enough for you, you can use DAX.
Optimize database access: Like I said above, there's a lot to talk about here. Use indexes whenever needed, always use query instead of scan, and don't fetch all attributes if you don't need them. Check Handling Data at Scale with DynamoDB to understand how these things work.
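Our example table only has a partition key, so here's a sketch against a hypothetical Orders table (partition key customer_id, sort key order_date, both assumptions) showing a Query with a projection instead of a Scan:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table for illustration: partition key customer_id, sort key order_date.
orders = boto3.resource("dynamodb").Table("Orders")

# The Query touches only one customer's items (a Scan would read the whole table),
# and the projection fetches only the attributes we actually need.
response = orders.query(
    KeyConditionExpression=Key("customer_id").eq("cust-123")
    & Key("order_date").begins_with("2024-"),
    ProjectionExpression="#d, #t",
    ExpressionAttributeNames={"#d": "order_date", "#t": "total"},
)
for order in response["Items"]:
    print(order)
```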
Understand how DynamoDB scales: I wrote an entire post about it, titled Understanding how DynamoDB scales.
Rightsize your Lambdas: You get to pick the amount of memory a Lambda function has, and the CPU power (and the price!) is tied to that. Too small, and your Lambda runs slow. Too big, and it gets super expensive. You can use a profiling tool on your code running locally to determine how much memory it needs, but often it's simpler to just try different values until you find the sweet spot. Do this semi-regularly though, since these requirements can change as your code evolves.
Minimize response payload: You can improve response times by sending less data in the response. If you have multiple use cases for the data, and some need a lot of it and some just a summary, it's a good idea to create a new endpoint for the summary, even if you could solve it by querying the other endpoint and summarizing the data in the front end.
Optimize cold starts: Lambda keeps a few instances of your function running, and launches more when needed. The time it takes to launch a new instance is called a cold start, and it's mostly spent running the code that sits outside the handler function (plus some things AWS needs to do, like starting the runtime). Anything you put there runs only once per instance, and the results are reused for all future invocations on that instance. So put all initialization code there, but keep it lean so your new Lambda instances don't take too long to start.
Cost Optimization
Rightsize your Lambdas (again): Finding the right size can save you a significant amount of money, on top of saving you a significant amount of headaches from Lambdas not performing well.
Monitor usage: Use CloudWatch to monitor and analyze usage patterns. Use Cost Explorer to monitor costs and figure out where your optimization efforts can have the most impact.
Use DynamoDB Provisioned mode: It's much cheaper per operation! You just need to figure out how much capacity you actually need, and deal with throttling due to insufficient capacity (while you're waiting for it to scale). More often than not, you should start with On Demand because it's easier, then consider moving to Provisioned because once you get some traffic it makes a significant difference. Not all workloads are suited for Provisioned mode, but most are.
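When you do switch, it's a single update_table call. The capacity numbers below are placeholders you'd derive from your consumed-capacity metrics in CloudWatch, and keep in mind you can only switch billing modes once per 24 hours:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Placeholder capacity values; base yours on observed consumed capacity.
dynamodb.update_table(
    TableName="Items",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 50, "WriteCapacityUnits": 10},
)
```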
Go serverful: Wait, what? We were talking about serverless! Yeah, but not every workload is well suited for serverless. I intentionally proposed a scenario that is, but that might not be your case. Here's the trick though: you don't have to go all in on serverless or on serverful. You can split your workload, and use whatever makes more sense for each part. This will mean increased operational efforts though, because you're effectively maintaining two different architectures. And if they mingle, there are scenarios to consider, such as a Lambda function scaling to 1,000 concurrent invocations in 10 seconds, all of which try to hit a service running on a poor EC2 instance. Just keep it in mind, you're not married to serverless just because you've been using it for a few years. For a longer discussion about this, read Architecting with AWS Lambda.
Use Savings Plans: Commit to a certain usage, do a zero, partial or total upfront payment, and enjoy great discounts on your compute resources. Yes, it does work for Lambda.
Use Provisioned Concurrency: This is like serverful serverless. You pay to keep a certain number of Lambda instances always running, but you pay a lower price. Furthermore, it's great for ensuring a minimum capacity.
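A sketch of how you'd set it with boto3; Provisioned Concurrency has to target a published version or an alias (never $LATEST), and the function name, alias name and instance count here are assumptions:

```python
import boto3

lambda_client = boto3.client("lambda")

# Targets the assumed "live" alias of the assumed "items-function".
lambda_client.put_provisioned_concurrency_config(
    FunctionName="items-function",
    Qualifier="live",
    ProvisionedConcurrentExecutions=10,  # instances kept warm at all times
)
```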
Recommended Tools and Resources
Still working on my Security Specialty cert, using this course. There are courses for other certs as well.
Check out these re:Invent talks:
For design patterns for DynamoDB: DynamoDB deep dive: Advanced design patterns (from re:Invent 2021). (Note: I was in the audience for this one!)
For a discussion about data models: Deploy modern and effective data models with Amazon DynamoDB (from re:Invent 2022)
For a serverless, event-driven application: The basics here: Get started building your first serverless, event-driven application (from re:Invent 2022); and the Serverlespresso Workshop to get practical.
I've also got two workshops for you: Help Wild Rydes build a unicorn ride sharing platform using serverless architectures, or build and deploy a completely serverless web application with The Innovator Island (they're awesome workshops).