Simple AWS: 20 Advanced Tips for Lambda
From the basics of what AWS Lambda is, to 20 advanced tips, 4 tools, and a workshop
Welcome to Simple AWS! A free newsletter that helps you build on AWS without being an expert. This is issue #8. Shall we?
AWS Service: Lambda
tl;dr: Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. It's the most basic serverless building block, especially for event-driven architectures.
Here's how it works:
- You create a function and write the code that goes in it.
- You set up a trigger for that function, such as an HTTP request.
- You configure CPU and memory, and give the function an execution role with IAM permissions.
- When the trigger event occurs, an isolated execution environment starts, receives the event, runs the code, and returns.
- You only pay for the time your code is actually running.
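Those steps map almost one-to-one to the handler code. Here's a minimal sketch, assuming an API Gateway proxy trigger (the response shape follows that integration's conventions; the `name` query parameter is just for illustration):

```python
import json

def handler(event, context):
    # With the API Gateway proxy integration, the HTTP request details
    # (path, headers, query string, body) arrive in the event dict.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    # The return shape below is what API Gateway expects back
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

A nice side effect of this shape: you can invoke it locally with `handler({}, None)`, no AWS account needed, which is handy for unit tests.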
Obviously, that code runs somewhere. The point is that you don't manage or care where ('cause it's serverless, you see). Every time a request comes in, Lambda will either use an available execution environment or start a new one. That means Lambda scales up and down automatically and nearly instantly.
Here are the finer details:
- Supported languages: Node.js, Python, Ruby, Java, Go, C# and PowerShell, with TypeScript supported through the Node.js runtime. Use a custom runtime for other languages.
- Lambda functions can be invoked from HTTP requests, in response to events from other services, or at defined time intervals (cron jobs).
- Billing is calculated as execution time * assigned memory (GB-seconds), plus a fixed charge per invocation. CPU power is allocated proportionally to the memory you assign.
- Lambdas aren't actually instantaneous: there's a cold start (the time it takes to initialize a new execution environment). Check the tips below for ways to mitigate it.
- Logs are automatically generated and sent to CloudWatch Logs.
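To make the billing formula concrete, here's a back-of-the-envelope calculation. The per-GB-second and per-request prices below are the us-east-1 x86 rates at the time of writing, and the math ignores the free tier, so treat the numbers as illustrative:

```python
# Lambda pricing: duration cost (GB-seconds) + per-request charge.
# Assumed rates (us-east-1, x86, free tier ignored); check current pricing.
PRICE_PER_GB_SECOND = 0.0000166667  # USD
PRICE_PER_REQUEST = 0.0000002       # USD ($0.20 per million requests)

def monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    # Duration is billed per millisecond, scaled by the memory you assigned
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# 1M invocations/month at 200 ms average with 512 MB assigned:
print(round(monthly_cost(1_000_000, 200, 512), 2))  # → 1.87
```

Note that doubling the memory doubles the duration cost, but it also doubles the CPU, so the function may finish faster. That trade-off is exactly what Power Tuning (one of the tips below) explores.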
The most important tip is that you don't need to do everything in this list, and you don't need to do it all right now. But take your time to read it; I bet there's at least one thing in there that you should be doing but aren't.
- Lambdas don't run in a VPC, unless you configure them for that. You need to do that if you want to access VPC resources, such as an RDS or Aurora database.
- Use environment variables for configuration, instead of hardcoding values that change between environments.
- If you need secrets, put them in Secrets Manager and put the secret name in an environment variable.
- As always, grant minimum permissions only.
- Use versioning and aliases, so you can do canary deployments, blue-green deployments and rollbacks.
- Use Lambda layers to reuse code and libraries.
- For constant traffic, Lambda is more expensive than anything serverful (e.g. ECS). The benefit of Lambda is in scaling out extremely fast, and scaling in to 0 (i.e. if there's no traffic you don't pay).
- Not everything needs to be serverless. Choose the best runtime for each service.
- Use Lambda Power Tuning to find the memory setting with the best performance/cost trade-off for your function.
- Set provisioned concurrency to guarantee a minimum number of warm execution environments. Keep in mind it's not free.
- There's an account limit for concurrent Lambda executions (1,000 per region by default; you can request an increase). To ensure a particular function always has available capacity, use reserved concurrency. It also acts as a maximum concurrency for that function.
- Use compute savings plans to save money.
- Already using containers? Run your containerized apps in Lambda. You could also consider ECS Fargate.
- Don't use function URLs. If you want to trigger functions from HTTP(S) requests, use API Gateway instead. Here's a tutorial, and we'll talk more about it next week.
- If you have Lambdas that call other Lambdas, monitoring and tracing them is a pain, unless you use AWS X-Ray.
- Use SnapStart to improve cold starts by up to 10x (Java only, for now...).
- Code outside the handler only runs on environment initialization, not on every invocation. Put everything you can there, such as initializing SDK clients.
- Reduce request latency for geographically distributed users with AWS Global Accelerator.
- If you're processing data streams from Kinesis or DynamoDB, configure the parallelization factor to process a single shard with more than one simultaneous Lambda invocation.
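Three of those tips (environment variables, secrets in Secrets Manager, initialization outside the handler) combine naturally in code. Here's a sketch; `DB_SECRET_NAME`, the secret's JSON shape, and the injectable `client` parameter are my own illustrative choices, not anything Lambda prescribes:

```python
import json
import os

# Module-level code runs once per execution environment (on cold start),
# so this cached value survives across warm invocations.
_cached_password = None

def get_db_password(client=None):
    # The client parameter lets you inject a stub in tests; in Lambda
    # you'd let it default to the real Secrets Manager client.
    global _cached_password
    if _cached_password is None:
        if client is None:
            import boto3  # imported lazily so the sketch is testable offline
            client = boto3.client("secretsmanager")
        response = client.get_secret_value(SecretId=os.environ["DB_SECRET_NAME"])
        _cached_password = json.loads(response["SecretString"])["password"]
    return _cached_password

def handler(event, context):
    password = get_db_password()  # warm invocations skip the API call
    # ...connect to the database and do the actual work...
    return {"statusCode": 200}
```

The execution role for this function would need `secretsmanager:GetSecretValue` on that one secret, and nothing more (minimum permissions, as always).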
Lambda is supported by every IaC tool out there. But if you're working with serverless, you'll want to check out these options (and pick one, don't mix them):
- AWS SAM: It's like CloudFormation, but built for serverless. In fact, to deploy an app, your SAM template is translated to CloudFormation (or you can use SAM Accelerate). It also lets you run Lambda functions locally.
- Serverless Framework: Cloud-agnostic IaC tool specifically built for serverless. Works great with Lambda and many other serverless AWS services such as SQS, SNS, API Gateway, DynamoDB and more. The bad news is that if a service isn't supported, you can't work around it. It also lets you run your apps locally (though I've encountered a few problems with SNS and SQS).
- AWS CDK: An IaC tool built for programmers. Most tools are declarative: you write a config file. CDK is imperative: you write actual code, with variables, control structures and loops. It's not specific to serverless, but it's a lot more dev-friendly than most. It also supports running your apps locally.
All of the above are great, but a bit limited for running things locally. LocalStack is better (though some features are paid). Try to use your IaC tool's capabilities, but if you hit a wall, definitely give LocalStack a shot.
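For reference, here's roughly what a minimal SAM template looks like for a function triggered by an HTTP GET. The resource name, handler path, runtime and route are placeholders, so adjust them to your app:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31  # this line is what makes it a SAM template

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler      # file app.py, function handler
      Runtime: python3.9
      CodeUri: src/
      MemorySize: 256
      Timeout: 10
      Events:
        HelloApi:
          Type: Api             # SAM creates the API Gateway for you
          Properties:
            Path: /hello
            Method: get
```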
Serverless Land is a place with a ton of serverless resources.
And check out the Serverlesspresso AWS Workshop. They built a serverless app to serve coffee, and set up a coffee shop at the expo at AWS re:Invent 2021.
Code without servers,
Scaling at the speed of light,
That's Lambda for you.
It was about time for Lambda, right? I mean, we even talked about using X-Ray with Lambda! My first thought was that you either already know about Lambda or intentionally skipped it for a reason. But a lot of the customers for whom I've consulted weren't using the best features of Lambda, so I figured there was a need to cover it here.
With that said, I'm going to make a few changes to Simple AWS:
- It's going to run only once a week, on Mondays.
- It won't be focused on a service anymore, but on a use case (which will be tied to a service or two).
- It will be a bit longer than the previous issues. Actually, this issue is a good example.
- The tips will be less like "check this out" and more like "do this".
Why the changes? Because the goal was always to help you build, not help you learn. Running Simple AWS twice a week and keeping issues under 5 minutes required a brevity that I find highly valuable (I hope you did as well), but which resulted in every issue being a tl;dr and a feature dump. Then it was up to you to figure out how to improve your stuff.
What I want to do instead is show you what a good architecture looks like, and how to get there. How does that sound?
Have a fantastic New Year celebration, and a great 2023!
As the new year approaches, so do new year's resolutions. If one of yours is to get AWS Certified, I highly recommend Adrian Cantrill's courses. With their mix of theory and practice, they're the best I've seen. I've literally bought them all (haven't watched them all yet).
The above is an affiliate link, I get paid a commission if you purchase through there. Regardless, I highly recommend Adrian's courses. I'm just being open about this.
Thank you for reading! See ya on the next issue.