AWS now charges you for the INIT phase, which means you pay for each cold start[^1].
This blog provides tips on optimizing your Lambda to minimize cold starts and outlines the necessary configuration settings to reduce costs.
Why do we have a Cold Start in AWS Lambda?
Let me show you this beautiful animation that explains what's happening during a Cold Start[^2]:
As you can see, preparing the Lambda environment requires downloading and starting the execution environment.
The following section demonstrates how to minimize cold start and thereby reduce execution time.
Reducing the Cold Start
We can keep the Cold Start short by minimizing the Lambda package size.
Remember when we had to wait for the pirate movie to finish downloading because the file was too big?
Similarly, AWS Lambda needs to download your code before executing it.
Reduce dependencies
For that, we can remove unnecessary packages, such as dev dependencies or the AWS SDK, which is already included in the execution environment.
The dependencies are declared in a manifest file: package.json for Node.js, requirements.txt for Python, or go.mod for Go. For Python, I recommend keeping a separate requirements.dev.txt that pulls in requirements.txt plus the developer dependencies, so that only requirements.txt ends up in the deployment package.
# in requirements.dev.txt
-r requirements.txt
pytest
pytest-cov
black
flake8
boto3
I recommend using NodeJS, not because it fits the event-driven, non-blocking model, but because it is very well-suited for reducing file size by bundling the code.
esbuild is a great tool to bundle your Lambda. It is already included in the CDK construct NodejsFunction when using AWS CDK. Also, you don't need source maps, at least not in production, so you can skip generating them.
Here is the command to bundle your Lambda for a Node.js runtime:
esbuild my-lambda.js --bundle --minify --platform=node
With this command, you get a single minified file. For example, for the Express app below, the bundle is only 790KB 🤯
I don't think that you can beat that with Python's Flask (17 MB)
const express = require('express')
const app = express()
const port = 3000
app.get('/', (req, res) => {
res.send('Hello World!')
})
app.listen(port, () => {
console.log(`Example app listening on port ${port}`)
})
But what if I told you that you can reduce the size even further for your NodeJS Lambda?
Choose the right Package
Especially in the Node.js ecosystem, there are numerous alternatives to choose from. When selecting an Express alternative, check out Hono. Not because the syntax is similar to Express, but because the bundle is only 19 KB 🤯
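To get a feel for it, here is a minimal sketch of the same hello-world with Hono, wired up for Lambda through its AWS Lambda adapter (the hono package and its hono/aws-lambda export are assumptions based on Hono's documentation):

```ts
// Sketch: the Express example above, rewritten with Hono for AWS Lambda.
import { Hono } from 'hono';
import { handle } from 'hono/aws-lambda'; // adapter name assumed from Hono's docs

const app = new Hono();

app.get('/', (c) => c.text('Hello World!'));

// Lambda entry point instead of app.listen()
export const handler = handle(app);
```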
When using Zod 4, consider Zod mini.
Use ESM
This one is not straightforward, but you can reap numerous benefits by using ESM.
ESM stands for ECMAScript Modules; because they are statically analyzable, bundlers can tree-shake unused code and produce smaller JavaScript bundles[^3].
ECMAScript is also the JavaScript standard, and Node.js has supported ESM since version 16.
When using TypeScript, you are already writing ESM syntax (import {} from 'my-module').
Let's compare the two with the AWS SDK for JavaScript v3 S3 client, which supports both ESM and CommonJS. In the following example, we initialize the S3 client using both the CJS and the ESM approach.
// CommonJS -> src/s3.cjs
const s3 = require('@aws-sdk/client-s3');
new s3.S3Client({ region: 'eu-central-1' });
// ESM -> src/s3.mjs
import { S3Client } from '@aws-sdk/client-s3';
new S3Client({ region: 'eu-central-1' });
Now, let's bundle the files
# for CJS
npx esbuild src/s3.cjs --bundle --format=cjs --outfile=s3-bundle.cjs --platform=node
# for ESM
npx esbuild src/s3.mjs --bundle --format=esm --outfile=s3-bundle.mjs --platform=node --main-fields=module,main
One note: we need to specify --main-fields=module,main to tell esbuild to check the module field first and, if it's not found, use main from the package.json as a fallback.
The resulting bundle sizes:
# For CJS
s3-bundle.cjs 1.4mb
# For ESM
s3-bundle.mjs 703.0kb
The ESM bundle is about half the size of the CJS one. That also means less time to download and a faster execution time:
hyperfine --warmup 10 --style color 'node s3-bundle.cjs' 'node s3-bundle.mjs'
Benchmark 1: node s3-bundle.cjs
Time (mean ± σ): 62.1 ms ± 2.5 ms [User: 53.8 ms, System: 6.7 ms]
Range (min … max): 59.5 ms … 74.5 ms 45 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: node s3-bundle.mjs
Time (mean ± σ): 45.3 ms ± 2.2 ms [User: 38.1 ms, System: 5.6 ms]
Range (min … max): 43.0 ms … 59.2 ms 62 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Summary
node s3-bundle.mjs ran
1.37 ± 0.09 times faster than node s3-bundle.cjs
A win-win here 🚀
Use CDK
As I mentioned, the NodejsFunction construct comes with esbuild out of the box, but it bundles your functions to CJS by default.
Luckily, you only need to tweak the configuration a bit to output ESM instead.
You can specify it in the bundling option:
import { NodejsFunction, OutputFormat } from 'aws-cdk-lib/aws-lambda-nodejs';
import { Runtime } from 'aws-cdk-lib/aws-lambda';

const lambdaFunction = new NodejsFunction(scope, 'LambdaFn', {
  runtime: Runtime.NODEJS_22_X, // <~~ Recommended
  entry: `${handlerPath}/${props.handler}.ts`,
  bundling: {
    minify: true, // the default is false
    format: OutputFormat.ESM, // <~~ Change this!
    target: 'node22', // the default is the Node version of the runtime
    mainFields: ['module', 'main'], // <~~ Change this and use this order!
  },
});
Unfortunately, ESM is not a silver bullet, and I ran into some issues. For example, running the mysql2 package in an ESM bundle was not possible. Make sure to test everything properly. I wrote a more extensive blog post about ESM vs CJS.
Choose the optimal configuration for cost efficiency
You learned how to reduce the cold start (INIT phase) above. In this section, I want to show you a tool that helps you find the right configuration for your Lambda: AWS Lambda Power Tuning.
In every IaC tool, you have to specify the Memory size, but how do you know what number to put in?
Yan Cui posted on LinkedIn that a reasonable default value is 1024 MB.
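In CDK, that is a single property on the construct from the "Use CDK" section; here is a minimal sketch (scope, handlerPath, and props are the same placeholders as in that example):

```ts
// Sketch: applying the 1024 MB default to the NodejsFunction from the "Use CDK" section.
const tunedFunction = new NodejsFunction(scope, 'TunedFn', {
  runtime: Runtime.NODEJS_22_X,
  entry: `${handlerPath}/${props.handler}.ts`,
  memorySize: 1024, // Yan Cui's suggested default; verify it with Lambda Power Tuning below
});
```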
The general formula:
Monthly Cost = (Number of Requests × Cost per Request) + (Total Compute GB-s × Cost per GB-s)
Total Compute GB-s = Sum over all invocations of (Memory Allocated (MB) / 1024 MB per GB) × (Execution Time (ms) / 1000 ms per s)
Wow, quite a formula. Luckily, all values except the memory size are fixed.
Here is the ARM pricing in eu-central-1:

| Memory (MB) | Price per 1 ms |
| --- | --- |
| 128 | $0.0000000017 |
| 512 | $0.0000000067 |
| 1024 | $0.0000000133 |
| 1536 | $0.0000000200 |
| 2048 | $0.0000000267 |
| 4096 | $0.0000000533 |
If you configure your Lambda in eu-central-1 with 128 MB, you expect 1 million requests per month, and your Lambda runs for 200 ms per invocation, then your Lambda costs you (at $0.20 per 1 million requests):
Monthly Cost = (1,000,000 × $0.0000002) + (1,000,000 × 200 ms × $0.0000000017/ms) = $0.20 + $0.34 ≈ $0.54
On top of that, you would pay for the cold start.
Let's say we follow Yan Cui and use 1024 MB, and the execution time also decreases to 50 ms. The calculation then looks as follows:
Monthly Cost = (1,000,000 × $0.0000002) + (1,000,000 × 50 ms × $0.0000000133/ms) = $0.20 + $0.67 ≈ $0.87
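As a sanity check, here is a small TypeScript sketch of that formula. The per-millisecond prices come from the ARM table above; the request price of $0.20 per million requests is an assumption based on the public Lambda pricing:

```ts
// Sketch: monthly Lambda cost estimate based on the formula above.
const REQUEST_PRICE = 0.2 / 1_000_000; // assumed: $0.20 per 1M requests

function monthlyCost(requests: number, durationMs: number, pricePerMs: number): number {
  const requestCost = requests * REQUEST_PRICE;
  const computeCost = requests * durationMs * pricePerMs; // the per-ms price already reflects the memory size
  return requestCost + computeCost;
}

console.log(monthlyCost(1_000_000, 200, 0.0000000017)); // ≈ 0.54 (128 MB, 200 ms)
console.log(monthlyCost(1_000_000, 50, 0.0000000133)); // ≈ 0.87 (1024 MB, 50 ms)
```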
However, as Yan also mentioned, we would not use such a small memory size in production, and it's unlikely that we would reach 1 million requests in development (not that we couldn't :D).
Okay, based on that, we can make an educated guess! But we still don’t know the execution time for sure!
Let's explore Lambda Power Tuning to find a cost-efficient memory value for your Lambda.
Lambda Power Tuning
Lambda Power Tuning is a fascinating project that runs your Lambda with varying memory sizes and presents a chart showing execution time and cost per memory size (see below).
How to interpret the chart above: execution time goes from 35 s with 128 MB to less than 3 s with 1.5 GB, while being 14% cheaper to run[^4].
It deploys a little state machine, and with the execution script, it outputs the result directly in your terminal.
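For reference, the state machine is started with an input along these lines; the field names (lambdaARN, powerValues, num, payload, strategy) are taken from the project's README, so double-check them against the version you deploy:

```ts
// Sketch: input for the Lambda Power Tuning state machine (hypothetical ARN).
const powerTuningInput = {
  lambdaARN: 'arn:aws:lambda:eu-central-1:123456789012:function:my-lambda',
  powerValues: [128, 256, 512, 1024, 1536, 2048], // memory sizes to test
  num: 50, // invocations per memory size; more runs give more accurate results
  payload: {}, // the event your Lambda expects
  strategy: 'cost', // optimize for 'cost', 'speed', or 'balanced'
};
```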
The more executions you perform, the more accurate your memory setting becomes. Interestingly, having more memory doesn’t always mean faster execution times.
Conclusion
AWS now charges you for the 'Cold Start'! 😱 But don't worry, there are some awesome techniques to optimize your Lambda's INIT phase! 🎉
Let's think about reducing dependencies by moving development dependencies outside of the actual package and exploring smaller alternatives! 🚀 Plus, bundling to ESM can significantly shrink the size! Smaller bundles mean a faster startup, which leads to lower costs! Win-win! 🙌
Now, during the execution phase, it’s tricky to determine the best memory size for our Lambda. That’s where AWS Lambda Power Tuning comes in—what a fantastic tool to help us find that cost-efficient configuration! 🎯
In conclusion, by optimizing our Lambda functions and embracing tools like AWS Lambda Power Tuning, we can tackle challenges head-on while keeping costs down—let’s get optimizing and make our cloud journey even more rewarding! 🚀✨
[^1]: https://aws.amazon.com/blogs/compute/aws-lambda-standardizes-billing-for-init-phase/
[^2]: https://awsfundamentals.com/animations/lambda_cold-starts
[^3]: https://medium.com/@jolodev/oh-commonjs-why-are-you-mesming-with-6fb12a84bd77
[^4]: https://github.com/alexcasalboni/aws-lambda-power-tuning?tab=readme-ov-file#what-can-i-expect-from-aws-lambda-power-tuning