Hit the 6MB Lambda payload limit? Here’s what you can do.

You can become a serverless blackbelt. Enrol in my course Learn you some Lambda best practice for great good! and learn best practices for performance, cost, security, resilience, observability and scalability. By the end of this course, you should be able to make informed decisions on which AWS service to use with Lambda and how to build highly scalable, resilient and cost-efficient serverless applications.

So you have built a serverless application that, amongst other things, lets you upload images and files to S3.

The set-up is very simple: API Gateway, Lambda and S3.

It took you no time to implement and it works like a dream. You pat yourself on the back for another job well done.

Until one day, a customer complained that he couldn’t upload pictures of Winston to your app.

Oh, Winston, you’re such a good boy, do you want a belly rub?

You check your Lambda logs and see the error right away.

Execution failed: 6294149 byte payload is too large for the RequestResponse invocation type (limit 6291456 bytes)

What the hell?

You turn around to your best friend Dusty, and you ask, “what the hell?”

Dusty doesn’t care. The only thing on his mind right now is “what’s for dinner?”.

You google the error message and soon realise that you’ve hit the 6MB invocation payload limit for synchronous Lambda invocations.

https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html

Turns out you can’t POST more than 6MB of data to Lambda through API Gateway. Maybe Winston can lose a few pixels as well as a few pounds? But no! Winston deserves to be immortalised in all his High Definition glory.

What do you do now?

Well, two options pop to mind.

Option 1: use API Gateway service proxy

You can remove Lambda from the equation and go straight from API Gateway to S3 using API Gateway service proxies.

To learn more about API Gateway service proxies and why you should use them, please read my previous post on the topic.

This approach doesn’t require any client changes. If you’re using the Serverless framework then the serverless-apigateway-service-proxy plugin makes it easy to configure this S3 integration.

custom:
  apiGatewayServiceProxies:
    - s3:
        path: /upload/{fileName}
        method: post
        action: PutObject
        bucket:
          Ref: S3Bucket
        key:
          pathParam: fileName
        cors: true

The problem with this approach is that you’re limited by the API Gateway payload limit of 10MB.

https://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html

I mean, it’s more than 6MB, but not by enough that you no longer have to worry about hitting it.

Option 2: use pre-signed S3 URL instead

You can also rearchitect your application slightly so that uploading images becomes a two-step process:

  1. The client makes an HTTP GET request to API Gateway, and the Lambda function generates and returns a presigned S3 URL.
  2. The client uploads the image to S3 directly, using the presigned S3 URL.

This approach requires you to update both the client and the server. But it’s a simple code change on both sides.

Since the client will upload the files to S3 directly, you will not be bound by payload size limits imposed by API Gateway or Lambda. And with presigned S3 URLs, you can do this securely without having to open up access to the S3 bucket itself.

Option 3: Lambda@Edge to redirect to S3 (updated 11/04/2020)

Thank you to Timo Schilling for this idea.

First, set up a CloudFront distribution and point it to an invalid domain.

Then attach a Lambda@Edge function to perform any authentication and authorization as necessary, and redirect valid requests to the S3 bucket using a presigned S3 URL.

Compared to option 2, this approach is much more developer friendly for the caller. As far as the caller is concerned, it’s just a plain old HTTP POST endpoint. Which is great when you’re working with 3rd party/external developers.

However, using Lambda@Edge brings some operational overhead:

  • Updates to Lambda@Edge functions take a few minutes (should be around 5 mins following recent improvements to CloudFront deployment times) to propagate to all AWS regions.
  • Lambda@Edge functions actually execute in the region closest to the edge location, not at the edge location itself. And the kicker is that their logs are sent to CloudWatch Logs in that same region. So if you need to monitor your Lambda@Edge functions, then you need to check the logs in ALL the regions where they could have run. And if you’re ingesting Lambda logs into a centralised logging platform (e.g. logz.io) then you’d need to set up the ingestion process in all of these regions too.

Option 4: use pre-signed POST instead (updated 13/04/2020)

Thank you to Zac Charles for this idea.

This is similar to option 2, but you can also specify a POST Policy to restrict the POST content to a specific content type and/or size. There are also some other differences to option 2, e.g. you need to use an HTTP POST instead of a PUT, and you need to include a number of form fields returned by the createPresignedPost request in the upload.

Zac’s post has a lot more detail about caveats you need to look out for, please go and give that a read.

Wrap up

So there you have it. The 6MB Lambda payload limit is one of those things that tends to creep up on you, as it is one of the less talked-about limits.

Generally speaking, I prefer option 2 as it eliminates the size limit altogether, at the expense of requiring changes to both the frontend and backend.

If your application needs to impose some size limit on the payload in the first place, then option 2 might not be the right solution for you, as API Gateway and Lambda’s payload size limits act as a built-in defensive mechanism in that case. And you can enable WAF with API Gateway, which can enforce even more granular payload limits without you having to implement (and maintain) them in your own code.

Liked this article? Support me on Patreon and get direct help from me via a private Slack channel or 1-2-1 mentoring.
Subscribe to my newsletter


Hi, I’m Yan. I’m an AWS Serverless Hero and the author of Production-Ready Serverless.

I specialise in rapidly transitioning teams to serverless and building production-ready services on AWS.

Are you struggling with serverless or need guidance on best practices? Do you want someone to review your architecture and help you avoid costly mistakes down the line? Whatever the case, I’m here to help.

Hire me.


Check out my new podcast Real-World Serverless where I talk with engineers who are building amazing things with serverless technologies and discuss the real-world use cases and challenges they face. If you’re interested in what people are actually doing with serverless and what it’s really like to be working with serverless day-to-day, then this is the podcast for you.


Check out my new course, Learn you some Lambda best practice for great good! In this course, you will learn best practices for working with AWS Lambda in terms of performance, cost, security, scalability, resilience and observability. We will also cover latest features from re:Invent 2019 such as Provisioned Concurrency and Lambda Destinations. Enrol now and start learning!


Check out my video course, Complete Guide to AWS Step Functions. In this course, we’ll cover everything you need to know to use AWS Step Functions service effectively. There is something for everyone from beginners to more advanced users looking for design patterns and best practices. Enrol now and start learning!


Are you working with Serverless and looking for expert training to level-up your skills? Or are you looking for a solid foundation to start from? Look no further, register for my Production-Ready Serverless workshop to learn how to build production-grade Serverless applications!

Find a workshop near you