Using Protocol Buffers with API Gateway and AWS Lambda

AWS announced binary support for API Gateway in late 2016, which opened the door for you to use more efficient binary formats such as Google's Protocol Buffers and Apache Thrift.

Compared to JSON, the bread and butter for APIs built with API Gateway and Lambda, these binary formats can produce significantly smaller payloads.

At scale, they can make a big difference to your bandwidth cost.

In restricted environments, such as low-end devices or countries with poor mobile connections, sending smaller payloads can also improve your user experience by reducing end-to-end network latency, and possibly processing time on the device too.

Comparison of serializer performance between Protocol Buffers and JSON in .NET


Follow these three simple steps (assuming you're using the Serverless framework):

  1. install the awesome serverless-apigw-binary plugin
  2. add application/x-protobuf to your API's binary media types
  3. add a function that returns Protocol Buffers as a base64-encoded response
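Steps 1 and 2 translate to something like the following in your serverless.yml (a sketch based on the plugin's configuration block; your actual file will have more in it):

```yaml
plugins:
  - serverless-apigw-binary

custom:
  apigwBinary:
    types:
      - 'application/x-protobuf'
```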

The serverless-apigw-binary plugin has made it really easy to add binary support to API Gateway.

To encode and decode Protocol Buffers payloads in Node.js, you can use the protobufjs package from NPM.

It lets you work with your existing .proto files, or you can use JSON descriptors. Give the docs a read to see how you can get started.

In the demo project (link at the bottom of the post) you'll find a Lambda function that always returns a response in Protocol Buffers.

A couple of things to note from this function:

  • we set the Content-Type header to application/x-protobuf
  • body is the base64-encoded representation of the Protocol Buffers payload
  • isBase64Encoded is set to true

You need to do all three of these things for API Gateway to return the response as binary data.

Consider them the magic incantation for making API Gateway return binary data. In addition, the caller has to set the Accept header to application/x-protobuf.

In the same project, there's also a JSON endpoint that returns the same payload for comparison.

The response from this JSON endpoint looks like this:

{"players":[{"id":"eb66db14992e06b36282d607cf0134ce4fe45f50","name":"Calvin Ortiz","scores":[57,12,100,56,47,78,20,37,32,48]},{"id":"7b9b38e535453d120e706ff57fef41f6fee991cb","name":"Marcus Cummings","scores":[40,57,24,15,45,54,25,67,59,23]},{"id":"db34a2a5f4d16e77a6d3d6154a8b8bb6760b3b99","name":"Harry James","scores":[61,85,14,70,8,80,14,22,76,87]},{"id":"e21018c4f43eef10771e0fa71bc54156b00a64dd","name":"Gregory Bishop","scores":[51,31,27,47,72,75,61,28,100,41]},{"id":"b3ee29ee49b640ce15be1737d0dca60e48108ee1","name":"Ann Evans","scores":[69,17,48,99,85,8,75,55,78,46]},{"id":"9c1e6d4d46bb0c0d2c92bab11e5dbd5f4ab0c619","name":"Juan Perez","scores":[71,34,60,84,21,98,60,8,91,92]},{"id":"d8de89222633c61393931457c1e72558eba48639","name":"Loretta Harvey","scores":[15,40,73,92,42,65,58,30,26,84]},{"id":"141dad672ec559431f808964391d128d2c3274bf","name":"Ian Powell","scores":[17,21,14,84,64,14,22,22,34,92]},{"id":"8a97e85e2e5385c45fc31f24bfe781c26f78c0b7","name":"Steve Gibson","scores":[33,97,6,1,20,1,78,3,77,19]},{"id":"6b3ca6924e17cd5fd9d91b36d49b36a5d542c9ea","name":"Harold Ferguson","scores":[31,32,4,10,37,85,46,86,39,17]}]}

As you can see, it's just a bunch of randomly generated names, GUIDs, and integers. The same response in Protocol Buffers is nearly 40% smaller.
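For reference, a .proto schema matching this payload might look like the following (a hypothetical sketch; the demo project's actual message names may differ):

```protobuf
syntax = "proto3";

message Player {
  string id = 1;
  string name = 2;
  repeated int32 scores = 3;
}

message Players {
  repeated Player players = 1;
}
```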

Problem with the protobufjs package

Before we move on, there is one important detail about using the protobufjs package in a Lambda function: you need to npm install the package on a Linux system.

This is because it has a dependency that is distributed as native binaries, so if you installed the package on OSX, the binaries that are packaged and deployed to Lambda will not run in the Lambda execution environment.

I had similar problems with other Google libraries in the past. I find the best way to deal with this is to take a leaf out of aws-serverless-go-shim's book and deploy your code from inside a Docker container.

This way, you would locally install a version of the native binaries compatible with your OS, so you can continue to run and debug your function with sls invoke local (see this post for details).

But during deployment, a script would run npm install --force in a Docker container running a compatible Linux distribution. This installs a version of the native binaries that can be executed in the Lambda execution environment. The script would then use sls deploy to deploy the function.

The deployment script can be something simple like this:
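For example (a sketch; the actual file name and contents in the repo may differ slightly):

```shell
#!/bin/bash
# run inside a Linux container so npm fetches Linux-compatible native binaries
npm install --force

# deploy with the Serverless framework
sls deploy
```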

In the demo project, I also have a docker-compose.yml file:
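Something along these lines (a sketch; the image tag and script name are assumptions):

```yaml
version: '2'
services:
  deploy:
    image: node:6.10            # roughly matches the Lambda Node.js runtime of the time
    working_dir: /app
    volumes:
      - ./:/app
      - $HOME/.aws:/root/.aws   # the Serverless framework needs my AWS credentials
    command: ./deploy.sh        # the deployment script (name is an assumption)
```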

The Serverless framework requires my AWS credentials, which is why I've mounted the $HOME/.aws directory into the container for the AWS SDK to find at runtime.

To deploy, run docker-compose up.

Use HTTP content negotiation

Whilst binary formats are more efficient when it comes to payload size, they have one major problem: they're really hard to debug.

Imagine the scenario: you have observed a bug, but you're not sure if the problem is in the client app or the server. So you try to observe the HTTP conversation with an HTTP proxy such as Charles or Fiddler.

This workflow works great for JSON but breaks down with binary formats such as Protocol Buffers, as the payloads are not human-readable.

As we have discussed in this post, the human readability of JSON comes at the cost of heavier bandwidth usage. For most network communications, be it service-to-service or service-to-client, unless a human is actively "reading" the payloads, it's not worth paying that cost. But when a human is trying to read them, that readability is very valuable.

Fortunately, HTTP's content negotiation mechanism means we can have the best of both worlds.

In the demo project, there is a contentNegotiated function that returns either a JSON or a Protocol Buffers payload based on the Accept header.
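The gist of it is something like this sketch (encodeProtobuf stands in for the protobufjs serialisation step):

```javascript
// return JSON or Protocol Buffers depending on the Accept header
const respond = (acceptHeader, payload, encodeProtobuf) => {
  if (acceptHeader === 'application/x-protobuf') {
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/x-protobuf' },
      body: encodeProtobuf(payload).toString('base64'),
      isBase64Encoded: true
    };
  }

  // default to human-readable JSON
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  };
};
```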

By default, you should use Protocol Buffers for all your network communications to minimise bandwidth use.

But you should build in a mechanism for toggling the communication to JSON when you need to observe it. This might mean:

  • for debug builds of your mobile app, allow super users (devs, QA, etc.) to turn on a debug mode, which switches the networking layer to send the Accept header as application/json
  • for services, include a configuration option to turn on debug mode (see this post on configuring functions with SSM parameters and cache clients for hot-swapping) so that service-to-service calls use JSON too, letting you capture and analyze the requests and responses more easily
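In the client's networking layer, that toggle can be as simple as this sketch:

```javascript
// pick the Accept header based on a debug flag, so the same endpoint
// serves JSON to debug builds and Protocol Buffers otherwise
const headersFor = (debugMode) => ({
  Accept: debugMode ? 'application/json' : 'application/x-protobuf'
});
```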

As usual, you can try out the demo code yourself; the repo is available here.

Like what you're reading? Check out my video course Production-Ready Serverless and learn the essentials of how to run a serverless application in production.

We will cover topics including:

  • authentication & authorization with API Gateway & Cognito
  • testing & running functions locally
  • CI/CD
  • log aggregation
  • monitoring best practices
  • distributed tracing with X-Ray
  • tracking correlation IDs
  • performance & cost optimization
  • error handling
  • config management
  • canary deployment
  • VPC
  • security
  • leading practices for Lambda, Kinesis, and API Gateway

You can also get 40% off the face price with the code ytcui. Hurry though, this discount is only available while we're in Manning's Early Access Program (MEAP).