Yan Cui
I help clients go faster for less using serverless technologies.
Serialization Overhead
When it comes to serializing/deserializing objects for transport over the wire, you will most likely incur some overhead in the serialized message, though the amount of overhead varies depending on the data interchange format used – XML is overly verbose, whereas JSON is much more lightweight:
public class MyClass
{
    public int MyProperty { get; set; }
}

var myClass = new MyClass { MyProperty = 10 };
XML representation:
<MyClass>
    <MyProperty>10</MyProperty>
</MyClass>
JSON representation:
{"MyProperty":10}
As you can see, a simple 4-byte object (MyProperty is a 32-bit integer) can take up over 11 times the space after it’s serialized into XML format:
Binary  | XML      | JSON
--------|----------|---------
4 bytes | 46 bytes | 17 bytes
This overhead translates to cost in terms of both bandwidth and performance, and if persistence is involved then there's also a storage cost. For example, if you need to persist objects to an Amazon S3 bucket, then not only would you be paying for the waste introduced by the serialization process (the extra space needed for storage) but also the additional bandwidth needed to get the serialized data in and out of S3, not to mention the performance penalty of transferring more data.
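To put concrete numbers on the comparison, here is a small sketch of my own (not from the original post) that measures the serialized size of the example object. It assumes a runtime where System.Text.Json is available; the original post predates that library, but the sizes come out the same for any JSON serializer producing the minimal representation:

```csharp
using System;
using System.Text;
using System.Text.Json;

public class MyClass
{
    public int MyProperty { get; set; }
}

public static class SizeDemo
{
    public static void Main()
    {
        var myClass = new MyClass { MyProperty = 10 };

        // serialize to JSON text and measure its size in bytes
        string json = JsonSerializer.Serialize(myClass);
        Console.WriteLine(json);                             // {"MyProperty":10}
        Console.WriteLine(Encoding.UTF8.GetByteCount(json)); // 17
        Console.WriteLine(sizeof(int));                      // 4 - the raw binary size of MyProperty
    }
}
```

Even in this trivial case the property name is repeated in every serialized instance, which is where most of the overhead comes from.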
Using Compression
An easy way to cut down on your cost is to introduce compression into the equation. Since the serialized message in XML/JSON is text, which can easily be compressed to 10–15% of its original size, there's a compelling case for doing it!
There are a number of third-party compression libraries out there to help you do this, for instance:
- SharpZipLib – a widely used library with support for the Zip, GZip, Tar and BZip2 formats.
- SevenZipSharp – a CodePlex project which wraps the native 7-Zip library to provide data (self-)extraction and compression in all 7-Zip formats.
- UnRAR.dll – a native library from the developer of WinRAR to help you work with the RAR format.
The .NET framework also provides two classes for you to use – DeflateStream and GZipStream – which both use the Deflate algorithm (GZipStream inherits from DeflateStream) to provide lossless compression and decompression. Please note that you can't use these classes to compress files larger than 4 GB.
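As a minimal illustration of the framework classes (a sketch of my own, not from the original post), here's a GZipStream round trip over a byte array. Note that the compression stream must be disposed before you read the output, so that the final block gets flushed:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

public static class GzipDemo
{
    public static byte[] Compress(byte[] data)
    {
        using (var output = new MemoryStream())
        {
            // GZipStream must be disposed to flush the final compressed block
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(data, 0, data.Length);
            return output.ToArray();
        }
    }

    public static byte[] Decompress(byte[] data)
    {
        using (var input = new MemoryStream(data))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return output.ToArray();
        }
    }

    public static void Main()
    {
        var original = Encoding.UTF8.GetBytes("hello, compression");
        var roundTripped = Decompress(Compress(original));
        Console.WriteLine(Encoding.UTF8.GetString(roundTripped)); // hello, compression
    }
}
```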
Here are two extension methods to help you compress/decompress a string using the framework's DeflateStream class:
public static class CompressionExtensions
{
    /// <summary>
    /// Returns the byte array of a compressed string
    /// </summary>
    public static byte[] ToCompressedByteArray(this string source)
    {
        // convert the source string into a memory stream
        // (UTF8, to match StreamReader's default encoding when decompressing)
        using (
            MemoryStream inMemStream = new MemoryStream(Encoding.UTF8.GetBytes(source)),
                         outMemStream = new MemoryStream())
        {
            // create a compression stream with the output stream
            using (var zipStream = new DeflateStream(outMemStream, CompressionMode.Compress, true))
            {
                // copy the source string into the compression stream
                inMemStream.WriteTo(zipStream);
            }

            // return the compressed bytes in the output stream
            return outMemStream.ToArray();
        }
    }

    /// <summary>
    /// Returns the base64 encoded string for the compressed byte array of the source string
    /// </summary>
    public static string ToCompressedBase64String(this string source)
    {
        return Convert.ToBase64String(source.ToCompressedByteArray());
    }

    /// <summary>
    /// Returns the original string for a compressed base64 encoded string
    /// </summary>
    public static string ToUncompressedString(this string source)
    {
        // get the byte array representation for the compressed string
        var compressedBytes = Convert.FromBase64String(source);

        // load the byte array into a memory stream
        using (var inMemStream = new MemoryStream(compressedBytes))
        // and decompress the memory stream into the original string
        using (var decompressionStream = new DeflateStream(inMemStream, CompressionMode.Decompress))
        using (var streamReader = new StreamReader(decompressionStream))
        {
            return streamReader.ReadToEnd();
        }
    }
}
Please NOTE that the compressed string can be longer than the uncompressed string when the uncompressed string is very short. As always, you should make a judgement based on your situation as to whether compression is worthwhile, given that it also requires additional CPU cycles for the compression/decompression steps.
The good news is, as serialized messages tend to blow up fairly quickly (especially when arrays are involved), in almost all cases you should see a significant saving in the size of the serialized message, and therefore in storage and bandwidth costs as well!
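To see both effects in practice, here's a self-contained sketch of my own that inlines the same DeflateStream logic as the extension methods above, then compresses a repetitive serialized array and a very short string:

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text;

public static class SavingsDemo
{
    // same Deflate compression as the extension methods, inlined for a self-contained example
    public static byte[] Deflate(string source)
    {
        using (var output = new MemoryStream())
        {
            using (var zip = new DeflateStream(output, CompressionMode.Compress))
            {
                var bytes = Encoding.UTF8.GetBytes(source);
                zip.Write(bytes, 0, bytes.Length);
            }
            return output.ToArray();
        }
    }

    public static void Main()
    {
        // a serialized array blows up quickly - and compresses extremely well
        string bigJson = "[" + string.Join(",", Enumerable.Repeat("{\"MyProperty\":10}", 1000)) + "]";
        Console.WriteLine($"{Encoding.UTF8.GetByteCount(bigJson)} -> {Deflate(bigJson).Length} bytes");

        // a very short string may gain nothing (or even grow) once compressed
        string tiny = "{\"MyProperty\":10}";
        Console.WriteLine($"{Encoding.UTF8.GetByteCount(tiny)} -> {Deflate(tiny).Length} bytes");
    }
}
```

The 1,000-element array is over 18 KB of JSON but is mostly repetition, so Deflate shrinks it to a tiny fraction of its original size, while the 17-byte string sees little or no benefit.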