WCF and gzip compression

posted Sep 25, 2012, 3:13 PM by Tyler Akins
I was helping to diagnose a problem with web requests to a WCF service.  The service always enabled gzip compression on the output stream, whether or not the client asked for it.  Normally that is not a problem.  We were making SOAP calls from PHP and had wired them through PHP's curl module because we had some special requirements for the request and response headers.

PHP's SOAP library (when fetching via the curl module) reported either no response at all or problems decompressing the stream.  Wget did not work; the curl command-line tool did.  A network sniffer showed that data was coming across the wire, but when that data was written to disk, gzip would not decompress it while zcat would.

Everything worked like a charm when compression was disabled, but compression had to stay enabled and forced on in our production environment.

We analyzed the server's responses more carefully and found that most of each response was random-looking data (as expected for compressed output), but roughly a third was NULL bytes or (even worse) XML from some other SOAP request.  It looked like we were leaking memory contents.  Very undesirable.

We obtained the source code around the time I noticed that every response length was a power of 2:  256 bytes, 512 bytes, 1k, 2k, 4k, 8k.  We were sending back an entire allocated buffer, not just the bytes written into it.  Here's the affected code -- you may notice it looks a lot like many other copies of this code on the web.

//Helper method to compress an array of bytes
static ArraySegment<byte> CompressBuffer(ArraySegment<byte> buffer, BufferManager bufferManager, int messageOffset)
{
    MemoryStream memoryStream = new MemoryStream();
    memoryStream.Write(buffer.Array, 0, messageOffset);

    using (GZipStream gzStream = new GZipStream(memoryStream, CompressionMode.Compress, true))
    {
        gzStream.Write(buffer.Array, messageOffset, buffer.Count);
    }

    byte[] compressedBytes = memoryStream.ToArray();
    byte[] bufferedBytes = bufferManager.TakeBuffer(compressedBytes.Length);

    Array.Copy(compressedBytes, 0, bufferedBytes, 0, compressedBytes.Length);

    bufferManager.ReturnBuffer(buffer.Array);
    ArraySegment<byte> byteArray = new ArraySegment<byte>(bufferedBytes, messageOffset, bufferedBytes.Length - messageOffset);
    return byteArray;
}

This actually comes from one version of an example that Microsoft produced.  In our case we first suspected Ionic.Zlib, but the code above uses System.IO.Compression.GZipStream, so the problem isn't tied to the compression library; the compression itself works like a charm.  What's broken is the byteArray segment and how many bytes it is told to contain.  That last line should instead look like this:

ArraySegment<byte> byteArray = new ArraySegment<byte>(bufferedBytes, messageOffset, compressedBytes.Length);
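
The count matters because BufferManager.TakeBuffer only guarantees a buffer of at least the requested size; the pool normally hands back a larger block (which is why our responses came out as exact powers of 2), and everything in that block past compressedBytes.Length is whatever the pool happened to have lying around.  Here is a minimal sketch of that behavior -- the pool limits and the 300-byte request are arbitrary numbers picked for illustration:

// Illustrates why building the segment from the pooled buffer's length
// (instead of the number of bytes actually written) leaks extra data.
using System;
using System.ServiceModel.Channels;

class TakeBufferSketch
{
    static void Main()
    {
        // Pool limits are arbitrary values for this example.
        BufferManager bufferManager = BufferManager.CreateBufferManager(1024 * 1024, 64 * 1024);

        // Ask for 300 bytes.  TakeBuffer only promises a buffer that is
        // *at least* that big; the pooled block is typically larger.
        byte[] bufferedBytes = bufferManager.TakeBuffer(300);
        Console.WriteLine(bufferedBytes.Length);   // typically more than 300

        // A segment built with bufferedBytes.Length as its count would
        // include the unused tail of the pooled block -- old pool contents
        // that then go out on the wire.
        bufferManager.ReturnBuffer(bufferedBytes);
    }
}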

Once you make this change, your HTTP responses should no longer come out as exact powers of 2.  You can double-check by looking at the Content-Length header when you sniff the traffic, or with any tool that shows you the full response headers.
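
If you want to automate that check, testing whether a Content-Length value is an exact power of 2 is enough to raise the red flag.  A tiny helper (purely illustrative; feed it whatever length your sniffer or header tool reports):

// Returns true for lengths that are exact powers of 2, the telltale
// symptom of the whole pooled buffer being sent.
static bool IsPowerOfTwo(long contentLength)
{
    return contentLength > 0 && (contentLength & (contentLength - 1)) == 0;
}

// For example, IsPowerOfTwo(4096) is true while IsPowerOfTwo(3817) is false.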

I hope others will spread this knowledge to the various forums where people run into this problem.  I believe it is also the reason Chrome has issues with compressed data when people do things like this: I found forum postings saying that Chrome is extra picky about compressed data and that compressed responses from some C# services were not working in Chrome.