When you’re using the Amazon S3 client, have you ever come across the occasional exception with a message like one of these:
“The request was aborted: The connection was closed unexpectedly”
“Unable to read data from the transport connection: A blocking operation was interrupted by a call to WSACancelBlockingCall”
“Unable to read data from the transport connection: An established connection was aborted by the software in your host machine”
If you do, then you’re probably attempting to return the response stream directly back to the rest of your application with something like this:
var response = _s3Client.GetObject(request);
return response.ResponseStream;
However, because the stream is coming from the Amazon S3 service and is fed to your code in chunks, your code needs to ensure that the connection to S3 stays open until all the data has been received. So, as mentioned in the S3 documentation here (which, incidentally, most of us don’t read in great detail…), you should wrap the response you get from the GetObject method in a using clause.
Depending on what you want to do with the stream, you might have to handle it differently. For instance, if you just want to read the string content of a text file, you might do this:
using (var response = _s3Client.GetObject(request))
{
    using (var reader = new StreamReader(response.ResponseStream))
    {
        return reader.ReadToEnd();
    }
}
Alternatively, if you want to return the response stream itself, you’ll need to first load the stream in its entirety and return the loaded copy. Unfortunately, at the time of writing, the AWSSDK library still hasn’t been migrated to .Net 4 and therefore doesn’t have the uber-useful CopyTo method added in .Net 4, so you will most likely have to do the heavy lifting yourself and read the data out manually into a memory stream:
using (var response = _s3Client.GetObject(request))
{
    var binaryData = ReadFully(response.ResponseStream);
    return new MemoryStream(binaryData);
}

/// <summary>
/// See Jon Skeet's article on reading binary data:
/// http://www.yoda.arachsys.com/csharp/readbinary.html
/// </summary>
public static byte[] ReadFully(Stream stream, int initialLength = -1)
{
    // If we've been passed an unhelpful initial length, just
    // use 32K.
    if (initialLength < 1)
    {
        initialLength = 32768;
    }

    byte[] buffer = new byte[initialLength];
    int read = 0;

    int chunk;
    while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0)
    {
        read += chunk;

        // If we've reached the end of our buffer, check to see if there's
        // any more information
        if (read == buffer.Length)
        {
            int nextByte = stream.ReadByte();

            // End of stream? If so, we're done
            if (nextByte == -1)
            {
                return buffer;
            }

            // Nope. Resize the buffer, put in the byte we've just
            // read, and continue
            byte[] newBuffer = new byte[buffer.Length * 2];
            Array.Copy(buffer, newBuffer, buffer.Length);
            newBuffer[read] = (byte)nextByte;
            buffer = newBuffer;
            read++;
        }
    }

    // Buffer is now too big. Shrink it.
    byte[] ret = new byte[read];
    Array.Copy(buffer, ret, read);
    return ret;
}
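As an aside, if you are able to target .Net 4 or later, the built-in Stream.CopyTo method does away with all that manual buffering. A minimal sketch, assuming you’re happy to hold the whole object in memory:

using (var response = _s3Client.GetObject(request))
{
    var memoryStream = new MemoryStream();

    // CopyTo (available from .Net 4 onwards) drains the response stream
    // into the MemoryStream while the connection to S3 is still open
    response.ResponseStream.CopyTo(memoryStream);

    // rewind so the caller can read from the beginning
    memoryStream.Position = 0;
    return memoryStream;
}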
Thank you very much.
I’m now using a MemoryStream and copying the ResponseStream with the custom CopyStream method I wrote a while ago.
MemoryStream mem = new MemoryStream();
Tools.CopyStream(response.ResponseStream, mem);
mem.Position = 0;
response.ResponseStream.Close();
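For anyone who doesn’t have a helper like that to hand, a CopyStream along these lines would do the job on pre-.Net 4 runtimes. This is a sketch of the general idea, not the commenter’s actual code:

using System.IO;

public static class Tools
{
    // copies everything from input to output in fixed-size chunks;
    // useful on .Net 3.5 where Stream.CopyTo isn't available
    public static void CopyStream(Stream input, Stream output)
    {
        var buffer = new byte[32 * 1024];
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
        }
    }
}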
Hi,
Thanks for your comment, it solved my problem.
But when I run it a number of times, it sometimes fails and sometimes doesn’t.
(Exception: “The request was aborted: The connection was closed unexpectedly”)
In my experience this happens with large images.
Can you help me?
@Shira – when working with S3 (or any AWS service for that matter), the service has a self-protection mechanism against over-usage and denial-of-service attacks. If you’re making more concurrent requests against S3 than the service is willing to handle, you’ll start to see an elevated rate of errors such as the one you described.
The .Net AWSSDK already has a retry mechanism built in (which you can configure and tweak), but we found it beneficial in some cases to add another layer of retries with exponential backoff, as sketched below.
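For illustration, a minimal retry-with-exponential-backoff wrapper might look something like this (the Retry helper, the attempt count and the delay values are hypothetical, tune them to your workload):

using System;
using System.Threading;

public static class Retry
{
    public static T WithBackoff<T>(Func<T> action, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch (Exception) // in practice, catch only transient errors (e.g. HTTP 500/503)
            {
                if (attempt >= maxAttempts)
                {
                    throw; // out of attempts, let the caller deal with it
                }

                // exponential backoff: 100ms, 200ms, 400ms, ...
                var delay = TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt - 1));
                Thread.Sleep(delay);
            }
        }
    }
}

// wrap the whole download, not just the GetObject call, since the
// connection can also drop while you're reading the response stream
var bytes = Retry.WithBackoff(() =>
{
    using (var response = _s3Client.GetObject(request))
    {
        return ReadFully(response.ResponseStream);
    }
});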
Furthermore, the S3 service automatically partitions objects by key prefix, and with some simple tricks (see the sketch after the links below) you can get a lot more out of S3 and potentially cure the errors you’re getting if they’re load-related.
Have a read of these two posts on sharding:
http://highscalability.com/blog/2012/3/7/scale-indefinitely-on-s3-with-these-secrets-of-the-s3-master.html
http://aws.typepad.com/aws/2012/03/amazon-s3-performance-tips-tricks-seattle-hiring-event.html
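The trick from those posts essentially boils down to prepending a short hash to your keys, so that writes spread across many partitions instead of hammering one hot prefix. A minimal sketch (the Shard helper is hypothetical):

using System.Security.Cryptography;
using System.Text;

// instead of sequential keys that all share one hot prefix:
//   2013-02-11/photo-00001.jpg, 2013-02-11/photo-00002.jpg, ...
// prepend a couple of hex characters derived from the key itself
public static string Shard(string key)
{
    using (var md5 = MD5.Create())
    {
        var hash = md5.ComputeHash(Encoding.UTF8.GetBytes(key));
        // the first byte gives 256 possible prefixes,
        // e.g. "a7/2013-02-11/photo-00001.jpg"
        return hash[0].ToString("x2") + "/" + key;
    }
}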
Lastly, if you’re working with large files, then make sure your timeout setting is suitable (the default timeout in the .Net AWSSDK is 20 mins, I believe).
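For reference, in later versions of the .Net AWSSDK you can set this on the client config when constructing the client. A sketch, assuming an SDK version where the client config exposes Timeout and ReadWriteTimeout (check the property names and types on the version you’re using):

using System;
using Amazon.S3;

var config = new AmazonS3Config
{
    // how long to wait for the request as a whole
    Timeout = TimeSpan.FromMinutes(30),
    // how long to wait between reads on the response stream
    ReadWriteTimeout = TimeSpan.FromMinutes(5)
};
var s3Client = new AmazonS3Client(config);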
Thanks for your reply,
I will try it.