
I was writing a small program to download a file from the Internet. The original code was as follows:

            long content_length;
            int buffer_length = 4096;
            int read_count = 0;
            byte[] buffer;
            WebRequest req = WebRequest.Create(
                "http:/images/photobucket/CAP_5288Japan-Posters.jpg");
            WebResponse resp = req.GetResponse();
            FileStream output = new FileStream("output.jpg", FileMode.Create);
            Stream strm = resp.GetResponseStream();
            content_length = resp.ContentLength;
            while (read_count < content_length)
            {
                if (read_count + buffer_length > content_length)
                    buffer = new byte[(int)content_length - read_count];
                else
                    buffer = new byte[buffer_length];
                // BUG: Read may return fewer bytes than requested,
                // yet the next line always writes the full buffer.
                read_count += strm.Read(buffer, 0, buffer.Length);
                output.Write(buffer, 0, buffer.Length);
            }
            strm.Close();
            output.Flush();
            output.Close();

To my dismay, when the program finished executing, the output file (output.jpg) was a scrambled image. I thought the cause might be a buffer size (buffer_length) that was too big, so I reduced buffer_length from 4096 to 1024. It did work. But after I ran the program a few more times, the image suddenly became scrambled again. What kind of problem is this?

Then something struck me: apparently I had forgotten that the Internet is an unreliable medium. The number of bytes available in the input buffer may be smaller than the number I tried to read. I had initially assumed that Stream.Read(…) would block the executing thread until the input buffer held at least the number of bytes I requested before letting the read proceed. The reality is the opposite: Read blocks only until at least one byte is available, then returns however many bytes it actually managed to read, which can be anything from one up to the requested count (it returns 0 only at the end of the stream). ReadByte(), on the other hand, really does wait for its single byte. The conventional fix, honoring Read's return value, is sketched after the code below, but at the time I modified my program into something like this:

            long content_length;
            int read_count = 0;
            WebRequest req = WebRequest.Create("http:/images/photobucket/CAP_5288Japan-Posters.jpg");
            WebResponse resp = req.GetResponse();
            FileStream output = new FileStream("output.jpg", FileMode.Create);
            Stream strm = resp.GetResponseStream();
            content_length = resp.ContentLength;
            byte[] buffer = new byte[content_length];
            while (read_count < content_length)
            {
                // ReadByte() blocks until a byte arrives (or the stream ends),
                // so every byte lands in the right position, one at a time.
                buffer[read_count] = (byte)strm.ReadByte();
                ++read_count;
            }
            output.Write(buffer, 0, buffer.Length);
            strm.Close();
            output.Flush();
            output.Close();
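
For completeness, here is what I should have written in the first place: keep the big buffer, but only write as many bytes as Read actually returned. Something like this (just a sketch reusing my hard-coded URL and file name; it does not even need ContentLength):

            byte[] buffer = new byte[4096];
            WebRequest req = WebRequest.Create("http:/images/photobucket/CAP_5288Japan-Posters.jpg");
            using (WebResponse resp = req.GetResponse())
            using (Stream strm = resp.GetResponseStream())
            using (FileStream output = new FileStream("output.jpg", FileMode.Create))
            {
                // Read returns the number of bytes it actually read
                // (0 at end of stream), so write exactly that many, no more.
                int read_count;
                while ((read_count = strm.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, read_count);
                }
            }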

The byte-by-byte version works like a charm (OOT: that statement is a bit of an oxymoron. I mean, can anyone show me a charm that has been scientifically proven to work? Okay, perhaps I should have said 'It works as I expected' 😀). Tragedy struck when I found out that there is an even simpler way to achieve all of the above:

            WebClient wec = new WebClient();
            wec.DownloadFile("http:/images/photobucket/CAP_5288Japan-Posters.jpg", 
                  "output.jpg");

So what do you think?



2 comments so far

  1. Well, at least it can be taken as a learning experience 🙂 Anyway, when we need code for a “fairly common” (I know, gray area here) operation on a modern platform, I guess it’s pretty safe to start with the assumption that someone else has already come across the issue and come up with a solution, and to begin by searching the library docs, Google, etc. Otherwise we risk falling into the “doing hours of coding to save a few minutes of reading” trap (^_^”)

  2. Hehe… Reinventing the wheel, eh?