If I understand it correctly, there is a big per-request overhead when making small reads. On the other hand, I think the current max chunk size of 1 MB increases download time for big files: because the maximum number of download threads is limited, every additional request adds one round-trip time to the total download time.
There can be, yes, because of the protocol and header info added to every request.
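To put rough numbers on it, here's a back-of-envelope model of that round-trip cost. All the figures here are illustrative assumptions (not our actual settings): each chunk costs one round trip, and only as many requests as there are threads can be in flight at once.

```python
# Rough model of how chunk size affects total download time.
# Numbers are illustrative assumptions, not actual product values.

def estimate_download_seconds(file_bytes, chunk_bytes, threads, rtt_s, bandwidth_bps):
    """Each chunk costs one round trip; round trips beyond the thread
    limit can't overlap, so they serialize into batches."""
    chunks = -(-file_bytes // chunk_bytes)          # ceiling division
    transfer_time = file_bytes * 8 / bandwidth_bps  # pure payload time
    rtt_batches = -(-chunks // threads)             # batches of parallel requests
    return transfer_time + rtt_batches * rtt_s

# 1 GB file, 1 MB chunks, 4 threads, 80 ms RTT, 100 Mbit/s link: ~106 s
print(estimate_download_seconds(1 << 30, 1 << 20, 4, 0.080, 100e6))
# Same file with (hypothetical) 10 MB chunks: far fewer round trips, ~88 s
print(estimate_download_seconds(1 << 30, 10 << 20, 4, 0.080, 100e6))
```

So yes, with a fixed thread count, smaller chunks mean more round-trip batches on big sequential downloads.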
However, the other issue here is latency. While it may be more bandwidth-efficient to use larger chunks, they add additional access time to the drive itself. Prefetching may help, but that's for SEQUENTIAL reads.
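Here's a minimal sketch of what sequential read-ahead looks like (the names and structure are hypothetical, not our actual code): a sequential access pattern triggers background fetches of the next few chunks, while random access does not, since the prefetched data would likely be wasted.

```python
import concurrent.futures

class Prefetcher:
    """Hypothetical sketch: read ahead only when access looks sequential."""

    def __init__(self, fetch_chunk, window=4):
        self.fetch_chunk = fetch_chunk  # callable: chunk index -> bytes
        self.window = window            # how many chunks to read ahead
        self.pending = {}               # chunk index -> Future
        self.pool = concurrent.futures.ThreadPoolExecutor(max_workers=window)
        self.last = None

    def read(self, index):
        sequential = self.last is not None and index == self.last + 1
        self.last = index
        fut = self.pending.pop(index, None)
        data = fut.result() if fut else self.fetch_chunk(index)
        if sequential:  # prefetch the next chunks in the background
            for i in range(index + 1, index + 1 + self.window):
                if i not in self.pending:
                    self.pending[i] = self.pool.submit(self.fetch_chunk, i)
        return data
```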
But what happens when you have a bunch of random reads, like what's common for NTFS metadata access? Then you have to download each large chunk and wait on it. If the system has to wait too long, it can (and will) lock up. It's something that we've seen. That is why there is a timeout value, and why there is a limit on the number of errors that can occur before the drive itself is unmounted.
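Conceptually, that safety mechanism works something like the sketch below. The thresholds and names are made up for illustration, not actual product settings: the point is just that blocked I/O eventually fails a request, and repeated failures take the drive offline rather than hanging the whole system.

```python
# Sketch of the timeout / error-limit behavior; thresholds are assumptions.

MAX_ERRORS = 5        # consecutive failures allowed before unmounting
READ_TIMEOUT = 30.0   # seconds to wait on a single chunk download

def unmount_drive():
    # Placeholder for taking the virtual drive offline.
    raise RuntimeError("drive unmounted after repeated timeouts")

def read_with_safety(download, chunk_index, state):
    """Try one chunk read; too many failures in a row unmounts the drive
    instead of letting blocked I/O lock up the system."""
    try:
        data = download(chunk_index, timeout=READ_TIMEOUT)
        state["errors"] = 0          # a success resets the error counter
        return data
    except TimeoutError:
        state["errors"] += 1
        if state["errors"] >= MAX_ERRORS:
            unmount_drive()
        raise
```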
The problem is that we have to balance web traffic and disk I/O to prevent issues on either end, and that's not exactly an easy balancing act. ESPECIALLY when the web provider doesn't allow partial reads.
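For what a partial read looks like at the HTTP level: a provider that supports them honors the Range header and returns 206 Partial Content with just the requested bytes; one that doesn't returns 200 with the whole file, and you end up downloading everything anyway. A rough sketch (the URL is a placeholder):

```python
import requests

def read_range(url, start, length):
    """Request only [start, start+length) of a remote file."""
    headers = {"Range": f"bytes={start}-{start + length - 1}"}
    resp = requests.get(url, headers=headers, timeout=30)
    if resp.status_code == 206:  # Partial Content: provider honors ranges
        return resp.content
    if resp.status_code == 200:  # provider ignored Range; whole file came back
        return resp.content[start:start + length]
    resp.raise_for_status()

# e.g. first 1 MB of a (placeholder) file:
# chunk = read_range("https://example.com/bigfile.bin", 0, 1 << 20)
```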
The other thing is that we'll download the entire file rather than chunks in certain cases (e.g., when it's needed, when it makes sense). But otherwise, downloading the entire file is not just inefficient, but wasteful.
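As a rough illustration of that trade-off (the threshold and heuristic here are purely hypothetical, not how the product actually decides): small files aren't worth per-chunk round trips, while pulling down a large file just to read a fraction of it is the wasteful case.

```python
# Hypothetical "entire file vs. chunks" heuristic, for illustration only.

FULL_DOWNLOAD_THRESHOLD = 4 << 20  # 4 MB, an assumed cutoff

def plan_fetch(file_size, requested_bytes):
    """Return 'full' when grabbing the whole file is cheaper than the
    round-trip cost of several chunk requests, else 'chunked'."""
    if file_size <= FULL_DOWNLOAD_THRESHOLD:
        return "full"
    if requested_bytes >= file_size * 0.8:  # reading most of it anyway
        return "full"
    return "chunked"
```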