612WharfAvenue

  1. I did find the chunk size setting, and as noted in my initial post, it's capped at 1 MB for ACD. If it had a larger range of options, I could experiment and find the ideal tradeoff between throughput for large files and latency. The issue I'm running into (and I suspect anyone with a higher-bandwidth upload connection is too) is that Amazon is throttling, seemingly based on transaction count. A larger chunk size would mitigate this, albeit at the cost of requiring a minimum >1 MB read to pull any amount of data down. Upload verification seems like it would require both uploading and downloading all data either way: whether a theoretical 500 MB file is divided into 1 MB or 10 MB chunks, 500 MB is going up and coming back down. The only real tradeoff would be a bigger penalty for a chunk that fails upload verification, which seems a small price to pay to avoid throttling.
  2. I've been testing out CloudDrive with Amazon Cloud Drive over the past couple of days, and have run into the "Server is throttling us" log message with some frequency (alongside "Thread was being aborted"). I noticed that the maximum storage chunk size for the ACD provider is 1 MB, as opposed to much higher limits elsewhere. The throttling seems to be based on transaction volume rather than data size, as I'm able to upload multi-gig test files at full speed with no errors. Is there a technical reason for the 1 MB limit? A larger chunk size would reduce the transaction count for the same amount of data and cut overhead, possibly letting a move of a large file onto an ACD-backed CloudDrive approach a raw upload in terms of speed.
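The transaction-count argument in the posts above can be sketched with a quick back-of-envelope calculation. This is only an illustration: the 50 ms per-request overhead is an assumed, made-up figure, not a measured Amazon Cloud Drive number, and the real throttling threshold is unknown.

```python
# Back-of-envelope: how chunk size affects transaction count for a fixed
# upload. The per-request overhead below is an assumption for illustration,
# not a measured ACD value.

FILE_SIZE_MB = 500            # the hypothetical 500 MB file from the post
PER_REQUEST_OVERHEAD_S = 0.05  # assumed fixed cost per HTTP transaction

for chunk_mb in (1, 10, 100):
    transactions = FILE_SIZE_MB // chunk_mb
    overhead_s = transactions * PER_REQUEST_OVERHEAD_S
    print(f"{chunk_mb:>3} MB chunks -> {transactions:>3} transactions, "
          f"~{overhead_s:.1f} s of pure request overhead")
```

Whatever the real per-request cost is, the transaction count scales inversely with chunk size, so 10 MB chunks would generate one tenth the requests of 1 MB chunks for the same data, which is the crux of the question to the developers.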