I did find the chunk size setting, and as noted in my initial post, it's capped at 1MB for ACD. If it offered a larger range of options I could experiment and find the ideal tradeoff between throughput for large files and latency. The issue I'm running into (and I suspect anyone with a higher-bandwidth upload is too) is that Amazon appears to be throttling based on transaction count. A larger chunk size would mitigate this, albeit at the cost of requiring a minimum read of more than 1MB to pull any amount of data down. Upload verification seems like it would require both uploading and downloading all data regardless: whether a theoretical 500MB file is divided into 1MB or 10MB chunks, 500MB is going up and coming down. The only tradeoff would be a bigger penalty when a chunk fails upload verification, which seems a small price to pay to avoid throttling.
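To put rough numbers on that tradeoff, here's a quick back-of-envelope sketch in Python (the 500MB file and the chunk sizes are just the hypothetical figures from above, not anything measured against ACD):

```python
# Back-of-envelope: transaction count vs. retry penalty for a
# hypothetical 500MB file at different chunk sizes.
FILE_MB = 500

for chunk_mb in (1, 10):
    chunks = FILE_MB // chunk_mb       # API transactions per pass
    # Verification means every byte goes up and comes back down,
    # so total transfer is the same regardless of chunk size...
    transfer_mb = 2 * FILE_MB
    # ...but one failed chunk only costs that chunk's re-transfer.
    retry_mb = 2 * chunk_mb
    print(f"{chunk_mb}MB chunks: {chunks} transactions/pass, "
          f"{transfer_mb}MB total transfer, {retry_mb}MB per failed chunk")
```

So going from 1MB to 10MB chunks cuts the transaction count tenfold (500 to 50 per pass) while the retry penalty for a failed chunk only grows from 2MB to 20MB of re-transfer.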