
Return of chunks above 20 MB?


steffenmand

Question

As internet speeds have increased over the last couple of years, would it be possible to increase the chunk size up to 100 MB again?

With a 10 gbit line on my server, my bottleneck is pretty much the 20 MB chunks, which are slowing down speeds. An increase to 100 MB could really make an impact here speed-wise, especially because the current size builds up the disk queue, which is causing the bottleneck.
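
To put some rough numbers on it (just a back-of-the-envelope sketch, assuming each downloaded chunk costs at least one write on the cache disk, which is of course a simplification):

```python
# Back-of-the-envelope: chunk downloads per second needed to fill the line,
# assuming each downloaded chunk costs at least one write on the cache disk.
LINK_GBPS = 10                              # the 10 gbit line
link_bytes_per_sec = LINK_GBPS * 1e9 / 8    # ~1.25 GB/s

for chunk_mb in (20, 100):
    chunk_bytes = chunk_mb * 1024 * 1024
    chunks_per_sec = link_bytes_per_sec / chunk_bytes
    print(f"{chunk_mb:>3} MB chunks -> ~{chunks_per_sec:.0f} downloads/sec to saturate the line")

# ~60/sec at 20 MB vs ~12/sec at 100 MB: fewer, larger writes queue up far
# better on a spinning cache disk than many small ones.
```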

I'm fully aware that this isn't for the average user - but it could be added as an "advanced" feature with a warning about high speeds being required!

Besides that - I still love the product, as always :)


7 answers to this question


I believe the 20MB limit was because larger chunks were causing problems with Google's per-file access limitations (as a result of successive reads), not a matter of bandwidth requirements. The larger chunk sizes were being accessed more frequently to retrieve any given set of data, and it was causing data to be locked on Google's end. I don't know if those API limitations have changed. 

1 hour ago, srcrist said:

I believe the 20MB limit was because larger chunks were causing problems with Google's per-file access limitations (as a result of successive reads), not a matter of bandwidth requirements. The larger chunk sizes were being accessed more frequently to retrieve any given set of data, and it was causing data to be locked on Google's end. I don't know if those API limitations have changed. 

But this would be avoided by having a large minimum required download size. The purpose is to download 100 MB at a time instead of 20 MB.

On 6/17/2020 at 6:08 PM, steffenmand said:

But this would be avoided by having a large minimum required download size. The purpose is to download 100 MB at a time instead of 20 MB.

No, I think this is sort of missing the point of the problem.

If you have, say, 1 GB of data, and you divide that data up into 100MB chunks, each of those chunks will necessarily be accessed more than, say, a bunch of 10MB chunks, no matter how small or large the minimum download size, proportional to the number of requested reads. The problem was that CloudDrive was running up against Google's limits on the number of times any given file can be accessed, and the minimum download size wouldn't change that because the data still lives in the same chunk no matter what portion of it you download at a time. Though a larger minimum download will help in cases where a single contiguous read pass might have to read the same file more often, it wouldn't help in cases wherein any arbitrary number of reads has to access the same chunk file more often--and my understanding was that it was the latter that was the problem. File system data, in particular, is an area where I see this being an issue no matter how large the minimum download.
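
To make the point concrete, here is a toy simulation (nothing to do with CloudDrive's actual I/O path or Google's real limits, just counting which chunk file a set of random reads lands in):

```python
import random
from collections import Counter

# Toy model: scatter random reads across 1 GB of data and count how often each
# chunk *file* gets touched. The minimum download size only changes how much of
# a chunk comes down per request, not which chunk file the data lives in.
TOTAL_BYTES = 1024 * 1024 * 1024          # 1 GB of stored data
READ_COUNT = 10_000                       # arbitrary number of random reads

random.seed(42)
offsets = [random.randrange(TOTAL_BYTES) for _ in range(READ_COUNT)]

for chunk_mb in (10, 20, 100):
    chunk_bytes = chunk_mb * 1024 * 1024
    num_chunks = (TOTAL_BYTES + chunk_bytes - 1) // chunk_bytes
    hits = Counter(off // chunk_bytes for off in offsets)
    print(f"{chunk_mb:>3} MB chunks: {num_chunks} files, "
          f"~{READ_COUNT / num_chunks:.0f} accesses per file on average, "
          f"{hits.most_common(1)[0][1]} on the busiest one")
```

For the same workload, the 100 MB files each absorb roughly five times as many accesses as the 20 MB files, regardless of how much of each file is pulled down per request.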

In any case, they obviously could add the ability for users to work around this. My point was just that it had nothing to do with bandwidth limitations, so an increase in available user-end bandwidth wouldn't be likely to impact the problem.

6 minutes ago, srcrist said:

No, I think this is sort of missing the point of the problem.

If you have, say, 1 GB of data, and you divide that data up into 100MB chunks, each of those chunks will necessarily be accessed more than, say, a bunch of 10MB chunks, no matter how small or large the minimum download size, proportional to the number of requested reads. The problem was that CloudDrive was running up against Google's limits on the number of times any given file can be accessed, and the minimum download size wouldn't change that because the data still lives in the same chunk no matter what portion of it you download at a time. Though a larger minimum download will help in cases where a single contiguous read pass might have to read the same file more often, it wouldn't help in cases wherein any arbitrary number of reads has to access the same chunk file more often--and my understanding was that it was the latter that was the problem. File system data, in particular, is an area where I see this being an issue no matter how large the minimum download.

In any case, they obviously could add the ability for users to work around this. My point was just that it had nothing to do with bandwidth limitations, so an increase in available user-end bandwidth wouldn't be likely to impact the problem.

I can't see how it's different, 20 MB vs. 100 MB. The content needed is usually within a single chunk download anyway. Your problem is more if you do PARTIAL reads of a chunk, in which case you end up having too many actions on a file - however, I always download full chunks. So I agree that it shouldn't be possible to download partials at such a chunk size, but if I can finish 100 MB in a second anyway, then it doesn't really matter for me.
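
Just to show what I mean about the timing (simple arithmetic, ignoring request latency and API overhead):

```python
# Simple arithmetic: time to pull down one full chunk, ignoring request
# latency and API overhead.
CHUNK_MB = 100
chunk_bits = CHUNK_MB * 1024 * 1024 * 8

for line_gbps in (1, 10):
    seconds = chunk_bits / (line_gbps * 1e9)
    print(f"{CHUNK_MB} MB chunk on a {line_gbps} gbit line: ~{seconds:.2f} s")

# ~0.84 s at 1 gbit, ~0.08 s at 10 gbit - a full-chunk download is basically
# free at these speeds.
```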


I'm not really going to get bogged down with another technical discussion with you. I'm sorry.

I can only tell you why this change was originally implemented and that the circumstances really had nothing to do with bandwidth. If you'd like to make a more formal feature request, the feedback submission form on the site is probably the best way to do so. They add feature requests to the tracker alongside bugs, as far as I know.

1 hour ago, srcrist said:

I'm not really going to get bogged down with another technical discussion with you. I'm sorry.

I can only tell you why this change was originally implemented and that the circumstances really had nothing to do with bandwidth. If you'd like to make a more formal feature request, the feedback submission form on the site is probably the best way to do so. They add feature requests to the tracker alongside bugs, as far as I know.

I know the reason - I was here as well, you know :) All back in the days when the system was single-threaded and used only 1 MB chunks.

Also, I never said the issue was bandwidth - I just said that internet speeds have increased and that more people could potentially download bigger chunks at a time. EDIT: with the chunks being the bottleneck as my HDDs hit their max IOPS. So 100 MB chunks could result in less I/O and thus higher speeds.

I completely agree that partial reads are a no-go - which is why I mentioned increasing the minimum download size (this could be set to the chunk size, you know).
So I agree that there is no reason to discuss it, as we both know the reasons why and when it happened - I'm just arguing that bigger chunks could be utilized in a good way now.


And P.S. - I do utilize the support, but sometimes it is good to bring it into a public discussion to get other people's opinions as well.


Will be trying a 3D XPoint drive as the cache drive soon to see if that can give some more "juice" on the speed :)

