Covecube Inc.
Question regarding larger chunk sizes


I've currently settled on a 50MB chunk size as the best-performing option for me at the moment, but I have a question.


If I create a 1MB file on my drive, what gets uploaded? A 50MB object that is 49MB of zeroes, or is the data cached until 50MB of new data accumulates and then uploaded?
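For illustration, here is a minimal sketch of the two behaviors the question contrasts. This is purely hypothetical code, not CloudDrive's actual implementation; the chunk size, class, and function names are all assumptions made up for this example.

```python
# Hypothetical sketch of two chunk-upload strategies; not CloudDrive's real code.
CHUNK_SIZE = 50 * 1024 * 1024  # assumed 50 MB chunk size

def pad_and_upload(data: bytes) -> bytes:
    """Strategy A: upload immediately, zero-padding the write to a full chunk."""
    return data + b"\x00" * (CHUNK_SIZE - len(data))

class CoalescingUploader:
    """Strategy B: cache writes and upload only once a full chunk of new data exists."""
    def __init__(self):
        self.cache = bytearray()   # pending data not yet uploaded
        self.uploaded = []         # chunks "sent" so far

    def write(self, data: bytes):
        self.cache.extend(data)
        # Flush every complete chunk; keep any partial remainder cached.
        while len(self.cache) >= CHUNK_SIZE:
            self.uploaded.append(bytes(self.cache[:CHUNK_SIZE]))
            del self.cache[:CHUNK_SIZE]

one_mb = b"x" * (1024 * 1024)
padded = pad_and_upload(one_mb)  # a full 50 MB object, 49 MB of it zeroes
up = CoalescingUploader()
up.write(one_mb)                 # nothing uploaded yet; 1 MB sits in the cache
```

Strategy A wastes upload bandwidth on padding; Strategy B delays durability until enough new data arrives. Which one the product actually uses is exactly what the question is asking.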


4 answers to this question

Recommended Posts


If you haven't already, upgrade to the 421 build.  Alex has massively redesigned the chunk cache stuff.



* For the Local Disk provider, checksumming can now be optionally enabled for chunks up to 100 MB in size.

* Chunks up to 100 MB in size are now enabled for:
    - Amazon Cloud Drive (experimental)
    - Amazon S3
    - Box
    - Dropbox
    - Google Cloud Storage
    - Google Drive
    - Microsoft Azure Storage
    - OneDrive (experimental)
* Added a new checksumming algorithm and chunk I/O pipeline that supports partial read checksumming of read unit sizes. This works 
  together with large chunk sizes (> 1 MB) and partial reads (for providers that support that) and allows for checksumming to take 
  place with large chunks.
    - This checksumming algorithm (CRC32-Block) supersedes the old (CRC32) algorithm and will be used for all new drives by default.
    - The old algorithm is still supported and will work transparently for older drives (that were limited to 1 MB chunks).
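As a rough sketch of what block-wise checksumming like that enables (the 1 MB read-unit size, the sidecar layout, and all names here are assumptions, not CloudDrive's actual on-disk format): storing one CRC32 per read-unit block, rather than one CRC32 for the whole chunk, lets a partial read verify only the blocks it actually touches instead of re-reading the entire large chunk.

```python
import zlib

BLOCK_SIZE = 1 * 1024 * 1024  # assumed 1 MB read unit

def make_block_crcs(chunk: bytes) -> list[int]:
    """Compute a CRC32 per read-unit block of a large chunk."""
    return [zlib.crc32(chunk[i:i + BLOCK_SIZE])
            for i in range(0, len(chunk), BLOCK_SIZE)]

def verified_partial_read(chunk: bytes, crcs: list[int],
                          offset: int, length: int) -> bytes:
    """Read a byte range, verifying only the blocks that cover it."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    for b in range(first, last + 1):
        block = chunk[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE]
        if zlib.crc32(block) != crcs[b]:
            raise IOError(f"checksum mismatch in block {b}")
    return chunk[offset:offset + length]

chunk = bytes(range(256)) * (200 * 1024)  # exactly 50 MB of sample data
crcs = make_block_crcs(chunk)             # 50 per-block checksums
# Reading 4 KB at the 3 MB mark verifies just one 1 MB block, not all 50 MB.
data = verified_partial_read(chunk, crcs, 3 * 1024 * 1024, 4096)
```

A whole-chunk CRC would force reading (or caching) all 50 MB to validate any read, which is presumably why the old 1 MB-chunk limit existed before a block-wise scheme was introduced.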



This may have completely changed how the upload is handled.

