Biggest storage chunk size on google drive


inertiauk

I have a very large drive set up on Google Drive. It has ~50TB of data with duplication on. The drive is set to the maximum size of 256TB (50% usable with duplication).

I recently had to move it from the main VM it was on to a new VM. It is taking hours and hours (over 15 so far) to finish indexing the chunk IDs into SQLite.

I have a very fast connection (4Gb/s up and down), and the VM has plenty of resources: 12 Ryzen 5900X vCores and 16GB of RAM.

The drive is used mostly for large files, several gigabytes in size.

The storage chunk size of the drive being rebuilt is at the default (10MB), but I do plan on migrating (slowly, at ~250GB/day) to a newly built drive if I can improve things with a bigger chunk size, so it has fewer chunks to index if/when it needs to (plus it should improve throughput, in theory).

However, the highest option in the GUI is 20MB. In theory that should help, and the rebuild should take half as long for the same data. But my data is growing, and I have a lot of bandwidth available for overheads. Would it be possible to use a much bigger storage chunk size, say 50MB or 100MB, on the new drive? It would increase overheads quite a lot, but it would drastically reduce reindex times and should make a big difference to throughput too.
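For anyone following the reasoning, here's the back-of-the-envelope maths. This is only a sketch, assuming ~100TB stored (50TB of data with 2x duplication) and that reindex time scales roughly linearly with the number of chunks:

```python
TB = 10**12
MB = 10**6

# Assumption: 50TB of data, 2x duplication -> ~100TB actually stored
stored_bytes = 50 * TB * 2

# Chunk counts at each candidate chunk size
for chunk_mb in (10, 20, 50, 100):
    chunks = stored_bytes // (chunk_mb * MB)
    print(f"{chunk_mb:>3}MB chunks -> ~{chunks:,} chunks to index")
```

That works out to roughly 10,000,000 chunks at the default 10MB versus 1,000,000 at 100MB, so if indexing really is per-chunk, a 100MB chunk size would cut the chunk count (and, in theory, the reindex time) by about 10x.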

What's the reason it's capped at 20MB in the GUI? Can I override it in a config file or registry entry?

TIA
