Showing results for tags 'chunk'.

Found 2 results

  1. I have a very large drive set up on Google Drive. It has ~50 TB of data with duplication on; the drive is set at the maximum 256 TB (50% usable with duplication). I recently had to move it from its main VM to a new VM, and it is taking hours and hours (over 15 so far) to finish indexing the chunk IDs into SQLite. I have a very fast connection (4 Gb/s up and down) and the VM has plenty of resources: 12 5900X v-cores and 16 GB RAM. The drive is used mostly for large files several gigabytes in size. The storage chunk size of the drive is at the default (10 MB), but I do plan on migrating (slowly, at ~250 GB/day) to a newly built drive if I can improve things with a bigger chunk size, so it has fewer chunks to index if/when it needs to (plus it should improve throughput in theory). However, the highest option in the GUI is 20 MB. In theory that should help, and a rebuild should take half as long for the same data, but my data is growing and I have a lot of bandwidth available for overheads. Would it be possible to use a much bigger storage chunk size, say 50 MB or 100 MB, on the new drive? That would increase overheads quite a lot, but it would drastically reduce reindex times and should make a big difference to throughput too. What's the reason it's capped at 20 MB in the GUI? Can I override it in a config file or registry entry? TIA
  2. Hello. System: Windows 10, Intel i4770. CloudDrive version: BETA, 64-bit. Background/issue: I created a local CloudDrive pointing to Amazon S3 and chose full encryption. The volume/bucket was created successfully on S3, as were the formatting and drive assignment for my local drive. I could see the METADATA GUID, and a few chunk files were created in the S3 bucket by the process. Next, I uploaded one file and noticed that additional "chunk" files were created in the bucket. After the StableBit GUI status indicated that all file transfers were complete, I deleted the file from my local CloudDrive. After a while, I refreshed the S3 bucket and saw that all the chunks were still there, including the new ones that were created when I transferred the file. Here are my newbie questions: 1. Am I correct in stating that when I delete a local file which is on my local StableBit CloudDrive pointing to Amazon S3, it should remove the applicable "chunks" from Amazon S3? 2. How can I be sure that when I delete a large file from my local CloudDrive, the applicable S3 storage bucket size reduces accordingly? 3. Am I off base to think that the two file systems should stay in sync? During my tests, and after waiting quite some time, the S3 bucket never decreased in size even though I deleted the large file(s), which means Amazon boosts their bottom line at my expense. Thanks; I searched for this but could not find any discussion on point.
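Editor's note on post 1: the "half as long" intuition follows directly from the chunk count, which scales inversely with chunk size. A minimal sketch of that arithmetic (the ~50 TB figure comes from the post; the helper function is my own illustration, not part of CloudDrive):

```python
def chunk_count(data_bytes, chunk_bytes):
    # Ceiling division: number of chunks needed to hold the data.
    return -(-data_bytes // chunk_bytes)

TB, MB = 10**12, 10**6
data = 50 * TB  # the poster's ~50 TB of stored data

for size_mb in (10, 20, 50, 100):
    n = chunk_count(data, size_mb * MB)
    print(f"{size_mb:>3} MB chunks -> {n:,} chunk IDs to index")
# 10 MB -> 5,000,000 chunks; 100 MB -> 500,000 chunks
```

So a 100 MB chunk size would cut the number of chunk IDs to reindex by 10x versus the 10 MB default, assuming reindex time is roughly proportional to chunk count.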
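Editor's note on post 2's question about verifying bucket size: one way to check is to total the object sizes in the bucket before and after the delete. A minimal sketch of the totaling step; the pages would come from boto3's real `list_objects_v2` paginator, and the bucket name in the comment is hypothetical:

```python
def sum_object_sizes(pages):
    """Total the 'Size' field over every object in a sequence of
    list_objects_v2 result pages (each page is a dict as boto3 returns it)."""
    return sum(obj["Size"]
               for page in pages
               for obj in page.get("Contents", []))

# With boto3 installed, the pages would come from something like:
#   s3 = boto3.client("s3")
#   pages = s3.get_paginator("list_objects_v2").paginate(Bucket="my-clouddrive-bucket")
# The same calculation on two stub objects:
before = [{"Contents": [{"Size": 10_485_760}, {"Size": 10_485_760}]}]
print(sum_object_sizes(before))  # 20971520, i.e. two 10 MiB chunks
```

The AWS CLI equivalent is `aws s3 ls s3://<bucket> --recursive --summarize`, which prints a "Total Size" line. Note that block-based drives typically reuse or garbage-collect freed chunks on their own schedule, so the bucket total may not shrink immediately after a local delete.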