inertiauk

Members · 6 posts
  1. I switched to a Dropbox 'as much as you need' Teams account and am currently migrating my 350TB of Google data across. Dropbox is much, much faster, and there is no 750GB-per-day upload limit. I spoke to their support and told them how I will use the space and that my data is encrypted, and they were fine with it. So it feels safer than Google to me, as well as faster. I'm willing to share one of my 2 unused accounts for a third of my bill, so long as it's used exclusively for StableBit CloudDrive (and you encrypt your volumes).
  2. It usually happens when you have uploaded over 750GB of data in a 24-hour period. You need to limit your upload speed so that it can't hit that limit (75 Mbps works well; see the rate arithmetic after this list). It can happen for other reasons, but this is the most common. Hope that helps.
  3. The time taken is because the drive wasn't closed gracefully and you have a large cache for it to check against the online data. First, reduce your cache size to only what you need for the drive to perform well; that way resyncs will take a lot less time. Then figure out why your system is force-closing SBCD when it 'turns off' (you are shutting down, not just turning off, aren't you?) and rectify that so you won't have the problem in future. Hope that helps.
  4. Hi, I have a drive which I set up back in 2019. I made a 256TB drive, but I set the cluster size too low to use it as a single filesystem (see the cluster-size arithmetic after this list). I didn't know what I was doing back then: in Disk Management the drive is a dynamic disk, and it has 2 separate ~128TB partitions on it. Drives I have created since then have a correct cluster size and are single-partition basic disks. The drive now has lots of directories and files on it, and it is very slow to list them / acknowledge they exist when I browse through some of them, especially a directory that contains lots of files/directories. With metadata pinning turned on, the pinned data never goes over 8.0KB, and when directory pinning is turned on it still doesn't go over 8.0KB (so I assume it's not pinning any data at all for directories, and probably none for metadata, which explains the performance). I presume it's the way the filesystem is set up that means SBCD can't pin what it needs to; perhaps it doesn't understand what to pin, as it expects a single flat partition on a basic disk? Is there any way to rectify this without migrating the data to a new drive? That would take months, probably longer because of this performance problem. TIA
  5. You could remove the drive letters from your mounted drives, then use StableBit DrivePool to pool them all into 1.
  6. I have a very large drive set up on Google Drive. It has ~50TB of data with duplication on, and I set the drive to the maximum 256TB (50% usable with duplication). I recently had to move it from the main VM it was on to a new VM, and it is taking hours and hours (over 15 so far) to finish indexing the chunk IDs into SQLite. I have a very fast connection (4Gbps up and down) and the VM has lots of resources: 12 5900X v-cores and 16GB RAM. The drive is used mostly for large files several gigabytes in size. The storage chunk size of the drive is at the default (10MB), but I plan to migrate (slowly, at ~250GB/day) to a newly built drive if I can improve things with a bigger chunk size, so it has fewer chunks to index if/when it needs to (plus it should improve throughput in theory); see the chunk-count arithmetic after this list. However, the highest option in the GUI is 20MB. In theory that should help, and the rebuild should take half as long for the same data, but my data is growing and I have a lot of bandwidth available for overheads. Would it be possible to use a much bigger storage chunk size, say 50MB or 100MB, on the new drive? It would increase overheads quite a lot, but it would drastically reduce reindex times and should make a big difference to throughput too. What's the reason it's capped at 20MB in the GUI? Can I override it in a config file or registry entry? TIA
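
A quick sanity check on the 750GB/day figure in post 2, as Python arithmetic. The cap and throttle values come from the post; nothing here is StableBit-specific:

```python
# How fast can you upload continuously without crossing Google's
# 750 GB-per-24-hours cap? (Plain arithmetic, no CloudDrive internals.)
CAP_GB_PER_DAY = 750
SECONDS_PER_DAY = 24 * 60 * 60

max_mb_per_s = CAP_GB_PER_DAY * 1000 / SECONDS_PER_DAY  # decimal GB -> MB
max_mbit_per_s = max_mb_per_s * 8

print(f"max sustained rate: {max_mb_per_s:.2f} MB/s "
      f"({max_mbit_per_s:.1f} Mbit/s)")
# -> max sustained rate: 8.68 MB/s (69.4 Mbit/s)

# A throttle around 70 Mbit/s (~8.7 MB/s) can run flat out all day
# without tripping the cap; uploads are rarely continuous in practice,
# which is why a slightly higher setting such as 75 Mbps usually works.
```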
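
For the cluster-size problem in post 4, a rough sketch of why a 256TB drive ends up split when the cluster size is too small. This assumes the standard NTFS limit of at most 2^32 - 1 clusters per volume (general NTFS behaviour, not anything CloudDrive-specific):

```python
# NTFS addresses at most 2**32 - 1 clusters per volume, so the maximum
# volume size is cluster_size * (2**32 - 1).
MAX_CLUSTERS = 2**32 - 1

for kib in (4, 8, 16, 32, 64):
    max_bytes = kib * 1024 * MAX_CLUSTERS
    print(f"{kib:>2} KiB clusters -> max volume ~{max_bytes / 2**40:.0f} TiB")

# 4 KiB -> ~16 TiB ... 32 KiB -> ~128 TiB, 64 KiB -> ~256 TiB.
# A single 256TB volume needs 64 KiB clusters; with 32 KiB clusters the
# space has to be split, which would match the two ~128TB partitions
# described in the post.
```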
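
And for post 6, a back-of-the-envelope estimate of how the storage chunk size affects the number of chunk IDs a re-index has to walk. The ~50TB/duplication figures are from the post; the assumption that re-index time scales roughly with chunk count is mine:

```python
# ~50 TB of data with duplication on occupies roughly 100 TB of chunks.
stored_tb = 100

for chunk_mb in (10, 20, 50, 100):
    chunks = stored_tb * 1e12 / (chunk_mb * 1e6)
    print(f"{chunk_mb:>3} MB chunks -> ~{chunks / 1e6:.1f} million chunks")

# 10 MB -> ~10.0M chunks, 20 MB -> ~5.0M, 100 MB -> ~1.0M: a 100 MB
# chunk size would cut the IDs to index (and plausibly the re-index
# time) by about 10x versus the default, at the cost of more transfer
# overhead for small reads and writes.
```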