
Niloc

Members
  • Posts: 5
  • Joined
  • Last visited

Niloc's Achievements

Newbie (1/3)

Reputation: 0

  1. I had a DrivePool consisting of a local hard drive that was backing up data to a CloudDrive through the duplication setting. I removed the CloudDrive from this pool (so the pool now contains only the local hard drive), and now the CloudDrive shows up as empty in Windows Explorer, but all of the encrypted data is still there with my cloud provider. How can I get the data back onto my mounted CloudDrive without having to re-duplicate everything from my local hard drive and then re-upload all of the data to my cloud provider? I have a slow internet connection and it took months to upload my 6TB of data initially.
  2. Thanks for explaining. Does this include moving the .covefs folder?
  3. For the step where you're moving the data from the hidden PoolPart folder on your E: drive, are you moving the data to the new D: drive pool, or to the F: drive you created from the new 55TB of space? I have a slow upload speed, so I really don't want to wait months to re-upload 7TB of data. (There's a rough sketch of that kind of PoolPart move after this list.) Also, what is the purpose of creating the F: drive? Why create a new drive rather than just expand the E: drive and add it to the D: pool on its own? Is there an advantage to having two partitions rather than just one?
  4. So does that mean that if I duplicate my entire local DrivePool to my CloudDrive, the DrivePool software will show my entire local pool as containing only duplicated data? I've also noticed that two of my HDDs are showing over 100 GB of "other" data on them, yet when I examine the drives with WinDirStat, only the unduplicated data is visible and the extra "other" data is nowhere to be seen. Is this a DrivePool visual bug? The total disk usage reported by WinDirStat matches the numbers reported in DrivePool excluding the "other" data, so WinDirStat sees all of the normal data on the drive, but the extra 100 GB of "other" data is nowhere to be found. (A small cross-check script is sketched after this list.)
  5. So I have an 8TB local drive pool composed of 4 HDDs of varying sizes. I would like to back up several TB of this data to my Google Drive using DrivePool's folder duplication. When I select a folder to duplicate, the folder is copied into CloudDrive's upload queue, and DrivePool shows the duplicated data as present on my CloudDrive while another duplicated copy is also split among the local HDDs that make up the local pool (I assume because the duplicated data still needs to be uploaded), so it essentially shows two duplicated copies of the amount of data I selected instead of just one. However, once a folder has finished duplicating and CloudDrive's "to upload" queue is 0 bytes, DrivePool still shows the duplicated files stored on my local HDDs in addition to my CloudDrive. How can I get the duplicated data to be stored only on my CloudDrive, and have the local copies deleted after they are uploaded, so that no duplicated data is stored on my local drives outside of my CloudDrive cache? What I've done so far:
     - Disabled the plug-in settings that can force immediate balancing
     - Set my CloudDrive to contain only duplicated data, and the local drives to store only unduplicated data, under the Drive Usage Limiter settings
     Given that my 8TB pool is about 6TB full, if I try to duplicate 2TB of data to my CloudDrive then my local pool will be completely full, because a second copy of the duplicated 2TB is stored on the local pool too. Edit: I solved this by using pool hierarchy. This article helped a lot: https://blog.covecube.com/2017/09/stablebit-drivepool-2-2-0-847-beta/
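Sketch of the PoolPart move referred to in posts 1 and 3. This is only a rough Python sketch under assumptions: the source path, the pool drive letter, and the PoolPart folder name are placeholders (every install has its own GUID-named hidden folder), and it is not an official StableBit procedure, just the generic "seed the pool by moving files into the hidden PoolPart folder, then re-measure" idea.

```python
# Rough sketch: move existing data into a pool's hidden PoolPart folder so
# DrivePool can pick it up on a re-measure instead of copying/uploading it again.
# SOURCE and POOLPART are hypothetical placeholders; check the real hidden
# PoolPart.<GUID> folder name in Explorer (with hidden items shown) first.
import shutil
from pathlib import Path

SOURCE = Path(r"F:\OldData")                    # data currently sitting outside the pool (placeholder)
POOLPART = Path(r"E:\PoolPart.xxxxxxxx-xxxx")   # hidden PoolPart folder on the pooled disk (placeholder)

def seed(source: Path, poolpart: Path) -> None:
    """Move every file under `source` into the same relative layout inside `poolpart`."""
    for item in list(source.rglob("*")):        # snapshot the listing before moving anything
        if item.is_file():
            target = poolpart / item.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(item), str(target)) # same-volume moves are instant; cross-volume moves copy then delete

if __name__ == "__main__":
    seed(SOURCE, POOLPART)
    # Afterwards, re-measure the pool in the DrivePool UI so it registers the moved files.
```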
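And a small cross-check in the same spirit as the WinDirStat comparison in post 4: sum the sizes of the files actually visible on a disk and compare that with the filesystem's used-space figure. The drive letter is a placeholder, and the difference it prints is simply "bytes not attributable to visible files" (filesystem metadata, recycle bin, shadow copies, permission-restricted folders, and so on), which is not necessarily the same thing DrivePool counts as "other".

```python
# Rough cross-check: compare the sum of visible file sizes with the volume's
# reported used space to see how much is unaccounted for.
import os
import shutil
from pathlib import Path

DRIVE = Path("E:/")   # placeholder drive letter

def visible_bytes(root: Path) -> int:
    """Sum the sizes of all files reachable under `root` (roughly what WinDirStat totals)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += (Path(dirpath) / name).stat().st_size
            except OSError:
                pass  # skip files we cannot stat (permissions, reparse points)
    return total

usage = shutil.disk_usage(DRIVE)
seen = visible_bytes(DRIVE)
print(f"Used per filesystem  : {usage.used / 2**30:.1f} GiB")
print(f"Sum of file sizes    : {seen / 2**30:.1f} GiB")
print(f"Unaccounted ('other'): {(usage.used - seen) / 2**30:.1f} GiB")
```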