Posts posted by Niloc

  1. I had a DrivePool consisting of a local hard drive that was backing up data to a CloudDrive through the duplication setting. I removed the CloudDrive from this pool (so the pool now contains only the local hard drive), and now the CloudDrive shows up as empty in Windows Explorer, but all of the encrypted data is still there on my cloud provider. How can I get the data back onto my mounted CloudDrive without having to re-duplicate everything from my local hard drive and then re-upload all of the data to my cloud provider? I have a slow internet connection, and it took months to initially upload my 6TB of data.

  2. 8 hours ago, srcrist said:

    If you're following the instructions correctly, you are simply reshuffling data around on the same drives. They are file system level changes, and will not require any data to be reuploaded. It should complete in a matter of seconds.

    Thanks for explaining. Does this include moving the .covefs folder?

  3. On 1/22/2019 at 4:35 PM, srcrist said:

    So let's say you have an existing CloudDrive volume at E:.

    • First you'll use DrivePool to create a new pool, D:, and add E:
    • Then you'll use the CloudDrive UI to expand the CloudDrive by 55TB. This will create 55TB of unmounted free space.
    • Then you'll use Disk Management to create a new 55TB volume, F:, from the free space on your CloudDrive.
    • Then you go back to DrivePool, add F: to your D: pool. The pool now contains both E: and F:
    • Now you'll want to navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool.
    • Then just navigate to D: and all of your content will be there, as well as plenty of room for more.
    • You can now point Plex and any other application at D: just like E: and it will work as normal. You could also replace the drive letter for the pool with whatever you used to use for your CloudDrive drive to make things easier. 
    • NOTE: Once your CloudDrive volumes are pooled, they do NOT need drive letters. You're free to remove them to clean things up, and you don't need to create volume labels for any future volumes you format either. 

    For the step where you move the existing data into the hidden PoolPart folder on the E: drive, is that data going into the new D: pool, or onto the F: drive you created from the new 55TB of space? I have a slow upload speed, so I really don't want to wait months to re-upload 7TB of data.

     

    And what is the purpose of creating the F: drive? Why create a new drive rather than just expanding the E: drive and adding it to the D: pool on its own? Is there an advantage to having two partitions rather than one?
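The key step in the quoted instructions, moving the existing data on E: into the hidden PoolPart folder, is a same-volume move: the file system only rewrites directory metadata, so nothing is re-copied or re-uploaded. A minimal Python sketch of that step, assuming an example PoolPart name (the GUID is unique to each pool) and assuming that CloudDrive system folders such as .covefs should be left at the volume root:

```python
import shutil
import tempfile
from pathlib import Path

# Folders that should stay at the volume root. Treating .covefs as
# "do not move" is an assumption here, not advice from the thread.
SKIP = {".covefs", "system volume information", "$recycle.bin"}

def move_into_poolpart(volume, poolpart_name):
    """Move every top-level entry on `volume` into the hidden PoolPart
    folder. On the same volume this is a rename, so no file data is
    rewritten and nothing needs to be re-uploaded to the cloud."""
    poolpart = volume / poolpart_name
    moved = []
    for entry in list(volume.iterdir()):
        if entry.name.startswith("PoolPart.") or entry.name.lower() in SKIP:
            continue
        shutil.move(str(entry), str(poolpart / entry.name))
        moved.append(entry.name)
    return moved

# Demo on a throwaway directory standing in for E:\ (the real PoolPart
# GUID will differ; the name below echoes the example in the quote).
root = Path(tempfile.mkdtemp())
(root / "PoolPart.4a5d6340-XXXX").mkdir()
(root / "Movies").mkdir()
(root / ".covefs").mkdir()
print(move_into_poolpart(root, "PoolPart.4a5d6340-XXXX"))  # ['Movies']
```

On a real pool you would point this at the CloudDrive volume's root; the demo uses a throwaway directory so the sketch can be exercised safely.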

  4. 31 minutes ago, Christopher (Drashna) said:

    just to clear up something, the "duplicated data" isn't the "second copy". StableBit DrivePool doesn't have a concept of original and copy.

    It applies to ALL copies of the file. It means that there is data on the drive that has been duplicated. 

    So does that mean that if I duplicate my entire local pool to my CloudDrive, the DrivePool software will show my entire local pool as containing only duplicated data?

    I've also noticed that two of my HDDs show 100GB+ of "other" data, but when I examine the drives with WinDirStat, only the unduplicated data is visible and the extra "other" data is nowhere to be seen. Is this a DrivePool visual bug? The total disk usage reported by WinDirStat matches the numbers reported by DrivePool excluding the "other" data. So basically, WinDirStat sees all of the normal data on the drives, but the extra 100GB of "other" data is nowhere to be found.

  5. So I have an 8TB local drive pool composed of 4 HDDs of varying sizes. I would like to back up several TB of this data to my Google Drive using DrivePool's folder duplication. When I select a folder to duplicate, the folder is copied into CloudDrive's upload queue, and DrivePool shows the duplicated data as present on my CloudDrive while another duplicated copy is also split among the local HDDs that make up the local pool (I assume because the duplicated data still needs to be uploaded). Essentially, it shows two duplicated copies of the data I selected instead of just one.

    However, once a folder has finished duplicating and CloudDrive's "to upload" queue is at 0 bytes, DrivePool still shows the duplicated files stored on my local HDDs in addition to my CloudDrive. How can I have the duplicated data stored only on my CloudDrive, with the local copies deleted after they are uploaded, so that no duplicated data remains on my local drives outside of the CloudDrive cache?

    What I've done:

    - Disabled plug-in settings that can force immediate balancing

    - Set my CloudDrive to store only duplicated data, and the local drives to store only unduplicated data, under the Drive Usage Limiter settings.

    Given that my 8TB drive pool is about 6TB full, if I try to duplicate 2TB of data to my CloudDrive, my local pool will be completely full, because a second copy of the 2TB of duplicated data is stored on my local pool as well.

     

    Edit: I solved this by using pool hierarchy. This article helped a lot: https://blog.covecube.com/2017/09/stablebit-drivepool-2-2-0-847-beta/
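For anyone hitting the same problem, the hierarchy fix can be pictured with the toy model below. This is my own illustration of the placement rule, not DrivePool code, and it ignores the real balancer's free-space logic: duplication puts each copy on a different direct member of a pool, so making the CloudDrive one of only two top-level members forces exactly one copy into the cloud.

```python
# Toy model of DrivePool-style duplicate placement (illustration only):
# each copy of a file must land on a *different direct member* of the
# pool. Real DrivePool also weighs free space; this sketch does not.

def place_copies(members, copies=2):
    """Return the members that receive a copy, one copy per member."""
    if copies > len(members):
        raise ValueError("duplication level exceeds member count")
    return members[:copies]

# Flat pool: the CloudDrive is one member of five, so both copies can
# legally land on local HDDs and the cloud may get nothing.
flat = ["HDD1", "HDD2", "HDD3", "HDD4", "CloudDrive"]
print(place_copies(flat))  # ['HDD1', 'HDD2']

# Hierarchy: the top pool has exactly two members (the local sub-pool
# and the CloudDrive), so x2 duplication forces one copy into each.
top = ["LocalPool", "CloudDrive"]
print(place_copies(top))  # ['LocalPool', 'CloudDrive']
```

With the Drive Usage Limiter settings described above applied inside each sub-pool, the local copy stays spread over the HDDs and the second copy lives only on the CloudDrive.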
