So I have an 8 TB local drive pool composed of four HDDs of varying sizes. I would like to back up several TB of this data to my Google Drive using DrivePool's folder duplication. When I select a folder to duplicate, the folder is copied into CloudDrive's upload queue, and DrivePool shows the duplicated data as present on my CloudDrive while another duplicated copy is split among the local HDDs that make up the local pool (I assume because the duplicated data still needs to be uploaded). In effect, DrivePool shows two duplicated copies of the selected data instead of just one.
However, once a folder has finished duplicating and CloudDrive's "to-upload" queue reads 0 bytes, DrivePool still shows the duplicated files stored on my local HDDs in addition to my CloudDrive. How can I get the duplicated data to be stored only on my CloudDrive, with the local duplicated folders deleted after they are uploaded, so that no duplicated data is kept on my local drives outside of my CloudDrive cache?
What I've done:
- Disabled the plug-in settings that can force immediate balancing
- Set my CloudDrive to contain only duplicated data, and the local drives to store only unduplicated data, under the Drive Usage Limiter settings.
Given that my 8 TB drive pool is about 6 TB full, if I try to duplicate 2 TB of data to my CloudDrive, my local pool will be completely full, because a second copy of the 2 TB of duplicated data is stored on the local pool as well.
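The space crunch can be checked with quick arithmetic (a sketch using the figures above; the variable names are just illustrative, nothing here is DrivePool-specific):

```python
# Back-of-the-envelope check of the capacity problem described above.
pool_capacity_tb = 8.0   # total capacity of the local drive pool
used_tb = 6.0            # data already stored on the local pool
to_duplicate_tb = 2.0    # folder selected for x2 duplication

# Because the second copy also lands on the local pool, the local
# pool has to absorb the extra 2 TB on top of what it already holds:
local_after = used_tb + to_duplicate_tb
print(local_after)                       # 8.0
print(local_after >= pool_capacity_tb)   # True: the local pool is full
```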
Question
Niloc
Edit: I solved this by using a pool hierarchy. This article helped a lot: https://blog.covecube.com/2017/09/stablebit-drivepool-2-2-0-847-beta/
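For anyone who lands here with the same problem, the pool-hierarchy idea can be sketched as a toy model (illustrative Python only, not DrivePool's actual code or API; all names are made up): with x2 duplication on a flat pool, both copies can land on local disks, whereas a top-level pool whose two members are a local sub-pool and a cloud sub-pool gives each sub-pool exactly one copy.

```python
# Toy model of duplication placement (illustrative only; not
# DrivePool's real implementation).  Duplication hands one copy to
# each pool member; in a hierarchy, a whole sub-pool counts as a
# single member of the top-level pool.

def place_copies(members, n_copies):
    """Assign n_copies file copies round-robin across pool members."""
    return [members[i % len(members)] for i in range(n_copies)]

# Flat pool: four local HDDs plus the CloudDrive as a fifth member.
flat = ["hdd1", "hdd2", "hdd3", "hdd4", "cloud"]
print(place_copies(flat, 2))  # ['hdd1', 'hdd2'] -- both copies local

# Hierarchy: one local sub-pool and one cloud sub-pool at the top.
top = ["local_pool", "cloud_pool"]
print(place_copies(top, 2))   # ['local_pool', 'cloud_pool'] -- one each
```

With the hierarchy, duplication at the top level always puts one copy on the local sub-pool and one on the cloud sub-pool, which is exactly the "one local copy, one cloud copy" layout the question asks for.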