Covecube Inc.
buddyboy101

Pooling/Duplication Question (Plex Scenario)

Question

Hello!  Noob here, but I have reviewed the DrivePool FAQ and Manual. 

I have an 8TB hard drive that is quickly filling up with videos for Plex streaming on my local home network.  I have two 10TB drives on the way, both of which will be shucked and placed into a Probox (JBOD).  I plan to consolidate the drives into a pool using DrivePool but have never used the software before.  Someone online recommended that I use Google Drive (via CloudDrive) as a location to duplicate the files, so that the files are safe in case the physical drives were to fail.

To accomplish this, I believe I need to create a tiered pool - one subpool that contains the 3 physical drives, and one subpool that is the CloudDrive.  I would then set duplication for the master pool, so that the physical drives' files are replicated to the CloudDrive.  Please let me know if I'm on the right track or if there are any other settings I should be aware of.

My second question: I've read that you should not upload more than 750GB of files per day to Google Drive, or else you will get a 24 hour ban from the service.  Is there any way I can ensure the initial (and subsequent) duplications to CloudDrive are spread out in 750GB-per-day increments?  Or can I select only certain folders to duplicate and then add more over time (thus limiting the amount of data transferred to Google Drive)?  My concern about this method is that I believe folder duplication would result in local duplication, not duplication to the cloud.  So can I configure the destination of duplicated files using folder duplication?

My third question is whether file updates carry over to their duplicated counterparts in realtime.  For example, "FileX" is on my local drive and is duplicated as "FileX".  I then rename it on my local drive to "File1" (or make some other change to the file's metadata or contents, in the case of a spreadsheet or document).  Will DrivePool update the duplicated file to reflect the new file name?  And if so, is it only changing the file name, rather than deleting it and re-duplicating the file entirely?

My last question is how DrivePool handles the failure/removal/replacement of a drive.  In other words, how are the files that reside on the drive re-populated in the event of the drive failing and being replaced?  I imagine they would somehow be pushed from the duplicated versions?  I read that DrivePool will automatically push the files to the remaining disks, but what if there is not enough room on the remaining disks until another one is added - how is this shortage handled? 

And would intentionally (or unintentionally) deleting a file on the physical disk result in the duplicate being deleted as well?  How does DrivePool know that the removal of a file due to drive failure or intentional removal of a disk should not prompt deletion of the corresponding duplicate? 

Sorry for all of the questions.  Just trying to figure this all out.  Thanks!!

4 answers to this question

Just thought of something else - does the local cache size matter if I'm not streaming from the cloud drive?  I'm only using the cloud drive to house duplicate files.  

And StableBit's materials state that clusters over 4 KB can lead to less than optimal performance.  Because I need to store at least 28TB of files, I'm leaning toward the 8 KB cluster size that supports 32TB.  Does this deviation from 4 KB lead to noticeable performance issues?  Again, the cloud drive is only a back-up location for files that will be streamed locally from physical disks.

On 10/10/2019 at 3:02 PM, buddyboy101 said:

To accomplish this, I believe I need to create a tiered pool - one subpool that contains the 3 physical drives, and one subpool that is the CloudDrive.  I would then set duplication for the master pool, so that the physical drives' files are replicated to the CloudDrive.

This is correct. 

 

On 10/10/2019 at 3:02 PM, buddyboy101 said:

My second question: I've read that you should not upload more than 750GB of files per day to Google Drive, or else you will get a 24 hour ban from the service.  Is there any way I can ensure the initial (and subsequent) duplications to CloudDrive are spread out in 750GB-per-day increments?

It isn't so much that you should not, it's that you cannot. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70 mbps. As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. CloudDrive will upload at the pace it can, and DrivePool will duplicate data as it's able.
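For what it's worth, that 70 mbps figure falls straight out of the arithmetic on the 750GB/day cap. A quick sanity check (just math, nothing CloudDrive-specific):

```python
# What sustained upload rate stays under Google's 750 GB/day cap?

SECONDS_PER_DAY = 24 * 60 * 60       # 86,400 seconds
DAILY_CAP_BYTES = 750 * 1000**3      # 750 GB in decimal units

max_bytes_per_sec = DAILY_CAP_BYTES / SECONDS_PER_DAY
max_mbps = max_bytes_per_sec * 8 / 1000**2   # convert to megabits/sec

print(f"max sustained rate: {max_mbps:.1f} mbps")  # ~69.4 mbps
```

So a 70 mbps throttle sits right at the cap; rounding down to something like 65 mbps leaves a little headroom for retries and overhead.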

On 10/10/2019 at 3:02 PM, buddyboy101 said:

Will DrivePool update the duplicated file to reflect the new file name?  And if so, is it only changing the file name, rather than deleting it and re-duplicating the file entirely?  

Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously. 

 

On 10/10/2019 at 3:02 PM, buddyboy101 said:

My last question is how DrivePool handles the failure/removal/replacement of a drive.  In other words, how are the files that reside on the drive re-populated in the event of the drive failing and being replaced?  I imagine they would somehow be pushed from the duplicated versions? 

This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so. 

 

On 10/10/2019 at 3:02 PM, buddyboy101 said:

And would intentionally (or unintentionally) deleting a file on the physical disk result in the duplicate being deleted as well?  How does DrivePool know that the removal of a file due to drive failure or intentional removal of a disk should not prompt deletion of the corresponding duplicate? 

DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, the file will be deleted from both locations if your balancer settings and file placement settings are configured to do so. DrivePool passes the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate it on the next duplication pass. Obviously, files mysteriously disappearing is a worrying sign worthy of further investigation and attention.

 

On 10/10/2019 at 6:19 PM, buddyboy101 said:

Just thought of something else - does the local cache size matter if I'm not streaming from the cloud drive?  I'm only using the cloud drive to house duplicate files.  

It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional mode cache set to something like 75% write, 25% read. 

Note that DrivePool will also read stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed. So you'll want some read cache available. 

On 10/10/2019 at 6:19 PM, buddyboy101 said:

And StableBit's materials state that clusters over 4 KB can lead to less than optimal performance.  Because I need to store at least 28TB of files, I'm leaning toward the 8 KB cluster size that supports 32TB.  Does this deviation from 4 KB lead to noticeable performance issues?

This isn't really relevant for your use case. Performance with files of the size you're considering for storage will not be meaningfully affected by a larger cluster size. Use the cluster size you need for the volume size you require.

Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, Chkdsk. So you'll want to keep it below that. Relatedly, note that you can partition a single CloudDrive into multiple sub 60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary. 
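The cluster-size-to-volume-size pairing is just arithmetic against NTFS's traditional 2^32 cluster addressing limit. A rough illustration (treating that limit as an assumption; newer Windows releases can relax it):

```python
# Max NTFS volume size per cluster size, assuming the traditional
# 2^32 cluster addressing limit (newer Windows versions can exceed this).

MAX_CLUSTERS = 2**32

for cluster_kib in (4, 8, 16, 32, 64):
    cluster_bytes = cluster_kib * 1024
    max_tib = cluster_bytes * MAX_CLUSTERS / 1024**4
    print(f"{cluster_kib:>2} KB clusters -> up to {max_tib:.0f} TiB")
```

That lines up with the 8 KB / 32TB pairing mentioned above: doubling the cluster size doubles the maximum volume size.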

