Covecube Inc.
  1. Amazing advice, @srcrist! Thank you for taking the time to provide such a detailed response. Will review everything and see how things go. Thanks again!!
  2. Just thought of something else: does the local cache size matter if I'm not streaming from the cloud drive? I'm only using the cloud drive to house duplicate files. Also, StableBit's materials state that cluster sizes over 4 KB can lead to less than optimal performance. Because I need to store at least 28TB of files, I'm leaning toward the 8 KB cluster size that supports 32TB. Does this deviation from 4 KB lead to noticeable performance issues? Again, the cloud drive is only a back-up location for files that will be streamed locally from physical disks.
  3. Hello! Noob here, but I have reviewed the DrivePool FAQ and manual. I have an 8TB hard drive that is quickly filling up with videos for Plex streaming on my local home network. I have two 10TB drives on the way, all of which will be shucked and placed into a Probox (JBOD). I plan to consolidate the drives into a pool using DrivePool but have never used the software before. Someone online recommended that I use Google Drive (via CloudDrive) as a location to duplicate the files, so that the files are safe in case the physical drives were to fail. To accomplish this, I believe I need to create a tiered pool: one subpool that contains the 3 physical drives, and one subpool that is the CloudDrive. I would then set duplication on the master pool, so that the physical drives' files are replicated to the CloudDrive. Please let me know if I'm on the right track or if there are any other settings I should be aware of.
My second question: I've read that you should not upload more than 750GB of files per day to Google Drive, or else you will get a 24-hour ban from the service. Is there any way I can ensure the initial (and subsequent) duplications to CloudDrive are spread out in 750GB increments per day? Or can I select only certain folders to duplicate and then add more over time (thus limiting the amount of data transferred to Google Drive each day)? My concern about this method is that I believe folder duplication would result in local duplication, not duplication to the cloud. So can I configure the destination of duplicated files when using folder duplication?
My third question is whether file updates carry over to their duplicated counterparts in real time. For example, "FileX" is on my local drive and is duplicated as "FileX". I then rename it on my local drive to "File1" (or make some other change to the file's metadata or contents, in the case of a spreadsheet or document). Will DrivePool update the duplicated file to reflect the new file name? And if so, is it only changing the file name, rather than deleting it and re-duplicating the file entirely?
My last question is how DrivePool handles the failure/removal/replacement of a drive. In other words, how are the files that reside on the drive re-populated in the event of the drive failing and being replaced? I imagine they would somehow be pushed from the duplicated versions? I read that DrivePool will automatically push the files to the remaining disks, but what if there is not enough room on the remaining disks until another one is added? How is this shortage handled? And would intentionally (or unintentionally) deleting a file on the physical disk result in the duplicate being deleted as well? How does DrivePool know that the removal of a file due to drive failure or intentional removal of a disk should not prompt deletion of the corresponding duplicate? Sorry for all of the questions. Just trying to figure this all out. Thanks!!
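On the cluster-size question in post 2: the 32TB figure follows from NTFS addressing roughly 2^32 clusters per volume, so the maximum volume size scales directly with cluster size. A quick sketch of that arithmetic (the 2^32 cluster limit is the standard NTFS figure; treat the exact maximums as approximate):

```python
# NTFS addresses at most about 2**32 clusters per volume, so the
# (approximate) maximum volume size is cluster_size * 2**32.
MAX_CLUSTERS = 2**32

def max_volume_tb(cluster_bytes: int) -> float:
    """Approximate maximum NTFS volume size in TB for a given cluster size."""
    return cluster_bytes * MAX_CLUSTERS / 1024**4

for kb in (4, 8, 16, 32, 64):
    print(f"{kb:>2} KB clusters -> {max_volume_tb(kb * 1024):.0f} TB max volume")
# 4 KB clusters top out at 16 TB; 8 KB clusters top out at 32 TB.
```

This is why a 28TB volume forces the jump from the default 4 KB clusters to 8 KB.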
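On the 750GB/day question in post 3: one common workaround (assuming your CloudDrive version exposes an upload bandwidth throttle in its I/O settings; check your version's options) is to cap upload speed so that a full day of sustained uploading stays under the quota. The back-of-the-envelope math:

```python
# Rough math for staying under Google Drive's ~750 GB/day upload cap by
# throttling sustained upload bandwidth. Figures use decimal GB/MB.
DAILY_CAP_GB = 750
SECONDS_PER_DAY = 24 * 60 * 60

mb_per_s = DAILY_CAP_GB * 1000 / SECONDS_PER_DAY  # GB/day -> MB/s
mbit_per_s = mb_per_s * 8                         # MB/s -> Mbit/s

print(f"Max sustained upload: {mb_per_s:.2f} MB/s (~{mbit_per_s:.0f} Mbit/s)")
# Max sustained upload: 8.68 MB/s (~69 Mbit/s)
```

So a throttle of roughly 8.5 MB/s (about 70 Mbit/s) lets the initial duplication run continuously without tripping the daily limit, at the cost of the first pass taking longer.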