ryan74

Help - Setup Duplicate local DrivePool to CloudDrive

Question

Hi

I need some advice on how to set up a GDrive CloudDrive to duplicate a local drivepool.
I have been using DrivePool and Scanner for over a year now and I'm really happy with them.
I'm using the latest beta versions of all 3 products.
Currently I have 5x HDDs and 1x 250GB SSD cache in a local drivepool (Q:), and I have all Balancers on:
1. SSD Optimizer
2. StableBit Scanner
3. Disk Space Equalizer
4. Volume Equalization
5. Drive Usage Limiter
6. Prevent Drive Overfill
7. Duplication Space Optimizer

I currently do not have any duplication taking place on the local pool (Q:), but I will set up a specific folder for important data to be duplicated. The rest is not so critical, mostly media files.

I would like to set up a GDrive CloudDrive to duplicate some folders, with full drive encryption on.
This is where I get lost on what type of config to use.
Is there a best practice guide on this?
1. Do I create 1 big GDrive? ±100TB or more
2. Or create smaller ones and put them in a drivepool of their own?

Sections I'm not so sure about when creating a CloudDrive:
Part 1:
Data duplication: ?
Local Cache Drive: 100GB Same SSD cache drive part of the local drivepool (Q)
Sector Size: ?
Storage chunk size: ?
Chunk cache size: ?
File System: ?
Cluster Size: ?

Part 2:
Current internet speed: Fibre, 200Mbps down / 100Mbps up (may change in a month or two)
I/O Manager
Download Threads: ?
Upload Threads: ?
Download Throttling: ?
Upload Throttling: ?
Upload threshold: ?
Minimum download size: ?

Prefetcher
Enabled / Disabled: ?
Prefetch Trigger: ?
Prefetch forward: ?
Prefetch time window: ?

I take it that once this is set up, the next step will be to create a hybrid pool (local pool + cloud pool) and set up folder duplication on the cloud pool, using the Drive Usage Limiter... Is that correct?

*BTW, is it possible to add local and cloud path directories to a Plex library, with the same data?

Any advice would be greatly appreciated.

Thanks


10 answers to this question

2 hours ago, ryan74 said:

1. Do I create 1 big GDrive? +-100TB or more
2. Create smaller ones and put them a drivepool of their own?

Either of these options is workable, depending on your needs. It's really up to you.

 

2 hours ago, ryan74 said:

Section I'm not so sure on when creating a CloudDrive:

You're probably just overthinking this. Just use whatever settings you need to get a drive of the size you require that can serve the data you store. You'll want a cluster size that can accommodate the maximum size you'd like for your volume. The larger the files you store, the more efficient a larger chunk size will be: if you have a bunch of files larger than 20MB, I'd suggest just using 20MB chunks; if most of your files are *smaller*, then smaller chunks will be more efficient. The larger the chunks, the larger the maximum size of the CloudDrive drive as well. A drive with 20MB chunks can expand up to a maximum of 1PB.

You'll just need to play with the prefetcher settings to find something that works for your specific use case, hardware, and network. Those settings are different for everyone.
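On the cluster-size point specifically: NTFS supports at most about 2^32 clusters per volume, so the cluster size you format with caps how large the volume can ever grow. A quick back-of-the-envelope sketch (plain Python, nothing StableBit-specific):

```python
# NTFS addresses at most 2**32 - 1 clusters per volume, so the cluster
# size chosen at format time caps the maximum volume size.
MAX_CLUSTERS = 2**32 - 1

def max_volume_tb(cluster_size_kb: int) -> float:
    """Approximate NTFS volume-size ceiling in TB for a given cluster size in KB."""
    return MAX_CLUSTERS * cluster_size_kb * 1024 / 1024**4

for kb in (4, 16, 64):
    print(f"{kb:>2} KB clusters -> ~{max_volume_tb(kb):.0f} TB max volume")
# 4 KB -> ~16 TB, 16 KB -> ~64 TB, 64 KB -> ~256 TB
```

So if you want a 100TB volume, 4KB clusters won't cut it; you'd need 32KB or 64KB clusters.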

2 hours ago, ryan74 said:

I take it that once this setup and the next step will be to create a hybrid pool with localpool + cloudpool and setup folder duplication on the cloudpool, using Drive Usage limiter... Is that correct?

You will want a nested pool setup with your CloudDrive/CloudDrive Pool (whichever you choose to create) in a master pool with your existing DrivePool. You can then set any balancing settings you like between those two in the master pool. There are a lot of ways to handle the specific balancing configuration from there, depending on what, exactly, you want to accomplish. But it sounds to me like you have the basic concept right, yes.

 

2 hours ago, ryan74 said:

*BTW is it possible to add local and cloud path directories to a Plex library, with the same data?

You won't have to. If you use DrivePool and nest a few pools, as you're planning here, you'll still have one mount point for the master pool to point your applications to. Everything else will be transparent to the OS and your applications.

That is: you will automatically be accessing both the cloud and local duplicates simultaneously, and DrivePool will pull data from whichever it can access when you request the data (using the hierarchy coded into the service, which is smart enough to prioritize faster storage).


Thank you for your reply. @srcrist

Yeah, perhaps I am overthinking it; I just wanted to make sure of some stuff before I go ahead.

I will play around with some configs and settings to see what works best for me.

Thanks.

4 hours ago, ryan74 said:

Thank you for your reply. @srcrist

Yeah, sure thing. The only other thing I would point out is that chkdsk has a hard limit of 60TB per volume, so you'll probably not want to exceed that limit. A single CloudDrive drive can, however, be partitioned into multiple volumes if you need more space, and those volumes can be recombined using DrivePool to create a unified filesystem that can still be error-checked and repaired by chkdsk.
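To put numbers on that, here is a trivial sketch of the arithmetic (the 60TB figure comes from the chkdsk limit above; the helper function is just illustrative):

```python
import math

CHKDSK_LIMIT_TB = 60  # practical per-volume ceiling for chkdsk, per the post above

def volumes_needed(total_tb: float, limit_tb: float = CHKDSK_LIMIT_TB) -> int:
    """Minimum number of volumes so that each stays at or under the limit."""
    return math.ceil(total_tb / limit_tb)

print(volumes_needed(100))  # 2 (e.g. two 50 TB volumes)
print(volumes_needed(256))  # 5
```

Each of those volumes then goes into the CloudDrive-side pool, and chkdsk can still check them individually.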


@srcrist

I am a little bit confused.

Not sure if I set it up the correct way.

So I created 1 CloudDrive, with duplication and encryption, then created smaller volumes on it using Disk Manager.

I put these smaller volumes into their own CloudDrivePool (Z:).

Then I created a new MainDrivePool (P:) that consists of CloudDrivePool (Z:) and LocalDrivePool (Q:).

I selected Folder Duplication on MainDrivePool (P:) and chose the folders I want duplicated.

Which other settings / balancers do I need to use to ensure that the data in those folders ends up on both LocalDrivePool (Q:) and CloudDrivePool (Z:)?

NB: On CloudDrivePool (Z:) the only balancer I have active is Disk Space Equalizer, equalizing by disk space remaining, for both duplicated and un-duplicated files.

Thanks


I'm not 100% sure what the issue is, as I think it could be multiple things based on this information.

One common mistake is to not enable duplication on both pools. Both pools need to be allowed to accept duplicated data, while only the local pool should be allowed to accept unduplicated data. If only one pool is allowed to contain duplicates, there will be no pool to duplicate to. So maybe check that setting in your balancers.

Balancer logic can get pretty complicated, so I may not be able to be as helpful as I'd like with respect to your specific aims and configuration. But it should be relatively simple. You need two pools (local and cloud), both configured to accept duplicated data, the local pool configured to accept your new (unduplicated) data, and 2x duplication enabled on the master pool in order to duplicate between them.
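That rule of thumb can be written down as a checklist. To be clear, this is only a conceptual sketch: the dict layout and pool names are made up for illustration, not DrivePool's API (DrivePool exposes these as checkboxes in the balancer settings):

```python
# Hypothetical model of the per-pool balancer settings described above.
pools = {
    "LocalPool (Q:)": {"duplicated": True, "unduplicated": True},
    "CloudPool (Z:)": {"duplicated": True, "unduplicated": False},
}

def can_duplicate(pools: dict) -> bool:
    """2x duplication needs at least two sub-pools that accept duplicated data."""
    return sum(p["duplicated"] for p in pools.values()) >= 2

def can_write_new(pools: dict) -> bool:
    """New (unduplicated) files need at least one sub-pool that accepts them."""
    return any(p["unduplicated"] for p in pools.values())

print(can_duplicate(pools), can_write_new(pools))  # True True
```

If only one pool were allowed to hold duplicates, `can_duplicate` would be False, which is exactly the "no pool to duplicate to" failure mode described above.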

3 hours ago, ryan74 said:

I selected Folder Duplication on MainDrivePool (P:) and choose the Folders I want duplicated.

Now that I read this again: make sure that your duplication is enabled on the master pool, not the local subpool. Duplication enabled on THAT pool will only duplicate to drives contained within that pool.


@srcrist

Thank you,

It looks like it's working; the folders that I selected are being duplicated to the CloudDrivePool now.

But I see the notification: I/O Throttled, CloudDrive cache is low on disk space (using a 100GB cache).

Do you know if it's possible to install programs on a local drivepool? Such as Steam, Epic Games, UPlay, GOG, EA Origin, etc...


@srcrist

So my data is being duplicated to the CloudDrive, but it is writing very slowly, ~500KB/s. Is there a way to speed this up?

My cache drive is a 100GB SSD and it's full.

On the CloudDrive GUI it shows uploading data (x1) at 30-70mb/s, but on the DrivePool GUI it shows writes at ~500KB/s?

4 hours ago, ryan74 said:

My cache drive is 100GB SSD and it's full

Writes to your CloudDrive drive will be throttled when there is less than (I believe) 5GB remaining on the cache drive. This is normal and intended functionality. Just get used to throttled writes until all of the data is duplicated. Your local write speed is obviously faster than your upload. Completely expected.

You can install whatever you want to a pool.
