DrivePool duplicating only to CloudDrive


renxwar

Question

Hello, I've created a CloudDrive connected to Google Drive, then added the created disk to the pool. I would like to duplicate the existing data to just the CloudDrive (essentially creating a 1:1 physical-to-cloud copy). I enabled Duplication, then under "Balancing..." and "Drive Usage Limiter", I deselected Duplicated on all physical drives (leaving Unduplicated checked), and on the Google Drive I checked Duplicated and unchecked Unduplicated.

 

When I began duplicating, it started spreading the data evenly across all my drives, not just the CloudDrive. Is there a setting I missed to restrict duplication to just the CloudDrive?
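For illustration only, here is a minimal Python sketch (not DrivePool's actual logic; the drive names are made up) of the placement rule those Drive Usage Limiter checkboxes are meant to express: duplicated copies should only land on drives that still have Duplicated checked.

```python
# Hypothetical model of the Drive Usage Limiter intent described above.
# Drive names and flags are assumptions, not read from DrivePool.
drives = {
    "Physical1":   {"duplicated": False, "unduplicated": True},
    "Physical2":   {"duplicated": False, "unduplicated": True},
    "GoogleDrive": {"duplicated": True,  "unduplicated": False},
}

def allowed_targets(copy_kind: str) -> list[str]:
    """Drives permitted to hold a file of the given kind ('duplicated' or 'unduplicated')."""
    return [name for name, flags in drives.items() if flags[copy_kind]]

print(allowed_targets("duplicated"))    # expected: ['GoogleDrive']
print(allowed_targets("unduplicated"))  # expected: ['Physical1', 'Physical2']
```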


Recommended Posts


I'm a bit hesitant on this step:

5. Move all my Files from I: to I:\PoolPart.XXXYYY

Will it start a full cut-and-paste operation, or transparently just move the index since it's on the same DrivePool volume? That pool has 10 drives with 18TB of duplicated data on it at the moment. When that's done I was thinking of moving drive letters around, so my services, including Plex, will be none the wiser.

Also, just to be clear: if a drive goes bad, how will the master pool react to degradation of the local part in that pool until the missing or damaged drive is swapped? Will the master pool go into read-only mode and mark the local pool as a damaged drive until the drive in that pool has been replaced? I'm guessing and hoping all the details of such events have been tested and thought through.

And how will the CloudDrive cache behave when it suddenly needs to duplicate massive amounts of data? Will it rotate within the 1 GB default I've set, or consume the entire drive? (I'm using C: as cache, an SSD with 150GB available, which I do not want to fill up and crash the system.) Do I have to use a dedicated drive as cache for this, or set it to fixed so it doesn't run wild?

Thanks


On 9/2/2018 at 12:59 PM, Thronic said:

I'm a bit hesitant on this step:

5. Move all my Files from I: to I:\PoolPart.XXXYYY

Will it start a full cut-and-paste operation, or transparently just move the index since it's on the same DrivePool volume? That pool has 10 drives with 18TB of duplicated data on it at the moment. When that's done I was thinking of moving drive letters around, so my services, including Plex, will be none the wiser.

It is identical to a normal cut-and-paste on the same drive for Windows, i.e. it will happen almost immediately. However, DrivePool won't be aware of the new files in the PoolPart.xxxxx folder until you tell it to re-measure.
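To illustrate that same-volume point, here is a short sketch (the paths are assumptions, with the PoolPart suffix left as a placeholder): moving a folder into the hidden PoolPart directory on the same volume is just a rename of the directory entry, so it completes almost instantly no matter how much data it holds.

```python
# Sketch only: a same-volume move into the hidden PoolPart folder is a rename,
# not a data copy. SRC and DST are assumed example paths.
import os
import shutil
import time

SRC = r"I:\Media"                    # assumed existing folder on the pool drive
DST = r"I:\PoolPart.XXXYYY\Media"    # hidden pool folder; suffix is a placeholder

start = time.time()
os.makedirs(os.path.dirname(DST), exist_ok=True)
shutil.move(SRC, DST)                # same volume: Windows renames the directory entry
print(f"Moved in {time.time() - start:.2f}s")   # near-instant even for terabytes

# Remember to tell DrivePool to re-measure afterwards so it sees the new files.
```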

As a side note for Plex: only store media files in the Pool (movies, music, etc.). Plex uses hardlinks in its installation folders on C: (%appdata%\Local\Plex Media Server), where it stores metadata, and hardlinks aren't supported in DrivePool.
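If you want to verify that yourself, a quick check like the one below (the pool drive letter is an assumption) attempts to create a hardlink on a given volume and reports whether it succeeded.

```python
# Assumed quick test: try to create a hardlink on a volume and report the result.
import os

def supports_hardlinks(directory: str) -> bool:
    src = os.path.join(directory, "_hardlink_src.tmp")
    dst = os.path.join(directory, "_hardlink_dst.tmp")
    try:
        with open(src, "w") as f:
            f.write("test")
        os.link(src, dst)        # raises OSError if the volume rejects hardlinks
        return True
    except OSError:
        return False
    finally:
        for path in (src, dst):
            if os.path.exists(path):
                os.remove(path)

print(supports_hardlinks("P:\\"))   # assumed DrivePool drive letter
```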

 

 

On 9/2/2018 at 12:59 PM, Thronic said:

Also, just to be clear: if a drive goes bad, how will the master pool react to degradation of the local part in that pool until the missing or damaged drive is swapped? Will the master pool go into read-only mode and mark the local pool as a damaged drive until the drive in that pool has been replaced? I'm guessing and hoping all the details of such events have been tested and thought through.

Editing my reply here to clarify based on better info: when a drive goes missing completely, the Pool it's in will go into a read-only mode, because it can't determine the duplication status across all drives.

 

 

On 9/2/2018 at 12:59 PM, Thronic said:

And how will the CloudDrive cache behave when it suddenly needs to duplicate massive amounts of data? Will it rotate within the 1 GB default I've set, or consume the entire drive? (I'm using C: as cache, an SSD with 150GB available, which I do not want to fill up and crash the system.) Do I have to use a dedicated drive as cache for this, or set it to fixed so it doesn't run wild?

I'm not sure I fully understand your question. What level of duplication do you have set, and is it on the full Pool or just on files/folders? If C: is holding a pool part, it is fully available for use by the pool, up to and including 100% full. You'd have to use the Balancer plugins to prevent drive overfill on that volume to avoid that scenario.

I'm pretty certain that if you have a CloudDrive volume as part of a pool that holds duplicated items (or space available for duplication), DrivePool will try to duplicate (or evacuate) to local drives first. I.e. if you have a pool with 10 drives (one CloudDrive, 9 physical) and have 6x duplication set, you would still have enough local space on the physical drives to handle all of the duplicated copies. None would *need* to sit on the CloudDrive, though they could. It also depends on your use of the Duplication Space Optimizer plugin.
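A rough sketch of that space reasoning (the drive counts and duplication level are the assumed example above, with one copy per drive):

```python
# Assumed example from the paragraph above: 9 physical drives, 1 CloudDrive,
# 6x duplication. Each copy must live on a different drive.
physical_drives = 9
cloud_drives = 1
duplication_level = 6   # total copies of each file

fits_locally = duplication_level <= physical_drives
print(f"Copies needed: {duplication_level}, local drives: {physical_drives}, "
      f"fits without the CloudDrive: {fits_locally}")   # -> True
```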

Complete info on your pooling architecture would be helpful.



I only store media files in the pool.

I know how normal degradation works but was curious if the process or events differ with hierarchical pooling.

The last question was about caching/cache size for CloudDrive. I'm going to detach it and dedicate a 128GB SSD I have available to expandable caching, so I don't have to worry about it filling up or affecting the system drive.

 



I'm not sure how hierarchical pools react to bad underlying physical drives where duplication/evacuation is concerned. It's a new-ish feature that Christopher and Alex know the most about, and it can get complicated depending on your overall architecture (pools within pools, physical drives at different levels, etc.).

Moving the CloudDrive cache to a dedicated SSD sounds like a great idea, and avoids a lot of potential space issues. I'd still look into using the Prevent Drive Overfill plugin in DrivePool, at least as long as you have a part of the Pool residing on C:.
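For reference, the kind of check that plugin performs can be sketched roughly like this (the 90% threshold and the C: path are assumptions, not the plugin's actual defaults):

```python
# Illustrative only: decide whether a volume is "too full" and should be
# rebalanced away from. Threshold and drive path are assumptions.
import shutil

FILL_LIMIT = 0.90   # assumed: act once a drive passes 90% used

def needs_rebalancing(drive: str) -> bool:
    usage = shutil.disk_usage(drive)
    return usage.used / usage.total >= FILL_LIMIT

print(needs_rebalancing("C:\\"))   # C: holds a pool part in this setup
```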



Hierarchical pooling works just like normal pooling.

The difference is that it lets you add pools to the pool, and there is some special handling to make sure things work right.

On 9/2/2018 at 11:59 AM, Thronic said:

Will it start a full cut-and-paste operation, or transparently just move the index since it's on the same DrivePool volume? That pool has 10 drives with 18TB of duplicated data on it at the moment. When that's done I was thinking of moving drive letters around, so my services, including Plex, will be none the wiser.

This happens on the same volume, so it should be a "Smart Move", i.e. it updates the file's location rather than actually moving the data around.

And moving the letters around is perfectly fine. 

On 9/2/2018 at 11:59 AM, Thronic said:

Also, just to be clear: if a drive goes bad, how will the master pool react to degradation of the local part in that pool until the missing or damaged drive is swapped? Will the master pool go into read-only mode and mark the local pool as a damaged drive until the drive in that pool has been replaced? I'm guessing and hoping all the details of such events have been tested and thought through.

That depends on the exact issue.

If the disk goes missing, then both of the pools will go into a read-only state until the missing disk is "resolved" (removed or reconnected).

The software will also notify you on that system, and send you an email if you've enabled that. 

If the drive is having other issues, and StableBit Scanner detects that, then it may clear out the disk in question, and prevent any data from being stored on that disk. 

On 9/2/2018 at 11:59 AM, Thronic said:

And how will the CloudDrive cache behave when it suddenly needs to duplicate massive amounts of data? Will it rotate within the 1 GB default I've set, or consume the entire drive? (I'm using C: as cache, an SSD with 150GB available, which I do not want to fill up and crash the system.) Do I have to use a dedicated drive as cache for this, or set it to fixed so it doesn't run wild?

That depends on the cache type that you've set.  

If it's set to "Expandable", it will try to use as much of the cache disk in question as it can. Once you hit 50GB free on it, it will start to exponentially slow down reads, to prevent the disk from being completely filled.

You can also set the cache type to "Fixed" or "Proportional" to set a hard limit on how much space the cache can use.
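Purely as an illustration of the expandable behaviour (the real curve and threshold are CloudDrive internals, so the numbers here are assumptions), you can think of it as a speed multiplier that shrinks rapidly as free space on the cache disk approaches the 50GB floor:

```python
# Illustrative model only: transfer speed multiplier that decays exponentially
# as cache-disk free space approaches an assumed 50 GB floor.
def throttle_factor(free_gb: float, floor_gb: float = 50.0, ramp_gb: float = 25.0) -> float:
    """Return a 0..1 multiplier on transfer speed (assumed shape, not CloudDrive's)."""
    if free_gb <= floor_gb:
        return 0.0                                  # at the floor: effectively halt
    headroom = free_gb - floor_gb
    return 1.0 - 0.5 ** (headroom / ramp_gb)        # nears 1.0 with plenty of free space

for free in (200, 100, 75, 60, 51):
    print(f"{free:>4} GB free -> {throttle_factor(free):.2f}x speed")
```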

