Posts posted by srcrist

  1. Yes. Once DrivePool sees the data on the drives, it will remount the pool; all of the pool information is stored on the drives themselves. If you're changing motherboards, don't forget to deactivate the license before you switch so you can reactivate it on the new hardware. Otherwise you'll have to submit a support ticket to have the license deactivated before it can be used on the new machine. 

  2. 2 hours ago, Bigsease30 said:

    Hello again srcrist,

    Thanks again for all the assistance that you have provided thus far. This community alone is one of the biggest reasons I keep recommending StableBit to friends and colleagues.

    My performance settings are as follows.

    Change your minimum download size to your chunk size (probably 20MB, if you used the largest chunk possible). If you're just using this for backup, you really only need to engage the prefetcher when you're making large copies off the drive, so set the trigger to 10MB in 10 seconds and have it grab maybe 100MB at a time. You can probably even disable it, if you want to. Keeping it enabled will mostly just smooth out network hiccups and help copies off the drive move more smoothly. Other than that, you look good. 

    Glad to help. Hope everything works out well for you. 

  3. That's OK. That just means you're out of space on your cache drive. It's normal when copying tons of data, and all it will do is slow down writes to the CloudDrive. You can only upload 750GB of data per day to Google anyway. (Which reminds me: throttle your upload to 70-75 Mbps, or Google will lock you out of the upload API for 24 hours.) The expandable cache will grow to use your entire drive for writes only; the read cache will still be limited to the amount that you set. If it's important for it not to take up the entire drive, you'll have to use a fixed cache, with the caveat that it will throttle writes to the drive as soon as the cache space fills up, instead of once the drive is out of room. So if you choose a fixed cache, make it fairly generous. 
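
    For reference, the 70-75 Mbps figure falls straight out of the 750GB/day quota. A quick back-of-the-envelope check in Python (the arithmetic, not the tool, is the point here):

        # Google's upload quota is 750 GB per day.
        QUOTA_GB_PER_DAY = 750
        SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

        # Convert the daily quota to a sustained rate in megabits per second.
        quota_megabits = QUOTA_GB_PER_DAY * 1000 * 8
        max_sustained_mbps = quota_megabits / SECONDS_PER_DAY

        print(f"{max_sustained_mbps:.1f} Mbps")  # ~69.4 Mbps

    Roughly 69 Mbps is the rate that exactly exhausts the quota over a full 24 hours, so a cap in the low 70s keeps you under it in practice, since uploads are rarely continuous.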

    What are your CloudDrive performance settings? We might want to make some tweaks there to speed up the overall process. 

  4. Just now, Bigsease30 said:

    Perfect! When I get home tonight I will implement the change and report back. Thanks again.

    Sure thing. Two other things to note:

    1) You'll also need to MOVE all of the existing data into the PoolPart folder created on Z: when you made the new pool (there's a scripted sketch of this below). Otherwise it will not show up in the new pool for duplication to the CloudDrive. 

    2) The old pool no longer needs a drive letter. Reassign the Z: label to the NEW pool now, and applications will access the data through it. After you've moved the data into the new PoolPart folder, of course. 
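
    If you'd rather script that move than drag folders around in Explorer, here's a minimal sketch. The PoolPart folder name includes a GUID that DrivePool generates, so the paths below are placeholders you'd need to replace with your own:

        import shutil
        from pathlib import Path

        # Placeholder paths: substitute your real folder and PoolPart name.
        src = Path(r"Z:\Media")
        dst_root = Path(r"Z:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")

        # DrivePool creates the hidden PoolPart folder when the drive is
        # added to the pool; don't create it yourself.
        if not dst_root.is_dir():
            raise SystemExit("PoolPart folder not found; add the drive to the pool first")

        # A move within the same volume is just a rename of the file-table
        # entry, which is why even terabytes relocate in seconds.
        shutil.move(str(src), str(dst_root / src.name))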

  5. BOTH the pool and the CloudDrive volume have to be allowed to accept *duplicated* data in the Drive Usage Limiter. If you set it so that one can only hold duplicated data and the other can only hold unduplicated data, the logic won't allow it to duplicate content from one to the other. Make sense? So your pool should be able to hold BOTH, while your CloudDrive should only accept duplicated data. Then it should replicate correctly. All NEW data will go to your Z: pool, and the duplication settings will slowly copy that data to the CloudDrive over time. 

    If duplicated data is disabled on a drive, it can store neither the *original* copy nor the second copy of a duplicated file. 
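
    If it helps, here's a toy model of that placement rule. This is purely illustrative, not DrivePool's actual code: with 2x duplication, *both* copies of a file count as duplicated data, so every copy needs a target that accepts duplicated data.

        # Toy model (illustrative only): which drives can hold duplicated data?
        drives = {
            "Z: pool":    {"duplicated": True, "unduplicated": True},   # holds both
            "CloudDrive": {"duplicated": True, "unduplicated": False},  # duplicates only
        }

        def can_duplicate_2x(drives):
            # 2x duplication needs two distinct homes that accept duplicated data.
            targets = [name for name, rules in drives.items() if rules["duplicated"]]
            return len(targets) >= 2

        print(can_duplicate_2x(drives))  # True with the settings above

        # Set the pool to unduplicated-only and duplication becomes impossible:
        drives["Z: pool"]["duplicated"] = False
        print(can_duplicate_2x(drives))  # False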

  6. There is a possibility that your ISP simply has poor peering with Google, or that there is a problem with your network. Upload speed, in particular, is almost entirely determined by chunk size and upload threads. So if you've got 5 threads or so and a 20 MB chunk size, you should easily be able to hit several hundred Mbps. 
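
    As a rough mental model (my own simplification, not an official formula): each upload thread pushes one chunk at a time, so sustained throughput is roughly threads x chunk size / per-chunk transfer time.

        def estimated_upload_mbps(threads, chunk_mb, secs_per_chunk):
            """Rough model: each thread uploads one chunk at a time."""
            mb_per_sec = threads * chunk_mb / secs_per_chunk
            return mb_per_sec * 8  # MB/s -> Mbps

        # If each 20 MB chunk takes ~2 seconds to upload (an illustrative
        # guess), 5 threads sustain roughly 400 Mbps.
        print(estimated_upload_mbps(threads=5, chunk_mb=20, secs_per_chunk=2.0))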

  7. That's actually a really interesting question, and one I don't know the answer to. I have a pretty strong aversion to ReFS (I had a large volume go RAW during a Windows update once), so I haven't really used it enough to tell you. If I had to guess: yes. Once all of the drives are NTFS, I would imagine DrivePool would report the pool itself as NTFS. But, honestly, even if it *didn't*, you could always move the data out of the poolpart folders on the drives, recreate the pool, and move the data back--and then it would be NTFS for sure. So there isn't really a problem with continuing down this path. That process would take only minutes, since Windows simply marks the data as moved without actually initiating a transfer. Moving data in and out of a poolpart folder on the same drive is a matter of seconds, generally speaking. 

    One thing I want to make clear, though, is that we would still need a nested pool setup to accomplish the end goal. This is because the point of the nested pool isn't to provide some sort of compatibility with ReFS, it's to enable us to treat the entire pool of disks as a single volume to duplicate to another location. There simply isn't any way for us to add a CloudDrive volume to the same pool with all of the local drives, and tell DrivePool to place an additional copy of *all* of the data on the CloudDrive unless we treat the pool as a single volume to mirror. 

    The good news is that hierarchical pooling is a built-in feature of DrivePool. When you nest the pools, you'll see the UI adjust, reflecting operational changes that ensure performance isn't diminished. We aren't needlessly complicating the system or making it do something it isn't *supposed* to do; this is just how the duplication logic of the pool needs to work in order to accomplish your goals. 

  8. OK. So, here is the process as far as I can tell:

    1) Create a new pool, and add your Z: and D: to it.
    2) Disable duplication on your Z: pool entirely.
    3) Use the Drive Usage Limiter on the NEW pool to set the CloudDrive volume to ONLY accept duplicated data.
    4) Move ALL existing content on Z: into the new poolpart folder on Z:, which shouldn't take long since DrivePool is smart enough to do it on the same drive.
    5) Once all of the data is accessible from the new pool, simply enable 2x duplication on the new pool, and it should slowly migrate all of the duplicates to D:.

    Does that make sense?  

    Once you're done with all of that, you can use Z: for the new pool and everything should be identical as far as the other applications on your PC are concerned. 

  9. Well the D: is not a pool. It's just a CloudDrive volume. You'll have two pools, which will be nested. 

    Though, thinking about it, we'll also need to move all of your existing data into the new poolpart folder that will be created on the old pool, in order to make your existing data accessible to the NEW pool. So that may take a minute. 

    Let me do this in a test with a smaller pool to make sure nothing crazy happens. Give me a minute. 

  10. We don't want to remove them at all. We're creating a hierarchical pooling scheme: we're using your old pool to pool all of your local drives together, and then a new pool to duplicate the data between that pool and your CloudDrive. So leave your old pool as it is, but disable the 2x duplication on that pool. Then add your D: and your Z: to the new pool, and enable 2x duplication on that. 

  11. I think what we actually need to do is create a NEW pool using your old pool and the CloudDrive volume. Disable duplication on your old pool entirely, and then enable 2x duplication in the new pool. That will duplicate the entire pool to the CloudDrive volume. Give that a test and see how it works. Though this doesn't solve the problem of your CloudDrive volume not showing up in DrivePool. I'm not sure why that would be. 

    It might be because the pool is ReFS. Is the CloudDrive NTFS? I'm not sure if DrivePool can merge them. 

    If you create a NEW pool, can you add a CloudDrive volume to that, in any case?

  12. The CloudDrive volume will show up in DrivePool as a drive to be added to the pool once you've created it. Are you saying you've created a volume and it's still not showing up?

    I think my previous idea was wrong regardless. Trying to play with some options now. 

  13. You can add the CloudDrive to your existing pool. Once you create the CloudDrive volume, it will show up in DrivePool as an option to add to the pool just like any other drive. Once you add the drive, you'll want to use the balancing options in DrivePool to adjust the settings you'd like to use. For example, you can adjust whether you want duplicated and unduplicated data on the CloudDrive volume, or if you just want it to hold duplicates exclusively. All of these are settings you can adjust to make it work as you wish. Once the drive is added and you've set your preferences, DrivePool will begin to adjust the data on your drives accordingly on the next balancing pass. 

    DrivePool and its balancers provide a large number of settings that you can adjust as you need. You'll really just need to open up the balancer configurations and look at them in order to decide what suits your use case. Most likely you'll want to disable the placement of unduplicated data on the CloudDrive volume if you're only using it to back up a physical pool. You'll do that with a setting on the Drive Usage Limiter balancer, and then move that balancer above everything else so that it has top priority. This means that all NEW data will be placed on your physical pool and then copied to the CloudDrive volume as a duplicate slowly, over time. 

  14. That could be any number of problems. The bottom line is that this error means there are connectivity issues between your machine and Google's servers. It's generally temporary, and those errors will just happen from time to time because the internet isn't perfect. It's only a problem if they're happening on a regular basis. 

    If it keeps happening, you'd have to troubleshoot your connection. 
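
    If you want a quick, repeatable check of the path between your machine and Google, something like this works. The endpoint is just a stand-in for Google's API servers, and the retry count and timeout are arbitrary:

        import socket
        import time

        HOST, PORT = "www.googleapis.com", 443  # stand-in for Google's API servers

        for attempt in range(1, 6):
            start = time.monotonic()
            try:
                # A TCP connect to port 443 verifies basic reachability.
                with socket.create_connection((HOST, PORT), timeout=5):
                    print(f"attempt {attempt}: connected in {time.monotonic() - start:.3f}s")
            except OSError as err:
                print(f"attempt {attempt}: FAILED ({err})")
            time.sleep(1)

    Consistently slow or failed connects point at your network or your ISP's routing to Google, rather than at CloudDrive itself.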

  15. I think those are fair concerns. One thing that Alex and Christopher have said before is that 1) Covecube isn't in any danger of shutting down any time soon, and 2) if it ever did, they would release a tool to convert the chunks on your cloud storage back to native files. So as long as you had access to retrieve the individual chunks from your storage, you'd be able to convert it. But, ultimately, there aren't any guarantees in life. It's just a risk we take by relying on cloud storage solutions. 

  16. I'm not sure that I understand your third condition, but effectively all of this can be accomplished by simply adjusting the balancer settings so that no new data is added to the CloudDrive volumes. You can, for example, use the file placement rules under your balancer settings to assign folders to one volume or another within the pool. As long as you check the "Never allow files to be placed on other disks" box, nothing you add to any folder that is NOT assigned to the CloudDrive volume will ever be placed on that volume by a balancing pass. If you want to be a little more creative, you can make a pool out of the CloudDrive volumes, simply disable balancing on that pool entirely, and then add that pool to your local drive pool and adjust the file placement settings that way, as well. 

    Note that once you add the CloudDrive volumes to a pool, you'll need to move the existing data on those volumes into the hidden poolpart folder before it will actually be accessible within the pool. 

    Does that help?
