Everything posted by srcrist

  1. Just create a new pool using the new drives, then simply move the content from the old poolpart folders to the new ones on the new drives. That's probably the easiest. Disable duplication on the old pool first, so you don't have duplicates for the copy.
  2. I'm pretty sure that you can just move the poolpart folders from each drive to new drives without issue. Simply disable the DrivePool service, move the folder, and reactivate the service.
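
    As a rough sketch of that process (the service name here is an assumption--check services.msc for the exact name on your system--and the drive letters are placeholders):

    ```python
    # Rough sketch of the old-drive -> new-drive poolpart move. Run from an
    # elevated prompt; "DrivePoolService" is an assumed service name, and
    # E:/F: are hypothetical old/new pool members.
    import glob
    import shutil
    import subprocess

    OLD_DRIVE = "E:\\"
    NEW_DRIVE = "F:\\"

    # Stop the service so nothing touches the pool mid-move.
    subprocess.run(["net", "stop", "DrivePoolService"], check=True)

    # Each pool member holds one hidden PoolPart.<GUID> folder at its root.
    old_part = glob.glob(OLD_DRIVE + "PoolPart.*")[0]
    new_part = glob.glob(NEW_DRIVE + "PoolPart.*")[0]

    # Move the contents of the old poolpart into the new one.
    for entry in glob.glob(old_part + "\\*"):
        shutil.move(entry, new_part)

    # Bring the service back up; DrivePool should re-measure the pool.
    subprocess.run(["net", "start", "DrivePoolService"], check=True)
    ```
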
  3. Yes. Once DrivePool sees the data on the drives it will remount the pool. All of the pool information is stored on the drives themselves. If you're changing MoBos, don't forget to deactivate the license before you switch so you can reactivate it on the new hardware. Otherwise you'll have to submit a ticket to have your license deactivated to make it usable on the new hardware.
  4. Change your minimum download to your chunk size (probably 20 MB, if you used the largest chunk possible). If you're just using this for backup, you really only need to engage the prefetcher when you're making large copies off the drive, so set it to 10 MB in 10 secs and have it grab maybe 100 MB at a time. You can probably even disable it, if you want to. Keeping it enabled will basically only help smooth out network hiccups and help copies move more smoothly when you're copying data off of the drive. Other than that, you look good. Glad to help. Hope everything works out well for you.
  5. That's OK. That just means you're out of space on your cache drive. It's normal when it's copying tons of data, and all it will do is slow down writes to the CloudDrive. You can only upload 750 GB/day of data to Google anyway. (Which reminds me, throttle your upload to 70-75 Mbps or you'll get banned from the upload API for 24 hours.) The expandable cache will grow to use your entire drive for writes only. Read cache will still be limited to the amount that you set. If it's important for it not to take up the entire drive, you'll have to use a fixed cache--with the caveat that it will throttle writes once the cache fills.
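
    The 70-75 Mbps figure falls straight out of the 750 GB/day cap. A quick back-of-the-envelope check, using decimal gigabytes:

    ```python
    # Why ~70 Mbps: Google's 750 GB/day upload cap, spread over 24 hours.
    quota_gb_per_day = 750
    seconds_per_day = 24 * 60 * 60             # 86,400 s
    bits_per_day = quota_gb_per_day * 1e9 * 8  # decimal GB -> bits
    mbps = bits_per_day / seconds_per_day / 1e6
    print(f"{mbps:.1f} Mbps")  # ~69.4 Mbps, so ~70 keeps you just under the cap
    ```
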
  6. Sure thing. Two other things to note: 1) You'll also need to MOVE all of the existing data to the PoolPart folder created on Z: when you made the new pool. Otherwise it will not show up in the new pool for duplication to CloudDrive. 2) The old pool no longer needs a drive letter. Move the Z: letter to the NEW pool now, and applications will access the data via that. After you've moved the data to the new PoolPart folder, of course.
  7. BOTH the pool and the CloudDrive volume have to be allowed to accept *duplicated* data in the Drive Usage Limiter. If you set it so one can only hold duplicated data and one can only hold unduplicated data, the logic won't allow it to duplicate the content from one to the other. Make sense? So your pool should be able to hold BOTH, while your CloudDrive should only be able to accept duplicated data. Then it should replicate correctly. All NEW data will go to your Z: pool, and the duplication settings will slowly copy that data to the CloudDrive over time. If duplicated data is disabled for either of them, the duplicates have nowhere to go.
  8. There is a possibility that your ISP simply has poor peering with Google, or you have a problem with your network. Upload speed, in particular, is almost entirely determined by chunk size and upload threads. So if you've got 5 threads or so and a 20 MB chunk size, you should easily be able to hit several hundred Mbps.
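
    If it helps, here's the rough model behind that claim. The per-chunk time below is a made-up number purely for illustration; real round-trip times depend on your connection and Google's servers:

    ```python
    # Each upload thread pushes one chunk per request, so throughput is
    # roughly threads * chunk_size / time_per_chunk.
    threads = 5
    chunk_mb = 20
    seconds_per_chunk = 2.0  # hypothetical upload + API overhead per chunk
    mbps = threads * chunk_mb * 8 / seconds_per_chunk
    print(f"~{mbps:.0f} Mbps")  # ~400 Mbps with these example numbers
    ```
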
  9. Honestly, I would skip the forum and submit a ticket for that. Missing data is pretty serious. Good luck. I hope Christopher can get you sorted. Submit the support ticket here: https://stablebit.com/Contact
  10. That's actually a really interesting question, and one I don't know the answer to. I have a pretty strong aversion to ReFS (had a large volume go RAW during a Windows update once), so I haven't really used it enough to tell you. If I had to guess, yes. Once all of the drives are NTFS, I would imagine DrivePool would report the pool itself as NTFS. But, honestly, even if it *didn't*, you could always move the data out of poolpart folders on the drives and simply recreate the pool and move the data back--and then it would be NTFS for sure. So there isn't really a problem with continuing this path.
  11. Sure thing. Good luck! Let me know how it turns out. I'm sure more people will want to try this over time.
  12. No. You'll have to get that working in order to accomplish this. Maybe submit a ticket to support. You probably won't hear back until Monday, though.
  13. It may actually be faster to move the content into the new hidden poolpart folder nested inside each individual drive's OLD poolpart folder. If that makes sense.
  14. Once that's done, you should be able to simply remove the drive letters from both your old pool and the CloudDrive. Neither of them needs one.
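
    If you'd rather script that than click through Disk Management, the stock mountvol tool can drop a letter (run it elevated; the letters below are placeholders):

    ```python
    # Minimal sketch: remove drive letters with mountvol (run elevated).
    # Z: stands for the old pool here and D: for the CloudDrive volume.
    import subprocess

    for letter in ("Z:\\", "D:\\"):
        subprocess.run(["mountvol", letter, "/D"], check=True)
    ```
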
  15. OK. So, here is the process as far as I can tell: Create a new pool. Add your Z: and D: to the new pool. Disable duplication on your Z: pool entirely. Use the drive usage limiter on the NEW pool to set the CloudDrive volume to ONLY accept duplicated data. Move ALL existing content on Z: to the new poolpart folder on Z:, which shouldn't take long since DrivePool is smart enough to do it on the same drive. Once all of the data is accessible from the new pool, simply enable 2x duplication on the new pool, and it should slowly migrate all of the duplicates to D:. Does that make sense?
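
    For the "move ALL existing content" step, something like this sketch is what I mean. The PoolPart name is a placeholder for whichever folder DrivePool just created on Z:, and you'd want the DrivePool service stopped while you do it:

    ```python
    # Rough sketch: move everything at the root of Z: into the NEW hidden
    # PoolPart folder that appeared when Z: joined the new pool.
    import os
    import shutil

    root = "Z:\\"
    new_part = os.path.join(root, "PoolPart.xxxxxxxx")  # placeholder name

    # Skip the poolpart folders themselves and pool housekeeping folders.
    SKIP = ("PoolPart.", ".covefs", "System Volume Information")
    for name in os.listdir(root):
        if name.startswith(SKIP):
            continue
        shutil.move(os.path.join(root, name), new_part)
    ```
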
  16. Well the D: is not a pool. It's just a CloudDrive volume. You'll have two pools, which will be nested. Though, thinking about it, we'll need to also move all your existing data to the new poolpart folder that will be created on the old pool in order to make your existing data accessible to the NEW pool. So that may take a minute. Let me do this in a test with a smaller pool to make sure nothing crazy happens. Give me a minute.
  17. We don't want to remove them at all. We're creating a hierarchical pooling scheme. We are using your old pool to pool all of your local drives together, and then a new pool to duplicate the data between the pool and your CloudDrive. So leave your old pool as it is, but disable the 2x duplication on that pool. Then add your D: and your Z: to this new pool, and enable 2x duplication on that.
  18. It looks like mixed pools ARE possible, but not recommended. So that might be why you're not seeing the option to add the drive. See here: https://community.covecube.com/index.php?/topic/2832-can-i-mix-ntfs-and-refs-drives-in-my-pool-drivepool-version-22-beta/
  19. I think what we actually need to do is create a NEW pool using your old pool and the CloudDrive volume. Disable duplication on your old pool entirely, and then enable 2x duplication in the new pool. That will duplicate the entire pool to the CloudDrive volume. Give that a test and see how it works. Though this doesn't solve the problem of your CloudDrive volume not showing up in DrivePool. I'm not sure why that would be. It might be because the pool is ReFS. Is the CloudDrive NTFS? I'm not sure if DrivePool can merge them. If you create a NEW pool, can you add a CloudDrive volume to it?
  20. The CloudDrive volume will show up in DrivePool as a drive to be added to the pool once you've created it. Are you saying you've created a volume and it's still not showing up? I think my previous idea was wrong regardless. Trying to play with some options now.
  21. EDIT: I think this is wrong, actually. Let me experiment a bit.
  22. You can add the CloudDrive to your existing pool. Once you create the CloudDrive volume, it will show up in DrivePool as an option to add to the pool just like any other drive. Once you add the drive, you'll want to use the balancing options in DrivePool to adjust the settings you'd like to use. For example, you can adjust whether you want duplicated and unduplicated data on the CloudDrive volume, or if you just want it to hold duplicates exclusively. All of those settings can be adjusted to make it work as you wish. Once the drive is added and you've set your preferred balancing settings, you should be all set.
  23. That could be any number of problems. The bottom line is that this error means there are connectivity issues between your machine and Google's servers. It's generally temporary, and those errors will just happen from time to time because the internet isn't perfect. It's only a problem if they're happening on a regular basis. If it keeps happening, you'd have to troubleshoot your connection.
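
    If you do want to rule out your own network, a crude probe like this can tell you whether basic connectivity to Google is flaky (www.googleapis.com is just a representative endpoint):

    ```python
    # Count failed TCP connections to a Google endpoint over ten attempts.
    # Repeated failures suggest a local network or peering problem.
    import socket
    import time

    failures = 0
    for attempt in range(10):
        try:
            with socket.create_connection(("www.googleapis.com", 443), timeout=5):
                pass
        except OSError:
            failures += 1
        time.sleep(1)
    print(f"{failures}/10 connection attempts failed")
    ```
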
  24. Longevity Concerns

    I think those are fine concerns. One thing that Alex and Christopher have said before is that 1) Covecube isn't in any danger of shutting down any time soon, and 2) if it ever did shut down, they would release a tool to convert the chunks on your cloud storage back to native files. So as long as you had access to retrieve the individual chunks from your storage, you'd be able to convert it. But, ultimately, there aren't any guarantees in life. It's just a risk we take by relying on cloud storage solutions.
  25. I'm not sure that I understand your third condition, but effectively all of this can be accomplished by simply adjusting the balancer settings so that no new data is added to the CloudDrive volumes. You can, for example, use the file placement rules under your balancer settings to assign folders to one volume or another within the pool. As long as you check the "Never allow files to be placed on other disks" box, nothing you add to any folder that is NOT assigned to the CloudDrive volume will ever be placed on that volume by a balancing pass.