Covecube Inc.
  1. I know RAID is not backup. As I said in the original post, this is to be my secondary backup per the 3-2-1 rule: I already have everything locally with redundancy, plus another backup with redundancy, but access to the remote backup is a major hassle, and I would prefer an "online solution" as well. CloudDrive is also extremely convenient if I need some of my files at a remote location; I can just detach the drives and attach them somewhere else. I just want redundancy on this backup too, because the CloudDrive I have now has a problem where I get checksum errors whenever I update the software to a newer version. To protect against something like drive corruption on Google's end, I figured having duplication and a lot of smaller volumes would mitigate the risk of this happening again, or at least make a volume much easier and less time-consuming to replace if it does. That is why I was wondering about the best way to set up duplication: CloudDrive's built-in duplication or DrivePool's built-in duplication. Is there a better choice if everything is to be duplicated? If not, I'll just set it in CloudDrive and use DrivePool purely for the pooling. My plan was to have 4 big CloudDrives, each partitioned into multiple volumes, unless using multiple drives instead is a better choice because of I/O. The plan is also to put the cache on a dedicated SSD once this moves to the main computer, and buying some extra SSDs for the caches is not a problem if that is needed. Is there anything to be gained from multiple drives versus multiple partitions, disregarding the need for SSDs etc.? The upload process will also be cached on an SSD, but with a much smaller cache size, as it's just temporary and not going to be used for anything but the upload. The local copies are not going to be deleted, if that was unclear. I also figured I'd just use NetLimiter for the bandwidth limitation instead of the built-in one, as that is much easier.
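The "4 big CloudDrives split into small volumes" plan can be sanity-checked with some quick arithmetic. This is only a sketch: the 256 TB drive size, 15 TB volume size, and x2 duplication factor are taken from the figures discussed in this thread, not from CloudDrive's actual limits.

```python
# Rough sizing sketch for splitting a few large CloudDrives into small volumes.
# All sizes are assumptions from the thread, not CloudDrive limits.

DRIVE_TB = 256      # assumed size of each CloudDrive
VOLUME_TB = 15      # target volume/partition size
NUM_DRIVES = 4
DUPLICATION = 2     # every file stored twice (x2 duplication)

volumes_per_drive = DRIVE_TB // VOLUME_TB      # whole volumes that fit per drive
total_volumes = volumes_per_drive * NUM_DRIVES
raw_tb = total_volumes * VOLUME_TB
usable_tb = raw_tb / DUPLICATION               # capacity left after duplication

print(f"{volumes_per_drive} volumes per drive, {total_volumes} volumes total")
print(f"raw pool: {raw_tb} TB, usable with x2 duplication: {usable_tb:.0f} TB")
```

With those numbers the pool ends up around 68 volumes and roughly 510 TB of duplicated capacity, which is in the same ballpark as the ~70-volume / half-petabyte-usable plan described in the posts below.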
Thank you for answering.
  2. I would suggest a pure LSI HBA card, like the IBM M1015, crossflashed to IT mode. It will then not do anything on its own and will give the software full access to the drives with nothing in between. I am using the LSI 9201-16i as one of my cards since I need 16 ports, but it is basically the same, and it works wonderfully.
  3. I see they have changed it now. It only throttles the upload gradually now; once you reach 20 TB, you get 1 Mbit/s. Good that you're happy with it.
  4. They also say "unlimited storage", but if you read the fine print in Norwegian, they say that anything over 10 TB is what they call abusive, and you will not be allowed to upload more than that. Just a heads up. Other than that, it's great.
  5. Hello, I've been using CloudDrive for some time and have been happy with it. I have uploaded around 50 TB to my CloudDrive so far and it has been working fine. However, I'm about to set up a more serious, permanent secondary backup solution using multiple CloudDrives pooled with DrivePool into one gigantic backup pool with duplication, and I have some questions.

I have some pretty big storage needs, so the number of CloudDrives/partitions needs to be pretty substantial. I have about 120-130 TB right now that I'm going to upload over the coming months/year. However, I don't want huge volumes in case something goes wrong. My current setup, a single huge 50 TB volume, is pretty scary to me, so I want to use several smaller drives. I'm thinking 15 TB is the sweet spot for me: if one of the drives takes a turn for the worse corruption-wise, it won't take ages to recover by uploading/downloading/chkdsk. I also have a problem with this drive now that I can't figure out (I get checksum errors when I update to a newer version, but not on the one I am currently on; that is something I can fiddle with when the time comes to migrate). Since I want duplication, and I want this to be future-proof, I'm thinking the total pool size in DrivePool should be around 1 petabyte. I will already fill up ~250 TB with what I have now, and it's ever expanding, so I'm just doubling the size from the start. Everything is also going to be encrypted.

So the questions begin:

Should I create many single 15 TB drives in CloudDrive (mounted to folders, of course), about 70 of them, or 4 big 256 TB drives divided into 15 TB partitions (mounted in folders), and pool them with DrivePool? What are the pros and cons? If there are no cons, 4 huge drives with partitions is obviously the preferred solution (unlocking drives etc. is a mess).

Both CloudDrive and DrivePool have duplication. Where should I set it? Should I set it on the CloudDrive disks and let DrivePool just do the pooling, or should I leave it off in CloudDrive and instead set pool duplication in DrivePool?

When I get this up and running and start to upload, since this is one Google account, how can I limit the total bandwidth across all disks to 75 Mbps for the initial massive upload? Will there be a problem with API calls when the number of drives/partitions gets so high?

Since the internet speeds where I live are pretty garbage, I'm planning on doing most of the initial upload at another location with my laptop and external drives. This laptop obviously has limited cache space (around 50-70 GB total). Will that be enough to handle the initial upload if I copy large quantities of data at a time? Will CloudDrive just limit the copy to the 75 Mbps speed when it hits the cap of used-up cache space on the drive? Also, will there be a problem when I migrate back to the main computer? I was planning on buying another DrivePool licence just for this initial upload; does that matter when migrating from one licence to another?

Also, since this is a pretty big project that I will have to sink a good amount of time into, any tips on how to set this up as smoothly as possible would be greatly appreciated. Many thanks in advance for your responses.
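For planning the "initial massive upload", a back-of-the-envelope estimate of the transfer time at the 75 Mbps cap may help. This is only a sketch: it assumes decimal units (1 TB = 10^12 bytes), a sustained uncontended 75 Mbit/s, and no protocol overhead, so real-world times will be somewhat longer.

```python
# Back-of-the-envelope: how long does ~130 TB take at a sustained 75 Mbit/s?
# Assumes decimal units (1 TB = 10**12 bytes) and zero overhead.

DATA_TB = 130
UPLINK_MBPS = 75                      # megabits per second

bits = DATA_TB * 10**12 * 8           # total bits to move
seconds = bits / (UPLINK_MBPS * 10**6)
days = seconds / 86_400

print(f"~{days:.0f} days for {DATA_TB} TB at {UPLINK_MBPS} Mbit/s")
print(f"~{2 * days:.0f} days if duplication doubles the upload")
```

That works out to roughly 160 days for a single copy, and about double that if the duplicated copy also has to go up over the same link, which is worth knowing before committing to the laptop-at-another-location approach.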