Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 09/16/19 in all areas

  1. 1 point
    This is correct. It isn't so much that you should not, it's that you can not. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70 Mbps (see the quick arithmetic check after this list). As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. It'll upload at the pace it can, and DrivePool will duplicate data as it's able.

    Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously.

    This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so.

    DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, it will be deleted from both locations if your balancer settings and file placement settings are configured to do so. It will pass the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate it on the next duplication pass. Obviously, files mysteriously disappearing is a worrying sign worthy of further investigation and attention.

    It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional-mode cache set to something like 75% write, 25% read. Note that DrivePool will also read-stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed. So you'll want some read cache available.

    This isn't really relevant for your use case. The size of the files you are considering for storage will not be meaningfully influenced by a larger cluster size. Use the size you need for the volume size you require. Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, Chkdsk, so you'll want to keep it below that. Relatedly, note that you can partition a single CloudDrive into multiple sub-60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary.
  2. 1 point
    Umfriend

    My Rackmount Server

    Yeah, WS2019 missing the Essentials role sucks. I'm running WSE2016 and I have no way forward so this will be what I am running until the end of days probably.... But wow, nice setup! With the HBA card, can you get the HDDs to spin down? I tried with my Dell H310 (some 9210 variant IIRC) but no luck.
  3. 1 point
    There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider. It is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain.

    As far as reinstalling Windows or changing to a different computer, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force mount the drive, and CloudDrive will read the data, but you may lose any data that was sitting in your cache waiting to be uploaded during the failure. Note that CloudDrive does not upload user-accessible data to your provider by design. Other tools like rClone would be required to accomplish that.

    My general advice, in any case, would be to enable encryption. There is effectively no added overhead from using it, and the peace of mind is well worth it.
  4. 1 point
    I'm running DrivePool on a server and I'm sharing the pool. When I access the share on my Mac, every folder except for one is fine. There is a single folder that says I don't have permission to open it. When I check the permissions, all of the folders, including the one I can't access, have the same permissions. Any ideas what the problem could be?
  5. 1 point
    Spider99

    Event log warning

    You can ignore them - I asked a long time ago and Christopher confirmed it's a side effect of it being a virtual disk and nothing to worry about.
  6. 1 point
    So when you add a 6TB HDD to that setup, and assuming you have not tinkered with the balancing settings, any _new_ files would indeed be stored on that 6TB HDD. A rebalancing pass, which you can start manually, will fill it up as well. With default settings, DP will try to ensure that each disk has the same amount of free space. It would therefore write to the 6TB first until 4TB is free, then equally to the 6TB and 4TB until both have 3TB free, etc. The 500GB HDD will see action only when the others have 500GB or less available. This is at default settings and without duplication (see the placement sketch after this list).
  7. 0 points
    Umfriend

    2 pools - best setup method?

    I am not exactly sure what you want to accomplish. Do you want duplication and fast reads? You might want to consider using hierarchical pools, something like:

    Pool A: 12 x 500GB SSD
    Pool B: 2 x 4TB + 2 x 2TB + 1 x 500GB SSD
    Pool C: Pool A + Pool B

    I would think that writes go fast (Pool A is SSD only, Pool B uses the SSD cache) and that reads go fast as well, as they would effectively read from Pool A (even if the request goes out to Pool C). The downside is that you can only store about 6TB duplicated (see the capacity check after this list).
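
A quick arithmetic check of the throttle advice in item 1. The 750GB/day cap and the ~70 Mbps figure come from the post; the rest is simple unit conversion, shown here as a small Python sketch rather than anything CloudDrive itself reports.

```python
# Rough check of the throttle advice in item 1: how much data does a
# sustained 70 Mbps upload move in a day, versus Google's 750GB/day cap?
# The 70 Mbps and 750GB figures come from the post; this is just arithmetic.

THROTTLE_MBPS = 70                      # upload throttle, megabits per second
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_day = THROTTLE_MBPS * 1_000_000 / 8 * SECONDS_PER_DAY
gb_per_day = bytes_per_day / 1e9        # decimal gigabytes
gib_per_day = bytes_per_day / 2**30     # binary gibibytes

print(f"{gb_per_day:.0f} GB/day, or {gib_per_day:.0f} GiB/day")
# -> roughly 756 GB/day (~704 GiB/day), i.e. right around the 750 cap,
#    which is why "around 70 Mbps" is the usual advice: real-world uploads
#    rarely run at the full throttle for 24 hours straight.
```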
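
Item 6 describes DrivePool's default behaviour as equalizing free space, which at the level of individual new files amounts to writing to whichever disk currently has the most free space. The toy placement sketch below models that described rule (it is not DrivePool's actual code) using the disk sizes from that post to show the fill order.

```python
# Toy model of the placement rule described in item 6: each new (unduplicated)
# file goes to whichever disk currently has the most free space.
# This illustrates the described behaviour; it is not DrivePool's code.

free = {"6TB": 6000, "4TB": 4000, "500GB": 500}   # free space in GB

def place(file_gb: int) -> str:
    """Place one file on the disk with the most free space; return its name."""
    target = max(free, key=free.get)
    free[target] -= file_gb
    return target

# Write 9TB of data as 100GB files and count where they land.
placements = [place(100) for _ in range(90)]
print({disk: placements.count(disk) for disk in free})
print("free space left:", free)
# The 6TB disk takes the first ~2TB alone (until it matches the 4TB disk),
# then the two larger disks alternate. The 500GB disk is still untouched
# here, because the others only reach 500GB free at the very end.
```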
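
And a sanity check of the "about 6TB duplicated" figure in item 7. The pool sizes come from the post; the assumption that Pool C duplication keeps one copy in Pool A and one in Pool B (so capacity is bounded by the smaller sub-pool) is mine, since the post doesn't spell it out.

```python
# Capacity check for the hierarchical layout in item 7.
# Assumption (mine, not stated in the post): duplication at the Pool C level
# keeps one copy in Pool A and one in Pool B, so the duplicated capacity is
# limited by the smaller of the two sub-pools.

pool_a_tb = 12 * 0.5                # 12 x 500GB SSD            -> 6.0 TB
pool_b_tb = 2 * 4 + 2 * 2 + 0.5     # 2x4TB + 2x2TB + 500GB SSD -> 12.5 TB

duplicated_tb = min(pool_a_tb, pool_b_tb)
print(f"Pool A: {pool_a_tb} TB, Pool B: {pool_b_tb} TB")
print(f"Approximate duplicated capacity: {duplicated_tb} TB")   # ~6 TB
```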
