Covecube Inc.



Popular Content

Showing content with the highest reputation since 07/04/20 in all areas

  1. 3 points
    My advice: contact support and send them the Troubleshooter data. Christopher is very keen on resolving problems around the "new" Google way of handling folders and files.
  2. 2 points
    I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this.

    The overview page on the web site actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days, so you can just install it and test any provider you'd like and look at the list that way. CloudDrive does not support Teamdrives/shared drives because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite drive accounts are supported.

    The primary tradeoff from a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is something you're looking for, this probably just isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different from CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends, for specific use cases. rClone's entire goal/philosophy is to make it easier to access your data from a variety of locations and contexts. That isn't CloudDrive's goal, which is to make your cloud storage function as much like a physical drive as possible.

    I don't work for Covecube/Stablebit, so I can't speak to any pricing they may offer you if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings to buying the bundle, if you want/need it.

    There is no file-based limitation. The limitation on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations (there's a rough sketch of that idea below).

    Again, I don't work for the company, but, in previous conversations about the subject, it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe that any ports are on the short- or even medium-term agenda.

    Hope some of that helps.
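    As a rough illustration of the per-folder point (just a sketch with an assumed cap and naming scheme, not CloudDrive's actual internals), spreading chunk files across numbered subfolders keeps any single folder under a provider's file-count limit:

      # Illustrative only: spread chunk files across numbered subfolders so no
      # single folder exceeds an assumed per-folder file-count cap. The cap and
      # naming scheme are invented for this sketch, not CloudDrive internals.
      FILES_PER_FOLDER = 400_000  # hypothetical provider limit

      def folder_for_chunk(chunk_index: int) -> str:
          """Return the subfolder a given chunk file should be stored in."""
          return f"chunks-{chunk_index // FILES_PER_FOLDER:04d}"

      if __name__ == "__main__":
          for i in (0, 399_999, 400_000, 1_000_000):
              print(i, "->", folder_for_chunk(i))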
  3. 1 point
    Is it OK to fill non-primary HDDs to full capacity if they will only be accessed to read files?
  4. 1 point
    I finally bit the bullet last night and converted my drives. I'd like to report that even in excess of 250TB, the new conversion process finished basically instantly and my drive is fully functional. If anyone else has been waiting, it would appear to be fine to upgrade to the new format now.
  5. 1 point
    [1] Yes, it will only move the data from disk 2 if a balancing rule causes it to be moved (if you have disk space equalisation turned on, for example). Otherwise, it will stay put.

    [2] You could just set the drive-overfill plugin to 75-80%? Then if any disk reaches that capacity, it'll move files out (a rough worked example of that threshold follows below). Personally, I assign a pair of landing disks for my pools: two cheap SSDs where incoming files get dumped. DP then moves them out later, or when they fill up. Note that the landing disks should be larger than the largest single file you would put on the pool.

    If cost is an issue, you could try the following setup with existing hardware instead:
      - Install the SSD Optimizer plugin
      - Tell DP/SSD Opt. that Disk 2 is an SSD and un-tick the "archive" setting
      - Make Disks 1 and 3 "Archive" drives
      - Change your file placement rules so that only unduplicated files go on Disk 2, and only duplicated files go on 1 and 3

    That way, all new incoming files get put on "Disk 2", then later, when your duplication/balancing rules engage, it will move the data off of Disk 2 entirely and duplicate it to 1 and 3. This assumes you do not have "real-time duplication" enabled. If you still need the total capacity of the 3 disks, then perhaps a small investment in a 120/240GB SSD to use as a landing drive might be a good idea, substituting "SSD/Disk 4" for "Disk 2" in the above setup.
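    A quick back-of-the-envelope sketch of the overfill threshold mentioned in [2] (the disk sizes, usage figures and 80% threshold here are invented for illustration, not read from DrivePool):

      # For each disk, how much data would have to move off once it crosses the
      # chosen fill percentage. All sizes below are made-up example values.
      THRESHOLD = 0.80  # i.e. the 75-80% suggested above

      disks_gb = {          # name: (capacity_gb, used_gb)
          "Disk 1": (4000, 3500),
          "Disk 2": (2000, 1900),
          "Disk 3": (4000, 2000),
      }

      for name, (capacity, used) in disks_gb.items():
          limit = capacity * THRESHOLD
          excess = max(0.0, used - limit)
          print(f"{name}: limit {limit:.0f} GB, used {used} GB -> move off {excess:.0f} GB")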
  6. 1 point
    I pass through the HBA for DrivePool. I use Dell Perc H310 cards and the SMART data is all visible, as it should be, because my Windows VM has direct access to the HBA. edit: Wrong Chris, I know, but hopefully helpful?
  7. 1 point
    This is the wrong section of the forums for this really, but if you want to force duplication to the cloud, your pool structure is wrong. The issue you're running into is that your CloudDrive volume exists within (and at the same level of priority as) the rest of your pool. A nested pool setup that is used to balance to the CloudDrive and the rest of your pool will allow you more granular control over balancing rules specifically for the CloudDrive volume. You need to create a higher level pool with the CloudDrive volume and your entire existing pool. Then you can control duplication to the CloudDrive volume and your local duplication independently of one another.
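    As a conceptual sketch of that nested layout (the pool names and the toy placement logic are invented for illustration; this is not how DrivePool represents pools internally):

      # Toy model of a nested pool: a top-level pool containing the existing
      # local pool plus the CloudDrive volume, with x2 duplication at the top
      # so one copy stays local and one goes to the cloud. Names are invented.
      from dataclasses import dataclass, field

      @dataclass
      class Pool:
          name: str
          members: list = field(default_factory=list)  # drives or nested pools
          duplication: int = 1

      local_pool = Pool("LocalPool", ["Disk 1", "Disk 2", "Disk 3"], duplication=2)
      top_pool = Pool("HybridPool", [local_pool, "CloudDrive volume"], duplication=2)

      def copy_targets(pool: Pool) -> list:
          # With x2 duplication at the top level, each file gets one copy per member.
          return [m.name if isinstance(m, Pool) else m for m in pool.members[:pool.duplication]]

      print(copy_targets(top_pool))  # ['LocalPool', 'CloudDrive volume']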
  8. 1 point
    srcrist

    Move data to a partition

    There is no appreciable performance impact from using multiple volumes in a pool.
  9. 1 point
    srcrist

    Move data to a partition

    Volumes each have their own file system. Moving data between volumes will require the data to be reuploaded. Only moves within the same file system can be made without reuploading the data, because only file system metadata needs to be modified to make such a change.
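    A small sketch of why that distinction matters in general (plain Python behaviour, not anything CloudDrive-specific; the paths are hypothetical): a rename within one volume is a metadata-only change, while a move across volumes falls back to copying the data and deleting the source.

      # Moving a file within the same volume vs. across volumes.
      # os.rename() only rewrites file system metadata, so it fails across
      # volumes; shutil.move() detects that and falls back to copy + delete,
      # i.e. the data itself is rewritten (on a cloud drive, re-uploaded).
      import os
      import shutil

      src = r"D:\PoolPart\movies\film.mkv"  # hypothetical example paths
      dst = r"E:\PoolPart\movies\film.mkv"

      try:
          os.rename(src, dst)    # metadata-only; only works on the same volume
      except OSError:
          shutil.move(src, dst)  # cross-volume: copy the data, then delete src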
  10. 1 point
    You can do this: https://wiki.covecube.com/StableBit_DrivePool_Q4822624 This hides the letters, and keeps the drives accessible.
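    For what it's worth, the same result can also be reached from the command line with Windows' built-in mountvol tool (the drive letter and folder path below are made-up examples; the linked wiki article remains the authoritative walkthrough):

      # Sketch: use mountvol to drop a volume's drive letter and expose it at an
      # empty NTFS folder instead. Drive letter and folder path are examples.
      import subprocess

      subprocess.run(["mountvol"])                # list volumes and their mount points
      subprocess.run(["mountvol", "E:", "/D"])    # remove the E: drive letter

      # Then mount the volume at an empty folder, using the \\?\Volume{...}\ name
      # shown in the listing above, e.g.:
      #   mountvol C:\PoolDisks\Disk1 \\?\Volume{...}\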
  11. 1 point
    Christopher (Drashna)

    SSD Optimizer problem

    This is part of the problem with the way that the SSD Optimizer balancer works. Specifically, it creates "real time placement limiters" to limit which disks new files can be placed on. I'm guessing that the SSD is below the threshold set for it (75% by default, so ~45-50GBs). Increasing the limit on the SSD may help this (lowering it may as well, but that would force the pool to place files on the other drives rather than on the SSD).

    Additionally, there are some configuration changes that may help make the software move data off of the drive more aggressively: http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing%20Settings

    On the main balancing settings page, set it to "Balance immediately", and either uncheck the "Not more often than every X hours" option or set it to a low number like 1-2 hours. For the balancing ratio slider, set this to "100%", check the "or if at least this much data needs to be moved" option, and set it to a very low number (like 5GBs). This should cause the balancing engine to rather aggressively move data out of the SSD drive and onto the archive drives, reducing the likelihood that this will happen.

    Also, it may not be a bad idea to use a larger sized SSD, as the free space on the drive is what gets reported when adding new files.
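    As a rough worked example of that default limit (the 60GB capacity is an assumption chosen to match the ~45-50GB figure above, not a value from the post):

      # Arithmetic behind the "75% by default, so ~45-50GB" figure above.
      # The 60 GB capacity is an assumed example size.
      ssd_capacity_gb = 60
      fill_limit = 0.75  # SSD Optimizer's default real-time placement limit

      usable_gb = ssd_capacity_gb * fill_limit
      print(f"New files stop landing on the SSD once ~{usable_gb:.0f} GB is in use.")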
