Everything posted by Shane

  1. Unless directed otherwise by a File Placement rule or a balancer that can set file placement limits, DrivePool will always try to place incoming files on the drive with the most free space. To balance existing files across all the drives according to percentage of used space, it looks like you need to do the following:

    Balancing -> Balancers: enable the "Disk Space Equalizer" balancer with its options set to "Equalize by the percent used". I'd also recommend making the "StableBit Scanner" balancer the highest priority if you're using that, and you can probably disable the "Volume Equalization" and "Drive Usage Limiter" balancers.

    Balancing -> Settings: turn on Automatic balancing so that the (non-realtime, non-immediate) balancers can do their thing. For now I'd suggest selecting "Balance immediately" with the "Not more often..." option unchecked.
  2. Removing a drive doesn't make the pool read-only during the removal. It does prevent removing other drives (they'll be queued for removal instead) and I believe it may prevent some other background/scheduled tasks, but one should still be able to read and write files normally to the pool. Only problem I can think of is if you're removing drive X and you've got a file placement rule that says files can only be put on X; I'd assume you'd have to change or disable that rule.
  3. Not that I know of. Perhaps make a feature request via the contact form?
  4. Non-real-time duplication is scheduled, so one disk at a time. When that runs is controlled by FileDuplication_DuplicateTime in the Settings.json file.
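    If it helps, on a default install that file should be under ProgramData (an assumption - verify the path on your own system, and keep a backup copy before editing):

        notepad "C:\ProgramData\StableBit DrivePool\Service\Settings.json"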
  5. As far as I can tell real-time tasks (e.g. real-time duplication) are as concurrent as they need to be (e.g. 2x duping = simultaneous writes to two disks, 10x duping = to ten disks, etc) while scheduled tasks (e.g. balancing, duplication checking, etc, including manually initiated tasks) are one disk at a time (at least for writes).
  6. If the main goal is to have the cloud storage act as an offsite backup that doesn't slow down the local storage, instead of a single pool with local and cloud disks I'd suggest separate pools for local and cloud with a backup tool updating the latter from the former. Edit: Otherwise, with only a single pool of mixed local and cloud drives, ensure the cloud drive local cache is set to expandable and is on a large enough disk to handle typical write volumes. Per the manual: "An expandable cache will try not to consume all of the disk space on the local cache drive. It will always try to keep at least 5 GB free. As the free space on the cache drive approaches 5 GB, in order to continue to maintain some free space on the local cache drive, writes to the cloud drive will get progressively slower."
  7. "?-did i go about this correctly? (i know)i have been impatient to let it duplicate/rebalance while i'm trying to complete my drives swap" "?-why did it give me the not enough space error when i added a new 18tb?" I might guess that if you had "duplicate files later" checked then it may not have had time to do that after the first 10tb was removed and before the second 10tb was removed, so it had to duplicate 2x10tb when at that point it only had 1x18 to go to? And/or did you have any File Placement rules that might've interfered? Only other thing I can think of is something interrupting the enclosure and DrivePool thought that meant not enough space. "?-why does the 2nd 10tb only read in my Sabrent enclosure but not when I install it in my tower?" No idea; DrivePool shouldn't have any problem with a poolpart drive being moved to a different slot (assuming DP was shut down before moving it and started up after moving it). When you say it didn't show up, do you mean "it didn't show up in DrivePool" or do you mean "it didn't show up at all, e.g. not in Windows Disk Management"? Because the latter would indicate a non-DP problem.
  8. After some more testing I can confirm that cipher /w run on the pool only erases the free space on one drive in the pool before stopping, while filldisk run on the pool appears to leave anywhere from a few megabytes to over a gigabyte untouched on each of the disks in the pool. Filldisk also reacted similarly when run on individual drives via a mounted path (e.g. E:\Disks\MountedDrive) rather than via a drive letter, which is something to keep in mind if you use the former method to directly access your drives. Based on observing how DrivePool operated, I would recommend free space cleaning tools only be run directly on the individual drives.

    CyLog's FillDisk: writes only zeroes, so roughly three times faster than cipher /w. Doesn't remove the files it creates after it's aborted or finished, so you can effectively "pause" and you can see for yourself whether the free space is zeroed afterwards. In testing it did not completely wipe the free space on drives accessed via mounted paths. May not scrub deleted files that are very small (1KB) due to how NTFS works.

    Microsoft's Cipher: writes zeroes, then ones, then random bytes, so more thorough but three times slower than FillDisk. Leaves an empty EFSTMPWP folder on the target afterwards, so it's hard to check whether it gets every last byte, and there is a warning in the MS documentation about it potentially missing files of 1KB or less in size. Worked on mounted paths.

    SysInternals' SDelete: like Cipher, but apparently has the 1KB file issue solved.

    TRIM: any SSD with TRIM functionality can do this automagically (and thoroughly). A Windows command to manually trigger this is defrag volume /o - e.g. "defrag e: /o".

    If you are concerned about thieves scrounging through your Windows disks after stealing the machine, I'd recommend BitLocker and a long passphrase.
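    To illustrate "run directly on the individual drives", a minimal sketch using the mounted path from the example above plus a hypothetical drive letter (substitute your own):

        rem Hypothetical example - wipe free space on each pooled disk directly,
        rem not on the pool's own drive letter; adjust letters/paths to your setup.
        cipher /w:D:\
        cipher /w:E:\Disks\MountedDrive
        rem For an SSD with TRIM support, a manual retrim instead:
        defrag D: /o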
  9. "1. Can I specify to keep the most recent files in the SSD cache for faster reads, while also duplicating them to archive disks during the nightly schedule?" No; files can be placed by file and folder name, not by file age. You could try requesting it via the contact form? Not sure whether it'd work better as a plug-in or a placement rule (maybe the latter as an option when pattern matching, e.g. "downloads\*.*" + "newer than 7 days" = "keep on drives A, B, D"). "2. can I specify some folders to always be on the SSD cache?" Yes; you can use Manage Pool -> Balancing -> File Placement feature to do this. You would also need to tick "Balancing plug-ins respect file placement rules" and un-tick "Unless the drive is being emptied" in Manage Pool -> Balancing -> Settings. If you're using the Scanner balancing plug-in, you should also tick "File placement rules respect real-time file placement limits set by the balancing plug-ins".
  10. The 12 Gb/s throughput for the Dell controllers mentioned on the datasheet is total per controller, so if you operated on multiple drives on the same controller simultaneously I'd expect it to be divided between the drives.

    Having refreshed my memory on PCIe's fiddly details so I can hopefully get this right, there are also limits on total lanes direct to the CPU and via the mainboard; e.g. the first slot(s) may get all 16 direct to the CPU(s) while the other slots may have to share a "bus" of 4 lanes through the chipset to the CPU. So even though you might have a whole bunch of individually x4, x8, x16 or whatever speed slots, everything after the first slot(s) is going to be sharing that bus to get anywhere else (note: the actual number of direct and bus lanes varies by platform). So you'd have to compare read speeds and copy speeds from each slot and between slots, because copying from slotA\drive1 to slotA\drive2 might give a different result than slotA\drive1 to slotB\drive1, or slotB\drive1 to slotC\drive1... and then do that all over again but with simultaneous transfers, to see where, exactly, the physical bottlenecks are between everything.

    As far as DrivePool goes, if C was your boot drive and D was your pool drive (with x2 real-time duplication) and that pool consisted of E and F, then you could see DrivePool's duplication overhead by getting a big test file and first copying it from C to D, then from C to E, then simultaneously from C to E and C to F; there's a sketch of this below. If the drives that make up your pool are spread across multiple slots, then you might(?) also see a speed difference between duplication within the drives on a slot and duplication across drives on separate slots. If you do, then consider whether it's worth it to you to use nested pools to take advantage of that.

    P.S. Applications can have issues with read-striping, particularly some file hashing/verification tools, so personally I'd either leave that turned off or test extensively to be sure.
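    A rough sketch of that test using robocopy (hypothetical paths and file name; robocopy prints transfer speed in its end-of-run summary):

        rem Hypothetical test - adjust paths and file name to your own layout.
        rem C: = boot, D: = pool (x2 duplication), E: and F: = the pooled drives.
        robocopy C:\test D:\test bigfile.bin /J
        robocopy C:\test E:\test bigfile.bin /J
        rem Then run two copies at the same time (e.g. in two separate prompts):
        robocopy C:\test E:\test bigfile.bin /J
        robocopy C:\test F:\test bigfile.bin /J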
  11. If you mean Microsoft's cipher /w, it's safe in the sense that it won't harm a pool. However, it will not zero/random all of the free space in a pool that occupies multiple disks unless you run it on each of those disks directly rather than on the pool (as cipher /w operates by writing to a temporary file until the target disk is full, and DrivePool will return "full" once that file fills the first disk to which it is being written in the pool). You might wish to try something like CyLog's FillDisk instead (which writes smaller and smaller files until the target is full), though disclaimer: I have not extensively tested FillDisk with DrivePool (and in any case both programs may still leave very small amounts of data on the target).
  12. SMART errors

    Depending on the error, it might have been one that the drive's own firmware was capable of (eventually) repairing?
  13. Real-time duplication xN writes to N drives in parallel, so while there's a little overhead I can only imagine that seeing the speed effectively halve like you're describing would have to be because of a bottleneck between the source and destination. Note the total speed you're getting is roughly what the 7910's integrated/optional storage controllers can manage, 12 Gb/s per Dell's datasheet, and that while any given slot may be mechanically x16 it may be electrically less, and in any case will also not run any faster than the xN of the card plugged into it (so a x1 card will still run at x1 even in a x16 slot). So I'm suspecting something like that being the issue; what's the model of the NVMe cards you've installed? EDIT: It may also be possible to adjust the PCIe bandwidth in your BIOS/UEFI, if it supports that.
  14. Hi Andreas,

    "My question is - how do I protect against silent bitrot"

    The two main risks for silent bitrot are "while in RAM" and "sitting on disk". For the former, ECC RAM for your PC (and making sure your PC will actually support and use ECC RAM in the first place). For the latter, you can rely on hardware (e.g. RAID 5/6 cards, NAS boxes like Synology) and/or software (e.g. for Windows, md5deep for checking, SnapRAID or PAR2/MultiPar for checking and repairs; for Linux there are also better filesystems like ZFS or bcachefs, or Linux-based operating systems like Unraid which simplify the use of those filesystems, etc).

    For use with StableBit DrivePool, my personal recommendation would be ECC RAM + StableBit Scanner + either SnapRAID, or md5deep (optionally plus PAR2/MultiPar); there's a sketch of the md5deep approach below. And if you have very large pools, consider whether to use hardware RAID 5/6 instead of SnapRAID et al. Plus, of course, backups. Backups should go without saying.

    "Question does drivepool secure against 'silent bitrot'"

    Not actively. Duplication may "protect" against bitrot in the sense that you can compare the duplicates yourself - with x2 duplication you would need to eyeball-inspect or rely on checksums you'd made with other tools, while with x3 duplication or more you could also decide the bad one based on the "odd one out" - but DrivePool does not have any functions that actively assist with this.

    "Question: Does diskpool have a 'diskpool check' command that checks all files (all duplicates) and detects corrupted files?"

    It does not.

    "If not - sounds like a great value-add feature - ex everytime diskpool writes a file it also calculates and stores a checksum of the file"

    I would like to see a feature along those lines too.

    "* option to enable: read verification: everytime diskpool reads a file - verifies that read file checksum is as expected"

    You'd see a very noticeable performance hit; regular (e.g. monthly) background scans would I think be better.
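    As a minimal sketch of the md5deep option (assuming md5deep is installed and on the PATH, that the pool is mounted as P:, and that C:\manifests exists - all placeholders, adjust to your own setup):

        rem Build a manifest of checksums for everything in the pool...
        md5deep -r -l P:\ > C:\manifests\pool.md5
        rem ...then later list files whose current hash is not in the manifest
        rem (corrupted or changed files, but also any files added since).
        md5deep -r -l -x C:\manifests\pool.md5 P:\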
  15. Martixingwei is correct, you would need to download that entire clouddrive folder from your google drive to local storage and run the converter tool.
  16. Drive question.

    Robocopy already skips existing files (if they have matching time and size) by default; this can be overridden via the /IS (include files with same size, last-modified time and attributes) and /IT (include files with same size and last-modified time but different attributes) switches. See https://ss64.com/nt/robocopy.html#fileclasses for more details.
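    For example (hypothetical source and destination paths; /E just includes subfolders):

        rem Default behaviour - "same" files (matching size and timestamp) are skipped:
        robocopy D:\OldPool P:\ /E
        rem Force re-copying of "same" files as well:
        robocopy D:\OldPool P:\ /E /IS /IT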
  17. Once you've obtained your own API key from Google, the file to edit is C:\ProgramData\StableBit CloudDrive\ProviderSettings.json (I also suggest making a backup copy of that file before you make any changes to it). You may then need to restart CloudDrive. Per Christopher's post, "The safest option would be to detach the drive, change API keys, and re-attach. However, you *should* be able to just re-authorize the drive after changing the API keys." I suspect 350 Mbps is the best you'll get.
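    If it helps, a one-liner for that backup from an elevated command prompt (same path as above):

        copy "C:\ProgramData\StableBit CloudDrive\ProviderSettings.json" "C:\ProgramData\StableBit CloudDrive\ProviderSettings.json.bak"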
  18. Hi varied, DrivePool will identify the two pools as being different and connect to each of them as two separate pools on the new PC. There is no built-in command to merge pools (yet) but it can be done manually.
  19. Drive question.

    If you're wanting to keep things as simple as possible and you're going to long-format the old drives anyway, then I'd suggest trusting DrivePool to be able to handle the job:

    - create the new pool
    - copy your content from the old pool to the new pool
    - remove the old pool (quickest way #1: stop the service, rename the poolpart.* folder on each old drive to oldpart.*, start the service, then remove the "missing" drives from the pool until all are gone - see the sketch below) (quickest way #2: eject the old drives, remove the "missing" drives until all are gone, then format the old drives on another machine)

    This has the side benefits of #A, not having to worry about manually fiddling with pool parts and/or resetting permissions, as unless you're deliberately copying across the permissions from the old pool your content should inherit them from the new pool; #B, being able to optionally run lennert's treecomp or other comparison tool of choice after the copy step to ensure it got everything, bit-for-bit if desired; and #C, giving the new Terra a good workout now rather than later. And at the end you'll have one pool.

    P.S. If you've got the drive capacity now, consider turning on real-time x2 duplication for the new pool. YMMV, but even though I've got nightly backups, knowing that if any drive in my pools just decides to up and die out of the blue I still won't lose even one day's work from it gives me extra peace of mind.
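    A minimal sketch of "quickest way #1" from an elevated prompt, assuming the service is named DrivePoolService and using made-up PoolPart folder names - check the actual names on your own drives first:

        net stop DrivePoolService
        rem Rename the hidden PoolPart folder on each old drive (folder names here are placeholders):
        ren "D:\PoolPart.xxxxxxxx" "OldPart.xxxxxxxx"
        ren "E:\PoolPart.yyyyyyyy" "OldPart.yyyyyyyy"
        net start DrivePoolService
        rem Then remove the now-"missing" drives from the old pool in the DrivePool GUI.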
  20. As a rough estimate, 100TB at 350 Mbps would take at least 27 days (give or take) or almost twice as long as the days remaining until May 15th, so to have any chance of completing that successfully you'd definitely need to switch over to using your own API key. Note that even with your own key there's still a risk Google breaks something on their end (hopefully extremely unlikely but we can't rule it out) and you end up having to download the entire drive manually, so it's up to you to consider whether you should be safely detaching your clouddrive now so that you can begin the manual download.
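    Rough arithmetic behind that estimate:

        100 TB x 8 = 800,000,000 megabits
        800,000,000 Mb / 350 Mbps ≈ 2,285,000 seconds ≈ 26.5 days (continuous, full speed, before any overhead or throttling)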
  21. Windows bases its initial decision on whether the total size of the files/folders being copied is less than the pool's free space as reported by DrivePool. So with 465 GB of files going into 7 TB of free space, Windows should at least start the operation. That said, if you have real-time duplication x3 enabled, to finish copying a 465GB folder into the pool you'd need 3 times 465GB of free space available in the pool, and each and every file involved would need to fit within the remaining free space available in at least three drives within the pool (even if not necessarily the same three drives for different files). E.g. if one of the files was 300GB and less than three drives in the pool had more than 300GB free each, then it wouldn't be able to complete the operation. If none of the above is applicable - e.g. you mention trying to copy 465GB directly to an individual drive with 3TB free and it didn't work - then something weird is going on.
  22. It means Windows at least thinks that they're ok; running the fixes would ensure that they actually are ok. Stopped as in completed without error but didn't move anything? Might need to adjust the plugin's priority and/or the Balancing settings so that the limiter correctly applies?
  23. Drive question.

    It's an awful feeling to have when it happens (been there). Best wishes, I hope you get good news back from the recovery request.
  24. Hmm. It's very kludgy, but I wonder:

    Pool X: enough drive(s) to handle the backup cycle workload. Duplication disabled.

    Pool Y: the other drives. Duplication (real-time) enabled.

    Pool Z: consists of pools X and Y. Folders in the File Placement section set so that all folders EXCEPT the backup folders go directly to Y. SSD Optimizer set to use pool X as "SSD" and pool Y as "Archive". Balancing settings set to un-tick "File placement rules respect..." and tick "Balancing plug-ins respect..." and tick "Unless the drive is...".

    The result should in theory be that everything except the backup folders (and any files in the root folder of Z) gets duplicated in real time in Y, while backups land in X and only later get emptied into Y (whereupon they are duplicated)?
  25. I think that's not currently possible. It does sound useful and you might wish to submit a feature request (e.g. perhaps "when setting per-folder duplication, can we please also be able to set whether duplication for a folder is real-time or scheduled") to see if it's feasible?