Posts posted by Shane

  1. Hi Andreas,

    My question is - how do I protect against silent bitrot

    The two main risks for silent bitrot are "while in RAM" and "sitting on disk".

    For the former, ECC RAM for your PC (and making sure your PC will actually support and use ECC RAM in the first place).

    For the latter, you can rely on hardware (e.g. RAID 5/6 cards, NAS boxes like Synology) and/or software (e.g. for Windows, md5deep for checking, SnapRAID or PAR2/MultiPar for checking and repairs; for Linux there are also better filesystems like ZFS or bcachefs, or Linux-based operating systems like Unraid which simplify the use of those filesystems, etc).

    For use with StableBit DrivePool, my personal recommendation would be ECC RAM + StableBit Scanner + either SnapRAID or md5deep (plus optionally PAR2/MultiPar). And if you have very large pools, consider whether to use hardware RAID 5/6 instead of SnapRAID et al.

    Plus, of course, backups. Backups should go without saying. B)

    Question: does DrivePool secure against "silent bitrot"?

    Not actively. Duplication may "protect" against bitrot in the sense that you can compare the duplicates yourself - with x2 duplication you would need to inspect them by eye or rely on checksums you'd made with other tools, while with x3 duplication or more you could also pick out the bad copy as the "odd one out" - but DrivePool does not have any functions that actively assist with this.

    Question: Does diskpool have a "diskpool check" command that checks all files (all duplicates) and detects corrupted files?

    It does not.

    If not - sounds like a great value-add feature - ex
    everytime diskpool writes a file it also calculates and stores a checksum of the file

    I would like to see a feature along those lines too.

    * option to enable: read verification: everytime diskpool reads a file - verifies that read file checksum is as expected

    You'd see a very noticeable performance hit; regular (e.g. monthly) background scans would, I think, be better.
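
    In the meantime a periodic scan can be scripted with existing tools. Here's a minimal sketch using md5deep and (optionally) par2cmdline/MultiPar, assuming they're installed and on the PATH and that P: is the pool drive - the paths are placeholders and the flags should be double-checked against your own versions:

      rem create a baseline of checksums for the pool
      md5deep -r P:\ > C:\checksums\pool-baseline.md5

      rem later (e.g. monthly), list any file whose current hash is NOT in the baseline;
      rem new or legitimately changed files will also show up, so re-baseline after checking
      md5deep -r -x C:\checksums\pool-baseline.md5 P:\

      rem optionally, PAR2 can repair as well as detect (example for a single folder):
      rem   par2 create -r10 D:\parity\media.par2 P:\media\*
      rem   par2 verify D:\parity\media.par2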

  2. Robocopy already skips existing files (if they have matching time and size) by default; this can be overridden via the /IS (include files with same size, last-modified time and attributes) and /IT (include files with same size and last-modified time but different attributes) switches. See https://ss64.com/nt/robocopy.html#fileclasses for more details.
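
    For example (D:\Source and P:\Pool below are placeholder paths):

      rem default behaviour: files with matching size and timestamp are skipped
      robocopy D:\Source P:\Pool /E

      rem force those "same" files to be copied again
      robocopy D:\Source P:\Pool /E /IS /IT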

  3. Once you've obtained your own API key from Google, the file to edit is C:\ProgramData\StableBit CloudDrive\ProviderSettings.json

    I also suggest making a backup copy of that file before you make any changes to it.

    You may then need to restart CloudDrive. Per Christopher's post, "The safest option would be to detach the drive, change API keys, and re-attach. However, you *should* be able to just re-authorize the drive after changing the API keys."

    I suspect 350 Mbps is the best you'll get. 

  4. If you want to keep things as simple as possible and you're going to long-format the old drives anyway, then I'd suggest trusting DrivePool to handle the job:

    1. create the new pool
    2. copy your content from the old pool to the new pool
    3. remove the old pool
      • (quickest way #1: stop the service, rename the poolpart.* folder on each to oldpart.*, start the service, remove the "missing" drives from the pool until all gone - see the command-line sketch after this list)
      • (quickest way #2: eject the old drives, remove the "missing drives" until all gone, format the old drives on another machine)
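
    As a very rough command-line sketch of quickest way #1, run from an elevated prompt (the service name and PoolPart folder names below are placeholders - check the actual service name in services.msc and the actual hidden PoolPart.* folder name on each old drive):

      rem stop the DrivePool service (verify the exact service name first)
      net stop "StableBit DrivePool Service"
      rem rename the hidden poolpart folder on each old drive
      ren D:\PoolPart.xxxxxxxx OldPart.xxxxxxxx
      ren E:\PoolPart.yyyyyyyy OldPart.yyyyyyyy
      net start "StableBit DrivePool Service"
      rem then remove the now-"missing" drives from the old pool in the DrivePool UI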

    This has the side benefits of (a) not having to worry about manually fiddling with pool parts and/or resetting permissions - unless you're deliberately copying permissions across from the old pool, your content should inherit them from the new pool - (b) being able to optionally run lennert's TreeComp or another comparison tool of your choice after step 2 to ensure everything was copied, bit-for-bit if desired, and (c) giving the new Terra a good workout now rather than later. And at the end you'll have one pool.

    P.S. If you've got the drive capacity now, consider turning on real-time x2 duplication for the new pool. YMMV but even though I've got nightly backups, knowing that if any drive in my pools just decides to up and die out of the blue I still won't lose even one day's work from it gives me extra peace of mind.

  5. 3 hours ago, Mav1986 said:

    Thanx, i just hope i can get as much as i can downloaded between now and the 15th May, then we will see how it works overall.

    As a rough estimate, 100TB at 350 Mbps would take at least 27 days - almost twice as long as the days remaining until May 15th - so to have any chance of completing that successfully you'd definitely need to switch over to using your own API key.
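
    (Rough working: 100 TB is about 800,000,000 megabits, and at a sustained 350 Mbps that's roughly 2,300,000 seconds, i.e. about 26-27 days before any overhead.)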

    Note that even with your own key there's still a risk Google breaks something on their end (hopefully extremely unlikely but we can't rule it out) and you end up having to download the entire drive manually, so it's up to you to consider whether you should be safely detaching your clouddrive now so that you can begin the manual download.

  6. Windows bases its initial decision on whether the total size of the files/folders being copied is less than the pool's free space as reported by DrivePool. So with 465 GB of files going into 7 TB of free space, Windows should at least start the operation.

    That said, if you have real-time duplication x3 enabled, to finish copying a 465GB folder into the pool you'd need 3 times 465GB of free space available in the pool, and each and every file involved would need to fit within the remaining free space available in at least three drives within the pool (even if not necessarily the same three drives for different files). E.g. if one of the files was 300GB and less than three drives in the pool had more than 300GB free each, then it wouldn't be able to complete the operation.
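
    (That is, copying 465GB into a pool with x3 real-time duplication consumes roughly 3 x 465GB = 1,395GB of pool space in total.)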

    If none of the above is applicable - e.g. you mention trying to copy 465GB directly to an individual drive with 3TB free and it didn't work - then something weird is going on.

  7. It means Windows at least thinks that they're ok; running the fixes would ensure that they actually are ok.

    3 hours ago, billyboy12 said:

    Nothing happened! I clicked "balance" and it ran for a few seconds then stopped!

    Stopped as in completed without error but didn't move anything? Might need to adjust the plugin's priority and/or the Balancing settings so that the limiter correctly applies?

  8. Hmm. It's very kludgy, but I wonder:

    Pool X: enough drive(s) to handle backup cycle workload. Duplication disabled.

    Pool Y: the other drives. Duplication (real-time) enabled.

    Pool Z: consists of pools X and Y. Folders in File Placement section set so that all folders EXCEPT the backup folders go directly to Y. SSD Optimizer to use pool X as "SSD" and pool Y as "Archive". Balancing settings to un-tick "File placement rules respect..." and tick "Balancing plug-ins respect..." and tick "Unless the drive is...".

    The result should in theory be that everything except the backup folders (and any files in the root folder of Z) gets duplicated in real time in Y, while backups land in X and only later get emptied into Y (whereupon they are duplicated)?

  9. It's possible a file or folder on that drive has corrupted NTFS permissions (see this thread - it includes fixes).

    When you used the drive usage limiter to evacuate the drive, were there any of your files remaining inside that drive's poolpart? Because if it at least cleared out all of your files from the drive, you could then manually "remove" the drive by renaming its poolpart and then Removing the now-"missing" drive.

  10. Pseudorandom thought, could it be something to do with USB power management? E.g. something going into an idle state while you're AFK thus dropping out the Sabrent momentarily?

    Also it looks like Sabrent has a support forum, so perhaps you could contact them there? There's apparently a 01-2024 firmware update available for that model that gets linked by Sabrent staff in a thread involving random disconnects, but it is not listed on the main Sabrent site (that I can find, anyway).

  11. When you say chkdsk, was that a basic chkdsk or a chkdsk /b (performs bad sector testing)?

    I think I'd try - on a different machine - cleaning (using the CLEAN ALL feature of command line DISKPART), reinitialising (disk management) and long formatting (using command line FORMAT) it to see whether the problem is the drive or the original machine. If I didn't have a different machine, I'd still try just in case CLEAN ALL got rid of something screwy in the existing partition structure.
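
    Roughly, from an elevated command prompt (the disk number and drive letter below are placeholders - be very careful to select the correct disk, since CLEAN ALL irreversibly wipes it and can take many hours):

      diskpart
        list disk
        select disk 3
        clean all
        exit
      rem then reinitialise the disk and create a partition in Disk Management, then long-format it
      rem (FORMAT without /Q performs a full format):
      format X: /FS:NTFS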

    I'd then run disk-filltest (a third party util) or similar on it to see if that shows any issues.

    If it passes both, I'd call it good. If it can only be quick-formatted, but passes disk-filltest, I'd still call it okay for anything I didn't care strongly about (because backups, duplications, apathy, whatever). If it fails both, it's RMA time (or just binning it).

    YMMV, IMO, etc. Hope this helps!

  12. If that's 100GB (gigabytes) a day then you'd only get about another 3TB done by the deadline (100 Gb - gigabits - would be much worse), so unless you can obtain your own API key to finish the other 4TB post-deadline (and hopefully Google doesn't do anything to break the API during that time), that won't be enough to finish before May 15th.
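
    (Rough working: about 30 days x 100GB/day = 3TB; if it were actually 100 gigabits per day, that's only ~12.5GB/day, or roughly 375GB by the deadline.)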

    So with no way to discern empty chunks I'd second Christopher's recommendation to instead begin manually downloading the cloud drive's folder now (I'd also be curious to know what download rate you get for that).

  13. I'm not aware of a way to manually discern+delete "empty" CD chunk files. @Christopher (Drashna) is that possible without compromising the ability to continue using a (read-only) cloud drive? Would it prevent later converting the cloud drive to local storage?

    I take it there's something (download speeds? a quick calc suggests 7TB over 30 days would need an average of 22 Mbps, but that's not including overhead) preventing you from finishing copying the remaining uncopied 7TB on the cloud drive to local storage before the May 15th deadline?
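
    (Rough working: 7TB is about 56,000,000 megabits, and 30 days is about 2,592,000 seconds, giving an average of roughly 22 Mbps before overhead.)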

  14. 18 hours ago, PJay said:

    + This may be off topic but in this forum I saw a guy who lost the encrypt key which made him unable to recover his pool after windows reinstallation. I think I haven't touched the encrypt option but just to verify is there a way I can check if I have encrypt option enabled on my pool ? (I've seen some couple of times but I can't find it anymore ... so I can't check if my drive pool is encrypted or not)

    DrivePool does not use encryption (that's CloudDrive). However, in the event that you have used Windows Bitlocker to encrypt the physical drives on which your pool is stored then you will need to ensure you have those key(s) saved (which Bitlocker would have prompted you to do during the initial encryption process).

  15. You can use the command  dpcmd get-duplication object  as an administrator, on the machine on which DrivePool is installed, where object is the full path of a folder or file in the pool (e.g. "p:\testfolder" or "p:\testfolder\testfile.txt") to check that the actual duplication matches* the expected duplication of the path, and it will also return which poolparts contain that object.
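
    For example, run from an administrator command prompt on that machine (the paths below are just the placeholders from above; output omitted):

      dpcmd get-duplication "p:\testfolder"
      dpcmd get-duplication "p:\testfolder\testfile.txt"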

    *Note that due to the way DrivePool works, the actual duplication may exceed the expected duplication for folder objects; this is normal.

  16. Explorer relies on the underlying hardware's error-checking capability to indicate any problems when copying or moving a file. When dealing with a system that is rebooting by itself without explanation, I wouldn't trust that - let alone for anything critical or irreplaceable. You can use copier programs such as Teracopy or Fastcopy that have built-in checksum verification as an option when copying to a new target, or use syncing programs such as FreeFileSync or TreeComp which allow you to compare the contents of two existing drives/folders.

    I would also suggest checking and resolving the disk identifiers issue before proceeding.

  17. Regarding the reboots, given you strongly suspect it's DrivePool-related then I would recommend opening a support ticket with StableBit. Though with any use of Scanner increasing the reboot frequency I'd suspect some kind of load/conflict trigger; I'd be giving your new CPU dubious looks too.

    Evacuating follows the same rules as placing files on the drive - for example it will default to evacuating to the drive(s) with the most free space at the time - so it should thus be possible to force it to use a specific drive via the Ordered File Placement balancer, but I'd want to test that first.

    However if your machine is often rebooting whenever you're using the pool then I'm not sure how you plan to successfully evacuate a drive via DrivePool's Remove function; you may have to do so manually (a rough command-line sketch follows the steps below):

    • Add the new drive.
    • Stop the DrivePool service.
    • Copy your content from the hidden poolpart folder on the old drive to the hidden poolpart folder on the new drive. Don't include system folders (e.g. recycle, sysvol, covefs).
    • I'd also suggest verifying the copy afterwards (e.g. using FreeFileSync, md5deep, et cetera) if your copy method doesn't include its own post-copy verification.
    • Format the old drive.
    • Start the DrivePool service.
    • Amend any balancers or placement rules that specify the old drive.
    • Remove the "missing" old drive.
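
    A rough sketch of the copy step using robocopy (all PoolPart folder names below are placeholders - use the actual hidden PoolPart.* folder names on the old and new drives, and adjust the excluded system folder names to whatever you actually see in the old poolpart):

      rem /E = include subfolders, /XD = skip the pool's system folders
      robocopy D:\PoolPart.xxxxxxxx E:\PoolPart.yyyyyyyy /E /XD "$RECYCLE.BIN" "System Volume Information" ".covefs"
      rem then verify (e.g. FreeFileSync or md5deep) before formatting the old drive
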
  18. While files and folders in a pool will still show their size (and also their "size on disk") as if they were not duplicated, DrivePool calculates the pool's total, used and free space from the actual physical drives' total, used and free space; it does not attempt to adjust those figures for duplication (e.g. that with x2 duplication on the entire pool each file will of course consume twice as much space). This is because, when per-folder duplication is in use, post-duplication free space cannot be reliably predicted.
