
Shane

Moderators
  • Posts

    737
  • Joined

  • Last visited

  • Days Won

    65

Shane last won the day on April 19

Shane had the most liked content!

About Shane

Profile Information

  • Gender
    Not Telling
  • Location
    Australia
  • Interests
    Eclectic.


Shane's Achievements

  1. Windows bases its initial decision on whether the total size of the files/folders being copied is less than the pool's free space as reported by DrivePool, so with 465 GB of files going into 7 TB of free space Windows should at least start the operation. That said, if you have real-time duplication x3 enabled, then to finish copying a 465 GB folder into the pool you'd need 3 x 465 GB of free space available in the pool, and each file involved would need to fit within the remaining free space of at least three drives in the pool (though not necessarily the same three drives for different files). E.g. if one of the files was 300 GB and fewer than three drives in the pool had at least 300 GB free each, the operation couldn't be completed. If none of the above applies - e.g. you mention trying to copy 465 GB directly to an individual drive with 3 TB free and it didn't work - then something weird is going on.
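A sketch of that fit check (a simplified greedy model for illustration only; this is not DrivePool's actual placement algorithm, and the drive sizes are made up):

```python
def can_place_all(files_gb, drives_free_gb, copies=3):
    """Greedy sketch: each file needs `copies` drives that can each hold it.
    Places every copy on the drives with the most free space, largest files first."""
    free = list(drives_free_gb)
    for size in sorted(files_gb, reverse=True):
        # Pick the `copies` drives with the most free space that still fit the file.
        candidates = sorted(range(len(free)), key=lambda i: free[i], reverse=True)
        chosen = [i for i in candidates if free[i] >= size][:copies]
        if len(chosen) < copies:
            return False  # fewer than `copies` drives can hold this file
        for i in chosen:
            free[i] -= size
    return True

# A 300 GB file fails x3 duplication when only two drives have 300+ GB free:
print(can_place_all([300], [400, 350, 250, 200]))  # False
# But three 100 GB files can still be placed x3 in the same pool:
print(can_place_all([100, 100, 100], [400, 350, 250, 200]))  # True
```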
  2. It means Windows at least thinks that they're ok; running the fixes would ensure that they actually are ok. Stopped as in completed without error but didn't move anything? Might need to adjust the plugin's priority and/or the Balancing settings so that the limiter correctly applies?
  3. Drive question.

    It's an awful feeling to have when it happens (been there). Best wishes, I hope you get good news back from the recovery request.
  4. Hmm. It's very kludgy, but I wonder:

     Pool X: enough drive(s) to handle the backup cycle workload. Duplication disabled.
     Pool Y: the other drives. Duplication (real-time) enabled.
     Pool Z: consists of pools X and Y.

     In the File Placement section, set all folders EXCEPT the backup folders to go directly to Y. Set the SSD Optimizer to use pool X as "SSD" and pool Y as "Archive". In the Balancing settings, un-tick "File placement rules respect..." and tick "Balancing plug-ins respect..." and "Unless the drive is...".

     The result should in theory be that everything except the backup folders (and any files in the root folder of Z) gets duplicated in real time in Y, while backups land in X and only later get emptied into Y (whereupon they are duplicated)?
  5. I think that's not currently possible. It does sound useful and you might wish to submit a feature request (e.g. perhaps "when setting per-folder duplication, can we please also be able to set whether duplication for a folder is real-time or scheduled") to see if it's feasible?
  6. It's possible a file or folder on that drive has corrupted NTFS permissions (see this thread - it includes fixes). When you used the drive usage limiter to evacuate the drive, were there any of your files remaining inside that drive's poolpart? Because if it at least cleared out all of your files from the drive, you could then manually "remove" the drive by renaming its poolpart and then Removing the now-"missing" drive.
  7. Drive question.

    Pseudorandom thought, could it be something to do with USB power management? E.g. something going into an idle state while you're AFK thus dropping out the Sabrent momentarily? Also it looks like Sabrent has a support forum, perhaps you could contact them there? There's apparently a 01-2024 firmware update available for that model that gets linked by Sabrent staff in a thread involving random disconnects, but is not listed in the main Sabrent site (that I can find, anyway).
  8. Drive question.

    When you say chkdsk, was that a basic chkdsk or a chkdsk /b (which performs bad sector testing)? I think I'd try - on a different machine - cleaning (using the CLEAN ALL command of command-line DISKPART), reinitialising (Disk Management) and long formatting (using command-line FORMAT) it, to see whether the problem is the drive or the original machine. If I didn't have a different machine, I'd still try, just in case CLEAN ALL got rid of something screwy in the existing partition structure. I'd then run disk-filltest (a third-party util) or similar on it to see if that shows any issues. If it passes both, I'd call it good. If it can only be quick-formatted but passes disk-filltest, I'd still call it okay for anything I didn't care strongly about (because backups, duplication, apathy, whatever). If it fails both, it's RMA time (or just binning it). YMMV, IMO, etc. Hope this helps!
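The disk-filltest idea - write known pseudorandom data, read it back, compare - can be sketched like so (a simplified illustration that writes a single test file rather than filling a whole drive; requires Python 3.9+ for randbytes):

```python
import os
import random
import tempfile

def fill_and_verify(path, total_bytes, chunk=1 << 20, seed=42):
    """Write pseudorandom chunks to `path`, then re-read and compare.
    Returns True if every byte read back matches what was written."""
    rng = random.Random(seed)
    with open(path, "wb") as f:
        remaining = total_bytes
        while remaining > 0:
            f.write(rng.randbytes(min(chunk, remaining)))
            remaining -= chunk
    rng = random.Random(seed)  # replay the same sequence for verification
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                return True
            if data != rng.randbytes(len(data)):
                return False  # mismatch: the disk returned different data

with tempfile.TemporaryDirectory() as d:
    print(fill_and_verify(os.path.join(d, "fill.bin"), 3 * (1 << 20) + 123))  # True
```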
  9. If that's 100GB (gigabytes) a day then you'd only get about another 3TB done by the deadline (100gb - gigabits - would be much worse), so unless you can obtain your own API key to finish the other 4TB post-deadline (and hopefully Google doesn't do anything to break the API during that time), that won't be enough to finish before May 15th. So with no way to discern empty chunks I'd second Christopher's recommendation to instead begin manually downloading the cloud drive's folder now (I'd also be curious to know what download rate you get for that).
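The arithmetic behind that estimate, spelled out (a quick sketch; the ~30-day window is an assumption based on the May 15th deadline, and decimal units are used, so 1 TB = 1000 GB):

```python
# Assumed figures from the thread: ~30 days to the deadline, 100 GB per day.
days_left = 30
rate_gb_per_day = 100

transferred_tb = days_left * rate_gb_per_day / 1000
print(transferred_tb)  # 3.0 -> only ~3 TB of the remaining 7 TB by the deadline

# If "100gb" actually meant gigabits rather than gigabytes, it's 8x worse:
print(transferred_tb / 8)  # 0.375 TB
```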
  10. I'm not aware of a way to manually discern+delete "empty" CD chunk files. @Christopher (Drashna) is that possible without compromising the ability to continue using a (read-only) cloud drive? Would it prevent later converting the cloud drive to local storage? I take it there's something (download speeds? a quick calc suggests 7TB over 30 days would need an average of 22 Mbps, but that's not including overhead) preventing you from finishing copying the remaining uncopied 7TB on the cloud drive to local storage before the May 15th deadline?
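The quick calc quoted above (7 TB over 30 days), spelled out (decimal units, overhead not included):

```python
# Required average rate to move 7 TB in 30 days.
data_bits = 7e12 * 8         # 7 TB expressed as bits
seconds = 30 * 24 * 60 * 60  # the 30-day window
mbps = data_bits / seconds / 1e6
print(round(mbps, 1))  # ~21.6, matching the "22 Mbps" ballpark
```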
  11. Glad to hear. Was there any finding of what caused it?
  12. DrivePool does not use encryption (that's CloudDrive). However, in the event that you have used Windows Bitlocker to encrypt the physical drives on which your pool is stored then you will need to ensure you have those key(s) saved (which Bitlocker would have prompted you to do during the initial encryption process).
  13. You can use the command dpcmd get-duplication object as an administrator, on the machine on which DrivePool is installed, where object is the full path of a folder or file in the pool (e.g. "p:\testfolder" or "p:\testfolder\testfile.txt") to check that the actual duplication matches* the expected duplication of the path, and it will also return which poolparts contain that object. *Note that due to the way DrivePool works, the actual duplication may exceed the expected duplication for folder objects; this is normal.
  14. Explorer relies on the underlying hardware's error-checking capability to indicate any problems when copying or moving a file. When dealing with a system that is rebooting by itself without explanation, I wouldn't trust that - let alone for anything critical or irreplaceable. You can use copier programs such as Teracopy or Fastcopy that have built-in checksum verification as an option when copying to a new target, or use syncing programs such as FreeFileSync or TreeComp which allow you to compare the contents of two existing drives/folders. I would also suggest checking and resolving the disk identifiers issue before proceeding.
  15. Regarding the reboots, given you strongly suspect it's DrivePool-related I would recommend opening a support ticket with StableBit. Though with any use of Scanner increasing the reboot frequency I'd suspect some kind of load/conflict trigger; I'd be giving your new CPU dubious looks too. Evacuating follows the same rules as placing files on the drive - for example it will default to evacuating to the drive(s) with the most free space at the time - so it should be possible to force it to use a specific drive via the Ordered File Placement balancer, but I'd want to test that first. However, if your machine often reboots whenever you're using the pool then I'm not sure how you plan to successfully evacuate a drive via DrivePool's Remove function; you may have to do so manually:

      1. Add the new drive.
      2. Stop the DrivePool service.
      3. Copy your content from the hidden poolpart folder on the old drive to the hidden poolpart folder on the new drive. Don't include system folders (e.g. recycle, sysvol, covefs). I'd also suggest verifying the copy afterwards (e.g. using FreeFileSync, md5deep, et cetera) if your copy method doesn't include its own post-copy verification.
      4. Format the old drive.
      5. Start the DrivePool service.
      6. Amend any balancers or placement rules that specify the old drive.
      7. Remove the "missing" old drive.
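For the post-copy verification step in the manual procedure above, a simplified stand-in for tools like FreeFileSync or md5deep might look like this (illustrative only; real poolpart paths would replace the throwaway temp folders):

```python
import hashlib
import os
import tempfile

def tree_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def trees_match(src, dst):
    """True when both trees contain identical files with identical content."""
    return tree_hashes(src) == tree_hashes(dst)

# Demo with two small throwaway trees:
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    for root in (src, dst):
        with open(os.path.join(root, "a.txt"), "w") as f:
            f.write("same content")
    print(trees_match(src, dst))  # True
    with open(os.path.join(dst, "a.txt"), "w") as f:
        f.write("bit rot!")
    print(trees_match(src, dst))  # False
```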