All Activity


  1. Today
  2. I think that's not currently possible. It does sound useful, though, so you might wish to submit a feature request (e.g. "when setting per-folder duplication, can we please also be able to set whether duplication for a folder is real-time or scheduled") to see if it's feasible.
  3. It's possible a file or folder on that drive has corrupted NTFS permissions (see this thread - it includes fixes). When you used the drive usage limiter to evacuate the drive, were there any of your files remaining inside that drive's poolpart? Because if it at least cleared out all of your files from the drive, you could then manually "remove" the drive by renaming its poolpart and then Removing the now-"missing" drive.
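     For reference, the usual manual fix for broken NTFS permissions is to take ownership and reset the ACLs from an elevated prompt. A minimal sketch - the drive letter and PoolPart name here are placeholders for the actual hidden poolpart folder on your problem drive:

         rem -- run elevated; replace the path with your drive's actual PoolPart folder
         takeown /F "D:\PoolPart.xxxxxxxx" /R /D Y
         icacls "D:\PoolPart.xxxxxxxx" /reset /T /C

     (/R and /T recurse into subfolders; /reset discards the corrupted ACLs in favour of inherited defaults; /C carries on past individual errors.)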
  4. I've been using DrivePool for many years, and I've re-arranged my pool many times. But I'm struggling to remove one of my disks: I keep getting an "error removing drive" message, and when clicking "details" it says "access is denied". I've tried all kinds of things, like checking "force-close any files" and using the drive usage limiter to set the problem drive to not have anything stored on it, but nothing works! One thing I have noticed is that the error always pops up at the same point. If I do a balance, so the drive fills up again, then try to remove it, it'll get so far before throwing up the error. At that point, trying again will throw up the error right away. Looking under "disk activity" it seems to be accessing 2 desktop.ini files, and I've tried removing those files, but this didn't help.
  5. fleggett1

    Drive question.

     I went into the power settings and disabled USB selective suspend. I also made sure the system wouldn't turn off any disks. I just did this, so it'll take a little time to see if it has any results.

     I actually had no idea the Sabrent had flashable firmware. I downloaded and applied the one linked in that thread. Again, I just did this a few hours ago, so results (if any) may take a day or two to manifest. Thanks for the link! However, after flashing, I did try another chkdsk /b on the Exos, but it did the same thing and got jammed at 5%. I'm doing a "clean all" on it now, but if I'm reading things right, that'll take a while, so I'll leave it overnight.

     I'm beginning to think the PSU for the Sabrent might be underpowered, as others in that forum also complained about disk drops, especially when it came to SSDs. I have all 10 bays populated with various Seagate and WD spinners, which could be causing issues. If the flash and power settings don't improve things, I'm thinking of ditching it for a 6-bay Terramaster: https://www.amazon.com/gp/product/B0BZHSK29B/ref=ox_sc_act_title_1?smid=A307CH216CTGMP&psc=1 I don't like the fact that the Terra seems to use old-school drive sleds, but I'll gladly accept that hassle if it means I can get the pool back to 100% (I can settle for only six disks in the meantime). I might even take apart the Sabrent to see if the PSU can be upgraded (probably not, but it's worth looking into). More to come!
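     (For anyone wanting to script those same two changes rather than digging through the GUI, an elevated prompt works too. A sketch, assuming mains power - the two GUIDs are the standard identifiers for the USB settings subgroup and the USB selective suspend setting:

         rem -- "turn off hard disk after" -> never
         powercfg /change disk-timeout-ac 0
         rem -- USB selective suspend -> disabled for the active power plan
         powercfg /setacvalueindex scheme_current 2a737441-1930-4402-8d77-b2bebba308a3 48e6b7a6-50f5-4782-a5d4-53bb8f07e226 0
         powercfg /setactive scheme_current

     Repeat with /setdcvalueindex and disk-timeout-dc if the machine ever runs on battery.)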
  6. Yesterday
  7. I have a DrivePool with 10 drives set up as drive Y:. Everything in Y: is set to 2x real-time duplication. Is there any way, by using the balancing settings, hierarchical pools etc., to set only one specific folder (and its sub-folders) in Y: to NOT use real-time duplication? I don't want to make a new drive letter if I don't have to. The problem I'm having is with drive image backup software such as Hasleo, Macrium Reflect, etc. They often make HUGE files (100GB and up) and I'm finding that I often get messages such as this:

     ==================================================================================
     Duplication Warnings
     There were problems checking, duplicating or cleaning up one or more files on the pool. You can have DrivePool automatically delete the older file part in order to resolve this conflict.
     One or more duplicated files have mismatching file parts. One reason why this can happen is if something changed the duplicated file's parts directly on the pooled disk and not through the pool. This can also happen if a disk was altered while it was missing from the pool. You can resolve the conflicts manually by deleting the incorrect file part from the pooled disks.
     Files:
     \Pool2\Backup\SYSTEM_Backups\JR4_backups\JR4_RDI\JR4_C_NVME__D_SSD_20240409220013_224_4.rdr File parts different.
     \Pool2\Backup\SYSTEM_Backups\JR4_backups\JR4_RDI\JR4_C_NVME__D_SSD_20240409220013_224_2.rdr File parts different.
     \Pool2\Backup\SYSTEM_Backups\JR4_backups\JR4_RDI\JR4_C_NVME__D_SSD_20240409220013_224_3.rdr File parts different.
     =====================================================================================

     Since there's no easy way to know what size the completed backup file is going to be, I figure it's best to let DrivePool wait until the entire large file is completed before duplication begins. Is there a simple way to accomplish this without setting up new drive letters, network file shares, etc.?
  8. I'm sure you figured it out already... From the images you posted, it just looks like a simple change is needed. The pool called ORICO BOX is fine as is; the one in the first image is not correct. You should have:

     A pool that has 12TB1 & 12TB2 with NO duplication set (let's give it drive letter W:).
     A pool called ORICO BOX with NO duplication set, containing the assorted drives (let's call it drive letter X:).

     Now, drive W: essentially has 24TB of storage, since anything written to W: will only be saved to ONE of the two drives. You can set the balancing plugin to make them fill up equally with new data. Drive X: essentially has 28TB of storage, since anything written to X: will only be saved to ONE of the five drives.

     At this point, you make ANOTHER new pool; let's call it Z:. In it, put DrivePool W: and DrivePool X:, and set the duplication settings to 2x for the entire pool. Remember, you are only setting DrivePool Z: to 2x duplication; no other DrivePools need to be changed.

     What this should do (if I didn't make a dumb typo): any file written to drive Z: will have one copy stored on either 12TB1 OR 12TB2, AND a duplicate copy will be stored on ONE of the five Orico Box drives. You must read & write your files on drive Z: to make this magic happen. Draw it out as a flowchart on paper and it is much easier to visualise.
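     A text version of that flowchart (drive letters as in the example above; the five Orico drives are whatever yours are called):

         Z:  (2x duplication set here - read & write all files via Z:)
         ├── W:  (no duplication)
         │     ├── 12TB1
         │     └── 12TB2
         └── X:  "ORICO BOX" (no duplication)
               ├── drive 1
               ├── drive 2
               ├── drive 3
               ├── drive 4
               └── drive 5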
  9. Shane

    Drive question.

     Pseudorandom thought: could it be something to do with USB power management? E.g. something going into an idle state while you're AFK, thus dropping out the Sabrent momentarily? Also, it looks like Sabrent has a support forum; perhaps you could contact them there? There's apparently a 01-2024 firmware update available for that model that gets linked by Sabrent staff in a thread involving random disconnects, but it isn't listed on the main Sabrent site (that I can find, anyway).
  10. fleggett1

    Drive question.

     I've resisted saying this, but I think there's a problem with the Sabrent. Which, if true, really screws me.

     I'm beginning to suspect the Sabrent because I tried long formatting a brand-new 18 TB Exos and it also failed. I started the process in Disk Management, made sure that the percentage was iterating, and went to bed. Got up and nothing was happening, and the disk was still showing "raw". So, at some point, the format failed without even generating an error message. I'll also periodically wake up to a disk or two having randomly dropped out of the pool. I'll reboot the machine and those same disks will magically (re)appear.

     I'm currently doing a chkdsk /b on the new Exos after doing a quick format in order to assign it a drive letter (which worked). It started out fine, but is now running at less than a snail's pace, with chkdsk reporting that it won't complete for another 125 hours. Scratch that, now chkdsk is saying 130 hours, and it has stubbornly stayed at 5% for the past two hours. I do have another machine I can try long formats on and will do so, but I'm not sure what that'll prove at this point. I've also tried consulting Event Viewer, but so much data gets dumped into it that I can't really pinpoint anything (maybe that's just me being an idiot).

     I was really, REALLY relying on something like the Sabrent, since it seemed to be a jack-of-all-trades solution to having a bunch of disks without resorting to a server-style case or expensive NAS. If anyone has any suggestions as to a similar device, I'd love to hear them.
  11. Last week
  12. Also, the system Event Viewer can give you an indication for why it failed. In fact, with support tickets, it's one of my first go-tos for troubleshooting weird issues. If you're seeing a lot of disk errors or the like in the event viewer, it can indicate an issue. Also, the burst test in StableBit Scanner can help identify communication issues with the drive.
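     If the sheer volume in Event Viewer is the problem, a PowerShell filter can cut it down to just recent storage-related warnings and errors. A sketch - the provider-name pattern covers common disk/controller/filesystem sources, but yours may differ:

         # Level 1,2,3 = Critical, Error, Warning in the System log
         Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 1,2,3 } -MaxEvents 500 |
             Where-Object { $_.ProviderName -match 'disk|stor|Ntfs' } |
             Format-Table TimeCreated, ProviderName, Id, Message -AutoSize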
  13. That is the pool drive. The exactly-2048GB size is the giveaway, and the "COVECUBECoveFsDisk_____" is confirmation (that's the driver name). I'm not sure why that happened, but uninstalling and reinstalling StableBit DrivePool can also fix this. The drive is always reported as that size in that section of Disk Management, but elsewhere it shows the correct size.
  14. I appreciate this thread. I had 4TB unusable for duplication and was looking for an answer, and reading through the thread you guys answered about 4 more of my questions. I guess I'm reviving the conversation by replying. But thanks.
  15. Shane

    Drive question.

     When you say chkdsk, was that a basic chkdsk or a chkdsk /b (performs bad sector testing)? I think I'd try - on a different machine - cleaning it (using the CLEAN ALL command of command-line DISKPART), reinitialising it (Disk Management) and long formatting it (using command-line FORMAT) to see whether the problem is the drive or the original machine. If I didn't have a different machine, I'd still try, just in case CLEAN ALL got rid of something screwy in the existing partition structure. I'd then run disk-filltest (a third-party util) or similar on it to see if that shows any issues. If it passes both, I'd call it good. If it can only be quick-formatted, but passes disk-filltest, I'd still call it okay for anything I didn't care strongly about (because backups, duplication, apathy, whatever). If it fails both, it's RMA time (or just binning it). YMMV, IMO, etc. Hope this helps!
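     A sketch of that sequence, assuming the drive shows up as disk 5 and gets letter X: - verify the number with LIST DISK first, because CLEAN ALL is destructive and zero-fills the whole disk (hours on a big drive):

         rem -- save the next five lines as wipe.txt, then run: diskpart /s wipe.txt
         select disk 5
         clean all
         convert gpt
         create partition primary
         assign letter=X

         rem -- then the long format (no /Q switch, so it scans for bad sectors); answer Y at the prompt
         format X: /FS:NTFS /V:NewExos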
  16. fleggett1

    Drive question.

    What do you guys think about a drive that fails a long format (for unknown reasons), but passes all of Scanner's tests, isn't throwing out SMART errors, and comes up fine when running chkdsk?
  17. I recently replaced two 4TB drives that were starting to fail with two 8TB drives. I replaced them one at a time over a span of 4 days. The first drive replacement was flawless. With the second drive, I found an unallocated 2048.00GB Disk 6 (screenshot 1). When looking at it via DrivePool's interface it shows up as a COVECUBEcoveFsDisk__ (picture 2). Windows wants to initialize it. I don't want to lose data, but I'm not entirely sure what's going on. Any insight or instructions on how to fix it? Thank you.

     *Update: My DrivePool is only 21.8TB large, so I think it is missing some space. Forgive the simple math: 8+8+4+4 is about 24TB and I'm showing a DrivePool of 21.8TB. That's about what my unallocated space is, give or take.

     *New update: After a few (more than 2) Windows updates and stuff, and letting it balance and duplicate, it resolved itself.
  18. Sorry, I didn't mention: Upload verification was disabled. I opened a ticket.
  19. Earlier
  20. If that's 100GB (gigabytes) a day then you'd only get about another 3TB done by the deadline (100Gb - gigabits - would be much worse), so unless you can obtain your own API key to finish the other 4TB post-deadline (and hopefully Google doesn't do anything to break the API during that time), that won't be enough to finish before May 15th. So with no way to discern empty chunks, I'd second Christopher's recommendation to instead begin manually downloading the cloud drive's folder now (I'd also be curious to know what download rate you get for that).
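     Rough numbers behind that, assuming about 30 days to the deadline:

         100 GB/day x 30 days = ~3 TB more copied -> ~4 TB of the 7 TB left stranded
         100 Gb (gigabits)/day = 12.5 GB/day x 30 days = ~0.4 TB, i.e. much worse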
  21. I have been using robocopy with move and overwrite. It 'deletes' the file from the cloud drive: it's not shown any longer, and the 'space' is shown to be free. So the mapping is being done to some degree, even on a read-only path. If there were some way to scan the Google Drive path and 'blank' (as in, bad-map) chunks locally in the BAM map, forcing deletion of the relevant files from the Google path, that would do wonders. GlassWire reads that I'm getting about 2mb down. This is nowhere near the maximum, but it looks like I get about 100gb a day.
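     For anyone trying the same thing, the move pattern being described is roughly this - the source and destination paths are placeholders for the cloud drive mount and the local landing folder:

         rem -- /MOVE deletes each file from the source after a successful copy; /E includes (empty) subfolders
         robocopy "G:\CloudData" "D:\LocalCopy" /MOVE /E /R:2 /W:10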
  22. That's a hard question to answer. SSDs tend to be fast, including in how they fail. But given this, I suspect that you might have some time. However, back up the drive, for sure! Even if you can't restore/clone the drive, you'd be able to recover settings and such from it. But ideally, you could restore from it. And cloning the drive should be fine; most everything is SSD-aware these days, so it shouldn't be an issue. But also, sometimes a clean install is nice.
  23. Would you mind opening a ticket about this? https://stablebit.com/contact/ This is definitely diving into the more technical aspects of the software; I'm not as comfortable with how well I understand it, and I'd prefer to point Alex, the developer, to this discussion directly. However, I think that some of the "twice" is part of the upload verification process, which can be changed/disabled. Also, the file system has duplicate blocks enabled for the storage, for extra redundancy in case of provider issues (*cough* Google Drive *cough*). But it also sounds like this may not be related to that.
  24. This is mainly an informational post, concerning Windows 10. I have 13 drives pooled, and I have every power management function set so as to not allow Windows to control power or in any way shut down the drives or anything else. I do not allow anything on my server to sleep either.

     I received a security update from Windows about 5 days ago. After the update I began to receive daily notices that my drives were disconnected, and shortly after any of those notices (within 2 minutes), a notice that all drives had been reconnected. There were never any errors resulting from whatever triggered the notices. I decided to check, and I found that one of my USB controllers had its power control status changed. I changed it back to not allowing Windows to control its power, and I have not received any notices since. I do not know for sure, but I am 99% sure that the Windows update toggled that one controller's power control status to allow Windows to turn it off when not being used. I cannot be 100% sure that I have always had it turned off but, until the update, I received none of the notices I started receiving after it.

     I suggest, if anyone starts receiving weird notices about several drives becoming lost from the pool, that you check the power management status of your drives. Sometimes Windows updates are just not able to resist changing things. They also introduce gremlins. You just have to be careful to not feed them after midnight, and under no circumstances should you get an infested computer wet.
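     If you want to audit that checkbox ("Allow the computer to turn off this device to save power") quickly after future updates, it's also exposed through WMI. A sketch, run from elevated PowerShell:

         # list devices where Windows is currently allowed to power them down
         Get-CimInstance -Namespace root\wmi -ClassName MSPower_DeviceEnable |
             Where-Object { $_.Enable } |
             Select-Object InstanceName

         # to clear it for one device, flip Enable and write the instance back, e.g.:
         # $dev = Get-CimInstance -Namespace root\wmi -ClassName MSPower_DeviceEnable |
         #     Where-Object { $_.InstanceName -like 'USB\VID_*' } | Select-Object -First 1
         # $dev.Enable = $false
         # $dev | Set-CimInstance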
  25. Make sure that you're on the latest release version. There are some changes to better handle when the provider is read only (should be at least version 1.2.5.1670). As for the empty chunks, there isn't really a way to discern that, unfortunately. If you have the space, the best bet is to download the entire directory from Google Drive, and convert the data/drive, and then move the data out.
  26. I'm not aware of a way to manually discern+delete "empty" CD chunk files. @Christopher (Drashna) is that possible without compromising the ability to continue using a (read-only) cloud drive? Would it prevent later converting the cloud drive to local storage? I take it there's something (download speeds? a quick calc suggests 7TB over 30 days would need an average of 22 Mbps, but that's not including overhead) preventing you from finishing copying the remaining uncopied 7TB on the cloud drive to local storage before the May 15th deadline?
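     (The working for that 22 Mbps figure:

         7 TB ≈ 7,000,000,000,000 bytes x 8 = 56,000,000,000,000 bits
         30 days = 30 x 86,400 s = 2,592,000 s
         56e12 bits / 2.592e6 s ≈ 21.6 Mbps -> ~22 Mbps sustained, before overhead)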
  27. I need some assistance, fast. About a year ago, Google changed their storage policy. I had exceeded what was then the unlimited storage by about 12TB. I immediately started evacuating it, but I only got it down 10 gig from 16 before Google suspended me with read-only mode: it can delete, but it cannot write. I have continued to download the drive. It is now down to about 7TB so far as the pool drive is concerned, but unless DrivePool has a mode to 'bad map' and delete the sector files from the pool drive, I am still at 12TB and way over its capacity. I do appreciate that CloudDrive is discontinuing Google Drive, and I still have some time, but I need to delete as much from the Google path as possible, and I can't do that without being able to write back to it, which I cannot do. A 'delete bad/read-only but empty' option would be a big help.