All Activity

  1. Today
  2. Shane

    Drive question.

    If you're wanting to keep things as simple as possible and you're going to long-format the old drives anyway, then I'd suggest trusting DrivePool to handle the job:

    1. Create the new pool.
    2. Copy your content from the old pool to the new pool.
    3. Remove the old pool. Quickest way #1: stop the service, rename the poolpart.* folder on each drive to oldpart.*, start the service, then remove the "missing" drives from the pool until all are gone. Quickest way #2: eject the old drives, remove the "missing" drives until all are gone, then format the old drives on another machine.

    This has the side benefits of: #A, not having to worry about manually fiddling with pool parts and/or resetting permissions, since unless you're deliberately copying across the permissions from the old pool your content should inherit them from the new pool; #B, you can optionally run lennert's treecomp or another comparison tool of choice after step 2 to ensure it got everything, bit-for-bit if desired; and #C, giving the new Terra a good workout now rather than later. And at the end you'll have one pool.

    P.S. If you've got the drive capacity now, consider turning on real-time x2 duplication for the new pool. YMMV, but even though I've got nightly backups, knowing that if any drive in my pools just decides to up and die out of the blue I still won't lose even one day's work gives me extra peace of mind.
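    For the optional bit-for-bit comparison mentioned above, if you'd rather script it than use treecomp, here's a minimal sketch (the drive letters in the commented example are hypothetical; this walks the old pool and hashes every file against its counterpart in the new pool):

```python
import hashlib
from pathlib import Path

def file_hash(path, chunk=1 << 20):
    """SHA-256 of a file, read in 1 MiB chunks to keep memory flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def compare_trees(old_root, new_root):
    """Yield (path, problem) for files missing or differing between pools."""
    old_root, new_root = Path(old_root), Path(new_root)
    for old_file in old_root.rglob("*"):
        if not old_file.is_file():
            continue
        new_file = new_root / old_file.relative_to(old_root)
        if not new_file.is_file():
            yield (old_file, "missing")
        elif file_hash(old_file) != file_hash(new_file):
            yield (old_file, "differs")

# Example usage (hypothetical drive letters for old and new pools):
# for path, problem in compare_trees("D:\\", "E:\\"):
#     print(problem, path)
```

    No output means the copy matched bit-for-bit; this is just a sanity-check sketch, not a replacement for a proper comparison tool.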
  3. As a rough estimate, 100TB at 350 Mbps would take at least 27 days (give or take) or almost twice as long as the days remaining until May 15th, so to have any chance of completing that successfully you'd definitely need to switch over to using your own API key. Note that even with your own key there's still a risk Google breaks something on their end (hopefully extremely unlikely but we can't rule it out) and you end up having to download the entire drive manually, so it's up to you to consider whether you should be safely detaching your clouddrive now so that you can begin the manual download.
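    The back-of-envelope arithmetic above works out like this (a sketch assuming a sustained 350 Mbps with no slowdowns, retries, or protocol overhead):

```python
# Rough transfer-time estimate: 100 TB downloaded at a sustained 350 Mbps.
size_bits = 100e12 * 8        # 100 TB (decimal terabytes) in bits
rate_bps = 350e6              # 350 megabits per second
seconds = size_bits / rate_bps
days = seconds / (60 * 60 * 24)
print(f"~{days:.0f} days")    # ~26 days at best, before any overhead
```

    Real-world throughput rarely stays at the peak rate, so the "at least 27 days, give or take" figure is the optimistic end of the range.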
  4. Yesterday
  5. Thanks, I just hope I can get as much as I can downloaded between now and the 15th of May, then we'll see how it works overall.
  6. From the above conversation and the FAQ, you should be able to continue to use Google Drive by enabling the experimental/deprecated provider and setting your own Google API key in the "ProviderSettings.json" file. However, they will not actively update the software to keep the Google Drive provider working, as Google could change their API at any time.
  7. OMG! I've just spotted this, and I've got over 100 TB on Google Drive via StableBit CloudDrive! What is the quickest way I can get my data downloaded, please? I'm getting approx 350 Mbps download from StableBit on my 1 Gb internet service. Will Google Drive fully stop on the 15th of May, or will I still be able to access it until Google Drive does a future update that breaks StableBit access? Thank you
  8. fleggett1

    Drive question.

    The drives long-formatted just fine. That's one hoop jumped. Now I can start rebuilding things, but I need some pointers.

    Three of the drives in the Terra are part of the (old) pool and are being recognized as such. I want to create a new pool using the three Exos as the base, but I don't want DP to have to contend with two pools going simultaneously. I know the software is designed to support multiple pools, but I want to keep things as simple as possible. Plus, in the end, I only want the one pool. Once all the data is moved to the Exos, I'll be long-formatting the old drives to ensure they're still good.

    How can I eliminate the previous pool without affecting the files on the three drives? Do I just remove them one by one, presumably in archive mode, until they've all been processed? And is there anything else I should do afterwards, like taking ownership of the old pool files and/or resetting permissions? Thanks in advance!
  9. Last week
  10. fleggett1

    Drive question.

    Rossmann said they couldn't do anything with the drive, which REALLY surprised me, so they referred me to Gillware Data Recovery. I need to talk to them first before sending the drive in to make sure the attempt will be a reasonable sum, which I'll try to do tomorrow. I guess the moral of the story is not to futz around with diskpart unless you know for damn sure what you're doing.

    I received the Terramaster. It looks like a typical grey internal drive cage, just a little wider and without any mounting points. It uses screw-in drive sleds, unlike the Sabrent, which is a PITA, but c'est la vie. However, it also came with some paper-like inserts that go in between the drives and the bottom of the sleds, which I've never seen before. The boxed instructions don't say why they're required and the online product guide doesn't even mention them, so I can only guess at their purpose (electrical insulators?). Bizarre. It also doesn't have an internal PSU as such, but came with a laptop-style 12V barrel-plug brick power converter. Unlike the Sabrent, the drives sit vertically, which makes me a bit uncomfortable (I might lay it on its side). There is only one USB-C connector.

    There has been some talk on the Terramaster forums about people needing a powered USB hub with this unit. I don't know why this would be required when directly attached to the USB-C port on a motherboard, but maybe that's been my problem all along. If so, I'll consider slitting my wrists later.

    I'm in the process of long-formatting three 18 TB Seagate Exos simultaneously. If that passes, I'll continue trying to rebuild the pool. This is my last attempt at doing so. If the formats fail, I'm giving up, at least for a while. Maybe in the far-flung future I'll get a proper tower with a 10,000 watt PSU and stuff it with drives, but I really, REALLY hope the Terramaster comes through.

    I have confirmed that Windows recognizes all six drives, but the Sabrent did the same until I started encountering those drive dropouts, so longevity will determine the winner. More to come!
  11. I tried copying a large (200+ GB) system image file from a Win 10 Pro PC to the aforementioned drivepool with about 7 TB free space, and this worked OK. I suspect it's an issue with Win 11.
  12. Windows bases its initial decision on whether the total size of the files/folders being copied is less than the pool's free space as reported by DrivePool. So with 465 GB of files going into 7 TB of free space, Windows should at least start the operation. That said, if you have real-time duplication x3 enabled, to finish copying a 465GB folder into the pool you'd need 3 times 465GB of free space available in the pool, and each and every file involved would need to fit within the remaining free space available in at least three drives within the pool (even if not necessarily the same three drives for different files). E.g. if one of the files was 300GB and less than three drives in the pool had more than 300GB free each, then it wouldn't be able to complete the operation. If none of the above is applicable - e.g. you mention trying to copy 465GB directly to an individual drive with 3TB free and it didn't work - then something weird is going on.
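    The x3 constraint described above can be sketched as a quick feasibility check (a deliberately simplified model, assuming greedy placement onto the emptiest drives; it ignores DrivePool's actual balancing rules):

```python
def fits_with_duplication(file_sizes, drive_free, x=3):
    """Check whether each file can land on x distinct drives.
    Greedy sketch: every file is placed on the x drives with the
    most free space. Simplification of DrivePool's real placement."""
    free = list(drive_free)
    for size in sorted(file_sizes, reverse=True):
        free.sort(reverse=True)
        if len(free) < x or free[x - 1] < size:
            return False  # fewer than x drives can hold this file
        for i in range(x):
            free[i] -= size
    return True

# A 300 GB file with only two drives having >300 GB free: x3 fails,
# even though total free space (1100 GB) exceeds 3 x 300 GB.
print(fits_with_duplication([300], [400, 350, 250, 100]))  # False
```

    This mirrors the example in the post: total free space can be ample while per-drive free space still blocks an x3 copy.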
  13. I have 14 mixed-size HDDs in my drivepool on a Windows 10 Pro x64 PC, giving a total drivepool size of about 24TB with about 7TB free space. I use x3 duplication. When I tried to copy approx 465GB to the drivepool over my LAN from another Win 11 Pro PC, Windows 11 reported insufficient free space! I also tried copying the same folder to a 4TB HDD with about 3TB free space that is a member of the same drivepool, with the same error about insufficient free space! All 14 HDDs are reported as 'healthy' by StableBit Scanner. Could this be a drivepool issue or a Win 11 issue? Any suggestions would be appreciated.
  14. fleggett1

    Drive question.

    I haven't heard from Rossmann yet. If I don't get a reply by Wednesday, I'll voice call them. I did order the 6-bay Terramaster, which should be here tomorrow. If/when I'm satisfied it's running as advertised, I'll rebuild the pool and copy over what I can. Assuming everything goes smoothly (har har), I'll reinstall everything and restore from config backups, which I've been holding-off on doing until I can figure things out. I really, REALLY hope the Terramaster "just works". Since it only accepts six drives, the power demand should be a lot less than the Sabrent. I'd still like to disassemble the Sabrent to see what sort of PSU it's using. I'm 99.9% sure it's something custom, but you never know until you actually eyeball things. I'll report back in a few days. Wish me luck.
  15. Earlier
  16. It means Windows at least thinks that they're ok; running the fixes would ensure that they actually are ok. Stopped as in completed without error but didn't move anything? Might need to adjust the plugin's priority and/or the Balancing settings so that the limiter correctly applies?
  17. Shane

    Drive question.

    It's an awful feeling to have when it happens (been there). Best wishes, I hope you get good news back from the recovery request.
  18. fleggett1

    Drive question.

    I think I'm gonna give up on having a pool. Maybe a computer.

    I was trying to clear that 18 TB Exos out using diskpart. This Exos has the label of ST1800NM. Another 20 TB Exos that I had bought a few months ago has the label ST2000NM. I was tired, bleary-eyed, more than a little frustrated with all these problems, wasn't thinking 100% straight, and selected the ST2000NM in diskpart, and cleaned it. Problem is this drive had gigabytes of data that was critical to the pool. GIGABYTES. I still can't believe I made such a simple, rookie, and yet devastating mistake. My God.

    I don't know if any of the file table can be salvaged, as I just did a simple "clean" and not a "clean all", but I've got a recovery request in to Rossmann Repair Group to see if they can do anything with it. I know there are some tools in the wild that I could probably run myself on the drive, but I don't trust myself to do much of anything atm.

    I should never have thrown money at an AM5 system. I also probably should've stayed away from the Sabrent and anything like it. Instead, I should've done what any sane person would've done and assembled a proven AM4 or Intel platform in a full tower and attached the drives directly to the motherboard. Yeah, the cable management would've been a nightmare, but literally anything would be better than this. My goal of staving off obsolescence as much as possible has instead kicked me in the teeth while I was already lying prone in a ditch.

    If, by some miracle, Rossmann is able to recover the data, I'm going to take a long and hard look at my PC building strategy. Hell, maybe I'll throw money at a prebuilt or one of those cute HDMI-enabled NUCs that're all the rage. I just know that I'm exhausted and am done with all of this, at least for the time being.
  19. Hmm. It's very kludgy, but I wonder:

      Pool X: enough drive(s) to handle the backup cycle workload. Duplication disabled.
      Pool Y: the other drives. Duplication (real-time) enabled.
      Pool Z: consists of pools X and Y.

      Folders in the File Placement section set so that all folders EXCEPT the backup folders go directly to Y. SSD Optimizer set to use pool X as "SSD" and pool Y as "Archive". Balancing settings to un-tick "File placement rules respect..." and tick "Balancing plug-ins respect..." and tick "Unless the drive is...".

      The result should in theory be that everything except the backup folders (and any files in the root folder of Z) gets duplicated in real time in Y, while backups land in X and only later get emptied into Y (whereupon they are duplicated)?
  20. I've checked the advanced permissions for the pool as well as the hidden poolpart folder on the problem drive, and they look just like the ones in your example images in your linked thread. So does this mean my NTFS permissions are OK? Nothing happened! I clicked "balance" and it ran for a few seconds then stopped! *EDIT* Seems it was a permission issue in the end! I followed the guide at https://wiki.covecube.com/StableBit_DrivePool_Q5510455 and this fixed it; I was able to remove the drive!
  21. Yea, I couldn't find any way other than making a new drivepool. Anyhow, I have an idea what's going on, not positive. The drives in that pool range in size from 6TB all the way down to 500GB. As the pool fills up, those small drives run out of space unless the balancing plugins are set up right. I found that I had most of the balancing plugins deactivated for ??? reason. I activated ORDERED FILE PLACEMENT, and put the larger drives at the top of the list. That should fix things. I think using the SSD optimizer is another way to solve the issue. Another option (that I don't like) is to set my backup software to limit backup files to a maximum size, maybe 25GB each.
  22. I think that's not currently possible. It does sound useful and you might wish to submit a feature request (e.g. perhaps "when setting per-folder duplication, can we please also be able to set whether duplication for a folder is real-time or scheduled") to see if it's feasible?
  23. It's possible a file or folder on that drive has corrupted NTFS permissions (see this thread - it includes fixes). When you used the drive usage limiter to evacuate the drive, were there any of your files remaining inside that drive's poolpart? Because if it at least cleared out all of your files from the drive, you could then manually "remove" the drive by renaming its poolpart and then Removing the now-"missing" drive.
  24. I've been using DrivePool for many years, and I've re-arranged my pool many times. But I'm struggling to remove one of my disks; I keep getting an "error removing drive" message, and when clicking "details" it says "access is denied". I've tried all kinds of things, like checking "force-close any files" and using the drive usage limiter and setting the problem drive to not have anything stored on it, but nothing works! One thing I have noticed is that the error always pops up at the same point. If I do a balance, so the drive fills up again, then try to remove it, it'll go so far before throwing up the error. At this point, trying again will throw up the error right away. Looking under "disk activity" it seems to be accessing 2 desktop.ini files, and I've tried removing these files, but this didn't help.
  25. fleggett1

    Drive question.

  25. I went into the power settings and disabled USB selective suspend. I also made sure the system wouldn't turn off any disks. I just did this, so it'll take a little time to see if it had any results.

      I actually had no idea the Sabrent had flashable firmware. I downloaded and applied the one linked in that thread. Again, I just did this a few hours ago, so results (if any) may take a day or two to manifest. Thanks for the link! However, after flashing, I did try to do another chkdsk /b on the Exos, but it did the same thing and got jammed at 5%. I'm doing a "clean all" on it now, but if I'm reading things right, that'll take a while, so I'll leave it overnight.

      I'm beginning to think the PSU for the Sabrent might be underpowered, as others in that forum also complained about disk drops, especially when it came to SSDs. I have all 10 bays populated with various Seagate and WD spinners, which could be causing issues. If the flash and power settings don't improve things, I'm thinking of ditching it for a 6-bay Terramaster: https://www.amazon.com/gp/product/B0BZHSK29B/ref=ox_sc_act_title_1?smid=A307CH216CTGMP&psc=1

      I don't like the fact that the Terra seems to use old-school drive sleds, but I'll gladly accept that hassle if it means I can get the pool back to 100% (I can settle for only six disks in the meantime). I might even take apart the Sabrent to see if the PSU can be upgraded (probably not, but it's worth looking into). More to come!
  26. I have a drivepool with 10 drives set up as drive Y:, and everything in Y: is set to 2X real-time duplication. Is there any way, by using the balancing settings, hierarchical pools etc., to set only one specific folder (and its sub-folders) in Y: to NOT use real-time duplication? I don't want to make a new drive letter if I don't have to. The problem I'm having is with drive image backup software such as Hasleo, Macrium Reflect, etc. They often make HUGE files (100GB and up) and I'm finding that I often get messages such as this:

      ==================================================================================
      Duplication Warnings
      There were problems checking, duplicating or cleaning up one or more files on the pool. You can have DrivePool automatically delete the older file part in order to resolve this conflict.
      One or more duplicated files have mismatching file parts. One reason why this can happen is if something changed the duplicated file's parts directly on the pooled disk and not through the pool. This can also happen if a disk was altered while it was missing from the pool. You can resolve the conflicts manually by deleting the incorrect file part from the pooled disks.
      Files:
      \Pool2\Backup\SYSTEM_Backups\JR4_backups\JR4_RDI\JR4_C_NVME__D_SSD_20240409220013_224_4.rdr File parts different.
      \Pool2\Backup\SYSTEM_Backups\JR4_backups\JR4_RDI\JR4_C_NVME__D_SSD_20240409220013_224_2.rdr File parts different.
      \Pool2\Backup\SYSTEM_Backups\JR4_backups\JR4_RDI\JR4_C_NVME__D_SSD_20240409220013_224_3.rdr File parts different.
      =====================================================================================

      Since there's no easy way to know what size the completed backup file is going to be, I figure it's best to let Drivepool wait until the entire large file is completed before duplication begins. Is there a simple way to accomplish this without setting up new drive letters, network file shares, etc.?
  27. I'm sure you figured it out already... From the images you posted, it just looks like a simple change is needed. The pool called ORICO BOX is fine as is. The one in the first image is not correct. You should have:

      A pool that has 12TB1 & 12TB2 with NO duplication set (let's give it drive letter W:).
      A pool called ORICO BOX with NO duplication set, containing the assorted drives (let's call it drive letter X:).

      Now, drive W: essentially has 24TB of storage, since anything written to W: will only be saved to ONE of the two drives. You can set the balancing plugin to make them fill up equally with new data. Drive X: essentially has 28TB of storage, since anything written to X: will only be saved to ONE of the five drives.

      At this point, you make ANOTHER new pool; let's call it Z:. In it, put Drivepool W: and Drivepool X:. Set the duplication settings to 2X for the entire pool. Remember, you are only setting Drivepool Z: to 2X duplication; no other drivepools need to be changed.

      What this should do (if I didn't make a dumb typo): any file written to drive Z: will have one copy stored on either 12TB1 OR 12TB2, AND a duplicate copy will be stored on ONE of the five Orico Box drives. You must read & write your files on drive Z: to make this magic happen. Draw it out as a flowchart on paper and it is much easier to visualise.
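    The capacity arithmetic in the layout above can be sketched like this (a simplified model: with x2 duplication across two sub-pools, each sub-pool holds one copy of every file, so the smaller side is the cap; the individual ORICO drive sizes here are hypothetical and only need to sum to 28TB):

```python
def duplicated_capacity_tb(side_w, side_x):
    """Usable capacity of a 2X-duplicated pool built from sub-pools W and X.
    Each sub-pool stores exactly one copy of every file, so the smaller
    sub-pool is the bottleneck. Ignores per-drive file-size limits."""
    return min(sum(side_w), sum(side_x))

w = [12, 12]          # 12TB1 + 12TB2 -> 24TB, no internal duplication
x = [8, 6, 6, 4, 4]   # hypothetical sizes for the five ORICO BOX drives
print(duplicated_capacity_tb(w, x))  # 24 (TB usable, every file stored twice)
```

    So the combined 52TB of raw disk yields 24TB of fully duplicated storage; the extra 4TB on the X: side can never hold a second copy of anything because the W: side fills first.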
  28. Shane

    Drive question.

    Pseudorandom thought: could it be something to do with USB power management? E.g. something going into an idle state while you're AFK, thus dropping out the Sabrent momentarily? Also, it looks like Sabrent has a support forum; perhaps you could contact them there? There's apparently a 01-2024 firmware update available for that model that gets linked by Sabrent staff in a thread involving random disconnects, but it is not listed on the main Sabrent site (that I can find, anyway).
  29. fleggett1

    Drive question.

    I've resisted saying this, but I think there's a problem with the Sabrent. Which, if true, really screws me.

    I'm beginning to suspect the Sabrent because I tried long-formatting a brand-new 18 TB Exos and it also failed. I started the process in Disk Management, made sure that the percentage was iterating, and went to bed. Got up and nothing was happening and the disk was still showing "raw". So, at some point, the format failed without even generating an error message. I'll also periodically wake up to a disk or two having randomly dropped out of the pool. I'll reboot the machine and those same disks will magically (re)appear.

    I'm currently doing a chkdsk /b on the new Exos after doing a quick format in order to assign it a drive letter (which worked). It started out fine, but is now running at less than a snail's pace, with chkdsk reporting that it won't complete for another 125 hours. Scratch that, now chkdsk is saying 130 hours, and it has stubbornly stayed at 5% for the past two hours. I do have another machine I can try long formats on and will do so, but I'm not sure what that'll prove at this point. I've also tried consulting Event Viewer, but so much data gets dumped into it that I can't really pinpoint anything (maybe that's just me being an idiot).

    I was really, REALLY relying on something like the Sabrent since it seemed to be a Jack-of-all-trades solution to having a bunch of disks without resorting to a server-style case or an expensive NAS. If anyone has any suggestions for a similar device, I'd love to hear them.