
All Activity


  1. Today
  2. Hi Folks, I would like to move an entire cloud drive from my iDrive Paris region to one in San Jose. iDrive does provide a migration tool to move an entire S3 bucket from one region to another. I know that I have to quiesce/stop changes to the bucket during migration. I suppose that's easy enough to do just by detaching the cloud drive. But how can I then mount the cloud drive from its new location? Is it better to create a new cloud drive in a new bucket in San Jose, detach it, migrate the Paris bucket contents to the San Jose bucket, and (somehow) mount the cloud drive from metadata that now exists in San Jose? I'd much prefer to migrate rather than download from one region and re-upload to another, which I suspect will be much, much slower. Any help is much appreciated.
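For anyone attempting this, here is a minimal post-migration sanity check, assuming iDrive e2's S3-compatible API and the boto3 library; the endpoint URLs and bucket names below are placeholders, not real iDrive values. Before re-mounting the cloud drive from the new region, it compares the object inventory of the two buckets:

```python
# Hypothetical sketch: verify a migrated bucket matches the source before
# re-mounting the cloud drive. Endpoints and bucket names are placeholders.
import boto3

def list_objects(endpoint, bucket):
    """Return {key: (size, etag)} for every object in the bucket."""
    s3 = boto3.client("s3", endpoint_url=endpoint)
    objects = {}
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            objects[obj["Key"]] = (obj["Size"], obj["ETag"])
    return objects

src = list_objects("https://paris.example-endpoint.com", "clouddrive-src")
dst = list_objects("https://sanjose.example-endpoint.com", "clouddrive-dst")

missing = src.keys() - dst.keys()
# Note: ETags can legitimately differ for multipart uploads, so treat a
# mismatch as a prompt to investigate rather than proof of corruption.
mismatched = [k for k in src.keys() & dst.keys() if src[k] != dst[k]]
print(f"{len(missing)} missing, {len(mismatched)} mismatched of {len(src)} objects")
```

Credentials are picked up from the usual AWS config/environment; only if both counts come back zero would it be sensible to try attaching the drive from the new bucket.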
  3. Yesterday
  4. I'm glad to hear it!
  5. Last week
  6. did you ever figure this out? i am having the same problem.
  7. So, the plot thickens. I did a lengthy re-format of the drive and the bad sector cleared. I've seen this happen before when a single block gets marked as bad. Kind of like doing an old-school low level format. I actually had Scanner do a full surface re-analysis of the drive and it now reports as healthy. But when I added it back to the pool, it's still not being written to. Now this got real odd. 🤔
  8. @Christopher (Drashna) Any ideas?
  9. Tried that out. It's not evacuating it, but it's also not writing new data to it.
  10. If you manually put a test file into that disk's poolpart and do a Re-measure, does it evacuate the file?
  11. That did the trick! thank you.
  12. Hmm, so this is odd. I did what you suggested and removed the drive from scanning and marked it as unchecked. It no longer shows as Damaged in StableBit Scanner. However, I added it back to my pool and it refuses to write anything to it. 🤔
  13. Yeah, I think that could work! I'll give that a shot and see what happens. Thanks!
  14. Hmm. Scanner checks by disk, not by volume, so I suspect the old trick of splitting the drive into multiple partitions with the bad sectors left in unallocated space won't work:

      Disk: |<- volume 1 -><- unallocated space containing 8 bad sectors -><- volume 2 ->|

      The only other option I can think of is telling Scanner NOT to scan the surface of that disk automatically (in the Scanner GUI, click that disk, click "Disk settings" (it has the three green horizontal bars next to it), tick "Never scan surface automatically", and click Save), then expand the disk's entry so you can click Edit blocks and select "Mark all Unreadable blocks Unchecked". Scanner should then still alert on any new SMART errors, but it'll be up to you to remember to manually scan that disk from time to time.
  15. So I have a pile of drives in my pool and one of them is an older 4TB Red. Recently, 8 sectors totaling a whopping 4 KB of data were flagged as bad and the drive was evacuated. I reformatted the drive and did multiple re-checks of it to see if this was the start of a spreading problem, but it isn't, and I'm confident the drive is OK for the next while at least. However, if I re-add it to the pool, it won't balance to it because it reports as Damaged by Scanner. I'd like to ignore the Damaged warning on this drive for now unless it gets worse (similar to how you can tell Scanner to ignore SMART warnings), but I don't see a way to do that without telling the balancers to ignore all damaged drives, which I don't want to do. But I don't want to take this drive out of service over 4 KB of bad sectors that aren't getting worse. Is there any way I can have Scanner treat the drive as OK while still evacuating others that go Damaged? Thanks.
  16. Earlier
  17. Wouldn't work. For example, say our SSD drives are D and E and our HDD drives are F and G, so we pool D and E as S, and then S, F and G as T, and enable duplication on T. This effectively gets us D:\poolpart.SD and E:\poolpart.SE and S:\poolpart.TS (and thus D:\poolpart.SD\poolpart.TS and E:\poolpart.SE\poolpart.TS) and F:\poolpart.TF and G:\poolpart.TG - which means any data placed on S never goes on the HDDs (regardless of whether duplication is enabled).

      If you did it the other way (so the pool of HDDs is being added to the pool of SSDs) you'd at least get data placed on the SSD pool going on the HDDs sometimes, but you wouldn't be able to guarantee that every file would end up on both a HDD and a SSD unless you started manually controlling it via (I suspect) a quantity of placement rules at least equal to the quantity of SSDs - which rapidly defeats the purpose of automating things simply.

      FWIW, I use the volume labels of the drives to indicate their position and serial. E.g. a drive's label might be cardC-portP-bayB-diskD. That way, if I saw that all the drives whose labels began with card1 suddenly showed in the GUI as missing from my pool, I'd have good reason to suspect a problem with that particular card.
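To make that nesting concrete, here is a small sketch (assuming Python is available on the box; the drive letters are just the example ones from the post above) that walks a set of drive roots and prints any PoolPart folders it finds, indented by nesting depth:

```python
# A quick sketch (not part of DrivePool itself) to visualize nested
# PoolPart folders across a set of drive roots.
import os

def find_poolparts(root, depth=0):
    """Recursively print PoolPart.* folders, indented by nesting depth."""
    try:
        entries = list(os.scandir(root))
    except OSError:
        return  # drive letter not present or not readable
    for entry in entries:
        if entry.is_dir() and entry.name.lower().startswith("poolpart."):
            print("  " * depth + f"{root} -> {entry.name}")
            find_poolparts(entry.path, depth + 1)

for drive in ("D:\\", "E:\\", "F:\\", "G:\\"):
    find_poolparts(drive)
```

On the layout described above, D: and E: would each print a PoolPart.SD/SE line with a PoolPart.TS line nested one level deeper, while F: and G: would print a single top-level PoolPart.TF/TG line.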
  18. +1 for #1. It would be great to be able to specify a time limit for files to age out of the cache. Keeping them on the SSD for a few days would reduce reads to my HDDs by at least half, I would estimate - seems like something the spin-down crowd would really appreciate.
  19. I actually thought of another way to do this that would only take two pools. I'm thinking of doing something like this for an idea I have for my Nextcloud server. I'm not sure how viable this is in terms of how data replication takes place, but:

      - Create one pool and put all your SSDs into it, with no replication enabled.
      - Create another pool, put your HDDs and the SSD pool into it, and enable replication there.
      - Don't assign a drive letter to the pool that contains all the drives, so only the SSD pool is accessible at the UI level, meaning that is where you read and write.

      This at least would be perfectly viable if you had just one SSD and one HDD you want to replicate to (plus it gets around DrivePool showing the entire capacity of a given pool even if you're duplicating the data across it), although I suppose DrivePool's storage prioritization function would do this anyway. I'm pretty sure it would also work for JBOD'd SSDs, since the data is only replicated once it reaches the level with the hard drives in it: DrivePool should only be replicating data as it comes into the outer pool, the only place data is written and read is the SSD level, and replication would push the data back from the HDDs in the event of an SSD needing to be replaced.

      What I'm not sure about is whether DrivePool would try to write an entire inner pool's contents to a single storage unit when you're not replicating at the level you wrote to, or whether it splits up data on the pool you're not directly writing to just as well as if you were writing to it directly. In other words, can DrivePool "see past" a nested pool to the storage devices contained in it and distribute the data across the unpooled drives in the "lower" pool in a similarly organized way as in the unreplicated pool it's reading from?
  20. I have 3 machines, all on the same network. These are physical boxes, not VMs. Each can see the others in the drop-down when using StableBit Scanner. I have drives on an external USB controller - in this case, 8 disks in the external case - and I move the external USB box among the 3 machines. Will they, or can they, share the scan data, or will they each make their own histories and scan-data determinations separately? I am asking this because the same JBOD in the box gives different results depending on which machine is doing the scanning at the time. I suspect a bad eSATA port on one of the machines may be giving me false readings and flagging the disk as bad. I would really like to know if any data is being shared or can be shared. I read above the answer given for VMs and wanted to know if it applies to physical machines as well. Thank you, Valerie
  21. DO NOT TOUCH THE UNINITIALIZED DISKS. Reconnecting the enclosure may be a good idea, but I would highly recommend running data recovery on them (such as TestDisk, Recuva, etc.) - something that can recover the partition tables. Any other action on the drives may/will overwrite the partition tables and make things that much more difficult to recover.
  22. Something is wrong. I disconnected dock #2 with 3x drives (10TB, 10TB, 8TB); let DrivePool stabilize with just dock #1; removed and re-added the 2x SSD cache drives; reconnected dock #2 and the 3 drives, wiped them, reformatted, and added them back to DrivePool; reset the DrivePool settings; and restarted my Beelink mini-PC. The 1x 10TB and the 8TB drive are now showing as 'Non-Pooled', and all 3 drives in dock #2 have a phantom presence under Non-Pooled as well.
  23. If you haven't already, enable and open the advanced settings in StableBit Scanner. In the advanced settings, find the "Smart" section and check the "IgnoreChecksum" option. http://wiki.covecube.com/StableBit_Scanner_Advanced_Settings This will most likely fix the issue with the drives not showing the SMART data.
  24. I've been buying manufacturer recertified drives for a while now. Sounds like that isn't going to change, but is already getting more expensive. Aside from that, there really isn't a solution. Buy what you can, when you can. Double check the manufacturer's warranty period (and expect those to drop, as well).
  25. https://mashable.com/article/ai-hard-drive-hdd-shortages-western-digital-sold-out It's real. I just paid a nosebleed price for a 24TB drive, and I don't see many opportunities to buy more. It happened suddenly, over a time period of months. This is something new (excepting the floods). Ideas for extending drive service life are welcome. I fear we're in for a long drought unless the AI bubble bursts.
  26. StableBit Scanner (build 2.6.13.4088) is not showing me SMART data for one of my disk drives. It says "The on-disk SMART check is not accessible. Drive temperature and operating parameters are not available and will not be monitored". CrystalDiskInfo is able to report SMART data on this disk just fine. I recently swapped an old drive with this new one in my server's case. Before doing that, I did a forensic sector-by-sector copy with Macrium Reflect from the old drive (which was showing SMART failures in StableBit Scanner) so that I could drop the new drive seamlessly into my server (which has a StableBit DrivePool/SnapRAID setup). That all worked flawlessly - except for not being able to get SMART data for the new drive from Scanner. The drive model is WDC WD100EFGX-68CPLN0 - a WD Red Plus internal NAS HDD. In the UI, I can select to sort "By Controller", and the problematic drive is grouped under a Standard SATA AHCI Controller. The other three drives in this same group have no problem with their SMART data being read. Any help to fix this would be great.
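One way to narrow this down is to check whether Windows itself exposes failure-prediction data for that drive. A rough diagnostic sketch, assuming the third-party Python "wmi" package is installed (an assumption for illustration, not something Scanner uses), querying the standard MSStorageDriver_FailurePredictStatus WMI class:

```python
# Diagnostic sketch: does Windows expose SMART failure prediction at all?
# Requires: pip install wmi (third-party package; assumption for this example)
import wmi

w = wmi.WMI(namespace="root\\wmi")
try:
    for drive in w.MSStorageDriver_FailurePredictStatus():
        # PredictFailure is True if the drive itself predicts failure
        print(drive.InstanceName, "PredictFailure =", drive.PredictFailure)
except wmi.x_wmi as e:
    print("SMART not exposed via WMI:", e)
```

If the drive appears here but not in Scanner, the data is reachable at the driver level and the issue is likely in how Scanner is querying it (the IgnoreChecksum advanced setting mentioned in reply 23 above is one known fix); if it's absent here too, the controller or driver is blocking pass-through.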
  27. tl;dr: Reset the settings in StableBit DrivePool: https://wiki.covecube.com/StableBit_DrivePool_Q2299585B The drives shouldn't appear as both pooled and unpooled. Also, pooled drives shouldn't show up as (just) unpooled drives. And in both cases, there may be some corruption of the local pool config (for the service). While this isn't critical (eg, it shouldn't break anything), it's not supposed to happen. Rebooting the system may fix this, as may restarting the service, as it should reload the information. However, this doesn't always fix things, and in this case resetting the settings should fix this (as it clears out the locally saved settings and rebuilds the data from scratch, from the pooled drives) If this keeps happening, though, open a ticket at https://stablebit.com/Contact and let us know.
  28. My Beelink mini-PC is running Win11 Pro and DrivePool v2.3.13.1687. I have 2x Sabrent 5-bay USB 3.2 Gen 2 docks - dock #1 filled with 5 drives and dock #2 filled with 3 (the docks were daisy-chained) - plus 2x Samsung SSDs (cache drives, connected via USB: one directly into the Beelink, the other in the 2nd USB-C slot behind a Sabrent dock).

      For many months now, DrivePool has been showing several drives under 'Non-Pooled' even though these drives are actually part of the pool. I didn't have any issues with my data or the DrivePool G:\, so I left it as is. After a very successful run of setting up WSL 2 + Docker containers on Win11 Pro using ChatGPT, I asked it: "some disks that are part of the pool are showing as non-pooled, why is that?" It returned:

      TL;DR Most likely:
      ✔ Drive is detected by Windows
      ✔ DrivePool just hasn't reattached it
      ✔ Data is still safe
      ✔ Click Add to Pool or restart DrivePool service
      Not a failure.

      So I clicked Add to Pool on the disks under Non-Pooled. Shortly after, one after another, drives started showing as "Missing", until all 3 drives in dock #2 showed as missing. I checked the drives in Windows Disk Management, it asked me to 'initialize' the disks, and I let it - and slowly my just-fine working DrivePool felt like it got corrupted, and thus my nightmare began.

      At the moment, I've completely detached dock #2; I have dock #1 and the 2x cache disks connected, and I removed the dock #2 disks from DrivePool. After DrivePool finished measuring/duplicating, it ended with "Missing Disk", saying 1 of 2 cache disks was missing. I was out of town and thought I had detached it, but I'm back with my hardware now and the drive was connected the whole time. What should I do now? 😕

      Going back: I tried ChatGPT to recover the drives in dock #2. It had me try to recover the NTFS filesystem via TestDisk, but that didn't seem to work, so I just removed the 3x disks and am running with just dock #1, since I didn't have issues with those drives. I want to have a healthy DrivePool so I can add dock #2 and the 3x disks back one by one. I'm confused why it now says 1 of 2 cache disks is missing - perhaps because that cache disk was connected behind dock #2, and now that I've removed dock #2 completely, they're connected to different ports?
  29. They posted a ticket too, but since the information is super useful, I wanted to post it here, too: https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry This covers the two main methods to deal with path length issues (which I suspect is the root issue here): namely, the registry hack for Win32's long path support (which I've personally found is either finicky or outright doesn't work), or using full UNC paths (eg, adding "\\?\" to the beginning of the path), which I've found works much better. This format is fully supported by .NET programs, so things like Emby and Jellyfin should work just fine with it (Plex should too).
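As a quick illustration of the second method, here is a small sketch (assumptions: Windows, an NTFS volume, and a made-up folder name) showing that the "\\?\" extended-length prefix makes a path well beyond the 260-character limit usable:

```python
# Illustration of the "\\?\" extended-length prefix on Windows.
# The deeply nested folder name below is a made-up example.
import os

long_dir = "C:\\" + "\\".join(["deeply_nested_folder"] * 20)  # ~420 chars
prefixed = "\\\\?\\" + long_dir  # must be an absolute, normalized path

os.makedirs(prefixed, exist_ok=True)
with open(prefixed + "\\test.txt", "w") as f:
    f.write("reachable via the extended-length prefix\n")
print(len(long_dir), "characters, created OK")
```

Note that extended-length paths must be absolute, use backslashes, and contain no "." or ".." components, which is why applications usually normalize the path first and then prepend the prefix.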