All Activity

  1. Today
  2. Hi, if an old drive that held a poolpart has been replaced, then adding the replacement drive to the pool normally results in the new drive receiving a new poolpart folder with a new identification string from DrivePool. You can test this by placing unique files in each of the poolpart folders on the disk (the one you created and the one DrivePool creates) and seeing which one appears in your pool. If you are sure that the content in the old poolpart folder you restored lines up with your expectations, then you could stop DrivePool, move your content from the old poolpart folder into the new poolpart folder, start DrivePool, and perform a re-measure. The moved content should reappear in the pool. Don't attempt to move the hidden System Volume Information, $RECYCLE.BIN or .covefs folders; they will not be valid for the new poolpart.
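    The stop/move/re-measure sequence above could be sketched from an elevated command prompt like this; the drive letter, the PoolPart folder names and the service name "DrivePool.Service" are assumptions - check the actual names on your system before running anything:

```shell
rem Sketch only: D:, the PoolPart names and the service name are placeholders.
net stop "DrivePool.Service"

rem Move user content from the restored poolpart into the one DrivePool
rem created, skipping the system/metadata folders that must not be moved:
robocopy "D:\PoolPart.old-id" "D:\PoolPart.new-id" /e /move ^
  /xd "System Volume Information" "$RECYCLE.BIN" ".covefs"

net start "DrivePool.Service"
rem Then trigger a re-measure from the DrivePool UI.
```

    Running the robocopy line with /L first (list-only mode) will show what would be moved without actually moving anything.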
  3. I've had multiple disk problems on multiple disks. After 6 weeks of comprehensive testing, the company that built my PC found a single faulty disk, which will be replaced under warranty. Upon receiving my computer back last week, my DrivePool consisted of only 3 drives (it should be 4). I also use Snapraid with a single parity drive. Upon attempting to restore my missing disk with Snapraid, I recovered only 463GB out of some 3.3TB. My recovery procedure was this:

    - Stop DrivePool and Scanner
    - Restore, with Snapraid, the missing folders and files to the new drive, with a Poolpart folder that I created with the same name as the original
    - Restart DrivePool
    - Add the restored Poolpart folder to DrivePool

    Upon doing a remeasure, I end up with the original Poolpart folder populated with the 463GB of files Snapraid managed to restore, plus a new, empty Poolpart folder. Where does this empty folder come from? I can delete it, but the next remeasure creates another one. How should I proceed please?
  4. Yesterday
  5. I'm wondering: if I start encrypting my hard drives with VeraCrypt, will StableBit Scanner still work perfectly?
  6. Last week
  7. Fair. For what it's worth the mantra "RAID is not backup" is shorthand for "RAID should never be a substitute for having independent copies of your data somewhere else" and this applies to DrivePool as well: if enough drives die you will lose data from the pool/array. TLDR, the only way to do backups is to actually have backups. The business mantra for that one is of course "3-2-1": have at least three copies*, on two different media, with at least one kept offsite. The middle part used to mean HDD versus tape, but these days also means using different HDD brands/models to minimise the risk from potential manufacturing defect. *the original and two separate backups Re hardware RAID being obsolete... it depends on the use-case. For home use? Basically, yeah, if the alternative is using an OS with good software RAID support. But sadly DrivePool is only available for Windows and DrivePool doesn't support dynamic drives so I can't even use Windows software RAID1 with it.
  8. SSDs also don't have the capacity that I need (at least, not in the consumer space). I'm talking at least 18 TB and more, if economically rational. I feared RAID would come into the convo. To expose my ignorance, I know very little about the tech and what I have gleaned scares the bejeezus outta me. I keep hearing the mantra that "RAID is not a backup" and have read stories of entire arrays going south because a drive or two failed and the wrong strategy was employed for a rescue. And then there's really bleeding-edge stuff like unRAID and filesystems with RAID-type features built-in, like zfs. I just want Plain Jane NTFS so that, if something goes wrong, the rest of the pool isn't dragged into the abyss or needs a rebuild that may (or may not) succeed after a few weeks of drive hammering. I also VERY MUCH like the fact that I can pull a drive from my DP pool, stick it in another Windows machine that doesn't have a clue what Drivepool is, and I can access all the files normally, without additional steps, and free from some exotic RAID implementation. IOW, I like - no, NEED - a solution that's straightforward and doesn't descend into insane complexity. I'm old and any new tricks I'm forced to learn have to be relatively simple and something I can visualize in my creaky, calcium-riddled brain. This is why DP has fit all my criteria, as it does what it needs to do to configure the pool, present it to Windows as such, and that's it. (I know the software is far more sophisticated under the hood than what I'm making it out to be, but I'm just speaking from the perspective of the "user experience".) Besides, isn't hardware-specific RAID obsolete at this point given the raw power that modern machines offer, even low-spec boxes?
  9. Trig

    WD80EFPX

    Odd that the 6TB version of that same drive doesn't work though; if it were the controller, wouldn't I have seen this on one of the many drives I've had on it over the years?
  10. I have seen this happen before... not recently... maybe twice since I started using DrivePool (2013)? When I checked, the couple of "leftover" files had been properly copied to the pool where they were supposed to be, along with everything else that had been on the removed drive, so in the end... ¯\_(ツ)_/¯ @Alex @Christopher (Drashna) any ideas?
  11. Shane

    A mess

    I'd suspect some of the NTFS permissions on the drive have been damaged, given you as a user can read the files but DrivePool can't (note chkdsk does not fix ACLs). One way I'd tackle it:

    1. Make sure nobody/nothing is using the pool.

    2. "Ignore" the bad drive on the pool - from an administrator command prompt, run:

       dpcmd -ignore-poolpart poolpath UID

       where poolpath is the path to the pool (e.g. "P:\") and UID is the name of the hidden poolpart folder on the bad drive (e.g. "PoolPart.123-abc"). Its content will disappear from the pool but still be on the drive itself, and the pool will alert that the drive is "missing". If this doesn't work: turn off the computer, unplug the bad drive, turn the computer back on, remove the missing drive, then plug the bad drive into a non-DrivePool computer networked to the DrivePool computer and proceed as below, adapted as necessary (e.g. by sharing the pool and mapping a network drive to it).

    3. Remove the missing bad drive from the pool. If this doesn't work, you might have damaged permissions on one or more of your "good" drives - see https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/

    4. Copy only the user content inside the hidden poolpart folder of the bad drive to the pool, excluding system folders and DrivePool metadata, stripping ACLs and skipping files already at the destination. Assuming you aren't using nested pools, you could use:

       robocopy "m:\poolpart.abc-123\" "p:\" /dcopy:dat /copy:dat /e /xc /xn /xo /xx /xd "System Volume Information" /xd "$RECYCLE.BIN" /xd ".covefs"

       It's important to include the backslashes at the end of the source and target paths. (If you're wondering, "extra" files are files that were found only on the destination.) If you're concerned that you might not have the command exactly how you want it, start with robocopy /L instead of robocopy - this is basically test mode; it shows you what it would do without actually doing it.

    5. Confirm that the pool is now working properly and all user content is present. If the pool still gives unreadable errors, see https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/ - and if the pool still won't duplicate... ouch, I'd try starting a new pool and migrating all your content across.

    Hope this helps!
  12. Some basic full disclosure: DrivePool v2.3.8.1600, Windows Server 2012R2, NTFS volumes on 4 HDDs of differing size. A "standard" drive pool using 2x duplication across the 4 drives, with plenty of free space to remove a drive yet maintain the 2x duplication. The pool in question holds video files for a Plex server.

    I removed one of the drives from the pool using the default options, which looks to have completed successfully (no errors reported and the remaining drives re-balanced). However, before reformatting the removed drive, I noted the covecube GUID folder was still present on the removed drive, and there is a seemingly random collection of files and folders (9 files in 9 folders) still on the disk. My expectation is that, barring any issues, all replicated files would be removed as part of the drive removal process. So my assumption is now that there were "issues". I ran a SHA-256 hash against just a couple of the files, and they all matched back up with the files remaining in the pool. So it's not like they were corrupt or altered.

    My question is: what should I take away from this condition, or further investigate, before I reformat the drive? Thank you in advance. -Don
  13. Kachoobs

    A mess

    I've just started a new pool with DrivePool, but it's kind of a mess. One of the drives I've added is damaged and was mislabelled; we thought it was a good drive but it wasn't. Anyway, the bulk of the data is on this drive, which has a handful of damaged sectors. We previously removed the drive we thought was damaged, leaving the bulk of the pool data on this actually-damaged drive. The good drive is back in the pool along with a 3rd good drive, not to mention another USB drive that's fine but has slow transfer rates. I've tried everything to get DrivePool to duplicate the data back to a good drive. DrivePool won't do anything. I've left it for days - no duplication. I've run chkdsk. I've triggered and changed duplication settings. A bunch of resets. I've even deleted the files it presents in the 'pool organization warnings' which it says it can't read. I've tried every removal option for the damaged drive to even just get it out of the pool, so I can manually connect it afterwards and get the good data from it, but the same error presents: 'the file or directory is corrupted and unreadable', which isn't true because I can manually access almost all of the files on this drive. I just need to get everything duplicated before it carks it for good. DrivePool is just doing nothing. My last resort is to just unplug it and remove it as a missing drive, but the crucial part is that the data that is good on other drives will not duplicate across to a second drive. Any assistance appreciated.
  14. You are my hero! I guess it took an extra set of eyes and thinking outside the box. I had tunnel vision and couldn't believe anything else caused the problem. What I did to solve the problem was to add an SSD, connect it to the fastest port in the computer, and then direct the page file to that new SSD drive. I'm moving a 500GB file right now and still no errors. Thank you a million times over!!!! 😀👏 Terry
  15. Hi, the short answer is DrivePool doesn't keep a record of files in the pool. The longer answer is that File IDs are "kept" in the sense that DrivePool needs to map the File IDs of accessed files in the poolparts to a unique ID representing them in the pool (and the current implementation isn't persistent across reboots), but that's basically just a table of numbers that won't be linkable to the actual files, since the failed disk is no longer communicating with the pool.
  16. I recently had one disk in my pool fail, and it's the one disk that isn't protected by Snapraid because the data on it is largely replaceable. However, I stupidly realize I don't have any up-to-date list of which files were on there that I ought to replace. It seems unlikely, but would DrivePool happen to record or log any information about this? Even an access log, so I can check for filenames that existed on drive D:\ underlying pool O:\, for example?
  17. If the same file(s) that can't be copied with Directory Opus can be copied with Windows Explorer at the time, then it might be Directory Opus - but the error text and file sizes you mention being involved made me wonder if it might also involve a Windows memory paging issue and/or SMB file server tuning issue, which can be a bit of a rabbit hole (a lot of the tips that can be found on the internet are for older/32-bit versions of Windows).
  18. This is weird. I just now used Windows Explorer and was able to move a file that was > 3GB to the pooled computer. Yes, I run Malwarebytes and keep it updated with the latest. I have not had luck rebooting the pool computer. There is lots of space on any of the pooled drives to copy/move a 3GB file. The pool machine is an HP DL380 G8 running Windows 10. In the attached picture you see that it has 256GB of memory. Also in the same picture is the Paged Pool and Non-Paged Pool memory. Are these within normal limits? Are you thinking this can possibly be a Directory Opus problem? If so, I'll contact them for support. Thanks, Terry
  19. Earlier
  20. Does it only happen with Directory Opus? e.g. try Windows File Explorer, then try the command line. Is there any antivirus running on the pool machine? Does the problem go away (temporarily) if the pool machine is rebooted? Is there plenty of free space on the drive(s) that would receive the file in the pool (usually the drive(s) with the most free space)? Which OS version is running on the pool machine? Check Windows Task Manager (Performance, Memory) to record the pool machine's "Paged pool" and "Non-paged pool" memory sizes when the problem is and isn't occurring, and compare?
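    For that last check, the two pool sizes can also be logged from the command line with Windows' built-in typeperf; a sketch, assuming English-language counter names and a writable current directory:

```shell
rem Log the kernel paged/non-paged pool sizes every 5 seconds, 12 samples.
rem Run once while the problem is occurring and once while it isn't, then
rem compare the two CSV files:
typeperf "\Memory\Pool Paged Bytes" "\Memory\Pool Nonpaged Bytes" -si 5 -sc 12 -f CSV -o poolmem.csv
```

    This gives the same numbers as Task Manager, but as a time series that is easier to compare between the good and bad states.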
  21. This has been happening for at least 6 months now, and instead of kicking the can down the road I thought I'd better ask for help. It happens when trying to copy or move a file > 500MB to my main computer where the pooled drive is installed. Copies or moves to any other computer or other partition work with no errors. What can I do to fix this? It's now past annoying and taking my time to work around it. The workaround is to RDP to the storage computer and perform the copy or move, or copy to a non-pooled partition and then move it to the right location. What can I do to fix it? Terry
  22. There are some SSDs with 10 year warranties, but SSDs are still a lot more expensive than HDD so... I hear you on the brittleness of UEFI. A midway option between N disks (no duplication) and 2N disks (2x duplication) that I've occasionally considered (but haven't really had a pool big enough to make it worth buying the cards/hubs) is pooling RAID6 arrays instead of using duplication with individual disks, basically offloading the redundancy onto the hardware. Con: Needs one or more RAID cards (or RAID-capable DAS boxes), it's not full duplication, any given array would be vulnerable while rebuilding (though I have backups) and card management means yet another UI pane to deal with. Pro: An array only needs 6+ disks to store 4+ disks worth of data (more disks = better ratio but more time to rebuild) and 3 disks would have to die concurrently to actually lose any data, offers a boost to read performance and RAID6 allows detection and repair of bitrot.
  23. I think I actually have a Seagate or two in my Terras, but I need to double-check. Didn't the floods also affect WD pretty badly?

    Regarding reliability, just so I'm clear, I know a drive that costs a little north of $300 won't last forever, especially if it's being written and read from almost every second of the day. Like I wrote earlier, I'd happily pay a handsome sum for a high-capacity drive that genuinely lasted, at a minimum, 8 years. It's the disposability/e-waste that annoys me, as people have now been conditioned to believe that hard drives should only last a few scant years before going bad. For those of us distributing (ahem) numerous Linux distros, it's become a chronic fear that an un-duplicated drive might implode at a moment's notice, even if that drive has only seen two or three years of use.

    It's kinda like what BIOS updating has become. Before, the necessary blocks would be written, you'd see the progress bar, and it only took a minute and a single reboot. Now, updating a BIOS is like a game of Russian Roulette, with numerous mystery reboots and highly disconcerting black screens throughout. It's like the technology has gotten so "advanced" that it's become proportionally brittle.

    Regarding the USB dropout, there's a power setting that I always turn off called "USB selective suspend setting", as it sounds like a definite no-no, but I dunno if it does what I think it does. I also make sure to disable any sleep settings.

    I think I am headed in the duplication direction, much to my chagrin. It's more money, more hardware, more heat, more electricity, and more software complexity, which makes me highly uneasy, even though it's supposed to do the opposite, but it's the only thing I know to do to combat all these drive failures.
  24. That really depends on the size of files you typically add, and how much data you may dump at once. In general, the 2x 512GB drives should be more than plenty for almost anything you'd throw at it. Though, if you wanted to use file placement rules to keep data on there, it still depends... My personal opinion? 512GB for the cache, and the 1TB drives for the gaming PC.
  25. Honestly, the low reliability was due to the floods, and then one specific line of drives (ST#000DM001) that was the issue. Aside from those, Seagate drives have been rock solid for me (and IIRC, BackBlaze statistics bear that out too). That said, I do use a mix of brands. As for the drives, that sucks. And if they were from the same batch of drives, that definitely makes sense (unfortunately). But for any drive really, it's not "if" it fails, but "when". And maybe... how catastrophically. As for the dropout, if that is USB, then it's USB. That's normal (and IIRC, allowed by the specs for USB). Unfortunately. And yeah, I hear you on the drives. It's the only reason I don't have duplication for most of my pool. Only the critical stuff is duplicated. The drive capacity keeps increasing and prices gradually drop (though, let's hope that continues, all things considered). And IMO, it's worth getting the NAS/enterprise drives. Between warranty and features/use-case, it's worth it.
  26. Well now, courtesy of a surprise eBay win, I have my choice of two 512GB NVMe drives or two 1TB NVMe drives to use as my SSD landing pad for the pool (73TB of spinners). Which would be more beneficial to the pool? I was thinking of having the SSDs flush to the pool at 70% of capacity. I guess I'm asking if the additional size of the cache would be worth dedicating the bigger drives to it, or should I stay with the 512GB drives. Any advice?
  27. That's a little surprising about Seagate, as I recall them being pretty low on the drive reliability tier list years ago. Maybe I should give them another shot. Problem with me trusting the Reds is that I've had two fail on me in the past six months, though one of them had the decency to spit out SMART errors so that I could proactively replace it (the other, by a stroke of cosmic luck, was empty). I just don't trust anyone anymore. If it wasn't for the SMART auto-evacuate feature, I would've lost GBs and maybe even TBs of files due to failures. And I still see the occasional drive dropout, which is beyond vexing, as I don't know if it's enclosure- or drive-related. A restart is necessary to "resurrect" the drive so that DP recognizes it. It doesn't happen often - maybe once every few months - but it's teeth-gnashing when it does occur and makes me think dark thoughts.

    I suppose I need to seriously consider duplication, which I've avoided my entire computer life due to the byte tax. I have the two Terramasters, so designating one as the primary and the other as the backup should be easy. Problem is investing around $1000 in sufficiently large drives to make it happen. My 401(a) just took a GIGANTIC hit recently with the stock market trying to find the center of the Earth, so there's no relief there (if anything, I should be putting money in to shore it up). Still, I could put three or four drives on my Amazon wishlist in the hope of pulling the trigger at some point in the not-too-distant future.
  28. Shucking drives is not worth the effort, yeah. I won't tell people not to do it, but I do recommend against it. Every shucked drive I have had has failed within 3 years of purchase. And I can attest to the WD Red and Seagate NAS being good quality drives. In fact... the Seagate NAS drives I have show ~2 years for their age because the SMART data rolled over at 10 years.
  29. It should give a tooltip of "Increase Priority". Some of this is covered by: https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Pool Organization Bar

    But specifically, balancing and duplication tasks use a background I/O priority. This means that other I/O (such as you reading media from the drives) gets priority over the balancing. This helps prevent these tasks from interfering with normal usage of the pool. Increasing priority means that it uses the same, "normal" priority, and may perform faster at the cost of impacting pool performance.

    That is correct. This only shows activity that goes through the pool driver. Balancing and duplication tasks do not, and are not shown here. This is namely because the "performance counters" for the disks only show that information.