Everything posted by Shane

  1. You may find this thread useful.
  2. No clue. If it helps at all, I have DrivePool v2.3.2.1493 on a Win10 Pro box with a mix of NVMe SSD and SATA HDD, and I even plugged in a USB HDD to see if anything changed, and I'm not getting that issue. Edit: spoke too soon. I am seeing that on the USB HDD I plugged in, but not my other devices. It at least involves the DrivePool service, as stopping the service stops the effect and starting the service starts it back up.
  3. Maybe. If SMART isn't giving any warnings, you could try checking the DrivePool UI -> Cog -> Troubleshooting -> Service log... to see if there's any hints there, or using the File Placement rules to limit duplication to certain disks to see if that gives any clues. Otherwise I would suggest opening a ticket via https://stablebit.com/Contact as Christopher suggested.
  4. I believe licensing issues have to be handled via the contact form (now that it's fixed and working again) or via emailing covecube (if the form still isn't working for you). No luck with either of those now?
  5. When you say it happened again, do you mean #1 you went to Win11 and rolled back to Win10 again, or #2 it's happened without Win11 being involved this time?
  6. None of the above. It creates hidden root folders, or "poolparts", on basic (not dynamic) NTFS-formatted volumes chosen by the user and collectively presents those folders as a single NTFS-style virtual drive and basic volume, or "pool". You can have multiple pools, you can even have pools of pools, and there is also ReFS support. If you're familiar with Linux, it's a "union" virtualization similar to mergerfs but (amongst other things) it runs at the system level rather than in userspace. Duplication is handled via multiple instancing of a file on the poolparts; there is no "original" vs "backup", there's just the same file existing on multiple volumes. For example, if you created a new pool, let's call it "P", using six drives each formatted as a basic NTFS volume (they don't have to be the same size), let's call them "A" through "F", set 3x duplication for all files, and saved a file "example.txt" to the root folder of the new pool, it would look like this at the file system level (where guidstring is an alphanumeric identifier that uniquely identifies each poolpart):
     P:\example.txt
     A:\PoolPart.guidstring\example.txt
     B:\PoolPart.guidstring\example.txt
     C:\PoolPart.guidstring\example.txt
     D:\PoolPart.guidstring\
     E:\PoolPart.guidstring\
     F:\PoolPart.guidstring\
     Basically you just work in the pool drive and DrivePool does its thing in the hidden poolpart folders in the background; you normally never need to deal with the latter manually unless something has gone wrong (e.g. your old PC's mainboard died, you can't connect to the internet to download DrivePool to your new PC, and for whatever reason you need to get a file from the pool right away) or you're using DrivePool in conjunction with other storage management software (such as SnapRAID for additional file integrity/recovery features). There are a few advanced features of NTFS that it doesn't perfectly emulate, which some programs complain about (e.g. Microsoft's OneDrive doesn't like to be installed on a pool), but otherwise IMO it's pretty great.
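     To make the "multiple instances, no original vs backup" point concrete, here is a minimal Python sketch - not part of DrivePool, and the drive letters and file path are just the hypothetical example values from above - that walks each volume's hidden poolpart folder and reports which volumes hold an instance of a given pool file:

        # Sketch: list which pooled volumes hold an instance of a pool file.
        # Drive letters and the relative path are the example values above.
        import os

        drives = ["A:\\", "B:\\", "C:\\", "D:\\", "E:\\", "F:\\"]  # volumes added to the pool
        relative_path = "example.txt"                              # path as seen from the pool root

        for drive in drives:
            try:
                entries = os.listdir(drive)
            except OSError:
                continue  # drive not present or not readable
            for entry in entries:
                # poolpart folders are hidden folders named "PoolPart.<guidstring>"
                if entry.lower().startswith("poolpart."):
                    instance = os.path.join(drive, entry, relative_path)
                    if os.path.exists(instance):
                        print("instance found:", instance)

     With 3x duplication set, you'd expect it to report three instances of example.txt across the six volumes.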
  7. I think it would be a software limitation? As I understand it, DrivePool's design philosophy is "be as lean and simple as possible to minimise use of CPU/RAM/IO". So it won't try to do fancier things like that because that would leave fewer resources available for other programs to use (e.g. it doesn't want to cause lag for other apps that are accessing files from the drives). I'd have said it was DrivePool trying to avoid its balancing interfering with normal pool access, but the fact that it initially was managing to balance at 50-80 MB/s for the same sorts of files and nothing else is changing has me scratching my head. There's a bottleneck somewhere, yeah. Given 2x14 TB is going to take about three weeks at 10-20 MB/s, I think it might be worth opening a support ticket. P.S. Isn't balancing normally turned off (except maybe Scanner evacuation and Prevent Drive Overfill) if you're using DrivePool+SnapRAID, to avoid big diff/sync durations?
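     Back-of-envelope check on that estimate, in plain Python (decimal units assumed, i.e. 1 TB = 1,000,000 MB):

        # Rough transfer-time estimate for 2 x 14 TB at various sustained rates.
        total_mb = 2 * 14 * 1_000_000  # 28 TB expressed in MB (decimal units)

        for rate_mb_s in (10, 15, 20, 50, 80):
            days = total_mb / rate_mb_s / 86_400
            print(f"{rate_mb_s} MB/s -> {days:.0f} days")

     The 10-20 MB/s range works out to roughly two to four and a half weeks (hence the "about three weeks" ballpark), versus well under a week at the original 50-80 MB/s.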
  8. Glad I could... inadvertently... help? Regarding #3, you don't need to turn it off but feel free since you're not using it.
  9. Hmm. Your settings look ok. What version of DrivePool do you have installed? What version of All In One? Do you have real-time duplication turned off (Manage Pool -> Performance)? Have you tried a Manage Pool -> Re-measure..., since you manually moved files? Have you tried running dpcmd refresh-all-poolparts from an administrator command prompt? Have you tried turning off All In One, waiting for DrivePool to recalculate its balancing, then turning All In One back on? If that didn't help, have you tried uninstalling then reinstalling the All In One plug-in?
  10. It looks like bunny.net claims to support FTP and SFTP (and plans to support S3) which are also supported by CloudDrive? https://support.bunny.net/hc/en-us/articles/115003780169-How-to-upload-files-to-your-Bunny-Storage-zone
  11. @sophvvvv The disk transfer rate graph in the performance screenshot for S2D1 looks somewhat like what I'd expect from moving many small files; could you please test copying a folder containing a large number of small files (e.g. photos are a good choice) from a good drive to the root folder of S2D1 (i.e. to the drive but outside the pool) and observe the performance to see if it is similar? The screenshot shows the drive with 7.46 TB on it has a capacity of 7.72 TB.
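     If it helps, here's a minimal Python sketch for that test - the source and destination paths below are placeholders, so point them at a folder of photos on a known-good drive and at a folder on S2D1 outside the pool - which times the copy and reports an average rate:

        # Time a copy of a folder of many small files and report the average MB/s.
        import os, shutil, time

        src = r"D:\TestPhotos"   # folder of small files on a known-good drive (placeholder)
        dst = r"S:\CopyTest"     # destination on S2D1, outside the pool (placeholder)

        def folder_size(path):
            return sum(os.path.getsize(os.path.join(root, name))
                       for root, _dirs, names in os.walk(path) for name in names)

        start = time.time()
        shutil.copytree(src, dst)          # dst must not already exist
        elapsed = time.time() - start

        mb = folder_size(src) / 1_000_000
        print(f"copied {mb:.0f} MB in {elapsed:.0f} s -> {mb / elapsed:.1f} MB/s")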
  12. SyncThing needs to be installed on both; each instance scans its own content and compares notes with the other(s) to detect changes. This is different from FreeFileSync, which goes on one machine and scans both its own content and the remote content to detect differences. The former is better on slower networks, busier networks or with large numbers of files (the issues compound each other), as it involves much less back-and-forth of network traffic, but FreeFileSync can compare a surprisingly large number of files on a fast network (e.g. about fifty thousand files per minute when my 1Gbps LAN is idle) and I feel its GUI is rather more user-friendly. Whichever option you go for, I'd suggest creating a test pool to trial it before committing your real pool - and you could make two test pools and try both.
  13. I meant the network share option (if you just wanted all devices in both buildings to be able to access one pool via the 1GB LAN you mentioned) as an alternative to the SyncThing option (where you'd have two pools sync'd to each other, one in each building). I figured there was a reason you weren't doing that. But yeah, you could use a network share for syncing too - e.g. with the real-time syncing option of FreeFileSync - in which case you might only need to have the syncing software on just one of the machines. There'd be pros and cons to each option.
  14. - What's the best way to initially duplicate the pool? Set up a new pool and then just copy in File Manager?
     I'd go with that, yes.
     - How should I set up the remote sync - would I need Cloud Drive for that?
     Cloud Drive isn't designed for multiple simultaneous clients. You'd need something else, e.g. SyncThing (also check out its GUI wrapper, SyncTrayzor).
     - Would I also need DrivePool on the new computer (if I'm using Cloud Drive)?
     If you intend to access the drives as a pool at the second site you'd want an instance of DrivePool there too.
     - Is there a deal on the bundle (DP, Scanner, CD) for existing customers?
     As I understand it, if you're an existing customer you get a discount on each product by entering your existing activation ID, and the discount is bigger if your existing activation ID covers the product you're buying; just having activation ID(s) for DP and Scanner without one for CloudDrive is enough to match the bundle price offered to new customers.
     TLDR: for what you're describing I'd use two installs of DrivePool (one for each machine) plus SyncThing/SyncTrayzor to mirror in near real-time (I'm presuming there's a reason you're not just opening a network share to the pool over the LAN).
     EDIT: make sure to exclude system folders (e.g. System Volume Information) from the sync between the two pools, and make sure to sync at the pool level - not the poolpart level. If you're using SyncThing/SyncTrayzor solely across a LAN then I'd suggest disabling relaying and global discovery, and/or using direct addresses, for additional privacy/security/efficiency.
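     For the initial copy and the exclusions mentioned in the edit above, here's a rough Python sketch (the P: and Q: pool drive letters are placeholders; File Manager or robocopy would do the same job) that copies from the existing pool root to the new one while skipping the protected system folders:

        # Copy the existing pool's contents into the new pool, skipping system folders.
        import shutil

        SRC_POOL = "P:\\"   # existing pool drive (placeholder)
        DST_POOL = "Q:\\"   # new pool drive (placeholder)

        shutil.copytree(
            SRC_POOL,
            DST_POOL,
            ignore=shutil.ignore_patterns("System Volume Information", "$RECYCLE.BIN"),
            dirs_exist_ok=True,   # Python 3.8+; copy into the already-existing pool root
        )

     If you go with SyncThing for the ongoing mirroring, the equivalent would be adding those same folder names to the shared folder's ignore patterns.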
  15. The service handles things like balancing and statistics; detecting pooled disks happens at the kernel/driver level. Note that in DrivePool 2.x each hidden PoolPart folder has a unique extension and associated metadata identifier (even removing and immediately re-adding the same disk will result in a different one); because SnapRAID's parity calculations rely on paths remaining the same, you should use the PoolPart folders (e.g. "d:\PoolPart.6cbd4f77-9511-4356-a6b3-2ce26ba79d10") rather than the root folders (e.g. "d:\") as SnapRAID's data disk mount points, so that when you replace a drive you can just update the appropriate data disk mount point in the snapraid config file and avoid needing a rebuild. Presuming the above use of poolparts as data disk mount points is being done:
     1. Pause SnapRAID scheduling (if you've set anything up).
     2. Add the new drive to the pool.
     3. Stop the DrivePool service.
     4. Rename the "PoolPart" section of the old disk's hidden poolpart folder so it stops being an active poolpart folder (e.g. to "oldpart"), but optionally leave its extension unchanged to make it easier in case you need to revert.
     5. Copy the content (except protected system folders, e.g. "System Volume Information", which you should have set as excluded in SnapRAID) of the old disk's hidden poolpart into the new disk's hidden poolpart, then update the data disk mount point, then verify. See https://www.snapraid.it/faq#repdatadisk
     6. If step 5 worked out, you can delete the old disk's hidden poolpart and continue.
     7. Start the DrivePool service. The old drive should be detected as "missing" and you should then remove it from the pool.
     8. In the DrivePool GUI, choose Manage Pool -> Re-measure... to ensure it is accounting for your manual movement of content.
     9. Resume SnapRAID scheduling (if you've set anything up).
     P.S. Regarding multiple content list files in your SnapRAID config, keep in mind that due to DrivePool this means that - for example - "d:\poolpart.1\snapraid.content" and "e:\poolpart.2\snapraid.content", where "d:" and "e:" are disks in the same pool, are effectively the same file as far as the pool is concerned; either keep content list files outside of the pool (e.g. "d:\snapraid.content" and "e:\snapraid.content") or use different filenames (e.g. "d:\poolpart.1\snapraid.content1" and "e:\poolpart.2\snapraid.content2").
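     As a rough aid for the "then verify" part of step 5, here's a Python sketch - the poolpart paths below are placeholders, and SnapRAID's own diff/check remains the authoritative test - that compares relative file lists and sizes between the renamed old poolpart and the new one:

        # Compare the old (renamed) poolpart against the new poolpart after copying.
        import os

        OLD = r"D:\oldpart.6cbd4f77-9511-4356-a6b3-2ce26ba79d10"    # renamed old poolpart (placeholder)
        NEW = r"F:\PoolPart.aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # new disk's poolpart (placeholder)
        EXCLUDE = {"System Volume Information", "$RECYCLE.BIN"}

        def snapshot(base):
            files = {}
            for root, dirs, names in os.walk(base):
                dirs[:] = [d for d in dirs if d not in EXCLUDE]   # skip protected system folders
                for name in names:
                    full = os.path.join(root, name)
                    files[os.path.relpath(full, base)] = os.path.getsize(full)
            return files

        old, new = snapshot(OLD), snapshot(NEW)
        problems = [p for p in old if p not in new or old[p] != new[p]]
        print(f"{len(old)} files in old poolpart, {len(problems)} missing or size-mismatched in new")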
  16. If you haven't already, could you see how it reacts if you set the automatic balancing trigger balance ratio to 0% and the automatic balancing trigger data limit to... say... 650 GB? It's also possible the SSD Optimizer will still want to empty it to zero, but only when the balancing actually runs will it see there's actually nothing to move, due to the File Placement rules butting in and going "nope, not moving that".
  17. The only thing a slower drive will affect is how long it takes for DrivePool to access it. Otherwise no, you can have drives of different speeds in the same pool.
  18. DrivePool doesn't need drives of the same capacity (e.g. one of my pools is three 12 TB, a 4 TB and a 3 TB). It will normally put new files on whatever drive has the most free space at the time.
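     As a toy illustration of that default placement idea (this is not DrivePool's actual code, and the drive letters are placeholders), picking the pooled volume with the most free space looks like:

        # Toy example: choose the volume with the most free space for a new file.
        import shutil

        pooled_volumes = ["D:\\", "E:\\", "F:\\"]   # placeholder drive letters

        target = max(pooled_volumes, key=lambda p: shutil.disk_usage(p).free)
        print("a new file would land on:", target)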
  19. The "real-time file placements set by the balancing plug-ins" only refers to new files, not files that are already on the pool. In short there are two types of placements that can be done by the balancing plug-ins: real-time and on-balancing. The SSD Optimizer can do both: it wants to place new files in the "SSD" drives, which it does in real-time, and place existing files in the "Archive" drives, which it does when balancing is triggered. If the above option is ticked, its real-time placement has priority over the File Placement rules; if the above is not ticked, the File Placement rules have priority over its real-time placement. https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing Settings
  20. Hi Roger, are you saying that with your current settings:
     - files are never going onto the drives marked "SSD" in the SSD Optimizer plugin (except per the File Placement rules you've set)?
       you should tick "File placement rules respect real-time file placement limits set by the balancing plug-ins."
     - files do go to the drives marked "SSD" in the SSD Optimizer plugin but are then being emptied faster than you'd prefer?
       the SSD Optimizer doesn't exclude files that would be prevented from being moved due to File Placement rules when calculating whether it should empty the "SSD" drives, so perhaps try lowering the balance ratio (maybe even to zero) and increasing the "at least this much data" value to compensate?
     - something else?
  21. Not that I know of. It's kinda problematic, as balancing by absolute (rather than percentage/relative) used space risks running into the problem of moving files onto disks with insufficient free space. As a workaround - I'm not familiar enough with SnapRAID to know if this suits - could you perhaps resize the volumes on the drives so they're all the same size, or put non-pooled file(s) of a suitable size on the pooled drive(s) so that they all have the same "free" space? If you really need it though, I'd suggest making a feature request to add the option (perhaps to the Disk Space Equalization balancer?) via the contact form.
  22. Not that I'm aware of. It should be listing the details in its history; is it giving any particular reason as to why?
  23. If a single drive drops from a pool then it should have gone into read-only mode and duplicated files should still be present in the pool, regardless of why the single drive dropped out. If the drive is still present in Explorer / Disk Management and seems to be okay, but is no longer in the pool, DrivePool's metadata (the bit that says "the hidden poolpart folders on drives A, B, C, etc are part of pool X") may have been damaged. I'd try a Manage Pool -> Re-measure... and if that doesn't help try Cog -> Troubleshooting -> Recheck duplication... and if that doesn't help I'd consider lodging a support ticket with Stablebit since the metadata is supposed to be stored in a triply redundant fashion where possible to prevent this sort of problem.
  24. YMMV but I wouldn't trust that drive for storing anything particularly precious that wasn't backed up or duplicated elsewhere; chkdsk /r should not BSOD on a good drive. My guesses:
     - the BSOD happened because the chkdsk ran into the bad sector, tried to recover data from it and the drive behaved in an unexpected way (e.g. maybe it sent back "potato" when the code was only programmed to handle "apple" or "banana"). That's basically what a BSOD is after all - the OS going "something happened that I don't know how to handle, so I'm stopping everything rather than risk making something worse".
     - the drive has either #1 replaced the bad sector with a spare from a reserve that the drive doesn't count as a reallocation (according to the drive manufacturer anyway), #2 performed its own internal repair of the sector and satisfied itself that the sector is no longer bad, or #3 zeroing the sector didn't trip the problem, so as far as it cares all is well in drive land.
     Anyway glad you haven't lost anything!
  25. Hi, see https://community.covecube.com/index.php?/topic/11439-file-placement-rules-being-ignored/&do=findComment&comment=43160