Everything posted by Shane

  1. No info, sorry. Someone mentioned a similar problem though - are you also using Windows 11? And is it happening every time your PC restarts? If I'm remembering correctly, theirs seemed to be related to a Windows update.
  2. Are you using Microsoft Remote Desktop? If so, have you tried changing the color depth or performance options? Are you using the latest graphics driver? Any change if you close and re-open the DrivePool GUI?
  3. While a SATA cable plugged directly into a SATA controller can in theory support a 6 Gb/s transfer rate (before overhead), the transfer rate on a mechanical drive isn't likely to come even close - e.g. a 16TB WD Red Pro's ideal sequential read rate benches a little over 2 Gb/s - and that can drop way down if the drive is being asked to pull many small files non-sequentially (as little as 0.1 Gb/s or less in certain scenarios). If you have an external enclosure doing SATA over USB, the USB connection can be a bottleneck as well. That said, DrivePool is sluggish in evacuating drives; AFAICT it methodically removes each file before starting the next, with no queueing optimisation, so it wouldn't surprise me if it takes noticeably longer than doing a bulk move of the contents yourself.
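    To put rough numbers on that, here's a quick back-of-the-envelope estimate in Python. The rates are the illustrative figures above, not measurements of any particular drive or enclosure:

      def eta_hours(data_tb, rate_gbps):
          # Hours to move data_tb terabytes at rate_gbps gigabits per second.
          bits = data_tb * 8e12              # 1 TB (decimal) = 8e12 bits
          return bits / (rate_gbps * 1e9) / 3600

      print(f"16 TB sequential  @ 2.0 Gb/s: {eta_hours(16, 2.0):6.1f} h")  # ~17.8 h
      print(f"16 TB small files @ 0.1 Gb/s: {eta_hours(16, 0.1):6.1f} h")  # ~355.6 h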
  4. Could it be a lack of contiguous free space? You may have 800GB free in total, but no single incoming file can be bigger than the largest amount of free space on any one volume in the pool (e.g. if you had eight drives in the pool, that 800GB might be made up of 50+50+50+50+100+100+100+300GB). Could it be running into the error because it's attempting to pre-allocate space for multiple incoming files at the same time? E.g. if Sonarr is telling DrivePool to allocate room for twenty incoming files of 7GB each simultaneously, and the largest contiguous free space available is less than 140GB, then there'd be an error: DrivePool defaults to choosing the poolpart with the most free space at the time, but it won't be able to fit all twenty there simultaneously. Could the Sonarr software be mistakenly doing some sort of percentage checking? 800GB of 83TB is less than 1% free... (note too that if you're using duplication, 800GB free is more like 400GB free or less)
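    A toy model of that pre-allocation scenario, for illustration only - this is NOT DrivePool's actual placement code, just the "pick the poolpart with the most free space" default described above:

      free_gb = [100] * 8          # eight drives, 800 GB total free
      incoming_gb = [7] * 20       # twenty 7 GB files pre-allocated at once

      largest = max(free_gb)       # the single volume that would be picked
      needed = sum(incoming_gb)    # 140 GB if they all land on that volume
      print(f"total free {sum(free_gb)} GB, largest volume {largest} GB, need {needed} GB")
      print("ok" if needed <= largest else "error: no single volume can take them all")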
  5. That would explain it; DrivePool creates pools with default security settings that prevent guest/anonymous access.
  6. Hmm. I've just installed Solid Explorer on my Android phone and tested adding a LAN/SMB cloud connection with username/password access to my pool machine; I was able to access the pool and non-pool shares on it (including a share of the pool drive itself) without issues. Do you have a Windows PC you can use to test accessing the pool and non-pool shares, to see whether it has the same problem?
  7. I'd suggest comparing the share permissions and security permissions of the pool and non-pool folders to see if there's any difference.
  8. File System Not Scanning

    Maybe check in Windows Disk Management just to make sure (if you haven't already)? Also, what do you see when expanding the drive's entry in Scanner (via the plus sign and then via the drop arrow for File system health)?
  9. I'd expect it to (eventually) have one of the duplicates of every duplicated file on the new drive, yes. Though I'd still recommend giving the Scanner balancer the highest priority.
  10. You can plug a Storage Spaces array into a DrivePool pool as if the array were a normal drive, no problem. It's converting a Storage Spaces array back into separate drives that can't be done non-destructively (nor painlessly, in my experience).
  11. Unless you can get a direct answer from Google, I'd err on the side of caution and assume that when Google says "read only" it includes disabling edits to existing files (e.g. CloudDrive containers).
  12. File System Not Scanning

    Does the drive contain any non-Windows partitions?
  13. You'd only need to move it if there's an active balancer between it and Scanner. Inactive (greyed out) balancers don't count.
  14. Remote connection

    It can be fiddly, but as long as they're on the same subnet then yes; see https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Remote Control Details - Scanner has the same file in C:\Program Files (x86)\StableBit\Scanner\Service instead. TL;DR:

    1. The user account that is connecting must also be a user on the remote computer, and that user must be part of the built-in Administrators group.
    2. Make sure both machines are running the same version of DrivePool/Scanner.
    3. Make sure the Remote Control option is ticked in the Cog icon settings menu of DrivePool on both machines, and likewise in Settings -> Scanner settings for Scanner.
    4. Make a copy of the RemoteControl.default.xml file in the relevant folder as RemoteControl.xml in the same folder (it shouldn't matter, but don't mix the Scanner and DrivePool files). You'll need Administrator access to do this; there's a sketch of this step after this list.
    5. Restart the DrivePool and Scanner services on both machines and launch the relevant GUI. You should see your machine's name at the top of the GUI; click it to see if the other machine has shown up. You may have to wait a little while and try again.
    6. If the remote machine doesn't get discovered even after waiting, you can try editing the RemoteControl.xml file in the local folder to manually add the hostname or IPv4 address of the remote machine, then restart the services again (note: a machine may then show up multiple times in the connection box, I think because once it's manually visible it can become discoverable as well). It can be tricky editing the file in its usual folder; alternatively, move the file to your Desktop, edit/save it there, then move it back.
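    A minimal sketch (Python, run as Administrator) of step 4 above. The Scanner path is the one given above; the DrivePool path is my assumption by analogy, so verify both against your own install before running:

      import shutil
      from pathlib import Path

      service_dirs = [
          Path(r"C:\Program Files (x86)\StableBit\DrivePool\Service"),  # assumed path
          Path(r"C:\Program Files (x86)\StableBit\Scanner\Service"),    # path from the manual
      ]

      for d in service_dirs:
          src = d / "RemoteControl.default.xml"
          dst = d / "RemoteControl.xml"
          if src.exists() and not dst.exists():
              shutil.copyfile(src, dst)  # leaves the .default template untouched
              print(f"created {dst}")

    Remember to restart the DrivePool and Scanner services afterwards (step 5).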
  15. When intending to remove multiple drives, I strongly recommend using the Drive Usage Limiter balancer (make sure it has priority over everything else, except optionally the StableBit Scanner balancer if the latter is present) to empty all of the drives you wish to get rid of before you proceed to Remove them. While the Remove function can queue multiple drives for removal, it currently does so naively and may evacuate each queued drive onto other drives that are themselves waiting in the queue - so it can potentially waste enormous amounts of time and bandwidth. As for speeding up the automatic balancing itself, there's not really a way to make it go faster other than clicking the Increase Priority double-arrows adjacent to the Pool Organization bar if they're present (IMO it makes little difference if the pool is otherwise not in use, but YMMV); for quicker results you'd have to turn balancing off and manually copy/move the data between the hidden poolparts in parallel yourself (being careful to avoid "crossing the streams", so to speak - it's safe if you know what you're doing, but you're still proceeding at your own risk) and then perform a Re-measure afterwards.
  16. MitchC, first of all thank you for posting this! My (early a.m.) thoughts:

    (summarised) "DrivePool does not properly notify the Windows FileSystem Watcher API of changes to files and folders in a Pool." If so, this is certainly a bug that needs fixing. Indicating "I changed a file" when what actually happened was "I read a file" could be bad or even crippling for any cohabiting software that needs to respond to changes (as per your example of Visual Studio), as could neglecting to say "this folder changed" when a file/folder inside it is changed.

    (summarised) "DrivePool isn't keeping FileID identifiers persistent across reboots, moves or renames." Huh. Confirmed, and as I understand it the latter two should be persistent @Christopher (Drashna)? However, attaining persistence across reboots might be tricky, given a FileID is only intended to be unique within a volume while a DrivePool file can at any time exist across multiple volumes due to duplication, and move between volumes due to balancing and drive replacement. Furthermore, as Microsoft itself states, "File IDs are not guaranteed to be unique over time, because file systems are free to reuse them". I.e. software should not be relying solely on these over time, especially not backup/sync software! If OneDrive is actually relying on them so much that files are disappearing or swapping content, that would seem to be an own-goal by Microsoft. Digging further, it also appears that FileID identifiers (at least for NTFS) are not actually guaranteed to be collision-free; it's just astronomically improbable in the new 64+64 bit format, as opposed to the old but apparently still in use 16+48 bit format.

    (quote) "the FileID numbers given out by DrivePool are incremental and very low. This means when they do reset you almost certainly will get collisions with former numbers." Ouch. That's a good point. Any suggestions for mitigation until a permanent solution can be found? Perhaps initialising DrivePool's FileID counter from the system clock instead of from zero? E.g. at 100 ns increments (FILETIME), even just an hour's uptime would give a collision gap of roughly thirty-six billion (see the sketch below).

    (quote) "I would avoid any file backup/synchronization tools and DrivePool drives (if possible)." I disagree; rather, I would opine that any backup/synchronization tool that relies solely on FileID for comparisons should be discarded (if possible). A metric that's not reliable over time should ipso facto not be trusted by software that needs to be reliable over time.

    Incidentally, on the subject of file hashing, I recommend ensuring Manage Pool -> Performance -> Read striping is un-ticked, as I've found intermittent hashing errors in a few (not all) third-party tools when it is enabled. I don't know why this happens (maybe low-level disk calls that aren't compatible with non-physical volumes?) but disabling read striping removes the problem and I've found the performance hit to be minor.
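    To illustrate the clock-seeding idea - purely a sketch of the suggested mitigation, not how DrivePool actually allocates FileIDs:

      import itertools, time

      def fileid_counter():
          # Seed with 100 ns FILETIME-style ticks so a restarted counter begins
          # far past anything a previous run could plausibly have issued.
          return itertools.count(time.time_ns() // 100)

      ids = fileid_counter()
      print(next(ids), next(ids))  # strictly increasing IDs from a clock-based seed
      # A single hour between restarts moves the seed by 3600 s / 100 ns = 3.6e10
      # ticks - the "roughly thirty-six billion" collision gap mentioned above.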
  17. At least when testing on my own machines, there is caching going on - but my opinion is that it's being done by Windows, since caching file system queries is part of a modern operating system's job description and having DrivePool do it too would seem redundant (although I know dedicated caching programs, e.g. PrimoCache, do exist). Certainly there's nothing in DrivePool's settings that mentions caching. Whether a disk gets woken by a particular directory query will depend on whether that query can be answered from what's in the cache from previous queries.
  18. To quote Christopher from that thread, "StableBit DrivePool pulls the directory list for the pool directly from the disks, and merges them in memory, basically. So if you have something that is scanning the folders, you may see activity." There may be some directory/file list caching going on in RAM, whether at the Windows OS and/or DrivePool level, but DrivePool itself does not keep any form of permanent (disk-based, reboot-surviving) record of directory contents.
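    One rough way to observe that caching from user space is to time a cold versus warm enumeration of the pool; the second pass is typically answered largely from the OS cache (this shows caching exists, not where it happens). A sketch, where the pool drive letter P: is a placeholder:

      import time
      from pathlib import Path

      target = Path("P:/")  # placeholder - substitute your pool's drive letter

      for attempt in ("cold", "warm"):
          t0 = time.perf_counter()
          entries = sum(1 for _ in target.rglob("*"))  # walk the whole tree
          print(f"{attempt}: {entries} entries in {time.perf_counter() - t0:.2f} s")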
  19. Not currently. StableBit decided against making the software that way due to the risk of data conflicts: with one or more disks missing, and in the absence of a master directory tree record, there is no way (other than the user being extremely careful) to prevent different files being created that occupy the same paths. You could submit a feature request for it as an optional setting. I know once or twice I've run into situations where I would've liked it, but I imagine if StableBit implemented it then it would be very much a Use At Own Risk thing.
  20. Note that hdparm only controls if/when the disks themselves decide to spin down; it does not control if/when Windows decides to spin the disks down, nor does it prevent them from being spun back up by Windows or an application or service accessing the disks, and the effect is (normally?) per-boot rather than permanent. If you want to permanently alter the idle timer of a specific hard drive, you should consult the manufacturer. An issue here is that DrivePool does not keep a record of where files are stored, so I presume it would have to wake up (enough of?) the pool as a whole to find the disk containing the file you want if you didn't have (enough of?) the pool's directory structure cached in RAM by the OS (or another utility).
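    For the Windows side of that equation, the OS disk idle timer can be changed with the stock powercfg utility. A sketch (run elevated - and note this only changes when Windows spins disks down, not the drives' own firmware timers):

      import subprocess

      minutes = 20  # example value; 0 = never spin down
      for flag in ("disk-timeout-ac", "disk-timeout-dc"):  # plugged in / on battery
          subprocess.run(["powercfg", "/change", flag, str(minutes)], check=True)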
  21. DrivePool defaults to storing logs in C:\ProgramData\StableBit DrivePool\Service\Logs\Service if that helps any.
  22. It shouldn't cause any issues. When all the drives are unplugged the pool will no longer show up, but DrivePool will redetect the pool when the drives are plugged back in.
  23. Hi EdwardK, see this recent post and links by blunderdome - apparently it is a firmware bug that can affect MX500 drives?
  24. Enabling duplication for your entire pool will default to doubling the used space of your pool. So yes, if you had 36TB of unduplicated files in the pool, enabling the default duplication (x2) would result in those files using 72TB of space (assuming you had that much space available). To answer a later question in your post: if for example you had drives A, B, C and D and you had a file on B (inside the PoolPart folder), enabling duplication x2 will put that file also on either A or C or D.

    Adding a drive to a pool will NOT put any pre-existing data on that drive into the pool (which is why the pool still looks empty when you add a used drive to it; it's only making use of the free space of that drive, it's not automatically adding files from it). Any pre-existing data will NOT be touched, remaining where it is. You have to move that data yourself if you want it in the pool.

    It's fine to move pre-existing data on a drive directly into that drive's hidden PoolPart folder to get it quickly into the pool; you just have to be careful to avoid accidentally overlapping anything already in the pool (e.g. on other drives) that has the same folder/file naming. To avoid that happening accidentally, I suggest creating a unique folder (e.g. maybe use that drive's serial number as the folder name) inside each PoolPart and moving your data into that if you're worried; once you've done that for all the pre-used drives you're adding to the pool, you can then move the data where you want in the pool normally (see the sketch below). If you're moving data directly into the hidden PoolPart folder, after you've finished you should also tell DrivePool to re-measure that pool (in the GUI, click Manage Pool -> Re-measure...) so that it can "see" how much data is on each drive in the pool. This helps it perform any balancing accurately.

    If you use DrivePool's "Remove" function to remove a drive from the pool, the data inside that drive's PoolPart folder will be moved from that drive to other drives in the pool as part of the operation. Any data on that drive that is outside of that drive's PoolPart folder will NOT be touched (because that data wasn't in the pool).

    DrivePool doesn't keep a record of which files are on which drives in the pool. You would need to keep your own records for that.

    Regarding using SnapRAID with DrivePool: #1, you should turn off (or be very careful with) DrivePool's balancing; #2, if you're only going to be using the drives for DrivePool with no data outside the poolparts, then I suggest using the PoolPart folders as SnapRAID's mount points as per this post. Your mileage may vary, however.
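    A sketch of the "unique folder inside the PoolPart" trick above, so seeded data can't collide with same-named paths already in the pool. The drive letter, source folder and serial-number folder name are all placeholders; locate the real hidden PoolPart.* folder on your own drive first:

      import shutil
      from pathlib import Path

      drive = Path("E:/")                            # placeholder drive letter
      poolpart = next(drive.glob("PoolPart.*"))      # the drive's hidden pool folder
      staging = poolpart / "SEED-SERIAL1234"         # placeholder unique folder name
      staging.mkdir(exist_ok=True)

      for item in (drive / "ExistingData").iterdir():       # placeholder source folder
          shutil.move(str(item), str(staging / item.name))  # same-volume move is fast
      # Afterwards: Manage Pool -> Re-measure... in the DrivePool GUI.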
  25. The default log location is C:\ProgramData\StableBit Scanner\Service\Logs if that helps.