Leaderboard

Popular Content

Showing content with the highest reputation since 03/28/23 in all areas

1. To start: while I am new to DrivePool I love its potential, and I own multiple licenses and their full suite. If you only use DrivePool for basic file archiving of large files, with simple applications accessing them for periodic reads, it is probably uncommon that you would hit these bugs. This assumes you don't use any file synchronization / backup solutions. Further, I don't know how many thousands (tens or hundreds?) of DrivePool users there are, but clearly many are not hitting these bugs, or not recognizing that they are, so this is NOT some new destructive "my files are 100% going to die" issue. Some of the reports I have seen on the forums, though, may actually be caused by these issues without being recognized as such. As far as I know CoveCube was not previously aware of these issues, so tickets may not have even considered this possibility. I started reporting these bugs to StableBit ~9 months ago, and informed them I would be putting this post together ~1 month ago. Please see the disclaimer below as well, as some of this is based on observations rather than known facts.

You are most likely to run into these bugs with applications that:
*) Synchronize or back up files, including cloud-mounted drives like OneDrive or Dropbox
*) Must handle large quantities of files or monitor them for changes, like coding applications (Visual Studio / VSCode)

Still, these bugs can cause silent file corruption, file misplacement, deleted files, performance degradation, data leakage (a file shared with someone externally could have its contents overwritten by any sensitive file on your computer), missed file changes, and potentially other issues for a small portion of users (I have had nearly all of these occur). It may also trigger some BSOD crashes; I had one such crash that is likely related. Due to the subtle way some of these bugs can present, it may be hard to notice they are happening even when they are. In addition, these issues can occur even without file mirroring and even with files pinned to a specific drive. I do have some potential workarounds/suggestions at the bottom.

More details are at the bottom, but the important bug facts up front:

Windows has a native file-changed notification API using overlapped IO calls. This allows an application to listen for changes on a folder, or a folder and its subfolders, without having to constantly check every file to see if it changed. StableBit triggers "file changed" notifications even when files are just accessed (read) in certain ways. StableBit does NOT generate notification events on the parent folder when a file under it changes (Windows does). StableBit does NOT generate a notification event when only a FileID changes (the next bug covers FileIDs).

Windows, like Linux, has a unique ID number for each file written on the hard drive. If there are hardlinks to the same file, they share the same unique ID (so one FileID may have multiple paths associated with it). In Linux this is called the inode number; Windows calls it the FileID. Rather than accessing a file by its path, you can open a file by its FileID. In addition, it is effectively impossible for two files to share the same FileID: it is a 128-bit number, persistent across reboots (128 bits means the count of unique values is 39 digits long, giving it the uniqueness of something like an MD5 hash). A FileID does not change when a file moves or is modified. StableBit, by default, supports FileIDs, however they seem to be ephemeral: they do not appear to survive reboots or file moves.
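To make that concrete, here is a minimal sketch (my own illustration, not from StableBit; the paths are hypothetical) of how an application might read a file's ID on Windows with Python. On NTFS the ID reported via os.stat() stays the same across a rename, which is exactly the stability applications come to rely on:

```python
import os

# Minimal sketch: on Windows, os.stat() reports the file's ID in st_ino
# (Python derives it from the same data GetFileInformationByHandle returns).
# On NTFS the ID survives a rename; the claim in this post is that on a
# DrivePool volume it may not, and may not survive a reboot either.

path = r"D:\Pool\ImportantDoc1.docx"            # hypothetical path on a pooled drive
renamed = r"D:\Pool\ImportantDoc1-renamed.docx"

before = os.stat(path).st_ino
os.rename(path, renamed)
after = os.stat(renamed).st_ino

print(f"FileID before rename: {before}")
print(f"FileID after rename:  {after}")
print("stable" if before == after else "CHANGED - apps keying on FileID will misbehave")
```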
Keep in mind FileIDs are used for directories as well, not just files. Further, if a directory is moved/renamed, not only does its FileID change but so does that of every file under it. I am not sure if there are other situations in which they may change. In addition, if a descendant file/directory FileID changes due to something like a directory rename, StableBit does NOT generate a notification event that it has changed (the application gets the directory event notification but nothing for the children).

There are some other things to consider as well. DrivePool does not implement the standard Windows USN Journal (a system for tracking file changes on a drive). It specifically identifies itself as not supporting it, so applications shouldn't try to use it with a DrivePool drive. That means applications that traditionally wouldn't use the file change notification API or FileIDs may fall back to a combination of those to accomplish what they would otherwise use the USN Journal for (and this can exacerbate the problem). The same is true of Volume Shadow Copy (VSS): applications that might traditionally use it cannot (DrivePool identifies that it cannot do VSS), so they may resort to methods they do not traditionally use.

Now, the effects of the above bugs may not be completely apparent.

For the overlapped IO / file change notifications: an application monitoring for changes on a DrivePool folder or sub-folder will get erroneous notifications that files changed when anything even accesses them. Just opening something like File Explorer on a folder, or even switching between applications, can cause file accesses that trigger the notification. If an application takes action on a notification and then checks the file at the end of it, that check may itself cause another notification. Applications that rely on getting a folder-changed notification when a child changes will not get these at all with DrivePool. If the application isn't monitoring the children at all, just the folder, then no notifications may be generated (rather than just the child's being missed), so it could miss changes entirely.

For FileIDs: it depends what the application uses the FileID for, but it may assume the FileID stays the same when a file moves; as it doesn't with DrivePool, this might mean it reads, backs up, or syncs the entire file again after a move (a performance issue). An application that uses the Windows API to open a file by its ID may not get the file it is expecting, or a file that was simply moved will throw an error when opened by its old FileID because DrivePool has changed the ID. For example (a sketch of this hazard follows below), let's say an application caches that the FileID for ImportantDoc1.docx is 12345, but after a restart 12345 refers to ImportantDoc2.docx. If this application is a file sync application and ImportantDoc1.docx is changed remotely, then when it goes to write those remote changes to the local file, if it uses the OpenFileById method to do so it will actually overwrite ImportantDoc2.docx with those changes.

I didn't spend the time to read the Windows file system requirements to know when Windows expects a FileID to potentially change (or not change). It is important to note that even if theoretical changes/reuse are allowed, if they are not commonplace (because Windows essentially hands out a number with MD5-hash-like uniqueness in terms of repeats), applications may just assume it doesn't happen even if it is technically allowed to.
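For illustration, here is a small sketch (my own, with hypothetical paths; not how OneDrive, DrivePool, or any particular sync tool actually works internally) of a sync-style cache keyed by FileID and a later lookup of an ID back to a path. If the volume has handed the old ID to a different file in the meantime, the lookup silently points at the wrong file:

```python
import os

def index_by_file_id(root):
    """Map FileID -> path for every file under root (hypothetical sync-tool cache)."""
    cache = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            cache[os.stat(full).st_ino] = full
    return cache

def resolve(root, file_id):
    """Find the path that currently has this FileID (e.g. to apply a remote change)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if os.stat(full).st_ino == file_id:
                return full
    return None

root = r"D:\Pool\Documents"      # hypothetical pooled folder
cache = index_by_file_id(root)   # imagine this was taken before a reboot

# ... reboot happens; if the pool then hands out fresh, low, incremental IDs,
# an old cached ID can now belong to a completely different file ...

for file_id, old_path in cache.items():
    now = resolve(root, file_id)
    if now and now != old_path:
        print(f"ID {file_id}: cached as {old_path} but now resolves to {now} - "
              "writing 'remote changes' here would corrupt the wrong file")
```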
A backup or file sync program might assume that a file with a specific FileID is always the same file. If FileID 12345 is c:\MyDocuments\ImportantDoc1.docx one day and c:\MyDocuments\ImportantDoc2.docx another, it may mistake document 2 for document 1, overwriting important data or restoring data to the wrong place. If it is trying to create a whole-drive backup, it may assume it has already backed up c:\MyDocuments\ImportantDoc2.docx if, by the time it reaches it, that file has the FileID that ImportantDoc1.docx used to have (at which point DrivePool would have a different FileID for Document1).

Why might applications use FileIDs or file change notifiers? It may not seem intuitive why applications would use these, but a few major reasons are:
*) Performance: file change notifiers are an event/push-based system, so the application is told when something changes. The common alternative is a poll-based system where the application must scan all the files looking for changes (and may try to rely on file timestamps or even hash the entire file to determine this), which causes a good bit more overhead / slowdown.
*) FileIDs are nice because they already handle hardlink de-duplication (Windows may have multiple copies of a file on a drive for various reasons, but if you back up based on FileID you back up that file once rather than multiple times). FileIDs are also great for handling renames. Let's say you are an application that syncs files and the user backs up c:\temp\mydir with 1000 files under it. If they rename c:\temp\mydir to c:\temp\mydir2, an application using FileIDs can say "wait, that folder is the same, it was just renamed; rename that folder in our remote version too." This is a very minimal operation on both ends. With DrivePool, however, the FileID changes for the directory and all sub-files. If the sync application uses this to determine changes, it now re-uploads all those files, using a good bit more resources locally and remotely. If the application also uses versioning, this is far more likely to cause conflicts with two or more clients syncing, as mass amounts of files appear to be changing.

Finally, even if an application is trying to monitor for FileIDs changing using the file change API, due to the notification bugs above it may not get any notifications when child FileIDs change, so it might assume they have not.

Real Examples

OneDrive

This started with massive OneDrive failures. I would find OneDrive re-uploading hundreds of gigabytes of images and videos multiple times a week. These were not changing or moving. I don't know if the issue is that OneDrive uses FileIDs to determine if a file is already uploaded, or if it is because when it scanned a directory it may have triggered a notification that all the files in that directory changed, and based on that notification it re-uploads. After this I noticed files were being deleted both locally and in the cloud. I don't know what caused this; it might have been that the old FileID was gone so it thought the file was deleted, and while there was a new file (actually the same file) in its place, there may have been some odd race condition. It is also possible that it queued the file for upload, the FileID changed, and when it went to open it for upload it found it was 'deleted' as the FileID no longer pointed to a file, and so queued the delete operation. I also found that files uploaded to the cloud in one folder were sometimes downloading to an alternate folder locally. I am guessing this is because the folder FileID changed.
It thought the 2023 folder was ID XYZ, but that ID now pointed to a different folder, so it put the file in the wrong place. The final form of corruption was finding the data from one photo or video inside a file with a completely different name. This is almost guaranteed to be due to the FileID bugs. It is highly destructive, and backups make it far harder to correct: with one file's contents replaced by another's, you need to know when the good content existed and which files were affected. Depending on retention policies, the file contents that replaced it may overwrite the good backups before you notice. I also had a BSOD with OneDrive where it was trying to set attributes on a file and the CoveFS driver corrupted some memory. It is possible this was a race condition, as OneDrive may have been processing hundreds of files very rapidly due to the bugs. I have not captured a second BSOD, but I also stopped using OneDrive on DrivePool due to the corruption.

Another example of this is data leakage. Let's say you share your favorite article on kittens with a group of people. OneDrive, believing that file has changed, goes to open it using the FileID; however, that FileID could essentially now correspond to any file on your computer. Now the contents of some sensitive file are put in the place of that kitten file, and everyone you shared it with can access it.

Visual Studio Failures

Visual Studio is a code editor/compiler. There are three distinct bugs that happen. First, when compiling, if you touched one file in a folder it seemed to recompile the entire folder, likely due to the notification bug. This is just a slowdown, but an annoying one. Second, Visual Studio has compiler-generated code support, meaning the compiler will generate actual source code that lives next to your own source code. Normally, once compiled, it doesn't regenerate and recompile this source unless it must change, but due to the notification bugs it regenerates this code constantly, and if there is an error in other code it causes an error there, cascading into several other invalid errors. When debugging, Visual Studio by default requires symbols (debug location data) to exactly match the source; because the notifications from DrivePool fire on certain file accesses, Visual Studio constantly thinks the source has changed since it was compiled, and you will only be able to set breakpoints in source if you disable the exact-symbol-match default. If you have multiple projects in a solution with one dependent on another, it will often rebuild the other project dependencies even when they haven't changed; for large solutions that can be crippling (a performance issue). Finally, I often had IntelliSense errors showing up even though there were no errors during compiling, and worse, IntelliSense would completely break at points. All due to DrivePool.

Technical details / full background & disclaimer

I have sample code and logs to document these issues in greater detail if anyone wants to replicate them. It is important for me to state that DrivePool is closed source and I don't have the technical details of how it works. I also don't have the technical details of how applications like OneDrive or Visual Studio work, so some of these things are guesses as to why the applications fail, etc. The facts stated are true (to the best of my knowledge).

Shortly before my trial expired in October of last year I discovered some odd behavior. I had a technical ticket filed within a week, and within a month I had traced down at least one of the bugs.
The issue can be seen at https://stablebit.com/Admin/IssueAnalysis/28720 . It does show priority 2/important, which I would assume is the second highest (probably critical or similar above). It is great that it has priority, but as we are over 6 months since it was filed without updates, I figured warning others about the potential corruption was important.

The FileSystemWatcher API is implemented in Windows using async overlapped IO; the exact code can be seen here: https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/System.IO.FileSystem.Watcher/src/System/IO/FileSystemWatcher.Win32.cs#L32-L66 That corresponds to this kernel API: https://learn.microsoft.com/en-us/windows/win32/fileio/synchronous-and-asynchronous-i-o Newer API calls use GetFileInformationByHandleEx to get the FileID, while with older stat calls it is represented by nFileIndexHigh/nFileIndexLow.

In terms of the FileID bug, I wouldn't normally have even thought about it, but the advanced config (https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings) mentions this under CoveFs_OpenByFileId: "When enabled, the pool will keep track of every file ID that it gives out in pageable memory (memory that is saved to disk and loaded as necessary)." Keeping track of files in memory is certainly very different from Windows, so I thought this might be the source of the issue. I also don't know if there are caps on the maximum number of files it will track; if it resets FileIDs in situations other than reboots, that could be much worse. Turning this off will at least break NFS servers, as the docs mention it is "required by the NFS server". Finally, the FileID numbers given out by DrivePool are incremental and very low. This means that when they do reset you will almost certainly get collisions with former numbers. What is not clear is whether there is a chance of FileID corruption issues: if it is assigning these IDs in a multi-threaded scenario with many different files at the same time, could this system fail? I have seen no proof that this happens, but when incremental IDs are assigned like this for mass quantities of files, it has a higher chance of occurring.

Microsoft says this about deleting the USN Journal: "Deleting the change journal impacts the File Replication Service (FRS) and the Indexing Service, because it requires these services to perform a complete (and time-consuming) scan of the volume. This in turn negatively impacts FRS SYSVOL replication and replication between DFS link alternates while the volume is being rescanned." Now, DrivePool never supports the USN Journal, so it isn't exactly the same thing, but it is clear that several core Windows services use it for normal operations; I do not know what fallbacks they use when it is unavailable.

Potential Fixes

There are advanced settings for DrivePool at https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings (beware: these changes may break other things).

CoveFs_OpenByFileId - Set to false (by default it is true). This will disable the OpenByFileID API. It is clear several applications use this API. In addition, while DrivePool may disable that function with this setting, it doesn't disable FileIDs themselves: any application using FileIDs as static identifiers for files may still run into problems.

I would avoid using file backup/synchronization tools with DrivePool drives (if possible). These likely have the highest chance of lost files, misplaced files, file contents being mixed up, and excess resource usage.
If you can't avoid them, consider taking file hashes for the entire DrivePool directory tree. Do this again at a later point and make sure files that shouldn't have changed still have the same hash. If you have files that rarely change after being created, then hashing each file at some point after creation and alerting if that file disappears or its hash changes would easily act as an early warning that one of these bugs has been hit.
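A minimal sketch of that early-warning idea (my own illustration; the pool path and report format are hypothetical): hash every file under the pool once, save the result, and re-run later to flag anything that disappeared or changed when it shouldn't have:

```python
import hashlib, json, os

def hash_tree(root):
    """Return {relative_path: sha256_hex} for every file under root."""
    result = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(full, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            result[os.path.relpath(full, root)] = h.hexdigest()
    return result

root = r"D:\Pool"                     # hypothetical pooled drive letter
baseline_path = "pool_baseline.json"  # hypothetical baseline location

if not os.path.exists(baseline_path):
    with open(baseline_path, "w") as f:
        json.dump(hash_tree(root), f)
    print("Baseline saved; re-run later to compare.")
else:
    with open(baseline_path) as f:
        baseline = json.load(f)
    current = hash_tree(root)
    for path, digest in baseline.items():
        if path not in current:
            print("MISSING:", path)
        elif current[path] != digest:
            print("CHANGED:", path)   # expected only for files you actually edited
```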
    5 points
2. I just wanted to say that @Christopher (Drashna) has been very helpful each time I've created a topic. I have found him and the others I've worked with to be thoughtful and professional in their responses. Thanks for all the work you all do. Now we can all stop seeing that other thread previewed every time we look at the forum home.
    4 points
3. @Shane and @VapechiK - Thank you both for the fantastic, detailed information! As it seemed like the easiest thing to start with, I followed VapechiK's instructions from paragraph 4 of their reply to simply remove the link for DP (Y:) under DP (E:), and it worked instantly and without issue! Again, thanks for taking the time to provide solutions, it really is appreciated! Thanks much!
    3 points
4. Folks, thanks to all for the ideas. I am not neglecting them, but I have some health issues that make it very hard for me to do much of anything for a while. I will try the more promising ones as my health permits, but it will take some time. From operation and the lack of errors during writing/reading, it seems I am in no danger except that duplication won't work. I can live with that for a while. Keep the ideas coming and, maybe, I can get this mess fixed and report back which tools worked for me. I am making one pass through my pool where I remove a drive from the pool, run a full diagnosis and file system diagnosis on that drive, then re-add it to the pool. When completed, that will assure that there are no underlying drive problems and I can proceed from there. Thanks to all who tried to help this old man.
    2 points
5. USB so-called DAS/JBOD/etc. units usually use a SATA port multiplier setup internally, and that is likely the source of your issues. A SATA port multiplier is a way of connecting multiple SATA devices to one root port, and due to the way the ATA protocol works, when I/O is performed it essentially holds the entire bus created from that root port hostage until the requested data is returned. It is also important to know that write caching will skew any write benchmark results if the enclosure uses UASP or you have explicitly enabled it. These devices perform even worse with a striping filesystem (like raidz, btrfs raid 5/6 or mdraid 5/6) and with highly fragmented data (which causes a bunch of seek commands that, again, hold the bus hostage until they complete, which with spinning media creates a substantial I/O burden). Honestly, your options are either to accept the loss of performance (it is tolerable but noticeable on a 4-drive unit; no idea how it is on your 8-drive unit), or to invest in something like a SAS JBOD, which will actually have sane real-world performance. USB isn't meant for more than a disk or two, and between things like this and hidden overhead (like the 8b/10b modulation, root port and chip bottlenecks, general inability to pass SMART and other data, USB disconnects, and other issues that aren't worth getting into) it may be worth just using a more capable solution.
    2 points
6. While chasing a different issue I noticed that my Windows Event logs had repeated errors from the Defrag service stating "The storage optimizer couldn't complete defragmentation on DrivePool (D:) because: Incorrect function. (0x80070001)". Digging further, I found that the Optimize Drives feature had included the pool's drive letter in the list of drives to optimize. It had also listed the underlying drives in the pool, but that should be OK. To remove the pool drive from optimization, open Optimize Drives, go to Scheduled optimization > Change settings > Choose drives, and un-select the pool drive. See https://imgur.com/PB2WPH0
    2 points
7. All my drives are installed inside the server chassis, so I'll just remove the HBAs, which will have the same effect. Reading back through many, many old posts on the subject, I believe I've been looking at this all wrong. The actual information DrivePool uses to build and maintain the pool is stored on the pooled drives, not on the system disk, so it's just a matter of reinstalling DrivePool and it goes looking for the PoolPart folders on the physical drives in order to rebuild the pool. I'm not sure why I've been thinking I needed to worry about what was on the C: drive, other than that's where I placed the junctions for the drives on my system, but that seems to be more of a housekeeping feature to get around the 26-drive-letter limit than anything to do with DrivePool itself. If this is correct, I believe I'm ready to go. It would seem that StableBit DrivePool is quite a feat of engineering. Thanks again Shane.
    2 points
  8. Bear

    Running Out of Drive Letters

I duplicate all HDDs, except the ones that have the OSes on them; for those, I use MiniTool Partition Wizard. The 4-bay enclosures I linked to above: I have 2 of the 8-bay ones, with a total of 97.3TB, and I now have only 6.34TB free space out of that. It works out cheaper to get the little 4-bay ones, and they take HDDs up to 18TB, sweet. If you like the black & green, you could always get a pint of XXXX & put green food dye into it, you don't have to wait till St. Patrick's Day. That would bring back uni days as well 🤣 🍺 👍 "A pirate walks into a bar with a steering wheel down his daks. He walks up to the bar & orders a drink. The barman looks over the bar and says, "do you know you have a steering wheel down your daks?" The pirate goes, "aye, and it's driving me nuts"" 🤣 🤣 🤣 🍺 👍 🍺 cheers
    2 points
9. I'm stubborn, so I had to figure this out myself. Wireshark showed:
SMB2 131 Create Response, Error: STATUS_OBJECT_NAME_NOT_FOUND
SMB2 131 Create Response, Error: STATUS_FILE_IS_A_DIRECTORY
SMB2 131 GetInfo Response, Error: STATUS_ACCESS_DENIED
SMB2 166 Lease Break Notification
I thought it might be NTFS permissions, but even after re-applying security settings per DP's KB ( https://wiki.covecube.com/StableBit_DrivePool_Q5510455 ) I still had issues. The timer is 30 seconds, plus 5 seconds for the SMB handshake, before the collapse. It's the oplock break via the Lease Break Ack Timer. This MS KB helped: Cannot access shared files or folders on a drive in Windows Server 2012 or Windows Server 2012 R2. Per MS (above), to disable SMB2/3 leasing entirely, do this:
REG ADD HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableLeasing /t REG_DWORD /d 1 /f
I didn't need to restart SMB2/3; the change was instant, and file lookups and even a simple right-click in Explorer came up instantly. A process that took 8+ days finished in an hour or so. Glad to be rid of this problem. Leases are disabled, yes, but SMB oplocks are still available.
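If you later want to undo this, the same value can presumably be set back to 0 to re-enable SMB2/3 leasing:
REG ADD HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableLeasing /t REG_DWORD /d 0 /f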
    2 points
  10. thestillwind

    I cannot work

Well, this is why always-online functionality is really bad. They need to add at least a grace period or something, because my tools aren't working. Before the StableBit Cloud thing, I never had any problem with StableBit software.
    2 points
11. For reference, the beta versions have some changes to help address these:
.1648
* Fixed an issue where a read-only force attach would fail to mount successfully if the storage provider did not have write access and the drive was marked as mounted.
.1647
* Fixed an issue where read-only mounts would fail to mount drives when write access to the storage provider was not available.
.1646
* [Issue #28770] Added the ability to convert Google Drive cloud drives stored locally into a format compatible with the Local Disk provider.
  - Use the "CloudDrive.Convert GoogleDriveToLocalDisk" command.
    2 points
12. Note that hdparm only controls if/when the disks themselves decide to spin down; it does not control if/when Windows decides to spin the disks down, nor does it prevent them from being spun back up by Windows or an application or service accessing the disks, and the effect is (normally?) per-boot, not permanent. If you want to permanently alter the idle timer of a specific hard drive, you should consult the manufacturer. An issue here is that DrivePool does not keep a record of where files are stored, so I presume it would have to wake up (enough of?) the pool as a whole to find the disk containing the file you want if you didn't have (enough of?) the pool's directory structure cached in RAM by the OS (or other utility).
    2 points
13. Hello. Even if you're not using BitLocker, you MUST change the setting value from null to false in the DrivePool .json file; otherwise DP will ping your disks every 5 seconds or so, and your disks will never sleep at all anyway. Then you can begin messing around with Windows settings to set when they sleep. Folks here have had varying degrees of success getting drives to sleep, some with no luck at all. In StableBit Scanner there are various Advanced Power Management (APM) settings that bypass Windows and control the drive through its firmware. I have read of success going that route, but have no experience with it at all, since I am old school and my 'production' DP drives spin constantly because that's how I like it. To change the json: https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings There are many threads here on this topic as you have seen, but of course I can't find them easily now that I'm looking, lol... perhaps @Shane or @Christopher (Drashna) will provide the link where Alex (the developer) explains the APM settings in Scanner and the whole drive sleep issue in general in greater detail. tl;dr: you must turn off BitLocker detection first before your drives will ever sleep. BTW, if you ever trial StableBit CloudDrive you must change the same setting in its json as well. Good luck. Edit: found the link: https://community.covecube.com/index.php?/topic/48-questions-regarding-hard-drive-spindownstandby/
    2 points
14. Hello. In Scanner go to settings (the wrench icon) > Advanced settings and troubleshooting, and the first tab is where to backup/restore its settings. You may have to enable the advanced settings from the 'General' tab under 'Scanner settings...' first. It will create a zip file containing many json files and want to put it in 'Documents' by default, so just back it up elsewhere. After renaming drives in Scanner I stop the StableBit Scanner service, twiddle my thumbs for a sec or three, restart the service and GUI, and the custom names and locations are all saved (for me anyway). Only then do I actually create the Scanner backup zip file. I haven't had to actually restore Scanner using this zip file yet, but it seems like it will work (knock on wood). And I don't see how recent scan history would be saved using this method; saved from when the zip file was created maybe, but anything truly recent is likely gone. It's probably a good idea to create a spreadsheet or text file etc. with your custom info ALONG WITH the serial number for each of your drives, so if it happens again you can easily copy/paste directly into Scanner. I set up my C:\_Mount folder with SNs as well, so I always know which drive is which should I need to access them directly. I have 2 UPSs as well: 1 for the computer, network equipment, and the RAID enclosure where qBittorrent lives; the other runs my remaining USB enclosures where my BU drives and DrivePool live. Even that is not fool-proof; if no one's home when the power dies, the one with all the enclosures will just run till the battery dies, as it's not controlled by any software like the one with the computer and modem etc., which is set to shut down Windows automatically/gracefully after 3 minutes. At least it will be disconnected from Windows before the battery dies, and the settings will have a greater chance of being saved. If you have a UPS and it failed during your recent storm, all I can say is WOW, bad luck. One of these or something similar is great for power outages: https://www.amazon.com/dp/B00PUQILCS/?coliid=I1XKMGSRN3M77P&colid=2TBI57ZR3HAW7&psc=1&ref_=list_c_wl_lv_ov_lig_dp_it Saved my stuff once already. Keep it plugged in an outlet near where you sleep and it will wake you up. Hope this helps, and perhaps others may benefit from some of this info as well. Cheers
    2 points
15. Hmmm, yes, it seems a hierarchical pool was inadvertently/unintentionally created. I have no idea of your skill level, so if you're not comfortable with any of the below... you should go to https://stablebit.com/Contact and open a ticket. In any event, removing drives/pools from DrivePool in the DrivePool GUI should NEVER delete your files/data on the underlying physical drives, as any 'pooled drive' is just a virtual placeholder anyway. If you haven't already, enable 'show hidden files and folders.' Then (not knowing what your balancing/duplication settings are), under 'Manage Pool ^ > Balancing...' ensure 'Do not balance automatically' IS checked, and 'Allow balancing plug-ins to force etc. etc...' IS NOT checked, for BOTH pools. SAVE. We don't want DP to try to run a balance pass while resolving this. Maybe take a SS of the settings on DrivePool (Y:) before changing them. So DrivePool (E:) is showing all data to be 'Other' and DrivePool (Y:) appears to be duplicated (x2). I say this based on the shade of blue; the 2nd SS is cut off before it shows a green 'x2' thingy (if it's even there), and drives Q & R are showing unconventional total space available numbers. The important thing here is that DP (E:) is showing all data as 'Other.' Under the DrivePool (E:) GUI it's just as simple as clicking the -Remove link for DP (Y:) and then Remove on the confirmation box. The 3 checkable options can be ignored because DrivePool (E:) has no actual 'pooled' data in the pool, and those are mostly evacuation options for StableBit Scanner anyway. Once that's done, DrivePool (E:) should just poof, disappear, as Y: was the only 'disk' in it, and the GUI should automatically switch back to DrivePool (Y:). At this point, I would STOP the StableBit DrivePool service using Run > services.msc (the DP GUI will disappear), then check the underlying drives in DrivePool (Y:) (drives P, Q, & R in File Explorer) for maybe an additional PoolPart.XXXXXXXX folder that is EMPTY and delete it (careful: don't delete the full hidden PoolPart folders that contain data on the Y: pool). Then restart the DP service and go to 'Manage Pool^ > Balancing...' and reset any changed settings back the way you had/like them. SAVE. If a remeasure doesn't start immediately, do Manage Pool^ > Re-measure > Re-measure. I run a pool of 6 spinners with no duplication. This is OK for me because I am OCD about having backups. I have many USB enclosures and have played around some with duplication and hierarchical pools with spare/old HDDs and SSDs in the past, and your issue seems an easy, quick fix. I can't remember whether DP will automatically delete old PoolPart folders from removed hierarchical pools or just make them 'unhidden.' Perhaps @Shane or @Christopher (Drashna) will add more. Cheers
    2 points
  16. Thanks Shane for confirming DrivePool was the source and thanks VapechiK for the solution. I set the "BitLocker_PoolPartUnlockDetect" override value to False and after a reboot all the pings were gone. For what it's worth the only reason I noticed this is that I wasn't seeing the external (backup) drive going into standby power mode so I started to look for reasons. Power mode is still active but I think Hard Disk Sentinel's SMART poll may be keeping that drive alive. Not a big deal now that the pings are gone.
    2 points
17. Hi. In Windows/File Explorer, enable 'show hidden files and folders.' Then go to https://wiki.covecube.com/Main_Page and bookmark it for future reference. From that page, on the left side, click StableBit DrivePool > 2.x Advanced Settings: https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings If you are using BitLocker to encrypt your drives you will NOT want to do this, and will just have to live with the drive LEDs flashing and disk pings, I assume. I don't use it so I don't know much about it. The given example on that page just happens to be the exact .json setting you need to change to stop DP from pinging your disks every ~5 secs: set the "BitLocker_PoolPartUnlockDetect" override value from null to false as shown. If StableBit CloudDrive is also installed you will need to change the same setting for it as well. Opening either of these json files, it just happens to be the very top entry in the file. You may need to give your user account 'full control' on the 'Security' tab (right-click the json > Properties > Security) in order to save the changes. This worked immediately for me, no reboot necessary. YMMV... good luck
    2 points
18. See the image below. I deleted about 200GB, but CloudDrive is still showing the Google Drive as full and isn't updating; it still shows as 500+GB if I log into Google Drive. I have tried cleanup, clearing the cache, and reauthorizing (just in case).
    1 point
  19. It looks like disk location information (e.g. case and bay) is stored in C:\ProgramData\StableBit Scanner\Service\Store\Json\Disk_diskidentifierstring.json if that helps any?
    1 point
  20. Chris_B

    Newbie question about backup

Shane, I just wanted to jump in and thank you for the reference to FreeFileSync. I have been using another file backup/cloning tool for years. One of my backup sets of over 200,000 files was taking well over 5 minutes to do the source/destination compare before the backup copy operation. FreeFileSync takes 3 seconds, and that's not even using the parallel operations that their donation edition supports. Still trying to figure out how they do the compare so fast, as I see no resident process doing compare stuff in the background... Anyway, very happy with it. Thanks.
    1 point
21. I should perhaps add here that DrivePool won't stop writing a new file if it hits the % max, only if/when the disk is actually full. The % max setting is for files yet to be written (e.g. "this disk is at/over % max, don't put more files here") and/or files that are not in use (e.g. "this disk is at/over % max, move file(s) off it until satisfied"); an example of a balancer controlling both of these (no new files, and move old files) is the Prevent Drive Overfill balancer. Likewise, with the SSD Optimizer plugin, its option "Fill SSD drives up to X%" doesn't mean "empty at this point", it means "overflow to Storage drives at this point". When it empties is instead controlled by your choices in the main DrivePool settings menu (Balancing -> Settings -> Automatic balancing - Triggers). This may be somewhat counterintuitive, but it is useful to avoid running out of space in situations where the "SSD" is being filled faster than it can be emptied.
    1 point
22. 1) Is there a way to write files to the disk with the most available space in the pool?
DrivePool defaults to attempting this, though if multiple large files start being written more or less simultaneously to the pool there might be an issue (q.v. next answer).
2) File sizes are unknown until the backup is finished. My assumption is that this will be a problem for DrivePool, in that, if it's writing to a disk that only has 4TB free and it ends up being a 6TB backup, then it will fail. Correct?
Correct.
3) I'm assuming there's no way to allow "write continuation" to another disk if the current disk fills or hits the % max.
Correct.
4) If a disk starts to fill can I set a lower max %, say 50%, and set the balance plugin to run every X minutes? My intent would be that if a disk starts to fill, it would "balance" other data off the disk and make room for additional write capacity as the current backup being written grows.
You can set Balancing to always run immediately upon detecting a trigger condition, or to run no more often than any integer multiple (including 1x) of 10-minute intervals. Note that (I believe) it cannot balance open files, or at least not files that are actively being written. @Christopher (Drashna)?
5) I would anticipate that we'll use 70-80TB of the almost 100TB that we'll have available to us. We will have headroom, but I'm concerned about managing/maximizing write space. Depending on the above answers, I would assume Veeam will start having write failures for larger backup files if there's not enough room on the volumes.
Correct. I've had this happen. I take it the enterprise version of Veeam still doesn't support splitting? (I use the standalone agents at home.)
6) Can I configure a non-SSD as a cache point, say one of the 20TB SATA volumes, that would then write out to the pool? I'd use it purely as a staging point, rather than for performance. At this point, ANYTHING is faster than our DataDomain.
Yes, you can. The SSD Optimizer plugin doesn't actually care whether an "SSD" is actually an SSD or not; it would be more accurate to call it the Cache Optimizer plugin. For example, you might set "Incoming files are cached on drives A and B; when A and B are more than 25% full they empty* to drives C, D, E and F in that preferred order, but try not to fill any storage drive to more than 90% capacity, and if any are then move files off them until they are no more than 80% full". Note that you can also make pools of pools (so pool P could consist of pools Q and R, which could consist of drives A+C+D and B+E+F respectively) if for some reason you want different configurations for different sub-pools.
*The SSD Optimizer plugin doesn't have fine control over emptying; when it starts it will attempt to continue until the cache is empty of all files not being written to it.
P.S. It is possible to write your own balancing plugins if you've got the programming chops.
P.P.S. Do not enable Read Striping in DrivePool's Performance options (it defaults to off) in production until you have confirmed that the software you use works reliably with it. I've found some hashing utilities (for doing comparison/integrity/parity checks) seem to expect a single physical disk and intermittently give false readings when read striping is enabled.
    1 point
  23. Well, StableBit DrivePool does support adding a Storage Spaces array to the pool. So until you have more disks and are able to migrate the data away, you could add both to a pool.
    1 point
24. Specifically, StableBit DrivePool and Windows don't need letters for the disks, nor even folder mount paths. These are there to make it easier for users to access the drives. But as somebody with 20+ drives, mounting the drives to folders makes things very easy. And we do have a guide on how to do so: https://wiki.covecube.com/StableBit_DrivePool_Q4822624
    1 point
  25. DaveJ

    Running Out of Drive Letters

    I have a similar setup on my backup NAS. All non-Drivepool drives are mounted to folders at c:\mount and I can access the drives directly from there if needed.
    1 point
  26. Shane

    Running Out of Drive Letters

Windows only supports drive letters A through Z. However, it isn't necessary for a drive (other than the boot, system and pagefile drives, and perhaps CD/DVD drives and similar) to have a letter; drives can instead be accessed by mounting them as folders in another drive (e.g. C:\Array\Drive27, C:\Array\Drive28, etc.), and furthermore DrivePool itself can have drives form part of a pool without being lettered or mounted at all. To add/remove drive letters or mount drives as folders in other drives, use Windows Disk Management: right-click a volume and click Change Drive Letters and Paths... Late edit for future readers: DON'T mount them as folders inside the pool drive itself, nor inside a PoolPart folder. That risks a recursive loop, which would be bad.
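For those who prefer the command line, a rough equivalent (assuming the target folder already exists and you've looked up the volume's GUID name, e.g. by running mountvol with no arguments) should be something like:
mountvol C:\Array\Drive27 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
with the GUID placeholder replaced by the actual volume name; mountvol C:\Array\Drive27 /D removes the mount point again.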
    1 point
  27. As far as I know it'd need to be converted to be mountable locally and the current tool doesn't support GoogleDrive (only Amazon and DropBox). I don't know if there are plans to update the tool for GD. @Christopher (Drashna) I can find this https://community.covecube.com/index.php?/topic/2879-moving-clouddrive-folder-between-providers/#comment-19900 but it doesn't indicate whether it ended up being possible?
    1 point
28. The whole issue became moot when one of the two drives arrived DOA. I installed the good drive and have now moved all the data from one 12TB drive to the 20TB. I am awaiting delivery of the second drive and will repeat the process. The machine was able to act as a Plex server while the file transfers took place. THANKS for the assist! Dale
    1 point
  29. Shane

    File Pool Duplication

Yes, to completely prevent any chance of data loss from 2 drives suddenly failing at the same time you'd need 3 times duplication. Note that Scanner doesn't protect against sudden failures; that's why they're called "sudden". Scanner protects against the type of failures that'll take longer to kill your drive than you/DrivePool will take to rescue any data you want off it. Basically there are what I'd consider to be four types of drive failure:
Sudden - bang, it's dead. This is what things like Duplication and RAID are meant to protect against.
Imminent - you get some warning it's coming. Duplication and RAID also protect against this type, and Scanner tries to give you enough time for pre-emptive action.
Eventual - no drive lasts forever. Scanner helps with this too by notifying you of a drive's age, duty cycle, etc., so you can budget ahead of time for new ones.
Subtle - the worst but thankfully rarest kind; instead of the drive dying it starts corrupting your data. Scanner can sometimes give clues, otherwise you need some method of being able to detect/repair it (e.g. some RAID types, SnapRAID, PAR2, etc.) or at least having intact backups elsewhere. DrivePool might help here, depending on whether you notice the corruption before it gets to the other duplicate(s).
If it helps any, I suggest following the old 3-2-1 rule of backup best practice, which means having at least three copies (production and two backups), at least two different types (back then it was disk and tape, today might be local and cloud) and at least one of those backups being kept offsite, or some variant of that rule suitable for your situation. For example, my setup:
DrivePool with 2x duplication (3x for the most important folders) to protect against sudden mechanical drive failure on the home server.
Pool is network-shared; a dedicated backup PC on the LAN takes regular snapshots to protect against ransomware and for quick restores.
Pool is also backed up to a cloud provider to protect against environmental failures (e.g. fire, flood, earthquake, theft).
    1 point
30. When scanning a new drive, Scanner constantly scans until the drive is fully scanned; this could take several days. Then it mostly sits idle for the 30/60/90 days until the scan expires. Then several more days of constant scanning. A great feature would be to calculate how many sectors per day/hour need to be scanned in order to make it back around to the oldest scanned sector before it expires, and pace the scanner to meet that schedule. If the scanner falls behind, due to being offline or throttled, it can increase the blocks/day dynamically to make it back to the oldest block in time. Possible easy implementation: in the "mark all sectors as good" menu, have an option to mark them as good, but not all with the same dates.
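The pacing arithmetic being proposed is simple; here is a rough sketch (my own illustration with made-up numbers, not how Scanner actually schedules its work) of spreading one full re-scan evenly over the expiry window and speeding up when behind:

```python
# Rough sketch of the proposed pacing: spread one full pass over the expiry
# window, and scan faster when behind schedule. All numbers are hypothetical.

total_sectors = 4_000_000_000           # sectors on the drive
expiry_days = 30                        # how old a scanned sector may get
sectors_rescanned_this_cycle = 0        # progress so far in the current pass
days_until_oldest_result_expires = 30   # time left before falling behind

# Baseline: even pacing over the whole window.
baseline_per_day = total_sectors / expiry_days

# Catch-up: whatever is still unscanned must fit into the days remaining
# before the oldest sector's scan result expires.
remaining = total_sectors - sectors_rescanned_this_cycle
required_per_day = remaining / max(days_until_oldest_result_expires, 1)

per_day = max(baseline_per_day, required_per_day)
print(f"scan about {per_day:,.0f} sectors/day ({per_day / 24:,.0f}/hour)")
```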
    1 point
31. No clue. If it helps at all, I have DrivePool v2.3.2.1493 on a Win10 Pro box with a mix of NVMe SSDs and SATA HDDs, and I even plugged in a USB HDD to see if anything changed, and I'm not getting that issue. Edit: spoke too soon. I am seeing that on the USB HDD I plugged in, but not on my other devices. It at least involves the DrivePool service, as stopping the service stops the effect and starting the service starts it back up.
    1 point
32. Definitely overthinking it. Specifically, while StableBit DrivePool will rebalance data, most of the default enabled balancers handle edge cases, so there should be very little balancing that occurs once the pool has "settled". There is a brief summary of these balancers here: https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing Plug-ins#Default Plug-ins But for ease:
StableBit Scanner - This plug-in is designed to work in conjunction with StableBit Scanner version 2.2 and newer. It performs automatic file evacuation from damaged drives and temperature control.
Volume Equalization - This balancer is responsible for equalizing the disk space used on multiple volumes that reside on the same physical disk. It has no user-configurable settings.
Disk Usage Limiter - This plug-in lets you designate which disks are allowed to store unduplicated vs. duplicated files. It doesn't do anything unless you change its settings to limit file placement on a disk.
Prevent Drive Overfill - This plug-in tries to keep an empty buffer of free space on each drive that is part of the pool, in order to facilitate existing file expansion.
Duplication Space Optimizer - This plug-in examines the current data distribution on all of your pooled disks and decides if some data needs to be rebalanced in order to provide optimal disk space availability for duplicated files (see About Balancing for more information).
The StableBit Scanner balancer may move stuff around a lot, but only if it detects issues with a drive. And the Duplication Space Optimizer will try to rebalance the data to minimize the amount of "Unusable for duplication" space on the pool. Aside from that, none of these should move data around much, normally.
    1 point
33. Especially if this is not an NVMe drive: SSDs use all sorts of different values for SMART. What is valid and okay for one drive may be out of spec on another drive. And that's not assuming that the OEM isn't using some sort of encryption/obfuscation for the numbers, which is super common. NVMe has an actual, published standard, and is generally better about this (though we've seen a few instances of issues with it too).
    1 point
34. Thanks for your insight and review, @VapechiK. I'll definitely keep it out of a mission-critical role. Right now it just has duplicated data from my drivepool, which is backed up elsewhere, and Steam game installs, so nothing I'm afraid of losing. It's the first off-brand SSD I've bought, so I wanted to give it a shot and try to grow my drivepool cheaply. I have an Intel 1TB NVMe boot drive, a Samsung 1TB EVO, a 2TB QVO and a Crucial 1TB besides the SP drive in the same machine. I had 2 Samsung 128GB Pro boot drives that stopped being useful after 10 years of power-on time. They would randomly corrupt system files until the system wouldn't boot. I'd say they were a success. I bought a Kingston 480GB drive to replace the last one of those and it seems to be doing well. I had an OCZ that would hang at boot time and not be detected after only a year or so; that one really sucked. Never any weird Scanner reporting with any of these drives. I think I'll hold out a little longer to see what happens to the price of the name-brand 2TB SSDs.
    1 point
35. And 104F = 40C, so it is likely the same crappy firmware on yours that was on the 2 that I returned. I guess they can get away with it as 'most' users will just buy it, install it, and not really monitor it, so these errors and notifications are never seen by the average user who runs games from it and is surprised when it fails suddenly (check out all the 1-star reviews on the Amazon page lol). I have 4 SSDs, all WD: 1x Red SN700 500GB NVMe boot drive, 2x Red 1TB 2.5" and 1x Blue 2TB M.2. They all show up fine SMART-wise in Scanner and HDSentinel. I guess this makes me a WD 'fangirl,' but really that's just how it worked out. Before I tried the Silicon Power drives, I tried both the LEVEN JS600 SSD 2TB and the fanxiang S101 2TB SSD. All 3 were underwhelming performance-wise compared to the WDs. Gotta love Amazon's 30-day return policy. iceman1337 is correct and he's echoing Christopher (Drashna): SMART on SSDs is all over the place, as manufacturers just make it up as they go. Your SP might last for years ignoring whatever errors it shows, but I wouldn't trust it for mission-critical data. Just my .02 cents...
    1 point
36. I'm certainly not reading it backwards, as Scanner has a clear 100% life used warning in red that I ignored for this SSD, and the SSD Life Remaining field saying 0%. It also has way too many power cycles because my PC was sleeping and waking almost immediately; I finally tracked that down to Scanner's wake-to-scan feature, which made no sense because it didn't scan after it woke up the PC.
    1 point
37. SyncThing needs to be installed on both; each instance scans its own content and compares notes with the other(s) to detect changes. This is different from FreeFileSync, which goes on one machine and scans both its own content and the remote content to detect differences. The former is better on slower networks, busier networks, or with large numbers of files (the issues compound each other), as it involves much less back-and-forth of network traffic, but FreeFileSync can compare a surprisingly large number of files on a fast network (e.g. about fifty thousand files per minute when my 1Gbps LAN is idle) and I feel its GUI is rather more user-friendly. Whichever option you go for, I'd suggest creating a test pool to trial it before committing your real pool - and you could make two test pools and try both.
    1 point
38. Welcome to the world of cheap SSDs. A few months ago I bought 2x 2TB Silicon Power 2.5" SSDs from Amazon and hooked them both up to the system SATA bus. One drive reported 1863GB and the other 1907GB; diskpart was ineffectual. Both were locked at a SMART temp of 40C (idle or under high I/O). Writing to them from my WD Red NVMe was ~300MB/s, so much less than advertised. There is a downloadable SP monitoring tool similar to the WD dashboard; I did not notice an option to upgrade the firmware when I ran it. So after a few days of this I returned them to Amazon, easy peasy. Aside from a much-needed firmware update from SP, I doubt Scanner can do much other than report whatever the drive is feeding it. The drive itself may be perfectly fine, and ignoring the warning is probably safe for now. I myself cannot abide that type of discrepancy staring me in the face. YMMV, but if it were me I'd return it and get a WD, which is what I actually did.
    1 point
  39. As an update, I have been informed that the latest beta now includes this feature.
    1 point
  40. Shane

    Losing duplicated files

    If a single drive drops from a pool then it should have gone into read-only mode and duplicated files should still be present in the pool, regardless of why the single drive dropped out. If the drive is still present in Explorer / Disk Management and seems to be okay, but is no longer in the pool, DrivePool's metadata (the bit that says "the hidden poolpart folders on drives A, B, C, etc are part of pool X") may have been damaged. I'd try a Manage Pool -> Re-measure... and if that doesn't help try Cog -> Troubleshooting -> Recheck duplication... and if that doesn't help I'd consider lodging a support ticket with Stablebit since the metadata is supposed to be stored in a triply redundant fashion where possible to prevent this sort of problem.
    1 point
  41. Shane

    n00b; SMART Unstable Sector

YMMV but I wouldn't trust that drive for storing anything particularly precious that wasn't backed up or duplicated elsewhere; chkdsk /r should not BSOD on a good drive. My guesses:
The BSOD happened because the chkdsk ran into the bad sector, tried to recover data from it, and the drive behaved in an unexpected way (e.g. maybe it sent back "potato" when the code was only programmed to handle "apple" or "banana"). That's basically what a BSOD is, after all - the OS going "something happened that I don't know how to handle, so I'm stopping everything rather than risk making something worse".
The drive has either #1 replaced the bad sector with a spare from a reserve that the drive doesn't count as a reallocation (according to the drive manufacturer anyway), #2 performed its own internal repair of the sector and satisfied itself that the sector is no longer bad, or #3 zeroing the sector didn't trip the problem, so as far as it cares all is well in drive land.
Anyway, glad you haven't lost anything!
    1 point
42. That's a plus! That said, I know it doesn't help now, but there are a couple of posts on the forum here that cover how to index/catalog your files. It may be worth checking those out. Also, if it's TV shows/movies/etc., there is software that can help make inventorying them easier.
    1 point
43. I'm sorry to hear about the drive failure! And yeah, any files that were only on the drive that failed would no longer show up in the pool.
    1 point
44. Sounds like you'd want the following for each pool?
Balancing -> Balancers -> only SSD Optimizer ticked
- under Drives, tick the SSD / Archive drives as appropriate to set your cache drive
- under Options, set sliders as desired (these only concern filling, they don't empty)
- under Ordered placement, leave both Unduplicated and Duplicated unticked (or, if you want to use it, make sure "Leave the files as they are" is selected).
Balancing -> Settings
- under Automatic balancing, select Balance immediately, with "Not more often than..." unticked
- under Automatic balancing - Triggers, select 100% / unticked (as you want it always moved straight away)
- under Plug-in settings, "Allow balancing plug-ins to force immediate balancing..." ticked (so it should move straight away anyway)
- under File placement settings, should be irrelevant since you're not using the File Placement section.
This should result in any files copied to the pool going via your SSD cache drive first, then being immediately moved to the others. As always with "production" data, I recommend confirming the behaviour is as expected with a test pool.
    1 point
45. I have a pool that has two 8TB drives. I am going to upgrade both drives to 16TB, but I cannot add a new 16TB without first removing an 8TB. I wasn't doing any pool duplication, so several days ago I kicked it off and it is 25% done. Takes forever. My plan was to wait until this is done, then remove one of the 8TB drives and replace it with a 16TB, wait for it to re-write the pool to the 16TB, then remove the 2nd 8TB and add the other 16TB. To me this is the only way, given I cannot have all the drives connected at the same time. Does this make sense?
    1 point
46. That would work (presuming you have enough free space). Alternatively you could use a USB drive dock to connect and add one of the new drives before removing one of the old drives, then repeat the process with the other new and old drives, though this assumes you have a spare USB port and a USB dock to plug into it. There are also manual tricks you can use to replace pooled drives more quickly (it still takes a while), but they require a certain level of "knowing what you're doing" in case anything doesn't go according to plan, involving copying from inside the pool drives' hidden PoolPart folders.
    1 point
  47. Thanks Rob, DrivePool correctly found the drives. Am I correct that if I wanted to re-assign the drive letters of the physical drives it will work the same way? EDIT - went ahead and re-assigned drive letters - DrivePool adjusted itself immediately
    1 point
  48. As long as Drivepool can see the drives it'll know exactly which pool they belong to. I move drives around regularly between my server and an external diskshelf and they always reconnect to the correct pools.
    1 point
49. AFAIK, copying, even cloning, does not work. The easiest way is:
1. Install/connect the new HDD.
2. Add the HDD to the Pool.
3. Now you can either click Remove for the 6TB HDD, or use Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> uncheck Duplicated and Unduplicated -> Save -> Remeasure and Rebalance.
4. Wait. Until. It. Is. Done. (Though you can reboot normally if you want/need to and it'll continue after boot; not entirely sure about that if you go the Remove route.)
    1 point
50. Getting the correct power state of the disk is a little tricky. There are really 2 separate mechanisms that control whether a disk has spun down or not: the OS and the disk's firmware. Here are the tricky parts:
The disk's firmware can spin a drive down at any time without the OS's knowledge. But this is typically disabled on new drives. This behavior can be modified under Disk Control in the StableBit Scanner.
In order to get the actual power state of the drive we can query it directly, instead of asking Windows. The problem here is that, to Windows, this appears as disk access, and it will prevent the OS from ever spinning down the drive.
What the StableBit Scanner does by default is it always asks the OS and never queries the drive directly. This ensures that the OS will spin the drive down correctly, even though we're querying it for the power state. But the issue here is that just because the OS thinks that the drive is active doesn't mean that it's actually spun up. If the disk's firmware has spun down the drive, the OS has no way to know that. The StableBit Scanner deals with this by reporting in the UI that the drive is Standby or Active. Since we can't attempt to query the drive directly without your explicit permission (this will upset Windows' power management), this is the best answer we can give you.
The Query power mode directly from disk option, which is found in Disk Settings, is there to work around this shortcoming. When enabled, the StableBit Scanner will attempt to query the power mode directly from the disk. Keep in mind that this can fail if it can't establish Direct I/O to the disk, in which case we fall back to relying on the OS.
The way it works is like this (see the sketch at the end of this post):
Query the OS. If the disk has spun down, then this must be the case: the disk is in Standby.
If the disk is Active (spun up), then we can't trust the OS because the disk firmware could have spun it down.
If the user has not explicitly allowed us to query the power mode from the disk, we must assume Standby or Active.
If the user has allowed us to query the power mode from the disk, query the power mode.
If the query succeeds, set the mode to Active or to Standby (not both, because we know the power state for sure).
If the query fails, fall back to the OS and set the mode to Standby or Active.
So when should you enable Query power mode directly from disk? When you don't want to use the OS's power management features to spin the disk down. Why would you do this?
Pros:
You can control the spin down on a per-disk basis.
You get exact disk power states being reported in the StableBit Scanner with no ambiguity.
You avoid disk spin-up issues when querying SMART (I will explain below).
Cons:
Requires Direct I/O to the disk.
To the OS (and to any other program that queries the OS) the disk will appear to be always spun up.
When the OS spins down a disk it flushes the file system cache prior to spinning it down. This ensures that the disk is not spun up very quickly afterwards because it needs to write some additional data to it from the cache. When the firmware spins a disk down, this does not happen and there is a chance that the disk will be spun up very quickly afterwards to perform another write. From my experience, this is not common in practice.
What about S.M.A.R.T. queries? In the StableBit Scanner, by default, SMART is queried from WMI first. If Direct I/O is not available then all the SMART data has to come from WMI. Typically this would not spin up a disk.
If Direct I/O is possible to the disk, then at least some additional SMART data will come from Direct I/O (and if WMI doesn't have the SMART data, then all of the SMART data comes from Direct I/O). One potential problem here is that Windows considers any communication with the disk to be disk activity. So if you're communicating with the disk to retrieve SMART every couple of minutes, then Windows will not spin the disk down. You can avoid this problem in 2 ways:
Don't let Windows control your disk spin down, and set up a Standby timer in Disk Control (this has the pros and cons outlined above).
Set Throttle queries in Settings -> Scanner Settings -> SMART -> Throttle queries. Set the throttle minutes to something higher than the Windows disk spin-down time (which can be examined in Power Options under the Windows Control Panel).
The option Only query during the work window or if scanning controls SMART queries and has no effect on power queries. Again, by default, power queries do not spin up a disk unless you've manually enabled Query power mode directly from disk (in which case you are effectively saying that you don't want the OS to ever spin down a disk).
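The decision logic described above boils down to a small function. Here is a rough sketch in Python (my own paraphrase of the explanation, not Scanner's actual code; the helper functions are hypothetical stand-ins, not real Scanner or Windows APIs):

```python
# Rough paraphrase of the decision logic described in this post. The three
# helper functions are hypothetical stand-ins, not real Scanner or Windows APIs.

def os_says_spun_down(disk):
    return disk.get("os_spun_down", False)

def user_allows_direct_query(disk):
    return disk.get("query_power_mode_directly", False)

def query_power_mode_directly(disk):
    # Returns "Active", "Standby", or None if Direct I/O isn't available.
    return disk.get("firmware_power_mode")

def power_state(disk):
    # 1. Ask the OS. If the OS already spun the disk down, that's definitive.
    if os_says_spun_down(disk):
        return "Standby"
    # 2. OS says Active, but the firmware may have spun it down behind its back.
    if not user_allows_direct_query(disk):
        return "Standby or Active"        # ambiguous, the best answer available
    # 3. User opted in: query the disk directly.
    mode = query_power_mode_directly(disk)
    if mode is not None:
        return mode                        # exact answer
    # 4. Direct query failed (no Direct I/O): fall back to the OS view.
    return "Standby or Active"

print(power_state({"os_spun_down": False}))  # -> Standby or Active
print(power_state({"query_power_mode_directly": True, "firmware_power_mode": "Standby"}))  # -> Standby
```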
    1 point