
Leaderboard

Popular Content

Showing content with the highest reputation since 05/27/13 in Posts

  1. To start: while I am new to DrivePool, I love its potential, and I own multiple licenses and their full suite. If you only use DrivePool for basic archiving of large files, with simple applications accessing them for periodic reads, you are unlikely to hit these bugs (this assumes you don't use any file synchronization / backup solutions). Further, I don't know how many thousands (tens or hundreds?) of DrivePool users there are, but clearly many are not hitting these bugs, or are not recognizing that they are hitting them, so this is NOT some new destructive "my files are 100% going to die" issue. Some of the reports I have seen on the forums, though, may actually be caused by these things without being recognized as such. As far as I know CoveCube was not previously aware of these issues, so tickets may not have even considered this possibility. I started reporting these bugs to StableBit ~9 months ago, and informed them I would be putting this post together ~1 month ago. Please see the disclaimer below as well, as some of this is based on observations rather than known facts.

You are most likely to run into these bugs with applications that:
*) Synchronize or back up files, including cloud-mounted drives like OneDrive or Dropbox
*) Handle large quantities of files or monitor them for changes, like coding applications (Visual Studio / VSCode)

Still, these bugs can cause silent file corruption, file misplacement, deleted files, performance degradation, data leakage (a file shared with someone externally could have its contents overwritten by any sensitive file on your computer), missed file changes, and potentially other issues for a small portion of users (I have had nearly all of these occur). It may also trigger some BSOD crashes; I had one such crash that is likely related. Due to the subtle way some of these bugs present, it may be hard to notice they are happening even when they are. In addition, these issues can occur even without file mirroring or files pinned to a specific drive. I do have some potential workarounds/suggestions at the bottom.

More details are at the bottom, but the important bug facts up front:

Windows has a native file-change notification API using overlapped IO calls. This allows an application to listen for changes on a folder, or a folder and its sub-folders, without having to constantly check every file to see if it changed. StableBit triggers "file changed" notifications even when files are merely accessed (read) in certain ways. StableBit does NOT generate notification events on the parent folder when a file under it changes (Windows does). StableBit does NOT generate a notification event when only a FileID changes (the next bug covers FileIDs).

Windows, like Linux, has a unique ID number for each file written to the drive. If there are hardlinks to the same file, they share the same unique ID (so one FileID may have multiple paths associated with it). Linux calls this the inode number; Windows calls it the FileID. Rather than accessing a file by its path, you can open a file by its FileID. In addition, it should be impossible for two files to share the same FileID: it is a 128-bit number persistent across reboots (128 bits means the count of unique values is 39 digits long, giving the uniqueness of something like an MD5 hash). A FileID does not change when a file moves or is modified. StableBit, by default, supports FileIDs, however they appear to be ephemeral: they do not seem to survive reboots or file moves.
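For illustration, here is a minimal PowerShell sketch of the kind of watcher an application sets up through that notification API (P:\ is a placeholder pool path). On NTFS, merely reading files under the watched folder should not raise Changed events; per the report above, on a pool it reportedly does:

    # Watch a pool folder (and sub-folders) for change notifications
    $watcher = New-Object System.IO.FileSystemWatcher 'P:\', '*'
    $watcher.IncludeSubdirectories = $true
    $watcher.NotifyFilter = [System.IO.NotifyFilters]'FileName, DirectoryName, LastWrite'
    Register-ObjectEvent $watcher Changed -Action { Write-Host "Changed: $($Event.SourceEventArgs.FullPath)" } | Out-Null
    Register-ObjectEvent $watcher Created -Action { Write-Host "Created: $($Event.SourceEventArgs.FullPath)" } | Out-Null
    Register-ObjectEvent $watcher Renamed -Action { Write-Host "Renamed: $($Event.SourceEventArgs.FullPath)" } | Out-Null
    $watcher.EnableRaisingEvents = $true
    # Now open/read files under P:\ from another window (no writes) and note which events fire.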
Keep in mind FileIDs are used for directories as well, not just files. Further, if a directory is moved/renamed, not only does its FileID change but so does that of every file under it. I am not sure if there are other situations in which they may change. In addition, if a descendant file/directory FileID changes due to something like a directory rename, StableBit does NOT generate a notification event that it has changed (the application gets the directory event notification but nothing for the children).

There are some other things to consider as well. DrivePool does not implement the standard Windows USN Journal (a system for tracking file changes on a drive). It specifically identifies itself as not supporting it, so applications shouldn't try to use it with a DrivePool drive. That means applications that traditionally wouldn't use the file change notification API or FileIDs may fall back to a combination of them to accomplish what they would otherwise use the USN Journal for (which can exacerbate the problem). The same is true of Volume Shadow Copy (VSS): applications that might traditionally use it cannot (DrivePool identifies that it cannot do VSS), so they may resort to the methods described here that they do not traditionally use.

Now, the effects of the above bugs may not be completely apparent.

For the overlapped IO / file change notifications: an application monitoring a DrivePool folder or sub-folder will get erroneous "file changed" notifications when anything merely accesses the files. Just opening something like File Explorer on a folder, or even switching between applications, can cause file accesses that trigger the notification. If an application acts on a notification and then checks the file at the end of that action, this in itself may cause another notification. Applications that rely on getting a folder-changed notification when a child changes will not get these at all with DrivePool; if an application monitors only the folder and not its children, no notification is generated at all (versus just the child one), so it can miss changes.

For FileIDs: it depends what the application uses the FileID for, but it may assume the FileID stays the same when a file moves. Since it doesn't with DrivePool, the application might read, back up, or sync the entire file again when it is moved (a performance issue). An application that uses the Windows API to open a file by its ID may not get the file it is expecting, or a file that was simply moved will throw an error when opened by its old FileID because DrivePool has changed the ID. For example, say an application caches that the FileID for ImportantDoc1.docx is 12345, but after a restart 12345 refers to ImportantDoc2.docx. If this application is a file sync application and ImportantDoc1.docx is changed remotely, then when it goes to write those remote changes to the local file using OpenFileById it will actually overwrite ImportantDoc2.docx with those changes. I didn't spend the time reading the Windows file system requirements to know when Windows allows a FileID to change (or not change). It is important to note that even if such changes/reuse are theoretically allowed, if they are not commonplace (because Windows essentially uses a number with MD5-hash-like uniqueness in terms of repeats), applications may simply assume it doesn't happen even if it is technically allowed.
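As a sketch of how to check this yourself (paths are placeholders, and fsutil generally wants an elevated prompt), you can record the FileID Windows reports for every file in a tree, then reboot or rename the parent folder, capture again, and diff the two CSV files; on NTFS the IDs should be identical:

    # Capture path -> FileID for every file under a folder (run before and after a reboot or rename)
    Get-ChildItem -Path 'P:\SomeFolder' -Recurse -File | ForEach-Object {
        [pscustomobject]@{
            Path   = $_.FullName
            FileId = (fsutil file queryFileID $_.FullName | Out-String).Trim()
        }
    } | Export-Csv 'C:\Temp\fileids-before.csv' -NoTypeInformation
    # Afterwards, export to 'C:\Temp\fileids-after.csv' and compare the two files.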
A backup or file sync program might assume that a file with a specific FileID is always the same file. If FileID 12345 is c:\MyDocuments\ImportantDoc1.docx one day and c:\MyDocuments\ImportantDoc2.docx another, it may mistake document 2 for document 1, overwriting important data or restoring data to the wrong place. If it is creating a whole-drive backup, it may assume it has already backed up c:\MyDocuments\ImportantDoc2.docx if that file now has the FileID that ImportantDoc1.docx had by the time it reaches it (at which point DrivePool would have given Document1 a different FileID).

Why might applications use FileIDs or file change notifiers? It may not seem intuitive, but a few major reasons are:
*) Performance: file change notifiers are an event/push-based system, so the application is told when something changes. The common alternative is a poll-based system where an application must scan all the files looking for changes (and may rely on file timestamps or even hash entire files to determine this), which causes a good bit more overhead and slowdown.
*) FileIDs already handle hardlink de-duplication (Windows may have multiple hardlinked copies of a file on a drive for various reasons, but if you back up based on FileID you back that file up once rather than multiple times). FileIDs are also great for handling renames. Say you are an application that syncs files and the user backs up c:\temp\mydir with 1000 files under it. If they rename c:\temp\mydir to c:\temp\mydir2, an application using FileIDs can say: wait, that folder is the same, it was just renamed, so rename that folder in our remote version too. That is a very minimal operation on both ends. With DrivePool, however, the FileID changes for the directory and all sub-files. If the sync application uses this to detect changes, it now re-uploads all of those files, using a good bit more resources locally and remotely. If the application also uses versioning, this is far more likely to cause conflicts when two or more clients are syncing, as mass amounts of files appear to be changing.

Finally, even if an application tries to monitor for FileID changes using the file change API, the notification bugs above mean it may not get any notifications when child FileIDs change, so it might assume they have not changed.

Real examples

OneDrive: This started with massive OneDrive failures. I would find OneDrive re-uploading hundreds of gigabytes of images and videos multiple times a week. These were not changing or moving. I don't know if the issue is that OneDrive uses FileIDs to determine whether a file is already uploaded, or that when it scanned a directory it triggered notifications that all the files in that directory had changed and it re-uploaded based on those notifications. After this I noticed files becoming deleted both locally and in the cloud. I don't know what caused this; it might have been that the old file was considered deleted because its FileID was gone, and while there was a new file (actually the same file) in its place, some odd race condition occurred. It is also possible that it queued the file for upload, the FileID changed, and when it went to open the file for upload it found it 'deleted' (the FileID no longer pointed to a file) and queued the delete operation. I also found that files uploaded to the cloud in one folder were sometimes downloading into a different folder locally. I am guessing this is because the folder FileID changed.
It thought the 2023 folder had ID XYZ, but that ID now pointed to a different folder, so it put the file in the wrong place. The final form of corruption was finding the data from one photo or video inside a file with a completely different name. This is almost guaranteed to be due to the FileID bugs. It is highly destructive, and backups make it far harder to correct: with one file's contents replaced by another's, you need to know when the good content existed and which files were affected. Depending on retention policies, the file contents that replaced it may overwrite the good backups before you notice. I also had a BSOD with OneDrive where it was trying to set attributes on a file and the CoveFS driver corrupted some memory. It is possible this was a race condition, as OneDrive may have been processing hundreds of files very rapidly due to the bugs. I have not captured a second BSOD from it, but I also stopped using OneDrive on DrivePool due to the corruption. Another example of this is data leakage. Let's say you share your favorite article on kittens with a group of people. OneDrive, believing that file has changed, goes to open it using the FileID; however, that FileID could now correspond to essentially any file on your computer, so the contents of some sensitive file are put in place of that kitten file, and everyone you shared it with can access it.

Visual Studio failures: Visual Studio is a code editor/compiler, and three distinct bugs happen with it. First, when compiling, touching one file in a folder seemed to recompile the entire folder, likely due to the notification bug. This is just a slowdown, but an annoying one. Second, Visual Studio has compiler-generated code support, meaning the compiler generates actual source code that lives next to your own source code. Normally, once compiled, it doesn't regenerate and recompile this source unless it must change, but due to the notification bugs it regenerates this code constantly, and if there is an error in other code it causes an error there, cascading into several other invalid errors. When debugging, Visual Studio by default will only use symbols (debug location data) that exactly match the source; since DrivePool's notifications fire on certain file accesses, Visual Studio constantly thinks the source has changed since it was compiled, and you will only be able to set breakpoints in source if you disable the exact-symbol-match default. If you have multiple projects in a solution with one dependent on another, it will often rebuild the other project dependencies even when they haven't changed; for large solutions that can be crippling (a performance issue). Finally, I often had IntelliSense errors showing up even though there were no errors during compiling, and worse, IntelliSense would completely break at points. All due to DrivePool.

Technical details / full background & disclaimer: I have sample code and logs documenting these issues in greater detail if anyone wants to replicate them. It is important for me to state that DrivePool is closed source and I don't have the technical details of how it works. I also don't have the technical details of how applications like OneDrive or Visual Studio work, so some of this is guesswork as to why the applications fail. The facts stated are true to the best of my knowledge. Shortly before my trial expired in October of last year I discovered some odd behavior. I had a technical ticket filed within a week and within a month had traced down at least one of the bugs.
The issue can be seen at https://stablebit.com/Admin/IssueAnalysis/28720. It shows priority 2/important, which I assume is the second highest (probably critical or similar above it). It is great that it has priority, but as it has been over 6 months since it was filed without updates, I figured warning others about the potential corruption was important. The FileSystemWatcher API is implemented in Windows using async overlapped IO; the exact code can be seen here: https://github.com/dotnet/runtime/blob/57bfe474518ab5b7cfe6bf7424a79ce3af9d6657/src/libraries/System.IO.FileSystem.Watcher/src/System/IO/FileSystemWatcher.Win32.cs#L32-L66 That corresponds to this kernel API: https://learn.microsoft.com/en-us/windows/win32/fileio/synchronous-and-asynchronous-i-o Newer API calls use GetFileInformationByHandleEx to get the FileID, while older stat calls represent it with nFileIndexHigh/nFileIndexLow.

In terms of the FileID bug, I wouldn't normally have even thought about it, but the advanced config page (https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings) says this under CoveFs_OpenByFileId: "When enabled, the pool will keep track of every file ID that it gives out in pageable memory (memory that is saved to disk and loaded as necessary)." Keeping track of files in memory is certainly very different from how Windows does it, so I thought this might be the source of the issue. I also don't know if there are caps on the maximum number of files it will track; if it resets FileIDs in situations other than reboots, that could be much worse. Turning this off will at least break NFS servers, as the docs mention it is "required by the NFS server". Finally, the FileID numbers given out by DrivePool are incremental and very low, which means that when they do reset you will almost certainly get collisions with former numbers. What is not clear is whether there is a chance of FileID corruption: when it assigns these IDs in a multi-threaded scenario with many different files at the same time, could the system fail? I have seen no proof that it does, but when incremental IDs are assigned like this for mass quantities of files it has a higher chance of occurring. Microsoft says this about deleting the USN Journal: "Deleting the change journal impacts the File Replication Service (FRS) and the Indexing Service, because it requires these services to perform a complete (and time-consuming) scan of the volume. This in turn negatively impacts FRS SYSVOL replication and replication between DFS link alternates while the volume is being rescanned." Now, DrivePool never supports the USN Journal at all, so it isn't exactly the same thing, but it is clear that several core Windows services use it for normal operations, and I do not know what fallbacks they use when it is unavailable.

Potential fixes: there are advanced settings for DrivePool at https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings; beware that these changes may break other things.

CoveFs_OpenByFileId - set to false (by default it is true). This disables the OpenByFileID API, which several applications clearly use. In addition, while DrivePool may disable that function with this setting, it doesn't disable FileIDs themselves, so any application using FileIDs as static identifiers for files may still run into problems.

I would avoid any file backup/synchronization tools on DrivePool drives (if possible). These likely have the highest chance of lost files, misplaced files, file content being mixed up, and excess resource usage.
If you can't avoid them, consider taking file hashes for the entire DrivePool directory tree. Do this again at a later point and make sure files that shouldn't have changed still have the same hash. If you have files that rarely change after being created, then hashing each file at some point after creation and alerting if that file disappears or its hash changes would act as an easy early warning that one of these bugs has been hit.
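A minimal PowerShell sketch of that kind of check (paths and the hash algorithm are just examples): hash the tree once as a baseline, then re-run later and list anything whose hash changed or that disappeared:

    $baseline = 'C:\Temp\pool-hashes-baseline.csv'
    # Hash every file on the pool (this can take a long time on a large pool)
    $current = Get-ChildItem -Path 'P:\' -Recurse -File | Get-FileHash -Algorithm SHA256 |
        Select-Object Path, Hash
    if (Test-Path $baseline) {
        # Baseline entries with no matching Path+Hash in the current scan = changed or missing files
        Compare-Object -ReferenceObject (Import-Csv $baseline) -DifferenceObject $current -Property Path, Hash |
            Where-Object SideIndicator -eq '<=' | Format-Table Path, Hash
    } else {
        $current | Export-Csv $baseline -NoTypeInformation
    }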
    5 points
  2. malse

    WSL2 Support for drive mounting

    Hi, I'm using Windows 10 2004 with WSL2. I have 3x drives: C:\ (SSD), E:\ (NVME), D:\ (DrivePool of 2x 4TB HDD). When the drives are mounted in Ubuntu, I can run ls -al and it shows all the files and folders on the C and E drives. This is not possible on D. When I run ls -al on D, it returns 0 results, but strangely enough I can cd into the directories in D. Is this an issue with DrivePool being mounted? It seems like the only logical difference (aside from it being mechanical) between the other drives. They are all NTFS.
    5 points
  3. hammerit

    WSL 2 support

    I tried to access my DrivePool drive via WSL 2 and got this. Any solution? I'm using 2.3.0.1124 BETA.
    ➜ fludi cd /mnt/g
    ➜ g ls
    ls: reading directory '.': Input/output error
    Related thread: https://community.covecube.com/index.php?/topic/5207-wsl2-support-for-drive-mounting/#comment-31212
    4 points
  4. I just wanted to say that @Christopher (Drashna) has been very helpful each time I've created a topic. I have found him and the others I've worked with to be thoughtful and professional in their responses. Thanks for all the work you all do. Now we can all stop seeing that other thread previewed every time we look at the forum home.
    4 points
  5. srcrist

    Optimal settings for Plex

    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
    20MB chunk size
    50+ GB Expandable cache
    10 download threads
    5 upload threads, turn off background i/o
    upload threshold 1MB or 5 minutes
    minimum download size 20MB
    20MB Prefetch trigger
    175MB Prefetch forward
    10 second Prefetch time window
    4 points
  6. VERY IMPRESSED!
    Didn't need to create an account and password
    Same activation code covers EVERY product on EVERY computer!
    Payment information remembered so additional licenses are purchased easily
    Nice bundle and multi-license discount
    I'm in love with the Drive Pool and Scanner. Thanks for a great product and a great buying experience. -Scott
    4 points
  7. @Shane and @VapechiK - Thank you both for the fantastic, detailed information! As it seemed like the easiest thing to start with, I followed VapechiK's instructions from paragraph 4 of their reply to simply remove the link for DP (Y:) under DP (E:), and it worked instantly and without issue! Again, thanks for taking the time to provide solutions, it really is appreciated! Thanks much!
    3 points
  8. My advice: contact support and send them Troubleshooter data. Christopher is very keen on resolving problems around the "new" Google way of handling folders and files.
    3 points
  9. Folks, thanks to all for the ideas. I am not neglecting them, but I have some health issues that make it very hard for me to do much of anything for a while. I will try the more promising ones as my health permits, but it will take some time. Judging by operation and the lack of errors during writing/reading, it seems that I am in no danger except that duplication won't work. I can live with that for a while. Keep the ideas coming and, maybe, I can get this mess fixed and report back which tools worked for me. I am making one pass through my pool where I remove a drive from the pool, run full diagnosis and file system diagnosis on that drive, then re-add it back to the pool. When completed, that will assure that there are no underlying drive problems and I can proceed from there. Thanks to all who tried to help this old man.
    2 points
  10. USB so-called DAS/JBOD/etc. units usually use a SATA port multiplier setup internally, and that is likely the source of your issues. A SATA port multiplier is a way of connecting multiple SATA devices to one root port, and due to the way the ATA protocol works, when I/O is performed it essentially holds the entire bus created from that root port hostage until the requested data is returned. It is also important to know that write caching will skew any write benchmark results if the enclosure uses UASP or you have explicitly enabled it. These devices perform even worse with a striping filesystem (like raidz, btrfs raid 5/6 or mdraid 5/6) and with highly fragmented data (which causes a bunch of seek commands that, again, hold the bus hostage until they complete, which with spinning media creates a substantial I/O burden). Honestly, your options are to either accept the loss of performance (it is tolerable but noticeable on a 4-drive unit; no idea how it is on your 8-drive unit) or invest in something like a SAS JBOD, which will actually have sane real-world performance. USB isn't meant for more than a disk or two, and between things like this and hidden overhead (like the 8b/10b encoding, root port and chip bottlenecks, general inability to pass SMART and other data, USB disconnects, and other issues that aren't worth getting into) it may be worth just using a more capable solution.
    2 points
  11. Bear

    Running Out of Drive Letters

    I duplicate all HDD's, except the ones that have the OS's on them, with those, I use 'minitool partition wizard'. The 4 bay enclosures I linked to above, I have 2 of the 8 bay ones, with a total of 97.3TB & I now have only 6.34TB free space out of that. It works out cheaper to get the little 4 bay ones, and they take HDD's up to 18TB - sweet If you like the black & green, you could always get a pint of XXXX & put green food dye into it, you don't have to wait till St. Patrick's Day. That would bring back uni days as well 🤣 🍺 👍 " A pirate walks into a bar with a steering wheel down his daks He walks up to the bar & orders a drink The barman looks over the bar and says, "do you know you have a steering wheel down your daks?" The pirate goes, "aye, and it's driving me nuts"" 🤣 🤣 🤣 🍺 👍 🍺 cheers
    2 points
  12. I'm stubborn, so I had to figure this out myself. Wireshark showed:
    SMB2 131 Create Response, Error: STATUS_OBJECT_NAME_NOT_FOUND
    SMB2 131 Create Response, Error: STATUS_FILE_IS_A_DIRECTORY
    SMB2 131 GetInfo Response, Error: STATUS_ACCESS_DENIED
    SMB2 166 Lease Break Notification
    I thought it might be NTFS permissions, but even after re-applying security settings per DP's KB (https://wiki.covecube.com/StableBit_DrivePool_Q5510455) I still had issues. The timer is 30 seconds, plus 5 seconds for the SMB handshake to collapse. It's the oplock break via the Lease Break Ack Timer. This MS KB helped: Cannot access shared files or folders on a drive in Windows Server 2012 or Windows Server 2012 R2. Per MS (above), to disable SMB2/3 leasing entirely, do this:
    REG ADD HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableLeasing /t REG_DWORD /d 1 /f
    I didn't need to restart SMB2/3; the change was instant, and file lookups and even a simple right-click in Explorer came up instantly. A process that took 8+ days finished in an hour or so. Glad to be rid of this problem. Leases are disabled, yes, but SMB oplocks are still available.
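    For anyone else trying this, a hedged PowerShell equivalent to confirm the value took, and to put things back if disabling leasing causes other problems:

        # Check the current DisableLeasing value
        Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters' -Name DisableLeasing
        # Revert to the default behaviour (re-enable SMB2/3 leasing) if needed
        Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters' -Name DisableLeasing -Value 0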
    2 points
  13. thestillwind

    I cannot work

    Well, this is why always-online functionality is really bad. They need to add at least a grace period or something, because my tools aren't working. Before the StableBit Cloud thing, I never had any problem with StableBit software.
    2 points
  14. For reference, the beta versions have some changes to help address these:
    .1648 * Fixed an issue where a read-only force attach would fail to mount successfully if the storage provider did not have write access and the drive was marked as mounted.
    .1647 * Fixed an issue where read-only mounts would fail to mount drives when write access to the storage provider was not available.
    .1646 * [Issue #28770] Added the ability to convert Google Drive cloud drives stored locally into a format compatible with the Local Disk provider. - Use the "CloudDrive.Convert GoogleDriveToLocalDisk" command.
    2 points
  15. Note that hdparm only controls if/when the disks themselves decide to spin down; it does not control if/when Windows decides to spin the disks down, nor does it prevent them from being spun back up by Windows or an application or service accessing the disks, and the effect is (normally?) per-boot, not permanent. If you want to permanently alter the idle timer of a specific hard drive, you should consult the manufacturer. An issue here is that DrivePool does not keep a record of where files are stored, so I presume it would have to wake up (enough of?) the pool as a whole to find the disk containing the file you want if you didn't have (enough of?) the pool's directory structure cached in RAM by the OS (or other utility).
    2 points
  16. hello even if you're not using bitlocker, you MUST change the setting value from null to false in the DrivePool json file. otherwise DP will ping your disks every 5secs or so, and your disks will never sleep at all anyway. then you begin messing around with windows settings to set when they sleep. folks here have had varying degrees of success getting drives to sleep, some with no luck at all. in StableBit Scanner there are various Advanced Power Management (APM) settings that bypass windows and control the drive through its firmware. i have read of success going that route, but have no experience at all since i am old school and my 'production' DP drives spin constantly cuz that's how i like it. to change the json: https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings there are many threads here on this topic as you have seen, but of course i can't find them easily now that i'm looking lol... perhaps @Shane or @Christopher (Drashna) will provide the link where Alex (the developer) explains the APM settings in Scanner and the whole drive sleep issue in general in greater detail. tl;dr you must turn off bitlocker detection first before your drives will ever sleep. BTW if you ever trial StableBit CloudDrive you must change the same setting in its json as well. good luck Edit: found the link: https://community.covecube.com/index.php?/topic/48-questions-regarding-hard-drive-spindownstandby/
    2 points
  17. hello. in Scanner go to settings with the wrench icon > advanced settings and troubleshooting and on the first tab is where to backup/restore its settings. you may have to enable the advanced settings from the 'general' tab under 'Scanner settings...' first. it will create a zip file containing many json files and want to put it in 'documents' by default so just back it up elsewhere. after renaming drives in Scanner i stop the StableBit Scanner service, twiddle my thumbs for a sec or three, restart the service and GUI and the custom names and locations are all saved (for me anyway). only then do i actually create the Scanner backup zip file. i haven't had to actually restore Scanner using this zip file yet but it seems like it will work (knock on wood). and i don't see how recent scan history would be saved using this method. saved from when the zip file was created maybe but anything truly recent is likely gone. it's probably a good idea to create a spreadsheet or text file etc. with your custom info ALONG WITH the serial number for each of your drives so if it happens again you can easily copy/paste directly into Scanner. i set up my C:\_Mount folder with SNs as well so i always know which drive is which should i need to access them directly. i have 2 UPSs as well. 1 for the computer, network equipment, and the RAID enclosure where qBittorrent lives. the other runs my remaining USB enclosures where my BU drives and DrivePool live. even that is not fool-proof; if no one's home when the power dies, the one with all the enclosures will just run till the battery dies, as it's not controlled by any software like the one with the computer and modem etc, which is set to shut down Windows automatically/gracefully after 3 minutes. at least it will be disconnected from Windows before the battery dies and the settings will have a greater chance of being saved. if you have a UPS and it failed during your recent storm, all i can say is WOW bad luck. one of these or something similar is great for power outages: https://www.amazon.com/dp/B00PUQILCS/?coliid=I1XKMGSRN3M77P&colid=2TBI57ZR3HAW7&psc=1&ref_=list_c_wl_lv_ov_lig_dp_it saved my stuff once already. keep it plugged in an outlet near where you sleep and it will wake you up. hope this helps and perhaps others may benefit from some of this info as well. cheers
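    if you prefer doing that service stop/start from a PowerShell prompt instead of services.msc, here is a small sketch (the display name is assumed, so confirm it with Get-Service first):

        Get-Service -DisplayName '*Scanner*'             # confirm the exact service name on your system
        Get-Service -DisplayName '*Scanner*' | Stop-Service
        Start-Sleep -Seconds 5                           # give it a moment before restarting
        Get-Service -DisplayName '*Scanner*' | Start-Service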
    2 points
  18. hhmmm yes, it seems a hierarchical pool was inadvertently/unintentionally created. i have no idea of your skill level so if you're not comfortable with any of the below... you should go to https://stablebit.com/Contact and open a ticket. in any event removing drives/pools from DrivePool in the DrivePool GUI should NEVER delete your files/data on the underlying physical drives, as any 'pooled drive' is just a virtual placeholder anyway. if you haven't already, enable 'show hidden files and folders.' then (not knowing what your balancing/duplication settings are), under 'Manage Pool ^ > Balancing...' ensure 'Do not balance automatically' IS checked, and 'Allow balancing plug-ins to force etc. etc...' IS NOT checked for BOTH pools. SAVE. we don't want DP to try to run a balance pass while resolving this. maybe take a SS of the settings on DrivePool (Y:) before changing them. so DrivePool (E:) is showing all data to be 'Other' and DrivePool (Y:) appears to be duplicated (x2). i say this based on the shade of blue; the 2nd SS is cut off before it shows a green 'x2' thingy (if it's even there), and drives Q & R are showing unconventional total space available numbers. the important thing here is that DP (E:) is showing all data as 'Other.' under the DrivePool (E:) GUI it's just as simple as clicking the -Remove link for DP (Y:) and then Remove on the confirmation box. the 3 checkable options can be ignored because DrivePool (E:) has no actual 'pooled' data in the pool and those are mostly evacuation options for StableBit Scanner anyway. once that's done DrivePool (E:) should just poof disappear as Y: was the only 'disk' in it, and the GUI should automatically switch back to DrivePool (Y:). at this point, i would STOP the StableBit DrivePool service using Run > services.msc (the DP GUI will disappear), then check the underlying drives in DrivePool (Y:) (drives P, Q, & R in File Explorer) for maybe an additional PoolPart.XXXXXXXX folder that is EMPTY and delete it (careful, don't delete the full hidden PoolPart folders that contain data on the Y: pool). then restart the DP service and go to 'Manage Pool^ > Balancing...' and reset any changed settings back the way you had/like them. SAVE. if a remeasure doesn't start immediately do Manage Pool^ > Re-measure > Re-measure. i run a pool of 6 spinners with no duplication. this is OK for me because i am OCD about having backups. i have many USB enclosures and have played around some with duplication and hierarchical pools with spare/old hdds and ssds in the past and your issue seems an easy quick fix. i can't remember whether DP will automatically delete old PoolPart folders from removed hierarchical pools or just make them 'unhidden.' perhaps @Shane or @Christopher (Drashna) will add more. Cheers
    2 points
  19. Thanks Shane for confirming DrivePool was the source and thanks VapechiK for the solution. I set the "BitLocker_PoolPartUnlockDetect" override value to False and after a reboot all the pings were gone. For what it's worth the only reason I noticed this is that I wasn't seeing the external (backup) drive going into standby power mode so I started to look for reasons. Power mode is still active but I think Hard Disk Sentinel's SMART poll may be keeping that drive alive. Not a big deal now that the pings are gone.
    2 points
  20. hi. in windows/file explorer enable 'show hidden files and folders.' then go to: https://wiki.covecube.com/Main_Page and bookmark for future reference. from this page on the left side click StableBit DrivePool > 2.x Advanced Settings. https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings if you are using BitLocker to encrypt your drives you will NOT want to do this, and will just have to live with the drive LEDs flashing and disc pings i assume. i don't use it so i don't know much about it. the given example on this page just happens to be the exact .json setting you need to change to stop DP from pinging your discs every ~5secs. set "BitLocker_PoolPartUnlockDetect" override value from null to false as shown. if StableBit CloudDrive is also installed you will need to change the same setting for it also. opening either of these json files, it just happens to be the very top entry on the file. you may need to give your user account 'full control' on the 'security' tab (right click the json > properties > security) in order to save the changes. this worked immediately for me, no reboot necessary. YMMV... good luck
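    for reference, the entry in the DrivePool settings json looks roughly like the snippet below once changed; treat this as a sketch (the exact layout and the Default value can differ by version, and you only touch the Override line, changing null to false):

        "BitLocker_PoolPartUnlockDetect": {
          "Default": true,
          "Override": false
        }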
    2 points
  21. Reparse point information is stored in the .covefs folder in the root of the pool. Worst case, delete the link, remove the contents of the .covefs folder, and then reboot.
    2 points
  22. Any update on WebDAV? Is there a specific issue preventing it being added? It's been a while...
    2 points
  23. As long as Drivepool can see the drives it'll know exactly which pool they belong to. I move drives around regularly between my server and an external diskshelf and they always reconnect to the correct pools.
    2 points
  24. My backup pool has a 1TB SSD that I use to speed up my backups over a 10Gbs link. Works great, as I can quickly backup my deltas from the main pool --> backup pool and in slow time it pushes the files to the large set of spinning rust (100TB). However, when I try to copy a file that is larger than the 1TB SSD I get a msg saying there is not enough available space. Ideally, the SSD Optimiser should just push the file directly to a HDD in such cases (feature request?), but for now what would be the best way of copying this file into the pool?
    - Manually copy directly to one of the HDDs behind DrivePool's back, then rescan?
    - Turn off the SSD Optimiser, then copy the file? or,
    - Some other method?
    Thanks Nathan
    2 points
  25. Not currently, but I definitely do keep on bringing it up.
    2 points
  26. AFAIK, copying, even cloning, does not work. The easiest way is:
    1. Install/connect the new HDD
    2. Add the HDD to the pool
    3. Now you can either click on Remove for the 6TB HDD, or use Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> uncheck Duplicated and Unduplicated -> Save -> Remeasure and Rebalance.
    4. Wait. Until. It. Is. Done. (Though you can reboot normally if you want/need to and it'll continue after boot; not entirely sure that holds if you go the Remove route.)
    2 points
  27. To get this started, apparently: My server was kind of piecemeal constructed. I recently purchased a 42U HP Rack from a local company (via Craigslist), for super cheap ($50, so literally couldn't pass it up).

    Sophos UTM (Home):
    Case: Antec ISK 110 VESA case
    Mobo (SoC): ASRock Rack J1900D2Y
    RAM: 4GB of non-ECC RAM
    OS Drive: Samsung 850 Pro 120GB SSD

    Storage Server:
    Case: SuperMicro 847E26-R1K28LPB
    OS: Windows Server 2012R2 Essentials
    CPU: AMD FX-8120 Intel Xeon E3 1245v3 (link)
    MoBo: ASRock 990FX Extreme3 Supermicro MBD-X10SAT-O (link)
    RAM: 2x8GB Crucial ECC
    GFX: nVidia geForce 9400 Intel HD 4600 (on-processor GFX)
    PSU: Built in, 2x redundant power supplies (1280W 80+ Gold)
    OS Drive: Crucial MX200 256GB SSD
    Storage Pool: 146TB: 4x 4TB (Seagate NAS ST4000VN000) + 8x 4TB (WD40EFRX) + 12x 8TB Seagate Archive (ST8000AS0002), 2x 8TB Seagate Barracudas (ST8000DM004), 2x 128GB OCZ Vertex 4s
    Misc Storage: 500GB, used for temp files (downloads)
    HDD Controller card: IBM ServeRAID M1015, cross-flashed to "IR Mode" (RAID options, used to pass through disks only), plus an Intel SAS Expander card
    USB: 2TB Seagate Backup Plus for Server Backup (system drive, and system files) using a WD Green EARS

    NVR (Network Video Recorder, aka IP camera box) via BlueIris:
    Case: Norco ITX-S4
    OS: Windows 10
    CPU: Intel Core i3-4130T
    MoBo: ASRock Rack E3C226D2I
    RAM: 2x8GB G.Skill
    GFX: ASPEED 2300
    PSU: 450W 1U
    OS Drive: 128GB SSD, Crucial M550
    Storage Pool: 2x4TB Toshiba HDD

    HyperV VM Lab:
    Case: Supermicro SYS-6016T-NTF (1U case)
    OS: HyperV Server 2012R2
    CPU: Intel Xeon 5560 (x2, hyperthreading disabled)
    MoBo: Supermicro X8DTU
    RAM: 64GB (8x8GB) Hynix Registered ECC (DDR3-1333)
    GFX: ASPEED 2300
    PSU: 560W 1U
    OS Drive: 160GB HDD
    Storage: 500GB Crucial MX200 SSD, using Data Deduplication for VMs

    Emby Server:
    Case: Unknown (1U case)
    OS: Windows 10 Pro x64
    CPU: Dual Intel Xeon X5660s (hardware fairy swung by)
    MoBo: Supermicro X8DTi
    RAM: 20GB (5x4GB) Samsung Registered ECC
    GFX: Matrox (onboard)
    PSU: 560W 1U
    OS Drive: 64GB SSD
    Storage: 128GB (cache, metadata, transcoding temp)

    Netgear GS724T Smart Switch: 24 port, Gigabit, managed switch (one port is burned out already, but it was used). Dell 17" keyboard and monitor tray (used, damaged, propped up). Images here: http://imgur.com/a/WRhZf

    Here is my network hardware. Not a great image, but that's the 24 port managed switch, a punch-down block, waaay too long cables, cable modem and Sophos UTM box. Misc drawers and unused spares. And my servers: HyperV system in the 1U, and my storage server in the 4U. And the CyberPower UPS at the bottom. What you don't see is the NVR box, as it's been having issues, and I've been troubleshooting those issues.
    2 points
  28. gtaus

    2nd request for help

    I have only been using DrivePool for a short period, but if I understand your situation, you should be able to open the DrivePool UI and click on "Remove" for the drives you no longer want in the pool. I have done this in DrivePool and it did a good job of transferring the files from the "removed" drive to the other pool drives. However, given that nowadays we have large HDDs in our pools, the process takes a long time. Patience is a virtue. Another option is to simply view the hidden files on those HDDs you no longer want to keep in DrivePool, and then copy them all over to the one drive where you want to consolidate all your information. Once you verify all your files have been successfully reassembled on that one drive, you could go back and format those other drives. The main advantage I see with using DrivePool is that the files are written to the HDD as standard NTFS files, and if you decided to leave the DrivePool environment, all those files are still accessible by simply viewing the hidden directory. I am coming from the Windows Storage Spaces system where bits and pieces of files are written to the HDDs in the pool. When things go bad with Storage Spaces, there is no way to reassemble the broken files spread across a number of HDDs. At least with DrivePool, the entire file is written to a HDD as a standard file, so in theory you should be able to copy those files from the pool HDDs over to one HDD and have a complete directory. I used the duplication feature of DrivePool for important directories. Again, I am still learning the benefits of DrivePool over Storage Spaces, but so far, I think DrivePool has the advantage of recovering data from a catastrophic failure, whereas I lost all my data in Storage Spaces. If there is a better way to transfer your DrivePool files to 1 HDD, I would like to know for my benefit as well.
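    As a hedged sketch of that manual consolidation (the PoolPart folder name below is a placeholder; every pooled drive has its own hidden PoolPart folder), robocopy keeps timestamps and can be re-run safely if it gets interrupted:

        # Copy one pooled drive's hidden PoolPart contents onto the consolidation drive, keeping the folder structure
        robocopy "D:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" "F:\Consolidated" /E /COPY:DAT /R:1 /W:1 /LOG+:C:\Temp\consolidate.log
        # Repeat for each pooled drive, verify the result, and only then format anything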
    2 points
  29. They are not comparable products. Both applications are more similar to the popular rClone solution for linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data. You must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data because they do not use a genuine drive image, they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space. None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, is more vulnerable because some of that data on your provider is the file system data for the drive. Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another. We don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive verifications, but they were old revisions of the chunks that corrupted the file system. This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive, it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of the storage of their users is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever. Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like testdisk and recuva. But it simply wasn't worth the effort to rebuild the volumes rather than just discard the data and rebuild, because it was expendable data. But genuinely irreplaceable data could be recovered, so it isn't even really accurate to call it data loss. This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point? 
It is, frankly, not a problem that we will hopefully ever have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality and there isn't anything to be done about that. So, your options are to either a) only store data that you can afford to lose or b) take steps to backup your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and track chksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will. The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Contrarily, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption then you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow or shut down drive completely, like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
    2 points
  30. I'm always impressed with the extent you go to help people with their questions, no matter how easy or complex. Thanks Chris.
    2 points
  31. Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives in case of needing a recovery for unduplicated files. I know the dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

    Prerequisites:
    You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name})
    Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')

    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory (as an example):

    Here is the code I used (it's also attached in the .zip file):

        # This saves the full listing of files in DrivePool
        $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

        # This creates an empty table to store details of the files
        $filelist = @()

        # This goes through each file, and populates the table with the drive name, file name and directory name
        foreach ($file in $files) {
            $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
        }

        # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
        $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn

    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
    2 points
  32. The problem is that you were still on an affected version, 3216. By upgrading to the newest version the StableBit Scanner service is forcefully shut down, so the DiskId files can get corrupted in the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen anymore on your next upgrade/reboot/crash. I agree wholeheartedly though that we should get a way to back up the scan status of drives just in case. A scheduled automatic backup would be great. The files are extremely small and don't take a lot of space, so I don't see a reason not to implement it feature-wise.
    2 points
  33. You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule. If you like, you can even use the "running file" drivepool optionally creates while balancing to trigger it. Check my post history.
    2 points
  34. srcrist

    Optimal settings for Plex

    Nope. No need to change anything at all. Just use DrivePool to create a pool using your existing CloudDrive drive, expand your CloudDrive using the CloudDrive UI, format the new volume with Windows Disk Management, and add the new volume to the pool. You'll want to MOVE (not copy) all of the data that exists on your CloudDrive to the hidden directory that DrivePool creates ON THE SAME DRIVE, and that will make the content immediately available within the pool. You will also want to disable most if not all of DrivePool's balancers because a) they don't matter, and b) you don't want DrivePool wasting bandwidth downloading and moving data around between the drives.

    So let's say you have an existing CloudDrive volume at E:. First you'll use DrivePool to create a new pool, D:, and add E:. Then you'll use the CloudDrive UI to expand the CloudDrive by 55TB. This will create 55TB of unmounted free space. Then you'll use Disk Management to create a new 55TB volume, F:, from the free space on your CloudDrive. Then you go back to DrivePool, add F: to your D: pool. The pool now contains both E: and F:. Now you'll want to navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool. Then just navigate to D: and all of your content will be there, as well as plenty of room for more. You can now point Plex and any other application at D: just like E: and it will work as normal. You could also replace the drive letter for the pool with whatever you used to use for your CloudDrive drive to make things easier.

    NOTE: Once your CloudDrive volumes are pooled, they do NOT need drive letters. You're free to remove them to clean things up, and you don't need to create volume labels for any future volumes you format either. My drive layout looks like this:
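    Separately, if you'd rather script the "move the existing data into the hidden PoolPart folder" step above than do it in Explorer, here's a hedged PowerShell sketch (the PoolPart folder is discovered at run time; E:\ is the example volume from above, and the exclusion list is an assumption). Because it's a same-volume move it completes almost instantly and nothing gets re-uploaded:

        # Find the hidden PoolPart folder DrivePool created on E:\ and move the existing data into it
        $poolPart = Get-ChildItem 'E:\' -Directory -Hidden -Filter 'PoolPart.*' | Select-Object -First 1
        Get-ChildItem 'E:\' -Force |
            Where-Object { $_.Name -notlike 'PoolPart.*' -and $_.Name -notin 'System Volume Information', '$RECYCLE.BIN' } |
            Move-Item -Destination $poolPart.FullName -WhatIf
        # Review the -WhatIf output, then remove -WhatIf to actually perform the move.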
    2 points
  35. Hi everyone. First, I would like to share that I am very satisfied with DP & Scanner. This IS "state of the art" software. Second, I have personally experienced 4 HDD drives failing, burned by the PSU (99% of the data was professionally $$$$ recovered), and a content listing would have been comforting, just for a rapid comparison and a status overview. I also asked myself how to catalog the pooled drives' content, with logging/versioning, just to know, if a pooled drive dies, whether professional recovery makes sense (again), but also to check that the duplication algorithm is working as advertised. Being a fan of "as simple as it gets", I found a simple, free, command-line capable file lister: https://www.jam-software.com/filelist/ I built a .cmd file to export one listing per pooled drive (eg: %Drive_letter_%Label%_YYYYMMDDSS.txt). Then I scheduled a job to run every 3 hours which, before running, packs all previous .txt's into an archive for versioning purposes. For each of my 10x 2TB, 60% filled pooled HDDs, I get a 15-20MB .txt file (with the exclude-content filter option) in ~20 minutes' time. A zipped archive with all the files inside comes to about 20MB per archive. For checking, I just use Notepad++'s "Find in Files" function, point it at the folder containing the .txt's, and I get what I'm looking for in each per-drive file. I would love to see such an option for finding a file on each drive built into the DP interface. Hopefully good info, and not too long a post. Good luck!
    2 points
  36. Also, you may want to check out the newest beta. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3204_BETA.exe
    2 points
  37. Okay, good news everyone. Alex was able to reproduce this issue, and we may have a fix. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3198_BETA.exe
    2 points
  38. The import/export feature would be nice. I guess right clicking on the folder and 7zip'ing it, is the definitive solution, for now, until an automated process evolves. According to Christopher's answer that it seems to be an isolated incident, I'm wondering what is it about our particular systems that is causing this purge? I have it running on both W7 and W10 and it purges on both. Both OSs are clean installs. Both run the same EVO500...alongside a WD spinner. Both are Dell. It seems to me that the purge is triggered by some integral part of the software once it's updated. Like an auto purge feature. I'll be honest, I think most people are too lazy to sign up and post the issue, which makes it appear to be isolated incident, but I believe this is happening more often than we think. I'm on a lot of forums, and it's always the same people that help developers address bugs, by reporting them. Unless it's a functional problem, it goes unreported. All of you...know how lazy people are. With that said, I like the idea of an integral backup and restore of the settings.
    2 points
  39. As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193 StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives as well. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe, currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
    2 points
  40. I used Z once, only to find that a printer with some media card slot wanted it for itself or would not print at all. Same for some Blackberry devices claiming Z. So yeah, high up but not Y and Z. I use P, Q and R.
    2 points
  41. It means the pool drive. And yeah... how Windows handles disk/partition/volume stuff is confusing... at best. For this... take ownership of the folder, change its permissions, and delete it (on the pool). Then resolve the issue. It should fix the issue, and it shouldn't come back.
    2 points
  42. You could do it with a combination of a VPN, DrivePool pool(s), and CloudDrive using file share(s). Here's how I think it could work (a minimal sharing sketch follows below):

    - The VPN connects all computers on the same local net.
    - Each computer has a Pool to hold data, with the Pool drive shared so the local net can access it.
    - CloudDrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool.
    - Each local Pool can have duplication enabled, ensuring each local CloudDrive folder is duplicated locally X times.
    - The file shares in CloudDrive are added to a new DrivePool Pool, essentially combining all of the remote computer storage you provisioned into one large volume.

    Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with DrivePool installed and a single copy of CloudDrive to pull it off. Essentially wide-area storage.
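    Here is that sketch, covering just the sharing side (the VPN and CloudDrive's File Share provider are configured through their own UIs; the share and machine names are placeholders): each remote machine exposes its pool with net share, and the CloudDrive host confirms it can reach the share with net use before adding it.

        rem On each remote machine: share the local pool drive over the VPN (elevated prompt; names are placeholders).
        net share LocalPool=P:\ /GRANT:Everyone,FULL

        rem On the CloudDrive machine: verify the share is reachable before adding it as a File Share provider.
        net use \\remote-pc-1\LocalPool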
    2 points
  43. It's $10 off the normal price for a product you don't already own, but $15/each for products that you do.
    2 points
  44. "dpcmd remeasure-pool x:"
    2 points
  45. "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
    2 points
  46. Alex

    check-pool-fileparts

    If you're not familiar with dpcmd.exe, it's the command line interface to StableBit DrivePool's low level file system and was originally designed for troubleshooting the pool. It's a standalone EXE that's included with every installation of StableBit DrivePool 2.X and is available from the command line. If you have StableBit DrivePool 2.X installed, go ahead and open up the Command Prompt with administrative access (hold Ctrl + Shift from the Start menu), and type in dpcmd to get some usage information.

    Previously, I didn't recommend that people mess with this command because it wasn't really meant for public consumption. But the latest internal build of StableBit DrivePool, 2.2.0.659, includes a completely rewritten dpcmd.exe which now has some more useful functions for more advanced users of StableBit DrivePool, and I'd like to talk about some of these here.

    Let's start with the new check-pool-fileparts command. This command can be used to:

    - Check the duplication consistency of every file on the pool and show you any inconsistencies.
    - Report any inconsistencies found to StableBit DrivePool for corrective actions.
    - Generate detailed audit logs, including the exact locations where each file part of each file on the pool is stored.

    Now let's see how this all works. The new dpcmd.exe includes detailed usage notes and examples for some of the more complicated commands like this one. To get help on this command, type:

        dpcmd check-pool-fileparts

    Here's what you will get:

        dpcmd - StableBit DrivePool command line interface
        Version 2.2.0.659

        The command 'check-pool-fileparts' requires at least 1 parameters.

        Usage:
          dpcmd check-pool-fileparts [parameter1 [parameter2 ...]]

        Command:
          check-pool-fileparts - Checks the file parts stored on the pool for consistency.

        Parameters:
          poolPath - A path to a directory or a file on the pool.
          detailLevel - Detail level to output (0 to 4). (optional)
          isRecursive - Is this a recursive listing? (TRUE / false) (optional)

        Detail levels:
          0 - Summary
          1 - Also show directory duplication status
          2 - Also show inconsistent file duplication details, if any (default)
          3 - Also show all file duplication details
          4 - Also show all file part details

        Examples:
          - Perform a duplication check over the entire pool, show any inconsistencies, and inform StableBit DrivePool
            >dpcmd check-pool-fileparts P:\

          - Perform a full duplication check and output all file details to a log file
            >dpcmd check-pool-fileparts P:\ 3 > Check-Pool-FileParts.log

          - Perform a full duplication check and just show a summary
            >dpcmd check-pool-fileparts P:\ 0

          - Perform a check on a specific directory and its sub-directories
            >dpcmd check-pool-fileparts P:\MyFolder

          - Perform a check on a specific directory and NOT its sub-directories
            >dpcmd check-pool-fileparts "P:\MyFolder\Specific Folder To Check" 2 false

          - Perform a check on one specific file
            >dpcmd check-pool-fileparts "P:\MyFolder\File To Check.exe"

    The above help text includes some concrete examples on how to use this command for various scenarios. To perform a basic check of an entire pool and get a summary back, you would simply type:

        dpcmd check-pool-fileparts P:\

    This will scan your entire pool and make sure that the correct number of file parts exist for each file. At the end of the scan you will get a summary:

        Scanning...

        ! Error: Can't get duplication information for '\\?\p:\System Volume Information\storageconfiguration.xml'. Access is denied

        Summary:
          Directories: 3,758
          Files: 47,507 3.71 TB (4,077,933,565,417 B)
          File parts: 48,240 3.83 TB (4,214,331,221,046 B)

          * Inconsistent directories: 0
          * Inconsistent files: 0
          * Missing file parts: 0 0 B (0 B)

          ! Error reading directories: 0
          ! Error reading files: 1

    Any inconsistent files will be reported here, and any scan errors will be as well. For example, in this case I can't scan the System Volume Information folder because, as an Administrator, I don't have the proper access to do that (LOCAL SYSTEM does).

    Another great use for this command is actually something that has been requested often, and that is the ability to generate audit logs. People want to be absolutely sure that each file on their pool is properly duplicated, and they want to know exactly where it's stored. This is where the maximum detail level of this command comes in handy:

        dpcmd check-pool-fileparts P:\ 4

    This will show you how many copies of each file are stored on your pool, and where they're stored. The output looks something like this:

        Detail level: File Parts

        Listing types:
          + Directory
          - File
          -> File part
          * Inconsistent duplication
          ! Error

        Listing format:
          [{0}/{1} IM] {2}
          {0} - The number of file parts that were found for this file / directory.
          {1} - The expected duplication count for this file / directory.
          I - This directory is inheriting its duplication count from its parent.
          M - At least one sub-directory may have a different duplication count.
          {2} - The name and size of this file / directory.

        ...
        + [3x/2x] p:\Media
          -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media [Device 0]
          -> \Device\HarddiskVolume3\PoolPart.6a76681a-3600-4af1-b877-a31815b868c8\Media [Device 0]
          -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media [Device 2]
        - [2x/2x] p:\Media\commandN Episode 123.mov (80.3 MB - 84,178,119 B)
          -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 123.mov [Device 0]
          -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 123.mov [Device 2]
        - [2x/2x] p:\Media\commandN Episode 124.mov (80.3 MB - 84,178,119 B)
          -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 124.mov [Device 0]
          -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 124.mov [Device 2]
        ...

    The listing format and listing types are explained at the top, and then for each folder and file on the pool a record like the ones above is generated. Of course, like any command output, it could always be piped into a log file like so:

        dpcmd check-pool-fileparts P:\ 4 > check-pool-fileparts.log

    I'm sure that with a bit of scripting, people will be able to generate daily audit logs of their pool. Now, this is essentially the first version of this command, so if you have an idea on how to improve it, please let us know. Also, check out set-duplication-recursive. It lets you set the duplication count on multiple folders at once using a file pattern rule (or a regular expression). It's pretty cool. That's all for now.
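    On the "bit of scripting" point above, a minimal sketch of a daily audit job; the log folder, task name and time are just placeholders:

        rem Write a dated, full-detail audit log of the pool (drive letter and folder are placeholders).
        dpcmd check-pool-fileparts P:\ 4 > "D:\AuditLogs\check-pool-fileparts_%DATE:/=-%.log"

        rem Registered once from an elevated prompt, this runs the audit every night at 3 AM.
        schtasks /Create /TN "DrivePool pool audit" /SC DAILY /ST 03:00 /RL HIGHEST /TR "cmd /c dpcmd check-pool-fileparts P:\ 4 > D:\AuditLogs\check-pool-fileparts_daily.log"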
    2 points
  47. I'm using Windows Server 2016 Datacenter (GUI version, newest updates) on a dual-socket system in combination with CloudDrive (newest version). The only problem I had was connecting to the cloud service with Internet Explorer; using a 3rd-party browser solved this. But I'm always using ReFS instead of NTFS...
    2 points
  48. I need to do this daily. Is there a way to auto-authorize? Otherwise I can't really use this app.
    2 points
  49. HellDiverUK

    Build Advice Needed

    Ah yes, I meant to mention BlueIris. I run it at my mother-in-law's house on an old Dell T20 that I upgraded from its G3220 to an E3-1275v3. It's running a basic install of Windows 10 Pro. I'm using QuickSync to decode the video coming from my 3 HikVision cameras. Before I used QS, it was sitting at about 60% CPU use; with QS I'm seeing 16% CPU at the moment, and also a 10% saving on power consumption. I have 3 HikVision cameras, two 4MP and one 5MP, all running at their maximum resolution. I record 24/7 onto an 8TB WD Purple drive, with events turned on. QuickSync also seems to be used for transcoding video that's accessed by the BlueIris app (I can highly recommend the app; it's basically the only way we access the system apart from some admin on the server's console). Considering QuickSync has improved greatly in recent CPUs (basically Skylake or newer), you should have no problems with an i7-8700K. I get great performance from a creaky old Haswell.
    2 points
  50. Surface scans are disabled for CloudDrive disks by default, but file system scans are not (as they can be helpful). You can disable this per disk, in the "Disk Settings" option. As for the length, that depends on the disk. And no, there isn't really a way to speed this up.
    2 points