
Christopher (Drashna)

Administrators
  • Posts: 11540
  • Days Won: 365

Community Answers

  1. Christopher (Drashna)'s post in Switching from folder to file (pool) duplication was marked as the answer   
    Correct.  Duplication is inherited unless explicitly set.  Enabling pool file duplication enables it for the root, and everything else inherits it.  And when you change it, it checks which files need to be duplicated or unduplicated (the "checking duplication" pass that you may have seen).  So it shouldn't touch the existing data.
  2. Christopher (Drashna)'s post in Duplication won’t complete was marked as the answer   
    Just a heads up, duplication is essentially copying files from one of the drives to another drive.  The speed depends on a number of factors, such as the size of the files being moved (many small files will take longer than a few large files, for the same amount of data).  Factors such as pool usage will also have an impact, since the duplication pass runs at background priority.
    You can temporarily boost the priority by clicking the >> button next to the progress bar.  There is also an advanced setting to permanently boost the priority:
    https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings
    The "FileDuplication_BackgroundIO" setting controls this. 
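For reference, here's a minimal sketch of flipping such a setting programmatically. The file path and JSON schema below are assumptions for illustration only; check the Advanced Settings wiki page above for the actual location and format used by your version:

```python
import json
from pathlib import Path

# Hypothetical location -- consult the Advanced Settings wiki page for the
# real file used by your StableBit DrivePool version.
SETTINGS_PATH = Path(r"C:\ProgramData\StableBit DrivePool\Service\Settings.json")

def set_background_io(path, enabled):
    """Write an override for FileDuplication_BackgroundIO.

    Setting it to False makes the duplication pass run at normal
    (non-background) I/O priority, i.e. the permanent "boost"."""
    settings = json.loads(path.read_text()) if path.exists() else {}
    entry = settings.setdefault("FileDuplication_BackgroundIO", {})
    entry["Override"] = enabled
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

Settings files are typically read when the service starts, so a service restart is usually needed for changes to take effect.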
  3. Christopher (Drashna)'s post in Pool File Duplication ON, "Duplication Inconsistent" was marked as the answer   
    It looks like you have a lot of data on the O:\ drive.  You may want/need to run a disk check pass on that drive, if it should be in the pool.
    That is also likely why the balancing is reporting as it is.
    https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Pool Organization Bar
  4. Christopher (Drashna)'s post in Chkdsk reports DrivePool drive as RAW so will not chkdsk it, but... was marked as the answer   
    You also opened a ticket for this. 
    But to repeat what I said there, and echo what vapechiK said, 
    This is perfectly normal, and expected.  In all cases, here. 
    The pool drive itself is fully virtual and doesn't have any blocks/data of its own. All of the I/O is "reverse proxied" to the underlying disks, and handled invisibly.  So CHKDSK won't work on the drive, and it does not appear in StableBit Scanner.   The reported size is normal too, as Windows needs a size before the volume can be read properly, so the 2TB size is a placeholder.
  5. Christopher (Drashna)'s post in What is the purpose of .covefs? was marked as the answer   
    It contains internal data for the drive.  Primarily, information about reparse points on the pool (junctions, symlinks, etc).  
    If you have none of this (you can check by opening the folder), then it should be okay.  But if you do, then messing with the folder can/will break the reparse points.
  6. Christopher (Drashna)'s post in Junction points & symbolic links keep being recreated. How to clean reparse points / folder metadata? was marked as the answer   
    Reparse point information is stored in the .covefs folder in the root of the pool.  Worst case, delete the link, remove the contents of the .covefs folder, and then reboot. 
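To check whether a pool actually contains any reparse points before touching .covefs, a sketch like this can enumerate them (the function name is made up; on non-Windows systems only plain symlinks are detected):

```python
import os
import stat
from pathlib import Path

def find_reparse_points(pool_root):
    """Walk the pool and list reparse points (symlinks, junctions, etc.).

    On Windows, junctions and symlinks both carry the reparse-point file
    attribute; elsewhere we can only spot symlinks via lstat()."""
    found = []
    for dirpath, dirnames, filenames in os.walk(pool_root):
        for name in dirnames + filenames:
            full = Path(dirpath) / name
            st = full.lstat()
            if stat.S_ISLNK(st.st_mode) or (
                getattr(st, "st_file_attributes", 0)
                & stat.FILE_ATTRIBUTE_REPARSE_POINT
            ):
                found.append(full)
    return found
```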
  7. Christopher (Drashna)'s post in What is this error message all about - first time ever having an "update" issue? was marked as the answer   
    This is for older versions of StableBit DrivePool and StableBit Scanner.  Updating to the latest version will fix this.
  8. Christopher (Drashna)'s post in Limit number of drives scanned simultaneously? was marked as the answer   
    Yes and no.
    Specifically, by default, StableBit Scanner will only scan one drive per controller.  And in fact, you have to get into the advanced configuration to increase that.
    So if you're seeing multiple drives being scanned at once, it's likely because they are connected to different controllers (you can verify this by selecting the "group by controllers" option in the UI).
  9. Christopher (Drashna)'s post in Any benefit from using multiple SSD for caching? was marked as the answer   
    If you're using duplication, then definitely, as each copy of a duplicated file is written in parallel.  Having multiple SSDs (and using the SSD Optimizer balancer) will get better write speeds, as it is much less likely that writes have to fall back to a spinning drive.
     
  10. Christopher (Drashna)'s post in Duplicate increases a lot was marked as the answer   
    Yeah, the duplication creates multiple copies of the protected files, on other disks in the pool.  These are 1:1 copies of the files, and will take up the same amount of space as the original.
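The space cost is plain multiplication; a tiny (hypothetical) helper just to make that concrete:

```python
def duplicated_size(file_size_bytes, duplication_level=2):
    """On-pool footprint of a protected file: duplication keeps
    duplication_level full 1:1 copies, each on a different disk."""
    return file_size_bytes * duplication_level

# 500 GB of protected files at 2x duplication occupies 1000 GB of pool space.
```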
  11. Christopher (Drashna)'s post in Not showing any disk activity in 'Disk Performance' section of the Main UI window was marked as the answer   
    You may need to do this: 
    https://wiki.covecube.com/StableBit_DrivePool_Q2150495
  12. Christopher (Drashna)'s post in 2x 250GB NVMe Drives. SSD Optimization or PrimoCache? was marked as the answer   
    Sounds like a good plan then.  
    Also, you may not need to split the drive.  You can add it to the pool, and still use it for downloading or what not. 
  13. Christopher (Drashna)'s post in Drives need to be assigned a letter to be seen. was marked as the answer   
    Yup. This is an issue we see from time to time, and it's a known issue with Windows (not our software). 
    Specifically, the issue is that sometimes Windows will not mount a volume if it doesn't have a letter (or path) assigned to it.
    Fortunately, this is a dead simple fix:
    https://wiki.covecube.com/StableBit_DrivePool_F3540
  14. Christopher (Drashna)'s post in Cloud Used Discrepancy with GUI was marked as the answer   
    This is normal.  When you delete data from the drive, you're not actually removing it.  NTFS just unlinks the data and removes the file entries; the actual data still resides on the disk.  This is how and why data recovery works, actually.
    Utilities like SDELETE zero out that leftover data as well, which should free it up.
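SDELETE (Sysinternals) is the tool to actually use; purely to illustrate what its free-space zeroing pass does, here is a rough sketch (helper name and parameters are invented) that writes zeros until the disk is full and then deletes the file:

```python
import os

def zero_free_space(target_dir, chunk_mb=64, max_chunks=None):
    """Roughly what a free-space zeroing pass does: overwrite unallocated
    space with zeros so "deleted" data is really gone, then clean up.
    max_chunks is just a safety valve for testing."""
    path = os.path.join(target_dir, "zerofill.tmp")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    written = 0
    try:
        with open(path, "wb") as f:
            while max_chunks is None or written < max_chunks:
                f.write(chunk)
                written += 1
    except OSError:
        pass  # disk full -- that's the point
    finally:
        if os.path.exists(path):
            os.remove(path)
    return written
```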
  15. Christopher (Drashna)'s post in Best organization of drives was marked as the answer   
    Honestly, it shouldn't matter too much.  The drives aren't going to saturate the bus, so this won't make a difference.
    For instance, each PCIe card, assuming it's PCIe 2.0 with 8 lanes, can support roughly 20 hard drives running at full speed.  And you have two of them, so even if the onboard SATA ports were using just a single lane...
    You should have no problems here.  (I'm using 24 drives on a single LSI SAS controller, and have had zero issues.)
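The back-of-the-envelope math behind that, assuming ~500 MB/s of usable bandwidth per PCIe 2.0 lane and ~200 MB/s sequential per spinning drive (the helper is illustrative):

```python
def saturated_drive_count(lanes, per_lane_mb_s=500, drive_mb_s=200):
    """How many hard drives at full sequential speed a link could feed."""
    return (lanes * per_lane_mb_s) // drive_mb_s

# An x8 PCIe 2.0 card: 8 * 500 = 4000 MB/s, enough for ~20 saturated drives.
```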
  17. Christopher (Drashna)'s post in Google Drive: The limit for this folder's number of children (files and folders) has been exceeded was marked as the answer   
    This is a known issue. It's not StableBit CloudDrive specifically; it's a change in Google's APIs.  We've seen this mostly with personal API keys, but it seems that Google has "flipped the switch" and enabled this limitation globally now.
    The public release version does not have the fix to handle this right now; the latest beta versions do.  But upgrading to the beta reorganizes the drive's contents, so you can't downgrade and expect the drive to keep working.
    You can wait for a new release with this fix (which will likely be soon, because we're seeing a significant number of people running into this issue), or you can grab the beta here:
    http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.2.0.1356_x64_BETA.exe
  18. Christopher (Drashna)'s post in Scanner with a RAID 5 DAS (QNAP TR-004)? was marked as the answer   
    It does work, as it queries the sectors on the RAID array rather than the individual disks.  That can still detect errors and cause the RAID controller to fix them.
     
    But SMART definitely won't work here. 
  19. Christopher (Drashna)'s post in Issue removing a drive from a pool that has a SSD cache set up was marked as the answer   
    The simplest option would, yes, be to disable the SSD Optimizer plugin.
     
  20. Christopher (Drashna)'s post in Drive size wrong in Scanner, correct everywhere else was marked as the answer   
    Ah, they're both in an enclosure. That ... may be part of the problem, as we've seen some odd behaviors in the past.
    If you haven't yet, please open a ticket at https://stablebit.com/Contact
  21. Christopher (Drashna)'s post in Manualy delete duplicates? was marked as the answer   
    StableBit DrivePool doesn't have a master and a subordinate/duplicate copy. Both copies are equally valid, and treated as such.  This is very different from how Drive Bender handles things.
    As for being documented, not really, but sort of. Eg: https://wiki.covecube.com/StableBit_DrivePool_Knowledge_base#How_To.27s
     
    That said, if the data is in the same relative path under the PoolPart folders, they're considered duplicates.  Changing the duplication settings, or even remeasuring can kick off a duplication pass that will automatically prune the duplicates, as needed. 
    Also, the "dpcmd" utility has an option to disable duplication for the entire pool, recursively. However, that kicks off a duplication pass that actually manages the files. 
    Just have both products installed. That's it.  You can fine tune settings in StableBit DrivePool, in the balancing settings, as the "StableBit Scanner" balancer is one of the 5 preinstalled balancer plugins. 

    That should be fixed now.  Though, the file system scan won't trigger the drive evacuation.  And yeah, that fix shipped in the 2.5.5 version, and the latest stable release is 2.5.6, so this definitely shouldn't be an issue anymore.  (We haven't seen it in a while.)
     
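The duplicate check described above, where copies live at the same relative path under each drive's hidden PoolPart.* folder, can be sketched like this (the PoolPart layout is DrivePool's real on-disk structure; the read-only function itself is illustrative):

```python
from collections import defaultdict
from pathlib import Path

def find_pool_duplicates(drive_roots):
    """Group pool files by their path relative to each drive's PoolPart.*
    folder; files sharing a relative path are copies of the same pool file."""
    by_rel_path = defaultdict(list)
    for root in map(Path, drive_roots):
        for poolpart in root.glob("PoolPart.*"):
            for f in poolpart.rglob("*"):
                if f.is_file():
                    by_rel_path[f.relative_to(poolpart)].append(f)
    return {rel: paths for rel, paths in by_rel_path.items() if len(paths) > 1}
```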
  22. Christopher (Drashna)'s post in Simple Question was marked as the answer   
    Simple answer is, you don't. 
    The best way to do this would be to add the 2x4TB drives to a pool, and then add that pool and the 8TB disk to another pool.  Enable duplication on the top-level pool.
    That would get you what you want. 
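As rough arithmetic (the helper is hypothetical): with duplication enabled on the top-level pool, every file gets one copy in the 2x4TB sub-pool and one on the 8TB disk, so usable space is bounded by the smaller side:

```python
def usable_tb(subpool_drives_tb, single_drive_tb):
    """Usable capacity of a two-member top-level pool with 2x duplication:
    each file needs a full copy on both members."""
    return min(sum(subpool_drives_tb), single_drive_tb)

# min(4 + 4, 8) -> 8 TB usable out of 16 TB raw.
```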
  23. Christopher (Drashna)'s post in File System damaged - The "Repair Volume" window does not open was marked as the answer   
    For anyone still experiencing this, please download this version (or a newer release):
    http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.6.0.3350_BETA.exe
    That should fix the issue causing this problem, and allow the software to work correctly. 
  24. Christopher (Drashna)'s post in Any way to change the measure-on-every-reconnect behavior? was marked as the answer   
    Unfortunately, no, there isn't.  
    It's been asked before, but we don't have any plans on adding an option, since it introduces too many potential issues. 
  25. Christopher (Drashna)'s post in System Disk w/DrivePool Installed at 3% Life... was marked as the answer   
    Sorry for not getting back to you sooner. 
    So it's an SSD, then, and almost out of spare NAND.
    For migrating to a new SSD, you should be able to just clone the system drive, no problem.  