Christopher (Drashna)

Administrators
  • Posts: 11568
  • Joined
  • Last visited
  • Days Won: 366

Reputation Activity

  1. Thanks
    Christopher (Drashna) got a reaction from bob19 in Does scanner remember scan status between reboots?   
    Well, normally, I'd agree with you. But StableBit Scanner doesn't track the scan as one whole thing. It actually uses a sector map of the drive, so it tracks sections of the drive independently. Each section is checked based on its own status, and over time the scanning spreads out. This should actually make the scanning more intelligent (ideally scanning at times and on days when the drives are less likely to be active).
     
    You can check this out by double-clicking on the disk and looking at the sector map. It will show different colors for different regions.
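    Purely as an illustration of the idea (this is not Scanner's actual code, and the sizes and intervals are made up), per-region tracking might look something like this, with each region remembering when it was last scanned:

        import time

        disk_size = 4 * 1024**4        # hypothetical 4 TB drive
        REGION_SIZE = 100 * 1024**2    # hypothetical: track the disk in 100 MB regions
        RESCAN_AFTER = 30 * 24 * 3600  # hypothetical: re-check each region every 30 days

        # One entry per region: when that region was last successfully scanned (0 = never).
        last_scanned = {region: 0.0 for region in range(disk_size // REGION_SIZE)}

        def scan_region(region):
            pass  # placeholder for the actual surface read of that region

        def regions_due():
            """Regions whose last good scan is older than the rescan interval."""
            now = time.time()
            return [r for r, t in last_scanned.items() if now - t > RESCAN_AFTER]

        # Scan only the stale regions (ideally while the drive is otherwise idle),
        # stamping each one as it completes.
        for region in regions_due():
            scan_region(region)
            last_scanned[region] = time.time()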
  2. Like
    Christopher (Drashna) reacted to JulesTop in Pools, Cloud Drive, and Duplication (Optimal configuration question)   
    OK, I came up with a solution to my own problem which will likely be the best of both worlds.
    Set up my Cloud Pool with pool duplication, and also set up my HDD Pool with pool duplication.
    Then use both pools in the top-level storage pool with no duplication, as normal.
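    To sketch the resulting layout (the pool names are just the ones from this example):

        Storage Pool (no duplication)
        ├── Cloud Pool  (pool duplication x2)
        └── HDD Pool    (pool duplication x2)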
  3. Like
    Christopher (Drashna) reacted to billis777 in How many hdds an evga 1600w t2 can support?   
    Found it, thank you! It was the serial number I was looking for.
  4. Like
    Christopher (Drashna) reacted to Wiidesire in [Bug?] Prior scan data and settings not preserved on update   
    The problem is that you were still on an affected version, 3216. When upgrading to the newest version, the StableBit Scanner service is forcefully shut down, so the DiskId files can get corrupted in the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen anymore on your next upgrade/reboot/crash.
    I agree wholeheartedly, though, that we should get a way to back up the scan status of drives, just in case. A scheduled automatic backup would be great. The files are extremely small and don't take up a lot of space, so I don't see a reason not to implement it, feature-wise.
  5. Thanks
    Christopher (Drashna) got a reaction from jdwarne in Disk Activity   
    Unfortunately, it may be. 
    There is a setting that we have enabled by default that may be causing this behavior.  Specifically, the BitLocker setting. 
    This setting queries the system for data, which creates a WMI query, which causes disk activity.  
    That said, you can disable this:
    http://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings
    The setting is "BitLocker_CloudPartUnlockDetect", which is actually used in the example on that page. Set the "Override" value to "false", save the file, and reboot the system. That should fix the issue, hopefully.
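    If memory serves, the override ends up looking something like this in the settings file (treat this as a sketch; the wiki page above has the authoritative file location and exact format):

        {
          "BitLocker_CloudPartUnlockDetect": {
            "Default": true,
            "Override": false
          }
        }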
  6. Thanks
    Christopher (Drashna) got a reaction from Mick Mickle in Stablebit Scanner loses all settings on unexpected shutdowns.   
    Well, a number of others were having this issue as well, and I've posted this info in a number of those threads. So hopefully, confirmation will come soon.
  7. Thanks
    Christopher (Drashna) got a reaction from Chris Downs in Performance column empty   
    This information is pulled from Windows' Performance counters. 
    So it may not have been working properly temporarily. 
    Worst case, you can reset them: http://wiki.covecube.com/StableBit_DrivePool_Q2150495
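    The linked article has the details, but resetting Windows performance counters generally comes down to running the following from an elevated command prompt (standard Windows tooling, not anything StableBit-specific):

        rem Rebuild the performance counter registry from the backup store
        lodctr /R
        rem Re-sync the WMI performance classes with the rebuilt counters
        winmgmt /resyncperf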
  8. Like
    Christopher (Drashna) reacted to steffenmand in Request: Increased block size   
    I think you mean mbit :-P
    Yes. It all depends on the response time you have. Speed is not the issue, it's my response time to Google's servers. You're just lucky to be closer.

    Plus I've got upload verification on, which also cuts off upload speeds. I get around 2500-2800 ms response time per thread, and then instant download. So fewer calls and bigger downloads would do wonders for me.
  9. Like
    Christopher (Drashna) got a reaction from srcrist in CloudDrive File System Damaged   
    We've definitely talked about it.  
     
    And to be honest, I'm not sure what we can do. We do already store the file system data, if you have pinning enabled, in theory. Though there are circumstances that can cause it to purge that info.
    The other issue is that, by default, every block is checksummed, and that checksum is verified on download. So if corrupted data is downloaded, you would get errors and a warning about it.
    However, that didn't happen here. In that case, more than likely, the provider sent old/out-of-date data. Which ... I'm not sure how we can handle that in a way that isn't extremely complex.
    But again, this is something that is on our mind. 
  10. Like
    Christopher (Drashna) got a reaction from postcd in SSD Optimizer Balancing Plugin   
    I'm not sure what you mean here.
     
    There is the read striping feature which may boost read speeds for you.
     
    Aside from that, there are the file placement rules, which you could use to lock certain files or folders to the SSDs to get better read speeds.
  11. Thanks
    Christopher (Drashna) reacted to Wiidesire in Stablebit Scanner loses all settings on unexpected shutdowns.   
    To keep everyone up-to-date:
    With the help of Alex, I've identified the root cause of the issue. The LastSeen variable inside the DiskId files is changed literally every second. This means the DiskId files are constantly being rewritten, and in the event of a crash there is a high chance that it happens while a new file is being written, which is how the DiskId files get corrupted.
    The LastSmartUpdate variable inside the SmartPersistentInfo files is updated at a more reasonable one-minute interval, so I'm hoping it is a quick fix to simply adjust the write interval of the LastSeen variable.
    Besides changing the interval, there would have to be backup DiskId files to completely eliminate the issue. So instead of creating new DiskId files when corrupt files are detected, it should copy over an older backup of the DiskId file(s) in question. Or the LastSeen value could be removed from the DiskId files entirely and stored somewhere else, to avoid changing the DiskId files at all.
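    For what it's worth, the usual way to make this kind of frequent rewrite crash-safe (a general technique, not necessarily what Scanner ended up doing) is to write to a temporary file and atomically swap it in, keeping the previous copy as a backup:

        import os

        def safe_write(path, data: bytes):
            """Write 'data' to 'path' so a crash never leaves a half-written file."""
            tmp = path + ".tmp"
            with open(tmp, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())             # make sure the bytes hit the disk
            if os.path.exists(path):
                os.replace(path, path + ".bak")  # keep the last good copy as a backup
            os.replace(tmp, path)                # atomic swap on the same volume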
  12. Like
    Christopher (Drashna) reacted to Minni1986 in SSD Optimizer not working   
    Yes, there was something wrong in the program. They gave me a newer updated Beta that fixed this issue. 
    http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.3.963_x64_BETA.exe
     
     
  13. Like
    Christopher (Drashna) reacted to santacruzskim in Can You Change the Google Drive Folder Location?   
    Understood (and kind of assumed, but thought it was worth asking). Getting pretty deep into CloudDrive testing and loving it. Next is seeing how far I can get combining CloudDrive with the power of DrivePool and making pools of pools!
    Thanks for following up.
    -eric
  14. Thanks
    Christopher (Drashna) got a reaction from MandalorePatriot in Warning from GDrive (Plex)   
    I'm not sure?  But the number of threads is set by our program. Mostly, it's just the number of open/active connections. 
    Also, given how uploading is handled, the upload threshold may help prevent this from being an issue. But you can reduce the upload threads, if you want. 
    These are parallel connections. For stuff like prefetching, it makes a difference. Or if you have a lot of random access on the drive...
    But otherwise, they do have the daily upload limit, and they will throttle for other reasons (e.g., DoS/DDoS protection).
  15. Like
    Christopher (Drashna) got a reaction from The_Saver in How do I correctly backup Stablebit Drivepool?   
    Because there is no documentation on how to support VSS at the file system level.
    There is documentation on how to access VSS, and plenty of it. But that's not the issue. The problem is how the file system is supposed to handle the VSS calls. There is NO documentation on this in the wild. Any documentation that may exist is internal Microsoft documentation.
    If by Samba you mean Samba/SMB/CIFS/Windows shares, then you're just connecting to the API. You're relying on the underlying drive that the SMB share resides on supporting VSS. That is the top-level VSS stuff; we need the bottom/low-level info: how you'd implement it on a different file system.
    So, right now, we'd have to reverse engineer exactly how VSS interacts with NTFS, at the file system level.  That is not a simple thing, at all. And it would be incredibly time consuming.
    If you mean a specific software, could you link it? 
    Back up the underlying disks in the pool, not the pool drive. 
    As for restoring .... basically the same. 
    That, or use something file-based, or a sync utility (such as Allway Sync, GoodSync, FreeFileSync, SyncToy, etc.).
  16. Like
    Christopher (Drashna) got a reaction from adn34 in First-time setup scenario to keep files   
    Absolutely! 
    You want to "seed" the drive, and we have a guide on how to do that:
    http://wiki.covecube.com/StableBit_DrivePool_Q4142489
    Basically, it's moving the files into the pool's folder structure and remeasuring. 
     
    You may need to reconfigure things in Plex (or change a drive letter).  But otherwise, that should cover what you need to do. 
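    The guide has the specifics, but the gist of seeding is moving the existing data into the hidden PoolPart folder on the same disk, then remeasuring from the DrivePool UI. Something like this, where "PoolPart.xxxx" is a placeholder for the actual GUID-named folder at the root of your drive:

        rem Move an existing media folder into the pool structure on the same disk
        robocopy "D:\Media" "D:\PoolPart.xxxx\Media" /E /MOVE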
  17. Like
    Christopher (Drashna) reacted to MandalorePatriot in CloudDrive Cache - SSD vs HDD   
    For homelab use, I can't really see reading and writing affecting the SSDs that much. I have an SSD that is being used for firewall/IPS logging, and it's been in use every day for the past few years. No SMART errors, and expected life is still at 99%. I can't really see more usage in a homelab than that.
    In an enterprise environment, sure, lots of big databases and constant access/changes/etc.
    I have a spare 500GB SSD I will be using for the CloudDrive and downloader cache. Thanks for the responses again everyone! -MandalorePatriot
  18. Thanks
    Christopher (Drashna) got a reaction from MandalorePatriot in Warning from GDrive (Plex)   
    Thread count is fine. We really haven't seen issues with 10.  
    However, the settings you have set WILL cause bottlenecking and issues. 
    Download threads: 10
    Upload threads: 10
    Minimum download size: 20MB
    Prefetch trigger: 5MB
    Prefetch forward: 150 MB
    Prefetch time window: 30 seconds
     
    The prefetch forward should be roughly 75% of download threads × minimum download size. If you can set a higher minimum download size, then you can increase the forward.
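    To put numbers on that, with the settings above: 10 download threads × 20 MB minimum download size = 200 MB, and 75% of 200 MB = 150 MB. So 150 MB is the most the prefetch forward should be at these settings, and raising the minimum download size to 40 MB, for example, would allow a forward of up to 300 MB.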
     
  19. Thanks
    Christopher (Drashna) got a reaction from MandalorePatriot in CloudDrive Cache - SSD vs HDD   
    That depends ENTIRELY on your use case.  It's not a question that others can really answer. 
     
    If performance is important, then the SSD is going to be the better choice for you.
    But if you're reading and writing a lot of data, then a hard drive may be a better option, since heavy cache churn eats into an SSD's write endurance.
  20. Like
    Christopher (Drashna) reacted to fattipants2016 in Integrate Snapraid with Drivepool - into the product   
    You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub, and e-mail the results on a simple schedule.
    If you like, you can even use the "running file" DrivePool optionally creates while balancing to trigger it. Check my post history.
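    As a sketch of the scheduled-task side (the path and time here are made up; snapraidhelper's own documentation covers its parameters):

        rem Run the SnapRAID helper script every night at 3:00 AM
        schtasks /Create /TN "SnapRAID Nightly" /SC DAILY /ST 03:00 /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\snapraid-helper.ps1"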
  21. Like
    Christopher (Drashna) reacted to PocketDemon in Different size hdd's   
    Oh, certainly... Which is why I'd written on the 22nd of March in the thread that -
    "Obviously the downside to what we're suggesting though is voiding the warranty by shucking them..."
    So, it was about agreeing with you that going for NAS/enterprise drives is a good thing, especially as you start to increase the drive count - BUT that this didn't contradict what had been suggested earlier about shucking the WD externals IF purchase price trumped warranty.
  22. Like
    Christopher (Drashna) reacted to PocketDemon in Different size hdd's   
    Along with balancing personal budget, price/TB & warranty (if that matters to you) & whatnot...
    ...it's also about how many HDDs you can physically connect up vs how your data's growing - since many people get by with just a small SSD in a laptop, whilst others (like myself) are 'data-whores' with many tens or hundreds of TBs of random stuff.
     
    As to looking at NAS storage, part of the reason people look at shucking the higher-capacity WD external drives is that they all use WD/HGST helium-filled 5400rpm drives - which are effectively equivalent to the WD Reds...
    (some of the smaller-capacity ones switched to using WD Greens/Blues - I believe only <=4TB, but I don't know that for certain)
    ...though they 'may' alternatively be some version of a WD Gold or HGST HC500 or...???
    ...all of which are designed for NAS - but buying the external drives is cheaper.
  23. Like
    Christopher (Drashna) got a reaction from JesterEE in Surface scan and SSD   
    Saiyan,
     
    No. The surface scan is read only. The only time we write is if we are able to recover files, after you've told it to. 
     
    The same thing goes for the file system check. We don't alter any of the data on the drives without your explicit permission.

    And to clarify, we don't really identify whether it's an SSD or HDD. We just identify the drive (using Windows APIs). How we handle the drive doesn't change between SSD and HDD. And in fact, because of what Scanner does, it doesn't matter what kind of drive it is, because we are "hands off" with your drives. Grabbing the information about the drives and running the scans are all "read only" and don't modify anything on the drives. The only time we write to the drives is when you explicitly allow it (repair unreadable data, or fix the file system). And because we use built-in tools/APIs when we do this, Windows should handle any SSD-specific functionality/features.
     
    I just wanted to make this clarification, because you seem to be very hesitant about Scanner and SSDs. 
    But basically Scanner itself doesn't care if the drive is a SSD or not, because nothing we do should ever adversely affect your SSD.
    Data integrity is our top priority, and we try to go out of our way to preserve your data.
  24. Like
    Christopher (Drashna) got a reaction from postcd in TrueCrypt and DrivePool   
    We recommend BitLocker, actually. It's baked into Windows, and works seamlessly with the system. Additionally, the automatic unlock option works very well with the pool.
     
     
    However, a lot of people do not trust BitLocker.
    And TrueCrypt, its forks, and a few other encryption solutions bypass the VDS system altogether. Since that is a big part of DrivePool... they don't work, and it would require a nearly complete rewrite of the code JUST to support these products.
     
    I'm not trying to start a debate, but just stating this and explaining why we don't support them.
  25. Like
    Christopher (Drashna) reacted to TeleFragger in My Rackmount Server   
    OK, so I am redoing everything and shuffling stuff around. What has stayed is...
    Network... this is a beauty... I've got $75 into it: an HP ProCurve 6400CL 6-port CX4 10gb switch, 5x ConnectX-1 CX4 10gb NICs with firmware forced to 2.9.1000, and a ConnectX-2 CX4 10gb NIC running Mellanox custom forced 2.10.xxxx firmware!!!!! Just got it and am toying with it... I get that people say CX4 ports are old and dead, but for $75 to be fully up, for me, it's just the right price. Then the hardware/software...
     
    Case: Antec 900 
    OS: Server 2019 Standard (Essentials role is gone.. im sad)
    CPU: Intel i5-6600k
    MoBo: Gigabyte GA-Z170XP-SLI 
    RAM: 4x 8GB DDR4
    GFX: Onboard Intel HD 530
    PSU: Corsair HX 620W 
    OS Drive: 128GB SSD, Samsung
    Storage Controllers: 2x HP H220 SAS controllers flashed to current fw
    Hot Swap Cages: ICY DOCK 6 x 2.5" SATA /SAS HDD/SSD Hot Swap 
    Storage Pool1: SSD
    Storage Pool2: SATA with 500GB SSD cache
     
    Pics are garbage, and I haven't moved it into my utility room yet...
     
     


