Christopher (Drashna)

Reputation Activity

  1. Like
    Christopher (Drashna) got a reaction from srcrist in CloudDrive File System Damaged   
    We've definitely talked about it.
     
    And to be honest, I'm not sure what we can do. We do already store the file system data, in theory, if you have pinning enabled. Though there are circumstances that can cause it to purge that info.
    The other issue is that, by default, every block is checksummed, and that checksum is verified on download. So if corrupted data is downloaded, you would get errors and a warning about it.
    However, that didn't happen here. And if that's the case, more than likely the provider sent old/out-of-date data. Which ... I'm not sure how we can handle in a way that isn't extremely complex.
    But again, this is something that is on our mind.
  2. Like
    Christopher (Drashna) got a reaction from postcd in SSD Optimizer Balancing Plugin   
    I'm not sure what you mean here.
     
    There is the read striping feature which may boost read speeds for you.
     
    Aside from that, there are the file placement rules, which you could use to lock certain files or folders to the SSDs to get better read speeds.
  3. Thanks
    Christopher (Drashna) reacted to Wiidesire in Stablebit Scanner loses all settings on unexpected shutdowns.   
    To keep everyone up-to-date:
    With the help of Alex, I've identified the root cause of the issue. The LastSeen variable inside the DiskId files is changed literally every second. This means that the DiskId files are constantly being rewritten, and in the event of a crash there is a high chance that it happens while the new file is being written, so the DiskId files get corrupted.
    The LastSmartUpdate variable inside the SmartPersistentInfo files is updated at a more reasonable one-minute interval, so I'm hoping it is a quick fix to simply adjust the write interval of the LastSeen variable.
    Besides changing the interval, there would have to be backup DiskId files to completely eliminate the issue. So instead of creating new DiskId files when corrupt files have been detected, it should copy over an older backup of the DiskId file(s) in question. Or the LastSeen value gets eliminated from the DiskId files entirely and moved somewhere else, to avoid changing the DiskId files at all.
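    A minimal sketch of the backup idea suggested above (this is not Scanner's actual implementation, and the JSON format and paths are just stand-ins for the DiskId data): write the new file to a temporary name, keep the previous good copy as a .bak, and only then rename the new file into place, so a crash mid-write never corrupts the only copy.
```python
# Illustration only: atomic save with a rolling .bak, as suggested in the post above.
# Not StableBit Scanner's real code; the JSON format and paths are stand-ins.
import json
import os
from pathlib import Path

def save_settings_atomically(path: Path, settings: dict) -> None:
    tmp = path.with_suffix(".tmp")
    with tmp.open("w", encoding="utf-8") as f:
        json.dump(settings, f)
        f.flush()
        os.fsync(f.fileno())          # ensure the new data is really on disk
    if path.exists():
        backup = path.with_suffix(".bak")
        backup.unlink(missing_ok=True)
        path.replace(backup)          # keep the last known-good copy
    tmp.replace(path)                 # atomic rename into place

def load_settings(path: Path) -> dict:
    # Fall back to the .bak copy if the primary file is missing or corrupted.
    for candidate in (path, path.with_suffix(".bak")):
        try:
            return json.loads(candidate.read_text(encoding="utf-8"))
        except (FileNotFoundError, json.JSONDecodeError):
            continue
    return {}                         # nothing usable; start fresh
```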
  4. Like
    Christopher (Drashna) reacted to Minni1986 in SSD Optimizer not working   
    Yes, there was something wrong in the program. They gave me a newer updated Beta that fixed this issue. 
    http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.3.963_x64_BETA.exe
     
     
  5. Like
    Christopher (Drashna) reacted to santacruzskim in Can You Change the Google Drive Folder Location?   
    Understood (and kind of assumed, but thought it was worth asking). Getting pretty deep into CloudDrive testing and loving it. Next is seeing how far I can get combining CD with the power of DrivePool and making pools of pools!
    Thanks for following up.
    -eric
  6. Thanks
    Christopher (Drashna) got a reaction from MandalorePatriot in Warning from GDrive (Plex)   
    I'm not sure?  But the number of threads is set by our program. Mostly, it's just the number of open/active connections. 
    Also, given how uploading is handled, the upload threshold may help prevent this from being an issue. But you can reduce the upload threads, if you want. 
    Parallel connections. For stuff like prefetching, it makes a difference. Or if you have a lot of random access on the drives...
    But otherwise, they do have the daily upload limit, and they will throttle for other reasons (e.g., DoS/DDoS protection).
  7. Like
    Christopher (Drashna) got a reaction from The_Saver in How do I correctly backup Stablebit Drivepool?   
    Because there is no documentation on how to support VSS on the file system level.  
    There is documentation on how to access VSS, and plenty of it.  But that's not the issue.  The problem is how the file system is supposed to handle the VSS calls.  There is NO documentation on this, in the wild.   Any documentation that may exist is internal Microsoft documentation. 
    If by Samba, you mean Samba/SMB/CIFS/Windows shares, then you're just connecting to the API. You're relying on the underlying drive that the SMB share resides on supporting VSS. That is the top-level VSS stuff; we need the bottom/low-level info: how you'd implement it on a different file system.
    So, right now, we'd have to reverse engineer exactly how VSS interacts with NTFS, at the file system level.  That is not a simple thing, at all. And it would be incredibly time consuming.
    If you mean a specific software, could you link it? 
    Back up the underlying disks in the pool, not the pool drive. 
    As for restoring .... basically the same. 
    That, or use something file-based, or a sync utility (such as Allway Sync, GoodSync, FreeFileSync, SyncToy, etc.).
  8. Like
    Christopher (Drashna) got a reaction from adn34 in First-time setup scenario to keep files   
    Absolutely! 
    You want to "seed" the drive, and we have a guide on how to do that:
    http://wiki.covecube.com/StableBit_DrivePool_Q4142489
    Basically, it's moving the files into the pool's folder structure and remeasuring. 
     
    You may need to reconfigure things in Plex (or change a drive letter).  But otherwise, that should cover what you need to do. 
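    For illustration only, here's a rough sketch of what the seeding step in that guide amounts to: moving the existing data into the hidden PoolPart.* folder on the same disk, keeping the folder structure. The drive letter and folder names below are hypothetical, and the guide's actual procedure (including stopping the DrivePool service before moving anything) is what you should follow.
```python
# Rough illustration of "seeding" a DrivePool disk, per the guide linked above.
# Paths are hypothetical; follow the wiki article for the real procedure,
# including stopping the DrivePool service before moving files.
import shutil
from pathlib import Path

DISK_ROOT = Path("D:/")          # hypothetical pooled disk
SOURCE = DISK_ROOT / "Media"     # hypothetical existing data to move into the pool

def seed_into_pool() -> None:
    # DrivePool creates a hidden PoolPart.xxxx folder on each pooled disk.
    poolpart = next(DISK_ROOT.glob("PoolPart.*"))
    for src in sorted(SOURCE.rglob("*")):
        if src.is_file():
            dest = poolpart / src.relative_to(DISK_ROOT)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))   # same disk, so this is a fast rename

if __name__ == "__main__":
    seed_into_pool()
    print("Done - now re-measure the pool in the DrivePool UI.")
```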
  9. Like
    Christopher (Drashna) reacted to MandalorePatriot in CloudDrive Cache - SSD vs HDD   
    For homelab use, I can't really see reading and writing affecting the SSDs that much. I have an SSD that is being used for firewall/IPS logging and it's been in use every day for the past few years. No SMART errors, and expected life is still at 99%. I can't really see more usage in a homelab than that.
    In an enterprise environment, sure, lots of big databases and constant access/changes/etc.
    I have a spare 500GB SSD I will be using for the CloudDrive and downloader cache. Thanks for the responses again everyone! -MandalorePatriot
  10. Thanks
    Christopher (Drashna) got a reaction from MandalorePatriot in Warning from GDrive (Plex)   
    Thread count is fine. We really haven't seen issues with 10.  
    However, the settings you have set WILL cause bottlenecking and issues. 
    Download threads: 10
    Upload threads: 10
    Minimum download size: 20MB
    Prefetch trigger: 5MB
    Prefetch forward: 150 MB
    Prefetch time window: 30 seconds
     
    The prefetch forward should be roughly 75% of (download threads × minimum download size). If you can set a higher minimum download size, then you can increase the prefetch forward.
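    As a quick sanity check of that rule of thumb against the settings quoted above (just back-of-the-envelope arithmetic, nothing CloudDrive-specific):
```python
# Rule of thumb from the post:
# prefetch forward ≈ 75% of (download threads × minimum download size).
download_threads = 10       # from the quoted settings
min_download_mb = 20        # minimum download size, in MB

prefetch_forward_mb = 0.75 * download_threads * min_download_mb
print(prefetch_forward_mb)  # 150.0 -- matches the "Prefetch forward: 150 MB" setting
```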
     
  11. Thanks
    Christopher (Drashna) got a reaction from MandalorePatriot in CloudDrive Cache - SSD vs HDD   
    That depends ENTIRELY on your use case.  It's not a question that others can really answer. 
     
    But if performance is important, then the SSD is going to be the better choice for you. 
    However, if you're accessing a lot of data (reading and writing), then a hard drive may be a better option.
  12. Like
    Christopher (Drashna) reacted to fattipants2016 in Integrate Snapraid with Drivepool - into the product   
    You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule.
    If you like, you can even use the "running file" DrivePool optionally creates while balancing to trigger it (see the sketch below). Check my post history.
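    A minimal sketch of one way to use such a marker file (this is not the snapraidhelper script itself, and the marker path is hypothetical): hold off on the SnapRAID run while DrivePool's balancing "running file" is present, so parity is only rebuilt once files have stopped moving.
```python
# Skip SnapRAID maintenance while DrivePool's balancing "running file" exists,
# then run a sync followed by a partial scrub. Marker path is hypothetical.
import subprocess
from pathlib import Path

BALANCING_MARKER = Path(r"C:\DrivePool\balancing.running")  # hypothetical location

def run_snapraid_maintenance() -> None:
    if BALANCING_MARKER.exists():
        print("DrivePool is balancing; skipping this SnapRAID run.")
        return
    # Assumes snapraid.exe is on PATH and snapraid.conf is already set up.
    subprocess.run(["snapraid", "sync"], check=True)
    subprocess.run(["snapraid", "scrub", "-p", "5"], check=True)  # scrub 5% of the array

if __name__ == "__main__":
    run_snapraid_maintenance()
```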
  13. Like
    Christopher (Drashna) reacted to PocketDemon in Different size hdd's   
    Oh, certainly... Which is why I'd written on the 22nd of March in the thread that -
    "Obviously the downside to what we're suggesting though is voiding the warranty by shucking them..."
    So, it was about agreeing with you that going for NAS/Enterprise drives is a good thing, especially as you start to increase the drive count - BUT that this didn't contradict what had been suggested earlier about shucking the WD externals IF purchase price trumped warranty.
  14. Like
    Christopher (Drashna) reacted to PocketDemon in Different size hdd's   
    Along with balancing personal budget, price/TB & warranty (if that matters to you) & whatnot...
    ...it's also about how many HDDs you can physically connect up vs how your data's growing - since many people get by with just a small SSD in a laptop - whilst others (like myself) are 'data-whores' with many 10s or 100s of TBs of random stuff.
     
    As to looking at NAS storage, part of the reason why people look at shucking the higher-capacity WD external drives is that they all use WD/HGST helium-filled 5400rpm drives - which are effectively equivalent to the WD Reds...
    (some of the smaller-capacity ones switched to using WD Greens/Blues - I believe only <=4TB, but I don't know that for certain)
    ...though they 'may' alternatively be some version of a WD Gold or HGST HC500 or...???
    ...all of which are designed for NAS - but buying the external drives is cheaper.
  15. Like
    Christopher (Drashna) got a reaction from JesterEE in Surface scan and SSD   
    Saiyan,
     
    No. The surface scan is read only. The only time we write is if we are able to recover files, after you've told it to. 
     
    The same thing goes for the file system check. We don't alter any of the data on the drives without your explicit permission.
     
    And to clarify, we don't really identify whether it's an SSD or HDD. We just identify the drive (using Windows APIs), and how we handle the drive doesn't change between SSD and HDD. In fact, because of what Scanner does, it doesn't matter what kind of drive it is, because we are "hands off" with your drives. Grabbing the information about the drives and running the scans is all "read only" and doesn't modify anything on the drives. The only time we write to the drives is when you explicitly allow it (repairing unreadable data, or fixing the file system). And because we use built-in tools/APIs when we do this, Windows should handle any SSD-specific functionality/features.
     
    I just wanted to make this clarification, because you seem to be very hesitant about Scanner and SSDs. 
    But basically, Scanner itself doesn't care whether the drive is an SSD or not, because nothing we do should ever adversely affect your SSD.
    Data integrity is our top priority, and we try to go out of our way to preserve your data.
  16. Like
    Christopher (Drashna) got a reaction from postcd in TrueCrypt and DrivePool   
    We recommend BitLocker, actually. It's baked into Windows, and works seamlessly with the system. Additionally, the "automatic unlock" option works very well with the pool.
     
    However, a lot of people do not trust BitLocker.
    And TrueCrypt, its forks, and a few other encryption solutions bypass the VDS system altogether. Since that is a big part of DrivePool... they don't work. And it would require a nearly complete rewrite of the code JUST to support these products.
     
    I'm not trying to start a debate, but just stating this and explaining why we don't support them.
  17. Like
    Christopher (Drashna) reacted to TeleFragger in My Rackmount Server   
    OK, so I am redoing everything and shuffling stuff around. What has stayed is ...
    Network... this is a beauty.. I've got $75 into it:
    HP ProCurve 6400CL - 6-port CX4 10Gb switch
    5x ConnectX-1 CX4 10Gb NICs, firmware forced to 2.9.1000
    1x ConnectX-2 CX4 10Gb NIC running Mellanox custom forced 2.10.xxxx fw!!!!! Just got it and toying with it... I get that people say CX4 ports are old and dead, but for $75 to be fully up, for me it's just the right price... Then the hardware/software...
     
    Case: Antec 900 
    OS: Server 2019 Standard (Essentials role is gone.. I'm sad)
    CPU: Intel i5-6600k
    MoBo: Gigabyte GA-Z170XP-SLI 
    RAM: 4x 8GB DDR4
    GFX: Onboard Intel HD 530
    PSU: Corsair HX 620W 
    OS Drive: 128GB SSD, Samsung
    Storage Controllers: 2x HP H220 SAS controllers flashed to current fw
    Hot Swap Cages: ICY DOCK 6 x 2.5" SATA /SAS HDD/SSD Hot Swap 
    Storage Pool 1: SSD
    Storage Pool 2: SATA with 500GB SSD cache
     
    pics are garbage and I haven't moved it into my utility room...
     
  18. Like
    Christopher (Drashna) got a reaction from Mauricio Maurente in Defragging: Individual Drives or DrivePool?   
    There is ABSOLUTELY NO ISSUE using either the built-in defragmentation software or 3rd-party software.
     
    Specifically, StableBit DrivePool stores the files on normal NTFS volumes. We don't do anything "weird" that would break defragmentation. At all.
     
    In fact, I personally use PerfectDisk on my server (which has 12 HDDs), and have never had an issue with it.
  19. Haha
    Christopher (Drashna) reacted to denywinarto in Backup and restore mounted NTFS folders?   
    Yeah, I just figured it out myself; it didn't work even with the registry imported.
    This might take some time if I fill up all 60 drives someday..
  20. Like
    Christopher (Drashna) reacted to TeleFragger in SSD Optimizer Balancing Plugin   
    Thanks.. yeah, I went back in and now I have it set for just I: as cache and the rest as archive... so now the drive is empty and seems to be functioning correctly, where it is a straight copy without the dwindling speeds. I added the SSD as a cache because, as you can see, I was having file-copying issues. Now that I have set this, I'm still having an issue, but I believe it is my machine itself.
    Before, you'd see slowness; now it copies at a full 450MB/s, but another machine I have (Plex) copies at 750MB/s. So it's totally faster from my Plex box, and funny how that works, as the computer not copying as fast is the main rig that edits videos, photos, large ISO copies, etc... so I'd want it faster there...
    But 450MB/s on 10Gb is still faster than 120MB/s on my 1Gb network!!! So while it's 4x faster, it's not full speed; I've got a system issue... because:
    iperf shows super-fast speeds across the 10Gb link (and I think iperf does memory-to-memory, omitting the storage hardware), so the network is good.
    My machine has 2x NVMe on a quad PCIe x16 card; copying across each of them, they get 1.35GB/s... it's just exiting this machine that's slow... so more for me to test when I get time.
  21. Like
    Christopher (Drashna) reacted to TeleFragger in My Rackmount Server   
    Wow, y'all got awesome setups!
    I don't have a rack, nor do I want the sound of rack servers.
    What I have started using are Lenovo ThinkStation towers - dual Xeon - 16 slots for memory!!!!! And now Lenovo P700s and P710s.
    They are all quiet and can be pumped up with drives and RAM and dual Xeons.
    ESXI 6.7 Machine 1 - 2x Xeon E5-2620 v4 @ 2.10GHz  - 64gb ram
    ESXI 6.7 Machine 2 - 2x Xeon E5-2620 v0 @ 2.0GHZ - 128 GB Ram
    ESXI 6.7 Machine 3 - 2x Xeon E5-2620 v4 @ 2.10GHz  - 64gb ram
    FreeNAS 11.1 - 1x Xeon E5-2620 v3 - 24GB RAM - 6x 2TB WD Black (yeah, I know, Reds not Blacks, but I've got them and they work.. hah)
    Server 2016 / StableBit DrivePool - HP Z420 - OS: 128GB SSD / pool: 3x 2TB WD Black + 2x 4TB WD Black + 512GB Crucial SSD for the SSD Optimizer
    Server 2016 is getting ready to gain 2x (6x 2.5" hot-swap bays) filled with 12x 512GB Crucial SSDs running off 2x HP H220 SAS controllers
     
    Network... this is a beauty.. I've got $75 into it:
    HP ProCurve 6400CL - 6-port CX4 10Gb switch
    5x ConnectX-1 CX4 10Gb NICs running HP FW 2.8
    1x ConnectX-2 CX4 10Gb NIC running Mellanox custom forced 2.10.xxxx fw!!!!! Just got it and toying with it...
    I get that people say CX4 ports are old and dead, but for $75 to be fully up, for me it's just the right price...
  22. Like
    Christopher (Drashna) got a reaction from TeleFragger in SSD Optimizer Balancing Plugin   
    @TeleFragger From the image, it looks like it's writing to the G:\ drive, which is not an SSD.  
    So my guess is your settings are not configured correctly. 
    If you could, open a ticket at https://stablebit.com/Contact
  23. Thanks
    Christopher (Drashna) reacted to Tarh in Is there a way to disable write caching for Covecube Virtual Disk?   
    I just did, thank you!
  24. Like
    Christopher (Drashna) reacted to GaPony in Duplication time is extremely long!   
    Whatever you all do, don't wait 3 years and 8,000 movies (taking up 50TB) later to decide duplication would be a good idea. When I noticed my pool was getting full, it finally dawned on me that I'd have a miserable time replacing lost movies if even one of the 15 WD40EFRX 4TB drives went south. Not only did it blast a hole in my wallet this week to fill the remainder of my RPC-4224 case with 8x new WD80EFAX 8TB drives and 1x new WD100EFAX 10TB drive (experimental), it appears it will take a month of Sundays to get the job done. It probably doesn't help that I'm doing this on an old WHS2011 machine with 3x AOC-SASLP2-MV8 controllers, one of which is running in a 4x slot. I just hope I don't kill something in the process. I honestly didn't think the 10TB drive would work; I had to initialize, partition and format it on a newer PC for some reason. So I'm still not 100% sure how reliable it's going to be.
    After 4 hours, it actually looks like it's copying about 500GB per hour. So maybe it won't take a full month of Sundays...
  25. Thanks
    Christopher (Drashna) reacted to eujanro in [HOWTO] File Location Catalog   
    Hi everyone,
    First, I would like to share that I am very satisfied with DP & Scanner. This IS "state of the art" software.
    Second, I have personally experienced 4 HDDs failing, burned by the PSU (99% of the data was professionally, and expensively, recovered), and having the content information would have been comforting, just to quickly compare and get a status overview.
    I also asked myself how to catalog the pooled drives' content, with logging/versioning, just to know, if a pooled drive dies, whether professional recovery makes sense (again), but also to check that the duplication algorithm is working as advertised.
    Being a fan of "as simple as it gets", I found a simple, free file lister that is command-line capable.
    https://www.jam-software.com/filelist/
    I have built a .cmd file to export a listing per drive letter (eg: %Drive_letter_%Label%_YYYYMMDDSS.txt) for each pooled drive. Then I scheduled a job to run every 3 hours which, before running, just packs all previous .txt's into an archive for versioning purposes.
    For each of the 10x 2TB, 60%-filled pooled HDDs, I get a 15-20MB .txt file (with the excluding-content filter option) in ~20 minutes. A zipped archive, with all the files inside, comes to about 20MB per archive. For checking, I just use Notepad++'s "Find in Files" function, point it to the desired .txt folder path, and I get what I'm looking for in each file per drive.
    I would love to see such an option for finding a file on each drive built into the DP interface.
    Hopefully good info, and not a long post.
    Good luck!
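    For anyone who prefers a script over the FileList tool, here's a minimal sketch of the same idea in Python (the drive letters and output folder are assumptions - adjust them to your pool): write one listing per pooled drive, and zip the previous listings first for versioning.
```python
# Minimal sketch of the cataloging idea above: one .txt listing per pooled drive,
# with previous listings zipped up for versioning. Paths/drive letters are examples.
import zipfile
from datetime import datetime
from pathlib import Path

POOLED_DRIVES = ["D:/", "E:/"]           # hypothetical pooled drive roots
CATALOG_DIR = Path("C:/PoolCatalog")     # hypothetical output folder

def catalog_drives() -> None:
    CATALOG_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")

    # Version the previous listings by zipping them before writing new ones.
    old_lists = list(CATALOG_DIR.glob("*.txt"))
    if old_lists:
        with zipfile.ZipFile(CATALOG_DIR / f"catalog_{stamp}.zip", "w",
                             zipfile.ZIP_DEFLATED) as zf:
            for txt in old_lists:
                zf.write(txt, txt.name)
                txt.unlink()

    # One listing of file paths per pooled drive.
    for drive in POOLED_DRIVES:
        out = CATALOG_DIR / f"{drive[0]}_{stamp}.txt"
        with out.open("w", encoding="utf-8", errors="replace") as f:
            for path in Path(drive).rglob("*"):
                if path.is_file():
                    f.write(str(path) + "\n")

if __name__ == "__main__":
    catalog_drives()
```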
     