Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 03/20/19 in Posts

  1. 1 point
    Mick Mickie: Thanks for jumping in Mick! Christopher just said "Crashes", so I thought the "Fix" might not help me. I don't usually upgrade to Beta versions of anything, but in this case I think I might make an exception. I was just upgrading to sort of get away from the built-in wssx versions in preparation for my upcoming move to Server 2016 Essentials. Again, thanks! Gary
  2. 1 point
    I've had it happen with normal reboots as well, just not as often as with crashes. It just depends on the timing. Imagine what happens on a reboot: Windows forcefully shuts down services, including the StableBit Scanner service. So if this service gets shut down in the timeframe where it is writing new DiskId files, the files can end up corrupted; after the reboot the service then creates new DiskId files, meaning all previous scan status is lost. Now the DiskId files are no longer written literally every second (which significantly increased the risk of the service being killed mid-write) but instead every 20-40 minutes (I don't know the exact interval now). That's a reduction by a factor of 1200 to 2400, so the risk that you reboot at the exact time the files are written should basically be zero now.
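    As a rough illustration of that factor, a minimal sketch in Python (the ~1 second write duration is an assumption, purely for the arithmetic):

        # Old behaviour: DiskId files rewritten every second; new behaviour: every 20-40 minutes.
        old_interval_s = 1
        new_intervals_s = (20 * 60, 40 * 60)

        for interval in new_intervals_s:
            print("reduction factor:", interval // old_interval_s)      # 1200 and 2400

        # Assuming a write takes on the order of 1 second, the chance that a random
        # shutdown lands inside a write window drops to roughly 1/1200 .. 1/2400.
        write_duration_s = 1.0
        for interval in new_intervals_s:
            print("p(shutdown during write) ~ {:.3%}".format(write_duration_s / interval))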
  3. 1 point
    Well, a number of others were having this as well, and I've posted this info in a number of those threads, so hopefully, confirmation will come soon.
  4. 1 point
    The cause of the issue is fixed with Beta 3246. Since my machine doesn't crash frequently, maybe once per month, it will take a while to verify it on my side.
  5. 1 point
    I think you mean mbit :-P Yes, it all depends on the response time you have. Speed is not the issue, it's my response time to Google's servers; you're just lucky to be closer. Plus I have upload verification on, which also cuts upload speeds. I get around 2500-2800 ms response time per thread and then instant download, so fewer calls and bigger downloads would do wonders for me.
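    A back-of-envelope sketch of why fewer, larger calls help when per-request latency dominates (the link speed is an assumed example; the 2.5 s latency comes from the post above):

        latency_s = 2.5                  # response time per request/thread
        link_mbit = 1000                 # assumed raw downstream bandwidth, megabits/s

        def effective_mbit(chunk_mb):
            transfer_s = chunk_mb * 8 / link_mbit           # time spent actually downloading
            return chunk_mb * 8 / (latency_s + transfer_s)  # latency amortised over the chunk

        for chunk in (1, 10, 20, 100):
            print(chunk, "MB per call -> ~%.1f mbit/s per thread" % effective_mbit(chunk))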
  6. 1 point
    We've definitely talked about it. And to be honest, I'm not sure what we can do. We do already store the file system data, if you have pinning enabled, in theory. Though there are circumstances that can cause it to purge that info. The other issue is that, by default, every block is checksummed, and that is checked on download. So if corrupted data is downloaded, you would get errors and a warning about it. However, that didn't happen here. And if that is the case, more than likely it sent old/out-of-date data. Which ... I'm not sure how we can handle that in a way that isn't extremely complex. But again, this is something that is on our mind.
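    To illustrate why a per-block checksum catches corruption but not stale data, a minimal sketch (not CloudDrive's actual format, just the general idea):

        import hashlib

        def verify_chunk(data, expected_sha256):
            # True if the downloaded chunk matches the checksum stored for it.
            return hashlib.sha256(data).hexdigest() == expected_sha256

        # A bit-flipped chunk fails this check and raises a warning on download.
        # But an *older* version of the chunk, stored alongside its own (older) checksum,
        # still verifies cleanly, which is why out-of-date data is much harder to detect.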
  7. 1 point
    srcrist

    Request: Increased block size

    Again, other providers *can* still use larger chunks. Please see the changelog: This was because of issue 24914, documented here. Again, this isn't really correct. The problem, as documented above, is that larger chunks result in more retrieval calls to particular chunks, thus triggering Google's download quota limitations. That is the problem that I could not remember. It was not because of concerns about speed, and it was not a general problem with all providers. EDIT: It looks like the issue with Google Drive might be resolved with an increase in the partial read size, as you discussed in this post, but the code change request for that is still incomplete. So this prerequisite still isn't met. Maybe something to follow up with Christopher and Alex about.
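    The arithmetic behind the quota concern, as a small sketch (the chunk and partial-read sizes are example values, not confirmed internals):

        from math import ceil

        partial_read_mb = 1                      # assumed fixed partial-read size

        for chunk_mb in (10, 20, 100):
            calls = ceil(chunk_mb / partial_read_mb)
            print(chunk_mb, "MB chunk -> up to", calls, "range requests per chunk")

        # With a fixed partial-read size, larger chunks mean more requests against the same
        # stored object, and repeatedly fetching the same object is what can trip a
        # provider's per-file download quota.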
  8. 1 point
    To keep everyone up to date: with Alex's help I've identified the root cause of the issue. The LastSeen variable inside the DiskId files is changed literally every second. This means that the DiskId files are constantly being rewritten, and in the event of a crash there is a high chance that it happens while a new file is being written, so the DiskId files get corrupted. The LastSmartUpdate variable inside the SmartPersistentInfo files is updated at a more reasonable one-minute interval, so I'm hoping it is a quick fix to simply adjust the write interval of the LastSeen variable. Besides changing the interval, there would have to be backup DiskId files to completely eliminate the issue: instead of creating new DiskId files when corrupt files are detected, it should copy over an older backup of the DiskId file(s) in question. Or the LastSeen value gets removed from the DiskId files entirely and moved somewhere else, to avoid changing the DiskId files at all.
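    For what the "write safely and keep a backup" idea could look like, a minimal sketch (this is not how Scanner is actually implemented; names and paths are hypothetical):

        import os, shutil, tempfile

        def write_diskid(path, contents):
            # Keep a copy of the last known-good file before replacing it.
            if os.path.exists(path):
                shutil.copy2(path, path + ".bak")
            # Write to a temp file in the same directory, then atomically swap it in,
            # so a crash mid-write never leaves a half-written DiskId file behind.
            fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
            with os.fdopen(fd, "w") as f:
                f.write(contents)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, path)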
  9. 1 point
    Yes, there was something wrong in the program. They gave me a newer updated Beta that fixed this issue. http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.3.963_x64_BETA.exe
  10. 1 point
    Understood (and kind of assumed, but thought it was worth asking). Getting pretty deep into CloudDrive testing and loving it. Next is seeing how far I can get combining CD with the power of DrivePool and making pools of pools! Thanks for following up. -eric
  11. 1 point
    Because there is no documentation on how to support VSS at the file system level. There is documentation on how to access VSS, and plenty of it. But that's not the issue. The problem is how the file system is supposed to handle the VSS calls. There is NO documentation on this in the wild; any documentation that may exist is internal Microsoft documentation. If by Samba you mean Samba/SMB/CIFS/Windows Shares, then you're just connecting to the API; you're relying on the underlying drive that the SMB share resides on supporting VSS. That is the top-level VSS stuff, and we need the bottom/low-level info: how you'd implement it on a different file system. So, right now, we'd have to reverse engineer exactly how VSS interacts with NTFS at the file system level. That is not a simple thing at all, and it would be incredibly time consuming. If you mean a specific software, could you link it? Back up the underlying disks in the pool, not the pool drive. As for restoring ... basically the same. That, or use something file based, or a sync utility (such as Allway Sync, GoodSync, FreeFileSync, SyncToy, etc.).
  12. 1 point
    I'm not sure? But the number of threads is set by our program. Mostly, it's just the number of open/active connections. Also, given how uploading is handled, the upload threshold may help prevent this from being an issue. But you can reduce the upload threads if you want. Parallel connections make a difference for things like prefetching, or if you have a lot of random access on the drives. But otherwise, they do have the daily upload limit, and they will throttle for other reasons (e.g. DoS/DDoS protection).
  13. 1 point
    Absolutely! You want to "seed" the drive, and we have a guide on how to do that: http://wiki.covecube.com/StableBit_DrivePool_Q4142489 Basically, it's moving the files into the pool's folder structure and remeasuring. You may need to reconfigure things in Plex (or change a drive letter). But otherwise, that should cover what you need to do.
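    For reference, a minimal sketch of the seeding step described in that guide, i.e. moving existing files on a pooled disk into that disk's hidden PoolPart.* folder and then remeasuring (paths are examples; follow the wiki article before doing this for real):

        import glob, os, shutil

        drive = "D:\\"
        skip = {"System Volume Information", "$RECYCLE.BIN"}
        poolpart = glob.glob(os.path.join(drive, "PoolPart.*"))[0]   # hidden folder DrivePool creates

        for name in os.listdir(drive):
            src = os.path.join(drive, name)
            if name in skip or os.path.abspath(src) == os.path.abspath(poolpart):
                continue
            shutil.move(src, os.path.join(poolpart, name))           # same-volume move, so it is fast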
  14. 1 point
    srcrist

    Warning from GDrive (Plex)

    To my knowledge, Google does not throttle bandwidth at all, no. But they do have the upload limit of 750GB/day, which means that a large number of upload threads is relatively pointless if you're constantly uploading large amounts of data. It's pretty easy to hit 75mbps or so with only 2 or 3 upload threads, and anything more than that will exceed Google's upload limit anyway. If you *know* that you're uploading less than 750GB that day anyway, though, you could theoretically get several hundred mbps performance out of 10 threads. So it's sort of situational. Many of us do use servers with 1gbps synchronous pipes, in any case, so there is a performance benefit to more threads...at least in the short term. But, ultimately, I'm mostly just interested in understanding the technical details from Christopher so that I can experiment and tweak. I just feel like I have a fundamental misunderstanding of how the API limits work.
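    A quick sanity check on those numbers (decimal gigabytes assumed):

        cap_gb_per_day = 750
        seconds_per_day = 24 * 60 * 60

        sustained_mbit = cap_gb_per_day * 1000 * 8 / seconds_per_day
        print("~%.0f mbit/s sustained hits the daily cap" % sustained_mbit)   # ~69 mbit/s

        # So ~75 mbit/s around the clock already blows past 750 GB/day, which is why extra
        # upload threads only pay off for bursts you know will stay under the daily total.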
  15. 1 point
    For a homelab use, I can't really see reading and writing affecting the SSDs that much. I have an SSD that is being used for firewall/IPS logging and it's been in use every day for the past few years. No SMART errors and expected life is still at 99%. I can't really see more usage in a homelab than that. In an enterprise environment, sure, lots of big databases and constant access/changes/etc. I have a spare 500GB SSD I will be using for the CloudDrive and downloader cache. Thanks for the responses again everyone! -MandalorePatriot
  16. 1 point
    srcrist

    Warning from GDrive (Plex)

    Out of curiosity, does Google set different limits for the upload and download threads in the API? I've always assumed that since I see throttling around 12-15 threads in one direction, that the total number of threads in both directions needed to be less than that. Are you saying it should be fine with 10 in each direction even though 20 in one direction would get throttled?
  17. 1 point
    Thread count is fine. We really haven't seen issues with 10. However, the settings you have set WILL cause bottlenecking and issues.
        Download threads: 10
        Upload threads: 10
        Minimum download size: 20 MB
        Prefetch trigger: 5 MB
        Prefetch forward: 150 MB
        Prefetch time window: 30 seconds
    The Prefetch forward should be roughly 75% of download threads x minimum download size. If you can set a higher minimum size, then you can increase the forward.
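    That rule of thumb as arithmetic:

        download_threads = 10
        minimum_download_mb = 20

        prefetch_forward_mb = 0.75 * download_threads * minimum_download_mb
        print("Prefetch forward ~ %d MB" % prefetch_forward_mb)   # 150 MB, matching the settings above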
  18. 1 point
    PocketDemon

    Different size hdd's

    Oh, certainly... Which is why I'd written on the 22nd of March in the thread that - "Obviously the downside to what we're suggesting though is voiding the warranty by shucking them..." So, it was about agreeing with you that going for NAS/Enterprise drives is a good thing; esp as you start to increase the drive count - BUT that this didn't contradict what had been suggested earlier about shucking the WD externals IF purchase price trumped warranty.
  19. 1 point
    PocketDemon

    Different size hdd's

    Along with balancing personal budget, price/TB & warranty (if that matters to you) & whatnot... ...it's also about how many HDDs you can physically connect up vs how your data's growing - since many people get by with just a small SSD in a laptop - whilst others (like myself) are 'data-whores' with many 10s or 100s of TBs of random stuff. As to looking at NAS storage, part of the reason why people look at shucking the higher capacity WD external drives is that they all use WD/HGST helium-filled 5400rpm drives - which are effectively equivalent to the WD Reds... (some of the smaller capacity ones switched to using WD Greens/Blues - I believe only <=4TB but I don't know that for certain) ...though they 'may' alternatively be some version of a WD Gold or HGST HC500 or...??? ...all of which are designed for NAS - but buying the external drives is cheaper.
  20. 1 point
    That depends ENTIRELY on your use case. It's not a question that others can really answer. But if performance is important, then the SSD is going to be the better choice for you. But if you're accessing a lot of data (reading and writing), then a hard drive may be a better option.
  21. 1 point
    PocketDemon

    Different size hdd's

    There's no issue with different sizes - &, within the context of your drives & what you're likely to be doing, there's no issue with sizes & using the capacity. Yeah, the only time there would be an issue is if the number of times you're duplicating wouldn't work... ...so, imagining someone were looking at duplicating the entire pool, for example -
        - with 2x duplication &, say, a 1TB & 8TB drive, they could only actually duplicate 1TB
        - & with 3x duplication &, say, a 1TB, 4TB & 8TB drive, they could still only actually duplicate 1TB
    ...however, unless you're after >=6x duplication (which is highly unlikely), there's no problem whatsoever. If you are using duplication & your current drives are pretty full already, then after adding the new drive I would suggest pushing the "Duplication Space Optimiser" to the top & forcing a rebalance run just before going to bed... as this should then prevent there being any issues moving forward.
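    A small check of those duplication examples (illustrative only): with n-times duplication each copy of a file must sit on a different drive, so the duplicated capacity D is the largest value where sum(min(drive, D)) >= n * D.

        def max_duplicated_gb(drives_gb, n):
            # Largest D (whole GB) such that n copies of D can be spread over distinct drives.
            d = 0
            while sum(min(c, d + 1) for c in drives_gb) >= n * (d + 1):
                d += 1
            return d

        print(max_duplicated_gb([1000, 8000], 2))        # 1000 GB -> the 2x example above
        print(max_duplicated_gb([1000, 4000, 8000], 3))  # 1000 GB -> the 3x example above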
  22. 1 point
    It won't really limit your ability to upload larger amounts of data, it just throttles writes to the drive when the cache drive fills up. So if you have 150GB of local disk space on the cache drive, but you copy 200GB of data to it, the first roughly 145GB of data will copy at essentially full speed, as if you're just copying from one local drive to another, and then it will throttle the drive writes so that the last 55GB of data will slowly copy to the CloudDrive drive as chunks are uploaded from your local cache to the cloud provider. Long story short: it isn't a problem unless high speeds are a concern. As long as you're fine copying data at roughly the speed of your upload, it will work fine no matter how much data you're writing to the CloudDrive drive.
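    A rough model of that behaviour with made-up speeds, just to show the shape of it (it ignores the upload that happens during the fast phase, so it slightly overestimates):

        copy_gb = 200        # data being written to the CloudDrive drive
        cache_gb = 145       # cache space available before write throttling kicks in
        local_mb_s = 500     # local disk-to-disk write speed
        upload_mb_s = 35     # actual upload speed to the provider (~280 mbit/s)

        fast_gb = min(copy_gb, cache_gb)
        slow_gb = max(0, copy_gb - cache_gb)

        seconds = fast_gb * 1000 / local_mb_s + slow_gb * 1000 / upload_mb_s
        print("~%.0f minutes total" % (seconds / 60))   # fast phase at disk speed, the rest at upload speed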
  23. 1 point
    MandalorePatriot

    CloudDrive Cache - SSD vs HDD

    Thank you, I really appreciate your quick and informative answer!
  24. 1 point
    srcrist

    CloudDrive Cache - SSD vs HDD

    SSD. Disk usage for the cache, particularly with a larger drive, can be heavy. I always suggest an SSD cache drive. You'll definitely notice a significant impact. Aside from upload space, most drives don't need or generally benefit from a cache larger than 50-100GB or so. You'll definitely get diminishing returns with anything larger than that. So speed is far more important than size.
  25. 1 point
    I'm late to the convo but from my personal experience... I got a 1600W Titanium EVGA PSU that powers a Zenith ROG Extreme mobo with a 2950X OC'd to 4.1GHz. It also powers 2x Titan XP (SLI) Nvidia cards. Attached, I have 2 SSDs, 3 NVMe drives, 6 Blu-ray drives, and various fans. Three HBA cards (an LSI 9201-16i and two Intel expanders). I also have 37 Green drives from 3TB-12TB and 3 SAS drives. My PSU is able to support all that. Now... that being said, I can't plug the vacuum into the wall in that loft, because that'll trip the switch since my system is likely pulling max from the wall.
  26. 1 point
    srcrist

    Warning from GDrive (Plex)

    That's just a warning. Your thread count is a bit too high, and you're probably getting throttled. Google only allows around 15 simultaneous threads at a time. Try dropping your upload threads to 5 and keeping your download threads where they are. That warning will probably go away. Ultimately, though, even temporary network hiccups can occasionally cause those warnings. So it might also be nothing. It's only something to worry about if it happens regularly and frequently.
  27. 1 point
    I'm truly sorry, as it clearly can be done. I won't delete the previous posts, but I will strike through everything that's incorrect so as not to confuse anyone.
  28. 1 point
    jellis413

    10gb speeds using ssd cache?

    I just ran into this with a pair of Aquantia 10G NICs that I purchased. It seems to be a different amount that I could copy depending on the SSD that I used. Their support confirmed that after the SSD write cache was filled, it would drop to below gigabit speeds. I set up a RAM drive and passed it through as an SSD to the SSD Optimizer, and speeds consistently stay where they should be and don't drop off like I was experiencing. https://www.softperfect.com/products/ramdisk/ is the product I used; I had to make sure I selected the option for Hard Disk Emulation.
  29. 1 point
    You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule. If you like, you can even use the "running file" that DrivePool optionally creates while balancing to trigger it. Check my post history.
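    One way such a marker file could be wired into a scheduled task, as a hedged sketch (the marker path is hypothetical, and whether you skip or trigger on it is up to your setup):

        import os, subprocess

        BALANCING_MARKER = r"C:\DrivePool\balancing.running"   # hypothetical location of the running file

        if os.path.exists(BALANCING_MARKER):
            print("DrivePool is balancing; skipping SnapRAID for now.")
        else:
            subprocess.run(["snapraid", "sync"], check=True)    # then scrub / e-mail results as desired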
  30. 1 point
    This information is pulled from Windows' Performance counters. So it may not have been working properly temporarily. Worst case, you can reset them: http://wiki.covecube.com/StableBit_DrivePool_Q2150495
  31. 1 point
    Based on the feedback from the community, here is how to get your ESXi host to pass on SMART data for Scanner in your guest VMs. All you need to do is the following, and then any disk (not USB) you plug in thereafter will be available for RDM:
        1. In the ESXi Console window, highlight your server
        2. Go to the Configuration tab
        3. Under Software, click Advanced Settings
        4. Click RdmFilter
        5. Uncheck the box for RdmFilter.HbaIsShared
        6. Click OK
    Then use the Advanced Settings for StableBit Scanner to enable "UnsafeDirectIo" to get the SMART data from the virtual controller: http://wiki.covecube.com/StableBit_Scanner_Advanced_Settings Make sure that "UnsafeDirectIo" is set to "True", and reboot. *Note: UnsafeDirectIo is "unsafe" for a reason. It is possible that it can cause issues or glitches, or in extremely rare conditions BSODs. In the large majority of cases these issues don't occur, but it is a possibility, so definitely "at your own risk". Original Post: Hi guys, I have a Dell Precision T3500 Workstation that is running VMware ESXi 5.1.0. On this host I have created 2 virtual machines, both Windows 2012 Server Standard. One of these is running StableBit Scanner v. 2.4.0.2928. My problem is that it does not show SMART status, temperatures or anything for any of my drives. This is all the data I get (see picture). Is there something I need to install on my ESXi host, or is this just not possible on my setup because I use VMware? This is what I have on the host server: Thank you in advance.
  32. 1 point
    I'm not sure what you mean here. There is the read striping feature, which may boost read speeds for you. Aside from that, there are the file placement rules, which you could use to lock certain files or folders to the SSDs to get better read speeds.
  33. 1 point
    Christopher (Drashna)

    Surface scan and SSD

    Saiyan, No. The surface scan is read only. The only time we write is if we are able to recover files, after you've told it to. The same thing goes with the file system check. We don't alter any of the data on the drives without your explicit permission. And to clarify, we don't really identify if it's a SSD or HDD. We just identify the drive (using Windows APIs). How we handle the drive doesn't change between SSD or HDD. And in fact, because of what Scanner does, it doesn't matter what kind of drive it is because we are "hands off" with your drives. Grabbing the information about the drives and running the scans are all "read only" and doesn't modify anything on the drives. The only time we write to the drives is when you explicitly allow it (repair unreadable data, or fix the file system). And because we use built in tools/API when we do this, Windows should handle any "SSD" specific functionality/features. I just wanted to make this clarification, because you seem to be very hesitant about Scanner and SSDs. But basically Scanner itself doesn't care if the drive is a SSD or not, because nothing we do should ever adversely affect your SSD. Data integrity is our top priority, and we try to go out of our way to preserve your data.
  34. 1 point
    Alex

    Surface scan and SSD

    Hi Saiyan, I'm the developer. The Scanner never writes to SSDs while performing a surface scan and therefore does not in any way impact the lifespan of the SSD. However, SSDs do benefit from full disk surface scans, just like spinning hard drives, in that the surface scan will bring the drive's attention to any latent sectors that may become unreadable in the future. The Scanner's disk surface scan will force your SSD to remap the damaged sectors before the data becomes unreadable. In short, there is no negative side effect to running the Scanner on SSDs, but there is a positive one. Please let me know if you need more information.
  35. 0 points
    Umfriend

    10gb speeds using ssd cache?

    I doubt StableBit would want to go the RamCache route because of the risk of any system failure causing the loss of (more) data (compared to an SSD cache or normal storage). I don't, but I know there are people here that successfully use the SSD cache. And it really depends on what SSD you are using. If it is a SATA SSD then you would not expect the 10G to be saturated. In any case, @TeleFragger (OP) does use duplication, so he/you will need two SSDs for this to work.
