Christopher (Drashna)

Reputation Activity

  1. Like
    Christopher (Drashna) got a reaction from Shane in Change allocation unit size of all drives in DrivePool: Best way?   
    LOL.  I just recently did this on my own pool, actually. 
     
     
    There isn't a good way. You need to format the drives to do this (or use a 3rd party tool, and hope it doesn't corrupt your disk).
     
    The simplest way is to use the balancing system.  Use the "Disk Space Limiter" balancer to clear out one or more disks at a time. Once it's done that (it may take a few hours or longer), remove the disk from the pool and reformat it. Re-add it and repeat until you've cycled out ALL of the disks.
     
     
    Specifically, the reason I mention the Disk Space Limiter balancer is that it runs in the background, doesn't set the pool to read-only, and is mostly automated.
  2. Thanks
    Christopher (Drashna) got a reaction from TPham in Cloud Providers   
    Well, Alex has covered the development side of this; I've been looking into the pricing of the different providers.

    While "unlimited" is clearly the best option for many here, I want to focus on the "big" providers (Amazon S3, Azure Storage, and Google Cloud Storage). Unfortunately, for many users, the high cost associated with these providers may immediately put them out of range. But we still think it is a good idea to compare the pricing (at least for reference's sake).

    All three of these providers charge for storage (how much data you're storing), for "requests" (how many API requests you make), and for data transfer (how much data you've transferred to and from the provider). All prices are listed for the US Standard region at the moment. I've tried to reorder the lists so that each provider is shown using the same layout for its different tiers.

    Amazon S3

    Storage Pricing (Amount stored)

                                 Reduced Redundancy    Standard          Glacier
    Up to     1TB / Month        $0.0240 per GB        $0.0300 per GB    $0.0100 per GB
    Up to   50TB / Month         $0.0236 per GB        $0.0295 per GB    $0.0100 per GB
    Up to 500TB / Month          $0.0232 per GB        $0.0290 per GB    $0.0100 per GB
    Up to     1PB / Month        $0.0228 per GB        $0.0285 per GB    $0.0100 per GB
    Up to     4PB / Month        $0.0224 per GB        $0.0280 per GB    $0.0100 per GB
    Over      5PB / Month        $0.0220 per GB        $0.0275 per GB    $0.0100 per GB

    Specifically, Amazon lists "for the next" pricing, so the pricing may be cumulative. Also, "Reduced Redundancy" means that they're mostly using only servers local to you, rather than replicating throughout various regions.

    That works out to ~$25 per TB per month of storage for Reduced Redundancy, about $30 per TB per month for Standard, and $10.24 per TB per month for Glacier.

    This may seem like a deal, but let's look at the data transfer pricing.

    Transfer Pricing

    Data Transfer IN to S3 (upload)                $0.000 per GB

    Data Transfer OUT to the internet (download):
    First       1GB / Month                        $0.000 per GB
    Up to    10TB / Month                          $0.090 per GB
    "Next"   40TB / Month (50TB)                   $0.085 per GB
    "Next" 100TB / Month (150TB)                   $0.070 per GB
    "Next" 350TB / Month (500TB)                   $0.050 per GB
    "Next" 524TB / Month (1024TB)                  Contact Amazon S3 for special consideration

    That's $92 per TB per month, up to 10TB.
    Chances are, unless you have a very fast connection, that's the tier you're going to be "stuck" at.
     
    So, that boils down to $115/month to store and access 1TB per month. Your usage may vary, but this may get very expensive, very quickly (fortunately, upload is free, so getting the storage there isn't that expensive; it's getting it back that will be).
     
    Additionally, Amazon S3 charges you per transaction (API call), as well.
     
    Request Pricing (API Calls)
    PUT, COPY, POST, LIST Requests            $0.005 per 1000 requests
    Glacier Archive and Restore Requests        $0.050 per 1000 requests
    DELETE Requests                                         Free (caveat for Glacier)
    GET and other requests                              $0.004 per 10,000 requests
    Glacier Data Restores                                  Free
                       (due to infrequent usage expected, can restore up to 5% monthly for free)
     
    Needless to say, every time you list contents, you may be making multiple requests (we minimize this as much as possible with the caching/prefetching options, but that only limits it to a degree).  This one is hard to quantify without actual usage.
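    To make the arithmetic above concrete, here's a minimal sketch in Python, assuming the first-tier S3 prices quoted above (which may well be out of date); the same formula applies to every provider below, just with different rates.

```python
GB_PER_TB = 1024

def monthly_cost(stored_tb, downloaded_tb, requests_made,
                 storage_per_gb=0.0240,      # S3 Reduced Redundancy, first tier
                 transfer_out_per_gb=0.090,  # S3 download, up-to-10TB tier
                 put_per_1000=0.005):        # PUT/COPY/POST/LIST request pricing
    storage = stored_tb * GB_PER_TB * storage_per_gb            # ~$24.58 per TB
    transfer = downloaded_tb * GB_PER_TB * transfer_out_per_gb  # ~$92.16 per TB
    api = (requests_made / 1000) * put_per_1000
    return storage + transfer + api

# Storing 1TB and downloading it once, with 100,000 PUT requests:
# prints 117.24, in line with the ~$115/month figure above.
print(round(monthly_cost(1, 1, 100_000), 2))
```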
     
     
    Microsoft Azure Storage

    Storage Pricing (Amount stored) for Block Blob

                                     LRS               ZRS               GRS               RA-GRS
    First          1TB / Month       $0.0240 per GB    $0.0300 per GB    $0.0480 per GB    $0.0610 per GB
    "Next"     49TB / Month (50TB)   $0.0236 per GB    $0.0295 per GB    $0.0472 per GB    $0.0599 per GB
    "Next"   450TB / Month (500TB)   $0.0232 per GB    $0.0290 per GB    $0.0464 per GB    $0.0589 per GB
    "Next"   500TB / Month (1000TB)  $0.0228 per GB    $0.0285 per GB    $0.0456 per GB    $0.0579 per GB
    "Next" 4000TB / Month (5000TB)   $0.0224 per GB    $0.0280 per GB    $0.0448 per GB    $0.0569 per GB
    Over   5000TB / Month            Contact Microsoft Azure for special consideration

    The LRS and ZRS tiers are priced identically to Amazon S3 here. However, let's explain these terms:
    LRS: Multiple copies of the data on different physical servers, at the same datacenter (one location).
    ZRS: Three copies at different datacenters within a region, or in different regions. For "blob storage only".
    GRS: Same as LRS, but with multiple (asynchronous) copies at another datacenter.
    RA-GRS: Same as GRS, but with read access to the secondary datacenter.

    That works out to ~$25 per TB per month of storage for LRS, about $30 per TB per month for ZRS, about $50 per TB per month for GRS, and about $60 per TB per month for RA-GRS.

    Microsoft Azure offers other storage types, but they get much more expensive, very quickly (double what's listed for Blob storage, or higher).

    Transfer Pricing

    Unfortunately, Microsoft isn't as forthcoming about their transfer rates. They tuck them away on another page, so they're harder to find. However:

    Data Transfer IN (upload)                      $0.000 per GB

    Data Transfer OUT to the internet (download):
    First       5GB / Month                        $0.000 per GB
    Up to    10TB / Month                          $0.087 per GB
    "Next"   40TB / Month (50TB)                   $0.083 per GB
    "Next" 100TB / Month (150TB)                   $0.070 per GB
    "Next" 350TB / Month (500TB)                   $0.050 per GB
    "Next" 524TB / Month (1024TB)                  Contact Microsoft Azure for special consideration

    That's $89 per TB per month, up to 10TB.
    Chances are, unless you have a very fast connection, that's the tier you're going to be "stuck" at.
     
    This is slightly cheaper than Amazon S3, but not by a whole lot, and it heavily depends on the level of redundancy and storage type you use. 
     
    Request Pricing (API Calls)
    Any Request                         $0.0036 per 10000 requests
    Import/Export (HDDs)           $80 per drive; may not be suitable for CloudDrive
     
     
    This is definitely much cheaper than Amazon S3's request pricing.
    It's still going to run you around $100 per TB per month to store and transfer, but it's a bit better than Amazon S3. And that's not counting the request transaction pricing.
     
    Google Cloud Storage

    Storage Pricing (Amount stored)

    DRA Storage          Standard             Cloud Storage Nearline
    $0.0200 per GB       $0.0260 per GB       $0.0100 per GB

    DRA (Durability Reduced Availability) means that the data is not always available. While this is the cheapest option, it will definitely cause latency issues (or worse).

    Cloud Storage Nearline is a step cheaper than Standard, at reduced performance and availability.

    However, these are flat rates, so it's very simple to figure out what your cost will be here.

    That works out to ~$20.48 per TB per month of storage for DRA Storage, $26.63 per TB per month for Standard, and $10.24 per TB per month for Cloud Storage Nearline.

    Now let's look at the transfer pricing.

    Transfer Pricing

    Data Transfer IN to Google (upload)            $0.000 per GB

    Data Transfer OUT to the internet (download):
    First     1GB / Month                          $0.120 per GB
    "Next"   10TB / Month                          $0.110 per GB
    Over     40TB / Month (50TB)                   $0.080 per GB

    That's about $122 per TB per month, up to 10TB.
     
     
    So, that boils down to $140/month to store and access 1TB per month. This is definitely more expensive than either Amazon S3 or Azure Storage.
     
    Additionally, Google Cloud Storage does charge you per API call, as well.
     
    Request Pricing (API Calls)
    LIST, PUT, COPY, POST Requests       $0.010 per 1000 requests
    GET and other Requests                       $0.001 per 1000 requests
    DELETE Requests                                  Free
     
    Google is definitely significantly more expensive when it comes to API calls. 
     
     
     
    Backblaze B2

    Storage Pricing (Amount stored)

    Flat Storage Rate    $0.005 per GB

    The first 10GB is free, but that's a small amount, so we won't even bother computing it (it's a $0.05 difference, specifically). That works out to basically $5.12 per TB per month for storage.

    Transfer Pricing

    Data Transfer IN to B2 (upload)                $0.000 per GB

    Data Transfer OUT to the internet (download):
    First 1GB / Month                              $0.000 per GB
    Past  1GB / Month                              $0.050 per GB

    That's $51 per TB per month transferred. This is, by far, the cheapest option here.
    And chances are, unless you have a very fast connection, that's the tier you're going to be "stuck" at.
     
    So, that boils down to $56/month to store and access 1TB per month. Your usage may vary, but this is by far the most affordable of the providers here (and again, upload is free, so getting the storage there isn't the expensive part; it's getting it back that costs).
     
    Additionally, Backblaze B2 charges you per transaction (API call), as well.
     
    Request Pricing (API Calls)
    DELETE bucket/file version, HIDE, UPLOAD Requests            Free
    GET, DOWNLOAD file by ID/Name                                          $0.004 per 10,000 requests
    Authorize, CREATE, GET, LIST, UPDATE Requests               $0.004 per 1000 requests
     
    The first 2,500 requests are free each day, which is different from the other providers. However, as above, it's hard to predict the cost without actual usage data.
     
     

    Is there a clear winner here? No. It depends on your available bandwidth, the amount of data and traffic, and how you want to use the provider.
     
    Well, in regard to pricing, Backblaze is clearly the winner here. But given other issues with Backblaze (e.g., sourcing, reporting statistically insignificant findings, etc.), the question is "Will they be able to maintain their B2 business?" And that is a significant one. Only time will tell. 
  3. Like
    Christopher (Drashna) got a reaction from otravers in Clouddrive continues to attempt uploading when at gdrive upload limit   
    Just a heads up: yes, it does continue to attempt to upload, but we use an exponential backoff when the software gets throttling responses like this. 
    However, a daily limit or scheduler is something that has been requested and is on our "to do"/feature request list.  I just don't have an ETA for when it would be considered or implemented.
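    For the curious, exponential backoff in general looks something like the sketch below (an illustration of the technique, not StableBit's actual code; ThrottledError and upload_chunk are hypothetical stand-ins):

```python
import random
import time

class ThrottledError(Exception):
    """Raised by the (hypothetical) upload call on a rate-limit response."""

def upload_with_backoff(upload_chunk, max_retries=8, base_delay=1.0, cap=300.0):
    for attempt in range(max_retries):
        try:
            return upload_chunk()
        except ThrottledError:
            # Double the wait each time (1s, 2s, 4s, ...), capped, with
            # jitter so parallel uploads don't all retry in lockstep.
            delay = min(cap, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay / 2))
    raise RuntimeError("still throttled after maximum retries")
```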
  4. Like
    Christopher (Drashna) got a reaction from Shane in Drivepool - Drive size mismatch (growing and shrinking) - Size and Size on disk   
    Deduplication, ntfs compression, and "slack space" can all account/contribute to this. 
  5. Like
    Christopher (Drashna) got a reaction from roirraWedorehT in Cloud Drive Using up my Local Storage   
    The local cache uses "sparse files".  These files can take up a very large amount of space, or almost none at all, and they'll still report the same size.
    This is part of how we keep track of what chunks are used where, etc.
     
    And if you right click on the file/folder, and check properties, you'll notice that it will report the size and size on disk, and these should be very different values. In fact, the "size on disk" should be closer to the local cache size.
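    If you'd rather check this programmatically than through the Properties dialog, here's a minimal Windows-only sketch using the Win32 GetCompressedFileSizeW call (the cache file path is a hypothetical example; error handling omitted for brevity):

```python
import ctypes
import os

kernel32 = ctypes.windll.kernel32
kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong

def size_on_disk(path):
    # Returns the space actually allocated on disk, which for sparse
    # (or compressed) files can be far below the logical file size.
    high = ctypes.c_ulong(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    return (high.value << 32) + low

path = r"C:\ExampleCache\chunk-000001.dat"     # hypothetical cache file
print("size:        ", os.path.getsize(path))  # logical size
print("size on disk:", size_on_disk(path))     # allocated size
```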
     
    However, the cache size isn't a hard limit for what can be stored on the disk.  We can grow past it if the cache fills up completely.
     
    And if the cache is 'overfilled', as it uploads the data, it will reduce the size down until it hits the specified cache size. 
     
     
     
    As for the deadlock issue, we are looking into this issue, as it is definitely a serious one. However, it is a very complicated issue, so we don't have a quick fix (we want to fix it properly, rather than half-assing it).
  6. Like
    Christopher (Drashna) got a reaction from Doug in Measuring the entire pool at every boot?   
    Wow, nice digging! 
    And sorry for not getting back here sooner! 
    Also, for the permissions, this should work too:
    http://wiki.covecube.com/StableBit_DrivePool_Q5510455
  7. Like
    Christopher (Drashna) got a reaction from Brig in Drives need to be assigned a letter to be seen.   
    Yup. This is an issue we see from time to time, and it's a known issue with Windows (not our software). 
    Specifically, the issue is that sometimes Windows will not mount a volume if it doesn't have a letter (or path) assigned to it. 
    Fortunately, this is a dead simple fix:
    https://wiki.covecube.com/StableBit_DrivePool_F3540
  8. Like
    Christopher (Drashna) got a reaction from roirraWedorehT in Why isn't XYZ Provider available?   
    Not every Cloud Provider is going to be listed. And that's okay.

    For the public beta, we focused on the most prolific and popular cloud storage providers.

    If you don't see a specific provider in StableBit CloudDrive, let us know and we'll look into it.
    If you can provide a link to the SDK/API, it would be helpful, but it's not necessary.
     
    Just because you see it listed here does not mean we will add the provider. Whether we add it or not depends on a number of factors, including development time, stability, reliability, and functionality.

    Providers already requested:
    Mega https://stablebit.com/Admin/IssueAnalysis/15659 SharePoint https://stablebit.com/Admin/IssueAnalysis/16678 WebDAV IceDrive OwnCloud (SabreDAV) https://stablebit.com/Admin/IssueAnalysis/16679 OpenStack Swift https://stablebit.com/Admin/IssueAnalysis/17692 OpenDrive https://stablebit.com/Admin/IssueAnalysis/17732 Added, but free tier not suitable for use with StableBit CloudDrive  Yandex.Disk https://stablebit.com/Admin/IssueAnalysis/20833 EMC Atmos https://stablebit.com/Admin/IssueAnalysis/25926 Strato HiDrive https://stablebit.com/Admin/IssueAnalysis/25959 Citrix ShareFile https://stablebit.com/Admin/IssueAnalysis/27082 Email support (IMAP, maybe Pop) https://stablebit.com/Admin/IssueAnalysis/27124 May not be reliable , as this heavily depends on the amount of space that the provider allows. And some providers may prune messages that are too old, go over the quota, etc. JottaCloud https://stablebit.com/Admin/IssueAnalysis/27327 May not be usable, as there is no publicly documented API. FileRun https://stablebit.com/Admin/IssueAnalysis/27383 FileRun is a Self Hosted, PHP based cloud storage solution.  Free version limited to 3 users, enterprise/business licensing for more users/features available.    JSON based API.  SpiderOak https://stablebit.com/Admin/IssueAnalysis/27532 JSON based API Privacy minded pCloud https://stablebit.com/Admin/IssueAnalysis/27939 JSON based API OVH Cloud https://stablebit.com/Admin/IssueAnalysis/28204 StorJ https://stablebit.com/Admin/IssueAnalysis/28364 Providers like tardigrade.io appear to use this library/API iDrive Amazon S3 Compatible API. No need for separate provider. ASUS WebStorage https://stablebit.com/Admin/IssueAnalysis/28407 Documentation ... is difficult to read, making it hard to tell if support is possible.  Apple iCloud https://stablebit.com/Admin/IssueAnalysis/28548 uptobox https://stablebit.com/Admin/IssueAnalysis/28633 sCloud https://stablebit.com/Admin/IssueAnalysis/28650 Providers that will not be added:
    Degoo No publicly accessible API. Without an API that support reading and writing files to the provider, there is no possibility of adding support for this provider.  Sync.com No publicly accessible API Without an API that support reading and writing files to the provider, there is no possibility of adding support for this provider.  Amazon Glacier https://stablebit.com/Admin/IssueAnalysis/16676 There is a 4+ hour wait time to access uploaded data. This means Amazon Glacier completely unusable to us. We couldn't even perform upload verification on content due to this limitation.  HubiC https://stablebit.com/Admin/IssueAnalysis/16677 It's OpenStack - No need for a separate provider. CrashPlan? https://stablebit.com/Admin/IssueAnalysis/15664 The API provided appears to be mostly for monitoring and maintenance. No upload/download calls, so not suitable for StableBit CloudDrive, unfortunately.  MediaFire Not suitable due to stability issues. Thunderdrive No publicly accessible API. LiveDrive No Publicly accessible API. Zoolz No Publicly accessible (developer) API Proton Drive No Publicly accessible API
  9. Like
    Christopher (Drashna) got a reaction from Shane in how to force that a folder content is always copied on only one physical drive, at a predifined depth of folder hierarchy ?   
    You might be able to use the file placement rules to do this.  E.g. "/*/*/*".  However, I'm not sure that this will work.
  10. Like
    Christopher (Drashna) got a reaction from zeoce in Using system drive as local cache vs non system drive   
    Yes.  But YMMV. 
    Mostly, having the cache on a different drive means that you're separating out the I/O load, which can definitely improve performance.   Though, this depends on the SSDs you're using.  Things like AHCI vs NVMe, and the IOPS rating of the drive make more of a difference. 
  11. Like
    Christopher (Drashna) got a reaction from Chris Downs in Microsoft Storage Space Drives not detecting   
    This is a topic that comes up from time to time.  
    Yes, it is possible to display the SMART data from the underlying drives in Storage Spaces.  
    However, displaying those drives in a meaningful way in the UI, and maintaining the surface and file system scans at the same time, is NOT simple.  At best, it will require a drastic change, if not an outright rewrite, of the UI.  And that's not a small undertaking. 
    So, can we? Yes.  But do we have the resources to do so? Not as much (we are a very small company).
  12. Like
    Christopher (Drashna) got a reaction from KlausTheFish in Microsoft Storage Space Drives not detecting   
    This is a topic that comes up from time to time.  
    Yes, it is possible to display the SMART data from the underlying drives in Storage Spaces.  
    However, displaying those drives in a meaningful way in the UI, and maintaining the surface and file system scans at the same time, is NOT simple.  At best, it will require a drastic change, if not an outright rewrite, of the UI.  And that's not a small undertaking. 
    So, can we? Yes.  But do we have the resources to do so? Not as much (we are a very small company).
  13. Thanks
    Christopher (Drashna) got a reaction from TPham in NTFS Permissions and DrivePool   
    (also: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 )
  14. Like
    Christopher (Drashna) got a reaction from lemkeant in Duplication Warnings   
    It means the pool drive.  And yeah... how Windows handles disk/partition/volume stuff is confusing... at best. 
    For this ... take ownership of the folder, change its permissions, and delete it (on the pool). 
    Then resolve the issue in the UI.  That should fix it, and it shouldn't come back. 
  15. Thanks
    Christopher (Drashna) got a reaction from AK96SS in SSD Optimizer and v2.3.0.1144 Beta   
    Awesome, glad to hear that! 
    And yeah, AV can be annoying sometimes, but the fact that it's been that long since you've had an issue is a good thing (or a horrible, horrible thing).
  16. Thanks
    Christopher (Drashna) got a reaction from sonicdevo in Windows Defender and CloudDrive   
    This is a false positive, and happens from time to time.   And the specific "match" is the predictive engine, which is more prone to false positives. 
    You can safely ignore this. 
  17. Like
    Christopher (Drashna) got a reaction from Shane in NTFS Permissions and DrivePool   
    (also: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 )
  18. Like
    Christopher (Drashna) got a reaction from Shane in Permissions Confusion?   
    Also, there is this: 
    https://wiki.covecube.com/StableBit_DrivePool_Q5510455
  19. Like
    Christopher (Drashna) got a reaction from Shane in Removing drive from pool   
    I've also been bad about checking the forums. It can get overwhelming, and more difficult to do. 
    But that's my resolution this year: to make a big effort to keep up with the forum. 
  20. Like
    Christopher (Drashna) got a reaction from TPham in Removing drive from pool   
    I've also been bad about checking the forums. It can get overwhelming, and more difficult to do. 
    But that's my resolution this year: to make a big effort to keep up with the forum. 
  21. Like
    Christopher (Drashna) got a reaction from TPham in Another Cannot Remove Disk issue   
    This can happen if the PoolPart folder itself is damaged/corrupted.  Part of the removal process is marking the drive as "removed".  But it also has to be able to read from the folder.  File system corruption can stop both. 
    While it's removing, yes, it should. Afterwards, no.  But you can use the Drive Usage Limiter to prevent files from being placed on the drive, and to move files off of the drive.
    Even better is "dp-cmd"'s "ignore-poolpart" command.  This immediately ejects the drive from the pool.  It doesn't move the drive's contents; it only marks the disk as "removed" in the driver.  The drive will show up as "missing" in the UI, and then you can remove it from the UI. 
    And from there, you can absolutely move the contents of the "poolpart" folder on that drive back into the pool. Ideally, use the "skip existing files" option to speed things up, as in the sketch below. 
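    As a rough illustration of that last step, a "skip existing files" copy from the old PoolPart folder back into the pool might look like this sketch (the paths are hypothetical placeholders; a file manager's "skip" option or robocopy works just as well):

```python
import shutil
from pathlib import Path

SRC = Path(r"D:\PoolPart.00000000-0000-0000-0000-000000000000")  # hypothetical
DST = Path("P:/")  # hypothetical pool drive letter

for src_file in SRC.rglob("*"):
    if src_file.is_dir():
        continue
    dst_file = DST / src_file.relative_to(SRC)
    if dst_file.exists():
        continue  # skip existing files: the pool already has this one
    dst_file.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, dst_file)
```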
     
     
  22. Like
    Christopher (Drashna) got a reaction from BIFFTAZ in Drive temperature not updating.   
    Setting the SMART queries to be throttled like that (720 minutes) means that the temperature (a SMART value) is only going to be updated every 12 hours.  
     
    If you uncheck the throttling option here, you'll see it updated much more rapidly. Setting it to something like "60 minutes" (1 hour) will update less often than that, but far more often than every 12 hours, and should give you a more current reading. 
  23. Like
    Christopher (Drashna) got a reaction from gtaus in Cannot write to pool "Catastrophic Failure (Error 0x8000FFFF)", cannot remove disk from pool "The Media is Write Protected"   
    Yuuuuup.  This happens from time to time (I've seen it 4-5 times in the last 10 years, including on both my systems and other people's, so it's exceptionally rare). 
    I'm glad you were able to figure this out, and posted the solution!  Hopefully, no more weird stuff for you!
  24. Sad
    Christopher (Drashna) got a reaction from Jeff in WSL 2 support   
    Unfortunately, we don't have any plans on adding support for WSL2 at this time. 
  25. Like
    Christopher (Drashna) got a reaction from trader860 in Whats the procedure to migrate DrivePool from one machine to another?   
      • Deactivate the license on the old system
      • Move the drives over to the new system
      • Install the software
      • Activate the license
    That's it.  The software will see the pooled drives and automatically recreate the pool.  Duplication information will be retained, but balancing information won't be.  
     
    You may want to reset the permissions on the pool, but that depends on if you customized them or not.
     
    For StableBit Scanner, just deactivate the license and activate it on the new system.
     
    To do so:
      • StableBit DrivePool 2.X / StableBit CloudDrive: Open the UI on the system that the software is installed on, click on the "Gear" icon in the top right corner, and select the "Manage license" option.
      • StableBit Scanner: Open the UI on the system that Scanner is installed on. Click on "Settings" and select "Scanner Settings". Open the "Licensing" tab, and click on the "Manage license" link.
    This will open a window that shows you the Activation ID, as well as a big button to "Deactivate" the license. Once you've done this, you can activate the license on the new system.

    Otherwise, activate the trial period on the new system, and contact us at https://stablebit.com/contact and let us know. 