
Christopher (Drashna)

Administrators
  • Posts: 11568
  • Joined
  • Last visited
  • Days Won: 366

Reputation Activity

  1. Like
    Christopher (Drashna) got a reaction from klepp0906 in Duplication Warnings / Duplication Disabled   
If you are using symlinks, information about them is stored in the metadata folder, and that folder is set to x3 duplication by default. 
  2. Like
    Christopher (Drashna) got a reaction from klepp0906 in Resetting scanner info?   
You are very welcome!
    Correct.  At least for that system.  Reconnecting it should automatically re-activate, though. 
  3. Like
    Christopher (Drashna) got a reaction from klepp0906 in Temps display out of pool, but not in pool. intended?   
    Thanks for letting us know. 
    I can confirm this behavior, as well, and have flagged it.
    https://stablebit.com/Admin/IssueAnalysis/28645
  4. Like
    Christopher (Drashna) got a reaction from klepp0906 in Few questions about scanner   
    You will need to manually update the location information. 
In StableBit DrivePool, disable the BitLocker detection.  That helps from that end. 
    https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings
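For reference, here's a minimal sketch of what that override looks like, assuming the key name and file location documented on the wiki page above (typically C:\ProgramData\StableBit DrivePool\Service\Settings.json):

    {
      "BitLocker_PoolPartUnlockDetect": {
        "Default": true,
        "Override": false
      }
    }

You'll likely need to restart the StableBit DrivePool service for the change to take effect. 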
    For StableBit Scanner, throttle the SMART queries, as that usually helps. 
    https://stablebit.com/Support/Scanner/2.X/Manual?Section=SMART
    As for the work window, that's defined in the "general" tab.  So if you don't have a window set, then this setting doesn't do you much good. 
Meets or exceeds it.   Also, there is a threshold setting, too.  It will warn you as the temperature approaches that maximum.  (The default is a 15°C window.)
It's one disk per controller.  So if you have multiple controllers, such as onboard plus a controller card, it can scan multiple disks at once. 
It checks the disks to see if any have sections that haven't been scanned in "x" days, with x being the setting for how frequently to scan. 
    There are some other factors, such as disk activity, etc. 
It means that the scanning is done in such a way that any normal activity takes priority over the scans, and may pause the scanning temporarily.   Ideally, this minimizes the performance impact on the drives. 
  5. Confused
    Christopher (Drashna) got a reaction from Jonibhoni in Primocache is limited to 16 disks, what are my options?   
    Oh wow. 
    It may be worth contacting PrimoCache's company to see if they have a recommendation here. 
    That may work, but it would definitely be better to target the underlying disks, as the actual reads occur there.  You'll get better results that way, without a doubt. 
  6. Like
    Christopher (Drashna) got a reaction from klepp0906 in feature requests! save window location, dark mode, tray!   
    Ah, okay, I see what you mean about the UI saving, now.  I'll flag that as a bug, since I can definitely reproduce. 
    https://stablebit.com/Admin/IssueAnalysis/28642
  7. Like
    Christopher (Drashna) got a reaction from zeoce in Is dark mode planned for DrivePool & CloudDrive?   
    Not currently, but I definitely do keep on bringing it up. 
  8. Like
    Christopher (Drashna) got a reaction from igobythisname in How can I use StableBit DrivePool with StableBit CloudDrive?   
Because we've already had a couple of questions about this: in their current forms, StableBit DrivePool already works VERY well with StableBit CloudDrive.
     
The StableBit CloudDrive disks appear as normal, physical disks. This means that you can add them to a Pool without any issues or workarounds.
     
     
Why is this important and how does it affect your pool?
  • You can use the File Placement Rules to control what files end up on which drive.  This means that you can place specific files on a specific CloudDrive disk.
  • You can use the "Disk Usage Limiter" to only allow duplicated data to be placed on specific drives, which means you can place only duplicated files on a specific CloudDrive disk.
     
    These are some very useful tricks to integrate the products, already. 
    And if anyone else finds some neat tips or tricks, we'll add them here as well.
     
  9. Thanks
    Christopher (Drashna) got a reaction from TPham in Organizing Drive Pool properly. Help is needed, please   
    I think you're looking for this:
    https://wiki.covecube.com/StableBit_DrivePool_Q4142489
     
  10. Like
    Christopher (Drashna) reacted to Ryo in Is dark mode planned for DrivePool & CloudDrive?   
I would also like to see dark mode added to all StableBit products.
  11. Like
    Christopher (Drashna) got a reaction from red in Is dark mode planned for DrivePool & CloudDrive?   
    Not currently, but I definitely do keep on bringing it up. 
  12. Thanks
    Christopher (Drashna) got a reaction from danfer in Current development   
I mean, Alex is alive and well.  But yeah, there are some issues that don't directly involve Covecube.  
If Alex wants to explain exactly what's going on, he will.  But stuff is still happening. 
  13. Thanks
    Christopher (Drashna) got a reaction from Bob Headrick in StableBit Scanner not showing SMART Data when there is a SCSI to ATA Translation layer between the drives and the OS   
    You posted on another thread too. 
     
But you may need to enable the "Unsafe" "Direct IO" option to get the SMART data off of these drives, due to the controller. 
    https://wiki.covecube.com/StableBit_Scanner_Advanced_Settings#Advanced_Settings
If that doesn't work, enable the "NoWmi" option for "Smart", as well. 
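As a rough sketch of where those options live (the grouping here is an assumption on my part; verify the exact names on the wiki page above):

    DirectIo -> Unsafe = True      (allows the "unsafe" direct I/O queries to the controller)
    Smart    -> NoWmi  = True      (skips WMI when querying SMART data)

You may need to restart the StableBit Scanner service afterwards so the settings are picked up. 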
  14. Thanks
    Christopher (Drashna) got a reaction from LewJones77 in Does DrivePool work on Windows 11 ?   
We have a number of people using Windows 11 already, but we won't say that it's officially supported until Windows 11 is no longer a beta. This is because the OS is in too much flux until it's RTM'ed, and it takes a lot of work to officially support it before then. 
  15. Like
    Christopher (Drashna) reacted to Edward in Drivepool read only, individual drives fine   
Christopher (at support) came back on my ticket.  He re-enabled my ID and all is now good.  I can write to the DrivePool.
Just in case someone else encounters this problem: use a fresh trial ID to get things going while waiting for support to re-enable the paid ID. 
  16. Thanks
    Christopher (Drashna) got a reaction from chaostheory in Current development   
I mean, Alex is alive and well.  But yeah, there are some issues that don't directly involve Covecube.  
If Alex wants to explain exactly what's going on, he will.  But stuff is still happening. 
  17. Like
    Christopher (Drashna) got a reaction from Haldanetti in Understanding file placement   
    First, thank you for your interest in our product(s)! 
     
     
    The default file placement strategy is to place files on the drive(s) with the most available free space (measured absolutely, rather than based on percentage). This happens regardless of the balancing status.   In fact, it's the balancers themselves that can (will) change the placement strategy of new files. 
     
    For what you want, that isn't ideal.... and before I get to the solution: 
     
     
     
The first issue here is that there is a misconception about how the balancing engine works (or, more specifically, about how frequently and how aggressively it acts).
     
For the most part, the balancing engine DOES NOT move files around.   For a new, empty pool, balancing will rarely, if ever, move files around, partly because it proactively controls where files are placed in the first place. 
     
    That said, each balancer does have exceptions here.  But just so you understand how and why each balancer works and when it would actually move files, let me enumerate each one and give a brief description of them. 
     
StableBit Scanner (the balancer)
This balancer only works if you have StableBit Scanner installed on the same system.  By default, it is only configured to move contents off of a disk if "damaged sectors" (aka "unreadable sectors") are detected during the surface scan.  This is done in an attempt to prevent data loss from file corruption.
Optionally, you can do this for SMART warnings as well, and to avoid usage if the drive has "overheated".  If you're using SnapRAID, then it may be worth turning this balancer off, as it isn't really needed.

Volume Equalization
This only affects drives that are using multiple volumes/partitions on the same physical disk.  It will equalize the usage, and help prevent duplicates from residing on the same physical disk.  Chances are that this balancer will never do anything on your system.

Drive Usage Limiter
This balancer controls what type of data (duplicated or unduplicated) can reside on a disk.  For the most part, most people won't need this.
We recommend using it for drive removal or "special configurations" (eg, my gaming system uses it to store only duplicated data, aka games, on the SSD, and all unduplicated data on the hard drive).  Unless configured manually, this balancer will not move data around.

Prevent Drive Overfill
This balancer specifically will move files around, and will do so only if the drive is 90+% full by default.  This can be configured, based on your needs.  However, it will only move files out of the drive until the drive is 85% full.  This is one of the balancers that is likely to move data, but only on a very full pool.
It can be disabled, but that may lead to situations where the drives are too full.

Duplication Space Optimizer
This balancer's sole job is to rebalance the data in such a way that it removes the "Unusable for duplication" space on the pool.   If you're not using duplication at all, you can absolutely disable this balancer.
     
     
So, for the most part, there is no real reason to disable balancing.  Yes, I understand that it can cause issues for SnapRAID.  But depending on the system, it is very unlikely to.  And the benefits you gain by disabling it may be outweighed by what the balancers do.  
     
    Especially because of the balancer plugins. 
     
     
Specifically, you may want to look at the "Ordered File Placement" balancer plugin.  This specifically fills up one drive at a time.  Once the pool fills up the disk to the preset threshold, it will move on to the next disk. 
     
This may help keep the contents of specific folders together, meaning that it may help keep the SRT file on the same disk as the AVI file, or at least do better at that than the default placement strategy.  This won't guarantee folder placement, but it significantly increases the odds.
     
That said, you can use file placement rules to help with this.  Either to micromanage placement, or ... you can set up an SSD dedicated to metadata like this, so that all of the SRT and other files end up on the SSD.  That way, access is fast and power consumption is minimal. 
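As a hypothetical illustration (the pattern and target disk here are made up; the exact rule syntax is in the manual), such a rule would look roughly like:

    Pattern: \Media\*.srt   ->   allowed disk(s): SSD only

so the subtitle files get pinned to the SSD while everything else follows the normal placement strategy. 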
  18. Like
    Christopher (Drashna) got a reaction from AnthonyMF in Can't copy some files ??? No reason why.   
Well, it may be worth testing on the latest beta versions, as there are some changes that may affect this, due to driver changes. 
    If you're already on the beta or the beta doesn't help, please open a ticket at https://stablebit.com/Contact so we can see if we can help fix this issue. 
  19. Like
    Christopher (Drashna) got a reaction from Yolo_pl in S.M.A.R.T. - 78.4%   
Yeah, it's ... manufacturer fun.   SMART really isn't a standard; it's more of a guideline that a lot of manufacturers take a LOT of liberty with.  NVMe health is a *lot* better in this regard (it's an actual standard). 
  20. Like
    Christopher (Drashna) got a reaction from Yolo_pl in Drivepool and Drive letters - maximum number of drives?   
    Yes, you can do that, too.  
     
    However, I generally prefer and recommend mounting to a folder, for ease of access.
     
It's much easier to run "chkdsk c:\drives\pool1\disk5" than "chkdsk \\?\Volume{GUID}"... and easier to identify.
     
     
     
Yes. The actual pool is handled by a kernel-mode driver, meaning that the activity is passed on directly to the disks, basically. 
     
Meaning, you don't see it listed in Task Manager like a normal program. 
  21. Like
    Christopher (Drashna) got a reaction from Yolo_pl in Drivepool and Drive letters - maximum number of drives?   
    StableBit DrivePool doesn't care about drive letters.  It uses the Volume ID (which Windows mounts to a drive letter or folder path). 
     
    So you can remove the drive letters, if you want, or mount to a folder path. 
     
    http://wiki.covecube.com/StableBit_DrivePool_Q6811286
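For example, using the built-in mountvol tool from an elevated command prompt (the folder path and volume GUID here are illustrative):

    rem Create the folder that will serve as the mount point
    mkdir C:\Drives\Disk1

    rem Run "mountvol" with no arguments first to list the volume GUID paths
    mountvol C:\Drives\Disk1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

    rem Optionally, remove the old drive letter
    mountvol E:\ /D

You can also do the same thing from Disk Management (diskmgmt.msc), via "Change Drive Letter and Paths".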
  22. Like
    Christopher (Drashna) got a reaction from otravers in Clouddrive continues to attempt uploading when at gdrive upload limit   
Just a heads up: yes, it does continue to attempt to upload, but we use an exponential backoff when the software gets throttling responses like this. 
However, a daily limit or scheduler is something that has been requested and is on our "to do"/feature request list.  I just don't have an ETA for when it would be considered or implemented.
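For the curious, a minimal sketch of the backoff idea (illustrative batch pseudologic, not StableBit CloudDrive's actual code):

    @echo off
    rem Illustrative exponential backoff: double the wait after each throttled attempt
    set DELAY=1
    :retry
    rem (upload attempt goes here; assume it sets ERRORLEVEL nonzero when throttled)
    if %ERRORLEVEL%==0 goto done
    timeout /t %DELAY% >nul
    set /a DELAY*=2
    goto retry
    :done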
  23. Like
    Christopher (Drashna) got a reaction from Shane in Change allocation unit size of all drives in DrivePool: Best way?   
    LOL.  I just recently did this on my own pool, actually. 
     
     
    There isn't a good way. You need to format the drives to do this (or use a 3rd party tool, and hope it doesn't corrupt your disk).
     
The simplest way is to use the balancing system.  Use the "Disk Space Limiter" balancer to clear out one or more disks at a time. Once it's done that (it may take a few hours or longer), remove the disk from the pool and reformat it (see the example below). Re-add it and repeat until you've cycled out ALL of the disks.
     
     
Specifically, the reason that I mention the Disk Space Limiter balancer is that it runs in the background, doesn't set the pool to read-only, and is mostly automated.
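For example, once a disk has been emptied and removed from the pool, reformatting it with a larger allocation unit size from an elevated command prompt looks like this (the drive letter and 64K size are illustrative):

    rem Quick-format as NTFS with a 64 KB allocation unit size
    format E: /FS:NTFS /A:64K /Q

Then re-add the freshly formatted disk to the pool and move on to the next one.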
  24. Thanks
    Christopher (Drashna) got a reaction from TPham in Cloud Providers   
Well, Alex has covered the development side of this; I've been looking into the pricing of the different providers.

While "unlimited" is clearly the best option for many here, I want to focus on the "big" providers (Amazon S3, Azure Storage, and Google Cloud Storage).  Unfortunately, for many users, the high cost associated with these providers may immediately put them out of range. But we still think it is a good idea to compare the pricing (at least for reference's sake).

All three of these providers charge for storage (how much data you're storing), "requests" (how many API calls you make), and data transfer (how much data you've transferred to and from the provider). All prices are listed for the US Standard region at the moment. I've tried to reorder the lists so that each provider is shown using the same layout for its different tiers.

Amazon S3

Storage Pricing (Amount stored)
                            Reduced Redundancy    Standard          Glacier
Up to     1TB / Month       $0.0240 per GB        $0.0300 per GB    $0.0100 per GB
Up to    50TB / Month       $0.0236 per GB        $0.0295 per GB    $0.0100 per GB
Up to   500TB / Month       $0.0232 per GB        $0.0290 per GB    $0.0100 per GB
Up to     1PB / Month       $0.0228 per GB        $0.0285 per GB    $0.0100 per GB
Up to     4PB / Month       $0.0224 per GB        $0.0280 per GB    $0.0100 per GB
Over      5PB / Month       $0.0220 per GB        $0.0275 per GB    $0.0100 per GB

Specifically, Amazon lists "for the next" pricing, so the pricing may be cumulative. Also, "Reduced Redundancy" means that they're using mostly only servers local to you, and not redundant throughout various regions.

That works out to ~$25 per TB per month of storage for Reduced Redundancy, about $30 per TB per month for Standard, and $10.24 per TB per month for Glacier.

This may seem like a deal, but let's look at the data transfer pricing.

Transfer Pricing

Data Transfer IN to S3 (upload)                $0.000 per GB
Data Transfer OUT to the internet (download):
First     1GB / Month                          $0.000 per GB
Up to    10TB / Month                          $0.090 per GB
"Next"   40TB / Month (50TB)                   $0.085 per GB
"Next"  100TB / Month (150TB)                  $0.070 per GB
"Next"  350TB / Month (500TB)                  $0.050 per GB
"Next"  524TB / Month (1024TB)                 Contact Amazon S3 for special consideration

That's $92 per TB per month, up to 10TB.
Chances are that unless you have a very fast connection, that's the tier you're going to be "stuck" in. 
     
    So, that boils down to $115/month to store and access 1TB per month. Your usage may vary, but this may get very expensive, very quickly (fortunately, upload is free, so getting the storage there isn't that expensive, it's getting it back that will be).
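As a rough sanity check of that figure (assuming Reduced Redundancy storage and downloading the full terabyte back once during the month):

    Storage:   $0.024 per GB x 1024 GB  =  ~$25
    Download:  $0.090 per GB x 1024 GB  =  ~$92
    Total:                                 ~$117 per month

which lines up with the ~$115 ballpark above.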
     
    Additionally, Amazon S3 charges you per transaction (API call), as well.
     
    Request Pricing (API Calls)
    PUT, COPY, POST, LIST Requests            $0.005 per 1000 requests
    Glacier Archive and Restore Requests        $0.050 per 1000 requests
    DELETE Requests                                         Free (caveat for Glacier)
    GET and other requests                              $0.004 per 10,000 requests
    Glacier Data Restores                                  Free
                       (due to infrequent usage expected, can restore up to 5% monthly for free)
     
Needless to say, every time you list contents, you may be making multiple requests (we minimize this as much as possible with the caching/prefetching options, but that only limits it to a degree).  This one is hard to quantify without actual usage.
     
     
Microsoft Azure Storage

Storage Pricing (Amount stored) for Block Blob
                                  LRS               ZRS               GRS               RA-GRS
First      1TB / Month            $0.0240 per GB    $0.0300 per GB    $0.0480 per GB    $0.0610 per GB
"Next"    49TB / Month (50TB)     $0.0236 per GB    $0.0295 per GB    $0.0472 per GB    $0.0599 per GB
"Next"   450TB / Month (500TB)    $0.0232 per GB    $0.0290 per GB    $0.0464 per GB    $0.0589 per GB
"Next"   500TB / Month (1000TB)   $0.0228 per GB    $0.0285 per GB    $0.0456 per GB    $0.0579 per GB
"Next"  4000TB / Month (5000TB)   $0.0224 per GB    $0.0280 per GB    $0.0448 per GB    $0.0569 per GB
Over    5000TB / Month            Contact Microsoft Azure for special consideration

The LRS and ZRS tiers are priced identically to Amazon S3 here.  However, let's explain these terms:
LRS: Multiple copies of the data on different physical servers, at the same datacenter (one location).
ZRS: Three copies at different datacenters within a region, or in different regions. For "blob storage only".
GRS: Same as LRS, but with additional (asynchronous) copies at another datacenter.
RA-GRS: Same as GRS, but with read access to the secondary datacenter.

That works out to ~$25 per TB per month of storage for LRS, about $30 per TB per month for ZRS, about $50 per TB per month for GRS, and about $60 per TB per month for RA-GRS.

Microsoft Azure offers other storage types, but they get much more expensive, very quickly (double what's listed for Blob storage, or higher).

Transfer Pricing

Unfortunately, Microsoft isn't as forthcoming about their transfer rates. They tuck them away on another page, so they're harder to find.  However, here they are:

Data Transfer IN (upload)                      $0.000 per GB
Data Transfer OUT to the internet (download):
First     5GB / Month                          $0.000 per GB
Up to    10TB / Month                          $0.087 per GB
"Next"   40TB / Month (50TB)                   $0.083 per GB
"Next"  100TB / Month (150TB)                  $0.070 per GB
"Next"  350TB / Month (500TB)                  $0.050 per GB
"Next"  524TB / Month (1024TB)                 Contact Microsoft Azure for special consideration

That's $89 per TB per month, up to 10TB.
Chances are that unless you have a very fast connection, that's the tier you're going to be "stuck" in. 
     
    This is slightly cheaper than Amazon S3, but not by a whole lot, and it heavily depends on the level of redundancy and storage type you use. 
     
    Request Pricing (API Calls)
    Any Request                         $0.0036 per 10000 requests
Import/Export (HDDs)           $80 per drive, may not be suitable for CloudDrive
     
     
    This is definitely much cheaper than Amazon S3's request pricing.
    It's still going to run you around $100 per TB per month to store and transfer, but it's a bit better than Amazon S3. And that's not counting the request transaction pricing.
     
Google Cloud Storage

Storage Pricing (Amount stored)
DRA Storage          Standard            Cloud Storage Nearline
$0.0200 per GB       $0.0260 per GB      $0.0100 per GB

DRA (Durable Reduced Availability) means that the data is not always available. While this is the cheapest, it will definitely cause latency issues (or worse).  Cloud Storage Nearline is a step cheaper still, at reduced performance and lower availability.

However, this is a flat rate, so it's very simple to figure out what your cost will be here.  That works out to ~$20.48 per TB per month of storage for DRA Storage, $26.63 per TB per month for Standard, and $10.24 per TB per month for Cloud Storage Nearline.

Now let's look at the transfer pricing.

Transfer Pricing

Data Transfer IN to Google (upload)            $0.000 per GB
Data Transfer OUT to the internet (download):
First     1GB / Month                          $0.120 per GB
"Next"   10TB / Month                          $0.110 per GB
Over     40TB / Month (50TB)                   $0.080 per GB

That's about $122 per TB per month, up to 10TB.
     
     
    So, that boils down to $140/month to store and access 1TB per month. This is definitely more expensive than either Amazon S3 or Azure Storage.
     
    Additionally, Google Cloud Storage does charge you per API call, as well.
     
    Request Pricing (API Calls)
    LIST, PUT, COPY, POST Requests       $0.010 per 1000 requests
    GET, and others Requests                     $0.001 per 1000 requests
    DELETE Requests                                  Free
     
    Google is definitely significantly more expensive when it comes to API calls. 
     
     
     
Backblaze B2

Storage Pricing (Amount stored)
Flat Storage Rate    $0.005 per GB

The first 10GB is free, but that's a small amount, so we won't even bother computing it (it's a $0.05 difference, specifically).  That's basically $5.12 per TB per month for storage.

Transfer Pricing

Data Transfer IN to B2 (upload)                $0.000 per GB
Data Transfer OUT to the internet (download):
First     1GB / Month                          $0.000 per GB
Past      1GB / Month                          $0.050 per GB

That's $51 per TB per month transferred. This is by far the cheapest option here.
And chances are that unless you have a very fast connection, that's the tier you're going to be "stuck" in. 
     
    So, that boils down to $56/month to store and access 1TB per month. Your usage may vary, but this may get very expensive, very quickly (fortunately, upload is free, so getting the storage there isn't that expensive, it's getting it back that will be).
     
Additionally, Backblaze B2 charges you per transaction (API call), as well.
     
    Request Pricing (API Calls)
    DELETE bucket/file version, HIDE, UPLOAD Requests            Free
    GET, DOWNLOAD file by ID/Name                                          $0.004 per 10,000 requests
    Authorize, CREATE, GET, LIST, UPDATE Requests               $0.004 per 1000 requests
     
The first 2,500 requests are free each day, which is different from the other providers.  However, as above, it's hard to predict the cost without actual usage.
     
     

Is there a clear winner here? No. It varies depending on availability, the amount of data and traffic, and how you want to use the provider.
     
Well, in regards to pricing, Backblaze is clearly the winner here. But given other issues with Backblaze (eg, sourcing, reporting statistically insignificant findings, etc), the question is "Will they be able to maintain their B2 business?" And that is a significant one. Only time will tell. 
  25. Like
    Christopher (Drashna) got a reaction from Shane in Clouddrive continues to attempt uploading when at gdrive upload limit   
Just a heads up: yes, it does continue to attempt to upload, but we use an exponential backoff when the software gets throttling responses like this. 
However, a daily limit or scheduler is something that has been requested and is on our "to do"/feature request list.  I just don't have an ETA for when it would be considered or implemented.