TPham

Reputation Activity

  1. Thanks
    TPham reacted to Christopher (Drashna) in Organizing Drive Pool properly. Help is needed, please   
    I think you're looking for this:
    https://wiki.covecube.com/StableBit_DrivePool_Q4142489
     
  2. Thanks
    TPham reacted to Christopher (Drashna) in Cloud Providers   
    Well, Alex has covered the development side of this; I've been looking into the pricing of the different providers. While "unlimited" is clearly the best option for many here, I want to focus on the "big" providers (Amazon S3, Azure Storage, and Google Cloud Storage). Unfortunately, for many users, the high cost associated with these providers may immediately put them out of range. But we still think it is a good idea to compare the pricing (at least for reference's sake).
     
    All three of these providers charge for storage (how much data you're storing), "requests" (how many API requests you make), and data transfer (how much data you've transferred to and from the service). All prices are listed for the US Standard region, ATM. I've tried to reorder the lists so that each provider is shown using the same layout for their different tiers.
     
    Amazon S3
     
    Storage Pricing (Amount stored)
                                     Reduced Redundancy    Standard          Glacier
    Up to     1 TB / Month           $0.0240 per GB        $0.0300 per GB    $0.0100 per GB
    Up to    50 TB / Month           $0.0236 per GB        $0.0295 per GB    $0.0100 per GB
    Up to   500 TB / Month           $0.0232 per GB        $0.0290 per GB    $0.0100 per GB
    Up to     1 PB / Month           $0.0228 per GB        $0.0285 per GB    $0.0100 per GB
    Up to     4 PB / Month           $0.0224 per GB        $0.0280 per GB    $0.0100 per GB
    Over      5 PB / Month           $0.0220 per GB        $0.0275 per GB    $0.0100 per GB
     
    Specifically, Amazon lists "for the next" pricing, so the pricing may be cumulative across tiers. Also, "Reduced Redundancy" means that they're mostly using only servers local to you, rather than replicating throughout various regions.
     
    That works out to ~$25 per TB per month of storage for Reduced Redundancy, about $30 per TB per month for Standard, and $10.24 per TB per month for Glacier.
     
    This may seem like a deal, but let's look at the data transfer pricing.
     
    Transfer Pricing
    Data Transfer IN to S3 (upload)               $0.000 per GB
    Data Transfer OUT to the internet (download):
    First     1 GB / Month                        $0.000 per GB
    Up to    10 TB / Month                        $0.090 per GB
    "Next"   40 TB / Month (50 TB)                $0.085 per GB
    "Next"  100 TB / Month (150 TB)               $0.070 per GB
    "Next"  350 TB / Month (500 TB)               $0.050 per GB
    "Next"  524 TB / Month (1024 TB)              Contact Amazon S3 for special consideration
     
    That's $92 per TB per month transferred, up to 10 TB.
    Chances are, unless you have a very fast connection, that's the tier you're going to be "stuck" at.
     
    So, that boils down to about $115/month to store and access 1 TB per month. Your usage may vary, but this can get very expensive, very quickly (fortunately, upload is free, so getting the storage there isn't that expensive; it's getting it back that will be).
     
    Additionally, Amazon S3 charges you per transaction (API call), as well.
     
    Request Pricing (API Calls)
    PUT, COPY, POST, LIST Requests            $0.005 per 1000 requests
    Glacier Archive and Restore Requests        $0.050 per 1000 requests
    DELETE Requests                                         Free (caveat for Glacier)
    GET and other requests                              $0.004 per 10,000 requests
    Glacier Data Restores                                  Free
                       (because infrequent usage is expected, you can restore up to 5% of your data monthly for free)
     
    Needless to say, every time you list contents you may be making multiple requests (we minimize this as much as possible with the caching/prefetching options, but that only limits it to a degree). This one is hard to quantify without actual usage.
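     
    To make that arithmetic concrete, here's a minimal sketch in Python. It uses only the rates quoted above; the assumption that the "for the next" tiers are cumulative comes from the table, and the request counts are made-up examples:
     
    # Rough monthly-cost sketch for Amazon S3, using the rates quoted above.
    # Tier boundaries are (ceiling in GB, $ per GB) for "Data Transfer OUT".
    TRANSFER_OUT_TIERS = [
        (1,       0.000),   # first 1 GB free
        (10_240,  0.090),   # up to 10 TB
        (51_200,  0.085),   # "next" 40 TB
        (153_600, 0.070),   # "next" 100 TB
        (512_000, 0.050),   # "next" 350 TB
    ]
    
    def tiered_cost(gb, tiers):
        """Walk the cumulative tiers, charging each slice at its own rate."""
        cost, prev_ceiling = 0.0, 0
        for ceiling, rate in tiers:
            if gb <= prev_ceiling:
                break
            cost += (min(gb, ceiling) - prev_ceiling) * rate
            prev_ceiling = ceiling
        return cost
    
    storage_gb  = 1024                             # 1 TB stored (Reduced Redundancy)
    download_gb = 1024                             # 1 TB downloaded
    puts, gets  = 10_000, 100_000                  # hypothetical request counts
    
    storage  = storage_gb * 0.0240                            # ~$24.58
    transfer = tiered_cost(download_gb, TRANSFER_OUT_TIERS)   # ~$92.07
    requests = puts / 1_000 * 0.005 + gets / 10_000 * 0.004   # ~$0.09
    print(f"~${storage + transfer + requests:.2f}/month")     # ~$116.74, the "~$115" ballpark above
     
    The same shape applies to Azure and Google below; only the rates and tier boundaries change.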
     
     
    Microsoft Azure Storage
     
    Storage Pricing (Amount stored) for Block Blobs
                                        LRS               ZRS               GRS               RA-GRS
    First      1 TB / Month             $0.0240 per GB    $0.0300 per GB    $0.0480 per GB    $0.0610 per GB
    "Next"    49 TB / Month (50 TB)     $0.0236 per GB    $0.0295 per GB    $0.0472 per GB    $0.0599 per GB
    "Next"   450 TB / Month (500 TB)    $0.0232 per GB    $0.0290 per GB    $0.0464 per GB    $0.0589 per GB
    "Next"   500 TB / Month (1000 TB)   $0.0228 per GB    $0.0285 per GB    $0.0456 per GB    $0.0579 per GB
    "Next" 4000 TB / Month (5000 TB)    $0.0224 per GB    $0.0280 per GB    $0.0448 per GB    $0.0569 per GB
    Over   5000 TB / Month              Contact Microsoft Azure for special consideration
     
    The LRS and ZRS tiers are priced identically to Amazon S3 here. However, let's explain these terms:
    LRS: multiple copies of the data on different physical servers, at the same datacenter (one location).
    ZRS: three copies at different datacenters within a region, or in different regions. For "blob storage only".
    GRS: same as LRS, but with multiple (asynchronous) copies at another datacenter.
    RA-GRS: same as GRS, but with read access to the secondary datacenter.
     
    That works out to ~$25 per TB per month of storage for LRS, about $30 per TB per month for ZRS, about $50 per TB per month for GRS, and about $60 per TB per month for RA-GRS.
     
    Microsoft Azure offers other storage types, but they get much more expensive, very quickly (double what's listed for Blob storage, or higher).
     
    Transfer Pricing
     
    Unfortunately, Microsoft isn't as forthcoming about their transfer rates. They tuck them away on another page, so they're harder to find. However, here they are:
     
    Data Transfer IN (upload)                     $0.000 per GB
    Data Transfer OUT to the internet (download):
    First     5 GB / Month                        $0.000 per GB
    Up to    10 TB / Month                        $0.087 per GB
    "Next"   40 TB / Month (50 TB)                $0.083 per GB
    "Next"  100 TB / Month (150 TB)               $0.070 per GB
    "Next"  350 TB / Month (500 TB)               $0.050 per GB
    "Next"  524 TB / Month (1024 TB)              Contact Microsoft Azure for special consideration
     
    That's $89 per TB per month transferred, up to 10 TB.
    Chances are, unless you have a very fast connection, that's the tier you're going to be "stuck" at.
     
    This is slightly cheaper than Amazon S3, but not by a whole lot, and it heavily depends on the level of redundancy and storage type you use. 
     
    Request Pricing (API Calls)
    Any Request                         $0.0036 per 10000 requests
    Import/Export (HDDs)           $80 per drive, which may not be suitable for CloudDrive
     
     
    This is definitely much cheaper than Amazon S3's request pricing.
    It's still going to run you around $100 per TB per month to store and transfer, but it's a bit better than Amazon S3. And that's not counting the request transaction pricing.
     
    Google Cloud Storage
     
    Storage Pricing (Amount stored)
    DRA Storage          Standard            Cloud Storage Nearline
    $0.0200 per GB       $0.0260 per GB      $0.0100 per GB
     
    DRA (Durability Reduced Availability) means that the data is not always available. While this is the cheapest, it will definitely cause latency issues (or worse). Cloud Storage Nearline is a step cheaper still, with reduced performance and less availability. However, this is a flat rate, so it's very simple to figure out what your cost will be here.
     
    That works out to ~$20.48 per TB per month of storage for DRA Storage, $26.63 per TB per month for Standard, and $10.24 per TB per month for Cloud Storage Nearline.
     
    Now let's look at the transfer pricing.
     
    Transfer Pricing
    Data Transfer IN to Google (upload)           $0.000 per GB
    Data Transfer OUT to the internet (download):
    First     1 GB / Month                        $0.120 per GB
    "Next"   10 TB / Month                        $0.110 per GB
    Over     40 TB / Month (50 TB)                $0.080 per GB
     
    That's about $122 per TB per month transferred, up to 10 TB.
     
     
    So, that boils down to $140/month to store and access 1TB per month. This is definitely more expensive than either Amazon S3 or Azure Storage.
     
    Additionally, Google Cloud Storage does charge you per API call, as well.
     
    Request Pricing (API Calls)
    LIST, PUT, COPY, POST Requests       $0.010 per 1000 requests
    GET, and others Requests                     $0.001 per 1000 requests
    DELETE Requests                                  Free
     
    Google is definitely significantly more expensive when it comes to API calls. 
     
     
     
    Backblaze B2
     
    Storage Pricing (Amount stored)
    Flat Storage Rate    $0.005 per GB
     
    The first 10 GB is free, but that's a small amount, so we won't even bother computing it (it's a $0.05 difference, specifically). That's basically $5.12 per TB per month for storage.
     
    Transfer Pricing
    Data Transfer IN to B2 (upload)               $0.000 per GB
    Data Transfer OUT to the internet (download):
    First 1 GB / Month                            $0.000 per GB
    Past  1 GB / Month                            $0.050 per GB
     
    That's $51 per TB per month transferred. This is, by far, the cheapest option here.
    And chances are, unless you have a very fast connection, that's where you're going to be "stuck" at.
     
    So, that boils down to about $56/month to store and access 1 TB per month. Your usage may vary, but as with the others, upload is free, so getting the storage there costs nothing; it's getting it back that you pay for.
     
    Additionally, Backblaze B2 charges you per transaction (API call), as well.
     
    Request Pricing (API Calls)
    DELETE bucket/file version, HIDE, UPLOAD Requests            Free
    GET, DOWNLOAD file by ID/Name                                          $0.004 per 10,000 requests
    Authorize, CREATE, GET, LIST, UPDATE Requests               $0.004 per 1000 requests
     
    The first 2,500 requests each day are free, which is different from the other providers. However, as above, it's hard to predict the cost without actual usage.
     
     

    Is there a clear winner here? No. It varies with the amount of data and traffic you have, and with how you want to use the provider.
     
    Well, in regards to pricing, Backblaze is clearly the winner here. But given other issues with Backblaze (e.g., sourcing, reporting statistically insignificant findings, etc.), the question is "Will they be able to maintain their B2 business?" And that is a significant one. Only time will tell.
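     
    For a quick side-by-side, here's a rough sketch that approximately reproduces the "boils down to" figures for storing and retrieving 1 TB per month. It uses only the per-TB numbers quoted in the sections above and ignores request pricing and tier effects, so treat it as an estimate, not a quote:
     
    # Approximate $/month to store 1 TB and then download it once,
    # using the per-TB figures quoted above (request pricing ignored).
    providers = {
        #               (storage $/TB/mo, transfer-out $/TB)
        "Amazon S3":    (25.00, 92.0),   # Reduced Redundancy + first paid transfer tier
        "Azure (LRS)":  (25.00, 89.0),
        "Google (DRA)": (20.48, 122.0),
        "Backblaze B2": (5.12,  51.0),
    }
    
    for name, (store, xfer) in sorted(providers.items(), key=lambda kv: sum(kv[1])):
        print(f"{name:13s} ~${store + xfer:6.2f}/month")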
  3. Thanks
    TPham reacted to Alex in Cloud Providers   
    As I was writing the various providers for StableBit CloudDrive, I got a sense of how well each one performs / scales and of the various quirks of some cloud services. I'm going to use this thread to describe my observations.
     
    As Stablebit CloudDrive grows, I'm sure that my understanding of the various providers will improve as well. Also remember, this is from a developer's point of view.
     
    Google Cloud Storage
    http://cloud.google.com/storage
     
    Pros:
      • Reasonably fast.
      • Simple API.
      • Reliable.
      • Scales very well.
    Cons:
      • Not the fastest provider. Especially slow when deleting existing chunks (e.g. when destroying a drive).
      • Difficult to use and bloated SDK (a development issue really).
    This was the first cloud service that I wrote a StableBit CloudDrive provider for and initially I started writing the code against their SDK which I later realized was a mistake. I replaced the SDK entirely with my own API, so that improved the reliability of this provider and solved a lot of the issues that the SDK was causing.
     
    Another noteworthy thing about this provider is that it's not as fast as some of the other providers (Amazon S3 / Microsoft Azure Storage).
     
    Amazon S3
    http://aws.amazon.com/s3/
     
    Pros:
      • Very fast.
      • Reliable.
      • Scales very well.
      • Beautiful, compact and functional SDK.
    Cons:
      • Configuration is a bit confusing.
    Here the SDK is uniquely good: it's a single DLL and super simple to use. Most importantly, it's reliable. It handles multi-threading correctly and its error handling logic is straightforward. It is one of the few SDKs that StableBit CloudDrive uses out of the box. All of the other providers (except Microsoft Azure Storage) use custom-written SDKs.
     
    This is a great place to store your mission critical data, I backup all of my code to this provider.
     
    Microsoft Azure Storage
    http://azure.microsoft.com/en-us/services/storage/
     
    Pros:
      • Very fast.
      • Reliable.
      • Scales very well.
      • Easy to configure.
    Cons:
      • No reasonably priced support option.
    This is a great cloud service. It's definitely on par with Amazon S3 in terms of speed and seems to be very reliable from my testing.
     
    Having used Microsoft Azure services for almost all of our web sites and the database back-end, I can tell you that there is one major issue with Microsoft Azure. There is no one that you can contact when something goes wrong (and things seem to go wrong quite often), without paying a huge sum of money.
     
    For example, take a look at their support prices: http://azure.microsoft.com/en-us/support/plans/
     
    If you want someone from Microsoft to take a look at an issue that you're having within 2 hours that will cost you $300 / month. Other than that, it's a great service to use.
     
    OneDrive for Business
    https://onedrive.live.com/about/en-US/business/
     
    Pros:
      • Reasonable throttling limits in place.
    Cons:
      • Slow.
      • API is lacking, leading to reliability issues.
      • Does not scale well, so you are limited in the amount of data that you can store before everything grinds to a halt.
      • Especially slow when deleting existing chunks (e.g. when destroying a drive).
    This service is actually a rebranded version of Microsoft SharePoint hosted in the cloud for you. It has absolutely nothing to do with the "regular" OneDrive other than the naming similarity.
     
    This service does not scale well at all, and this is really a huge issue. The more data that you upload to this service, the slower it gets. After uploading about 200 GB, it really starts to slow down. It seems to be sensitive to the number of files that you have, and for that reason StableBit CloudDrive sets the chunk size to 1MB by default, in order to minimize the number of files that it creates.
     
    By default, Microsoft SharePoint expects each folder to contain no more than 5000 files, or else certain features simply stop working (including deleting said files). This is by design, and here's a page that explains in detail why this limit is there and how to work around it: https://support.office.com/en-us/article/Manage-lists-and-libraries-with-many-items-11ecc804-2284-4978-8273-4842471fafb7
     
    If you're going to use this provider to store large amounts of data, then I recommend following the instructions on the page linked above. Although, for me, it didn't really help much at all.
     
    I've worked hard to try and resolve this by utilizing a nested directory structure in order to limit the number of files in each directory, but nothing seems to make any difference. If there are any SharePoint experts out there that can figure out what we can do to speed this provider up, please let me know.
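     
    For illustration, here's the math behind that nesting idea (a sketch with hypothetical numbers, not StableBit's actual layout):
     
    import math
    
    # SharePoint misbehaves past ~5000 entries per folder (see the link above).
    FAN_OUT = 5000
    
    def nesting_depth(total_chunks: int, fan_out: int = FAN_OUT) -> int:
        """Directory levels needed so that no folder exceeds fan_out entries."""
        return max(1, math.ceil(math.log(total_chunks, fan_out)))
    
    # A 1 TB drive at the default 1 MB chunk size is ~1,048,576 chunk files:
    print(nesting_depth(1024 * 1024))   # -> 2 levels keep every folder under 5000
     
    As noted above, though, even this kind of nesting didn't restore performance in practice.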
     
    OneDrive
    https://onedrive.live.com/
    Experimental
     
    Pros:
      • Clean API.
    Cons:
      • Heavily and unreasonably throttled.
    From afar, OneDrive looks like the perfect storage provider. It's very fast, reliable, easy to use and has an inexpensive unlimited storage option. But after you upload / download some data you start hitting the throttling limits. The throttling limits are excessive and unreasonable, so much so that using this provider with StableBit CloudDrive is dangerous. For this reason, the OneDrive provider is currently disabled in StableBit CloudDrive by default.
     
    What makes the throttling limits unreasonable is the amount of time that OneDrive expects you to wait before making another request. In my experience that can be as high as 20 minutes to 1 hour. Can you imagine, when trying to open a document in Microsoft Windows, hitting an error that reads "I see that you've opened too many documents today, please come back in 1 hour"? Not only is this unreasonable, it's also technically infeasible to implement this kind of delay on a real-time disk.
     
    Box
    https://www.box.com/
     
    At this point I haven't used this provider for an extended period of time to render an opinion on how it behaves with large amounts of data.
     
    One thing that I can say is that the API is a bit quirky in how it's designed, necessitating some extra HTTP traffic that other providers don't require.
     
    Dropbox
    http://www.dropbox.com/
     
    Again, I haven't used this provider much so I can't speak to how well it scales or how well it performs.
     
    The API here is very robust and very easy to use. One very cool feature that they have is an "App Folder". When you authorize StableBit CloudDrive to use your Dropbox account, Dropbox creates an isolated container for StableBit CloudDrive and all of the data is stored there. This is nice because you don't see the StableBit CloudDrive data in your regular Dropbox folder, and Stablebit CloudDrive has no way to access any other data that's in your Dropbox or any data in some other app folder.
     
    Amazon Cloud Drive
    https://www.amazon.com/clouddrive/home
     
    Pros:
      • Fast.
      • Scales well.
      • Unlimited storage option.
      • Reasonable throttling limits.
    Cons:
      • Data integrity issues.
    I know how important it is for StableBit CloudDrive to support this service, and so I've spent many hours and days trying to make a reliable provider that works. This single provider delayed the initial public BETA of StableBit CloudDrive by at least 2 weeks.
     
    The initial issue that I had with Amazon Cloud Drive is that it returns various errors as a response to I/O requests. These errors range from 500 Internal Server Error to 400 Bad Request. Reissuing the same request seems to work, so there doesn't appear to be a problem with the actual request, but rather with the server.
     
    I later discovered a more serious issue with this service: apparently, after uploading a file, sometimes (very rarely) that file cannot be downloaded, which means that the file's data gets permanently lost (as far as I can tell). This is very rare and hard to reproduce. My test case scenario needs to run for one whole day before it can reproduce the problem. I finally solved this issue by forcing Upload Verification to be enabled in StableBit CloudDrive. When this issue occurs, upload verification will detect this scenario, delete the corrupt file and retry the upload. That apparently fixed this particular issue.
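     
    As a rough illustration of what upload verification does (a minimal sketch; upload, download and delete here are hypothetical stand-ins for a provider's API, not StableBit's actual code):
     
    import hashlib
    
    def verified_upload(chunk: bytes, upload, download, delete, max_tries: int = 3):
        """Upload a chunk, read it back, and compare hashes; if the remote copy
        is corrupt or has vanished, delete it and retry the upload."""
        expected = hashlib.sha256(chunk).digest()
        for _ in range(max_tries):
            file_id = upload(chunk)
            try:
                echoed = download(file_id)
            except FileNotFoundError:    # the "uploaded but not downloadable" case
                echoed = None
            if echoed is not None and hashlib.sha256(echoed).digest() == expected:
                return file_id           # verified good
            delete(file_id)              # discard the bad copy and try again
        raise IOError("chunk failed verification after retries")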
     
    The next thing that I discovered with this service (after I released the public BETA) is that some 400 Bad Request errors spawn at a later time, long after the initial upload / verification step is complete. After extensive debugging, I was able to confirm this with the Amazon Cloud Drive web interface as well, so this is not a provider code issue; the problem actually occurs on the server. If a file gets into this state, a 400 Bad Request error is issued, and if you examine that request, the error message in the response says 404 Not Found. Apparently, the file metadata is there, but the file's contents are gone.
     
    The short story is that this service has data integrity issues that are not limited to StableBit CloudDrive in particular, and I'm trying to identify exactly what they are, how they are triggered and apply possible workarounds.
     
    I've already applied another possible workaround in the latest internal BETA (1.0.0.284), but I'm still testing whether the fix is effective. I am considering disabling this provider in future builds, and moving it into the experimental category.
     
    Local Disk / File Share
     
    These providers don't use the cloud, so there's really nothing to say here.
  4. Thanks
    TPham reacted to Christopher (Drashna) in NTFS Permissions and DrivePool   
    (also: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 )
  5. Thanks
    TPham reacted to Shane in NTFS Permissions and DrivePool   
    Edit: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 is the normal, Stablebit-approved method to reset NTFS permissions. It should suffice in most cases, and it's easier to do than my method, so try it first.
    To fix broken NTFS permissions I now use a freeware program called SetACL because, at least in my experience, it properly supports both long paths and unicode and can fix damaged security records that the Windows built-in Security tab and utilities like takeown and icacls can't (for reasons that I personally boil down to "unicode and long paths were added to Windows partly via the programming equivalent of lots of duct tape"). If you're a sysadmin I recommend checking out the rest of the site too!
    The following is a quick guide to how I use SetACL to reset my pool permissions on my own machines. Your mileage may vary; you should always have backups of anything you don't want to lose.
    Note: if you have customised your pool's security permissions (e.g. for multiple users with different access rights) be aware that you may need to customise the following commands (particularly the fourth) in this post to suit your changes, or adjust the permissions subsequently.
    I download the SetACL for Administrators (not the Studio) from here. It's freeware (as is the Studio version if you want a GUI). Open the zip file and copy SetACL.exe from the 32bit or 64bit folder (as appropriate) to wherever you want to keep it so long as that location is OUTSIDE of your pools.
    Open a command prompt as an Administrator and enter the following commands in order (where X:\ is the location of SetACL.exe and P:\ is the location of your pool drive - and that's very important, do not get confused, do not put in the wrong drive letters):
    Net stop "stablebit drivepool service"
    X:\SetACL.exe -on P:\ -ot file -actn setowner -ownr "n:Administrators" -rec cont_obj -fltr "System Volume Information" -fltr "$RECYCLE.BIN"
    X:\SetACL.exe -on P:\ -ot file -actn clear -clr "dacl,sacl" -actn rstchldrn -rst "dacl,sacl" -fltr "System Volume Information" -fltr "$RECYCLE.BIN"
    X:\SetACL.exe -on P:\ -ot file -actn ace -ace "n:Authenticated Users;p:change" -ace "n:SYSTEM;p:full" -ace "n:Administrators;p:full" -ace "n:Users;p:read_ex" -fltr "System Volume Information" -fltr "$RECYCLE.BIN"
    Net start "stablebit drivepool service"
    The first and last commands stop and start DrivePool respectively; the second command takes ownership (since you need to have ownership before you can alter any permissions that would otherwise prevent you from altering permissions), the third resets all existing permissions and enables inheritance of new permissions, and the fourth grants new permissions (I’ve used the Windows 10 defaults that should be compatible with all versions of Windows that are compatible with DrivePool); the special folders used for System Volume Information and Recycle Bin have been excluded as a precaution.
    If you are still getting permission errors even after running all five commands in order, you can try running them directly on the individual poolpart folders. So for example if "E:" was one of the drives you'd added to the pool, you'd use E:\PoolPart.guidstring instead of P:\ in the above commands (tip: when you're using the command prompt, pressing the tab key after you've typed PoolPart into the command prompt should fill in the correct guidstring for you). Note: poolpart folders contain a ".covefs" folder; do not apply non-default permissions to that folder unless you know exactly what you are doing.
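     
    If you have many pooled drives, something like the following can run the three SetACL commands against each PoolPart folder for you. This is only a sketch: the SetACL path, the drive letters and the PoolPart glob are assumptions you must adjust for your own setup, and as always, test on one drive first and keep backups:
     
    import glob
    import subprocess
    
    SETACL  = r"X:\SetACL.exe"    # adjust to wherever you keep SetACL.exe
    FILTERS = ["-fltr", "System Volume Information", "-fltr", "$RECYCLE.BIN"]
    
    def reset_poolpart(path: str) -> None:
        """Take ownership, clear/reinherit ACLs, then grant the default rights
        (the same three SetACL commands as above, aimed at one PoolPart)."""
        base = [SETACL, "-on", path, "-ot", "file"]
        subprocess.run(base + ["-actn", "setowner", "-ownr", "n:Administrators",
                               "-rec", "cont_obj"] + FILTERS, check=True)
        subprocess.run(base + ["-actn", "clear", "-clr", "dacl,sacl",
                               "-actn", "rstchldrn", "-rst", "dacl,sacl"] + FILTERS,
                       check=True)
        subprocess.run(base + ["-actn", "ace",
                               "-ace", "n:Authenticated Users;p:change",
                               "-ace", "n:SYSTEM;p:full",
                               "-ace", "n:Administrators;p:full",
                               "-ace", "n:Users;p:read_ex"] + FILTERS, check=True)
    
    subprocess.run(["net", "stop", "stablebit drivepool service"], check=True)
    for drive in ["E:\\", "F:\\"]:                    # the drives in your pool
        for poolpart in glob.glob(drive + "PoolPart.*"):
            reset_poolpart(poolpart)
    subprocess.run(["net", "start", "stablebit drivepool service"], check=True)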
    If your pool contains a lot of files (the size of each file doesn't matter), these commands may take a while to complete (my primary pool, with ~600k files, takes a couple of hours).
  6. Thanks
    TPham reacted to Shane in NTFS Permissions and DrivePool   
    Spend long enough working with Windows and you may become familiar with NTFS permissions. As an operating system intended to handle multiple users, Windows maintains records that describe which user owns each file and folder and how much access each user has to those files and folders. These records are kept on the same volumes as those files and folders. Unfortunately, in the course of moving or copying folders and files around, Windows may fail to properly update these settings for a variety of reasons (e.g. bugs, bit errors, power interruptions, failing hardware).
    This can mean files and folders that you can no longer delete, access or even have ownership of anymore, sometimes for no obvious reason when you check via the Security tab of the file/folder's Properties (they can look fine but actually be broken inside).
    So, first up, here’s what the default permissions for a pool drive should look like:
    [screenshot: default permissions for a pool drive]
    And now here’s what the default permissions for the hidden poolpart folder on any drive added to the pool should look like:
    [screenshot: default permissions for a hidden poolpart folder]
    The above are taken from a freshly created pool using a previously unformatted drive, on a computer named CASTLE that is using Windows 10 Professional. I believe it should be the same for all supported versions of Windows so far.
      • Any entries that are marked Deny override entries that are marked Allow. There are limited exceptions for SYSTEM.
      • It is optional for a hidden poolpart folder to Inherit its permissions from its parent drive.
      • It is recommended that the Administrators account have Full control of all poolpart folders, subfolders and files.
      • It is necessary that the SYSTEM account have Full control of all poolpart folders, subfolders and files.
      • The permissions of files and folders in a pool drive are the permissions of those files and folders in the constituent poolpart folders. Caveat: duplicates are expected to have identical permissions (because in normal practice, only DrivePool should be creating them).
    My next post in this thread will describe how I go about fixing these permissions when they go bad.
  7. Thanks
    TPham reacted to gtaus in Removing drive from pool   
    I just got done watching a 720p movie and I don't think my stream ever got over 1 MB/s. When I said the 1080p stream was 4 MB/s tops, that is maybe a short burst at the start of the file which I assume is filling the Fire TV Stick's onboard memory cache and then, like you noticed, it drops below 1 MB/s. Since you see more or less the same streaming transfer rate that I do on my system, maybe you can see why I am always saying that I doubt if your problem with streaming is coming from DrivePool.
    If I copy files to/from a remote computer to/from my DrivePool computer on my home network, I can reach 80 MB/s transfer rate. So, clearly, it's not DrivePool that is slowing down my system.
    @Shane already suggested using the Windows Resource Monitor, which I access via the Task Manager. You should be able to view the data streaming from your Storage Spaces drive volume.
    I use Kodi as my main media interface. If your DrivePool computer is on your network, it probably already has network sharing set on it. If not, turn on network sharing for your DrivePool drive (J: on my computer). Then, in Kodi, you need to add the media folder by going to the browse network command. Is it complicated? Well, it's a bit more complicated than Plex to set up, but I just like the Kodi interface better than Plex, so I was willing to put a little more effort into the project. I had to go online and search for Kodi tutorials on how to set up a network folder for Kodi, but following the directions, I got it to work without too much difficulty.
    Just another thought that comes to mind... I think you once mentioned that you were using the IPv6 protocol for wifi. IIRC, I had all kinds of problems with IPv6 and shut it off so I only use IPv4. At the time, there were many people complaining about the IPv6 protocol not working correctly. It might have been too new, and the IPv6 standards were not firmly set. Anyway, going back to IPv4 worked for me and I have not changed it since (if it ain't broke, don't fix it). If you are using IPv6, you might want to turn that off and see if the older, more stable IPv4 works better for you. It could be that your devices are trying to communicate via different versions of IPv6 and have a hard time talking, which would lead to dropouts. Anyway, just a thought.
  8. Thanks
    TPham reacted to Shane in Removing drive from pool   
    A1: I'm not familiar with the Pace 5268A (or any other AT&T routers), sorry. Different country!
    In theory, generally, it should be possible to have router "A" handling internet traffic while router "B" handles local traffic without them fighting about it; in practice some routers are (much) less capable than others and some ISPs also lock down the functionality of routers they provide / mandate.
    For whatever it's worth, here's an example very basic setup:
    Internet <-> [WAN port] Router A [LAN port] <-> [LAN port] Router B [LAN port(s) and Wifi] <-> Local devices
      • Router A (WAN DHCP on + LAN DHCP off) is configured to receive the necessary WAN addresses from the ISP but not to manage the local devices.
      • Router B (WAN DHCP off + LAN DHCP on) is configured to deliver the necessary IP configurations so local devices use Router A as the internet gateway.
    Obviously this is a simplified summary and you'd need to get all the fiddly parts lined up, but that's my day job, not my forum job. 
    A2: As a general rule of thumb you want to minimise the number of wifi devices (including stations) operating in any given area, and another is that you want all devices operating on any given channel to support the same wifi standards (e.g. if a gen 1 device and a gen 2 device are sharing the same channel, you may end up with the gen 2 device limited to gen 1 capability). That said, if your plan is to have Router A handling wired traffic while Router B handles wireless traffic, that might work for you.
    A3: The new Beta functionality mostly relates to supporting Stablebit Cloud connectivity, but the changelog indicates there was work done on DrivePool functionality too. The great thing is you can install the Beta and if it doesn't help you can uninstall it and reinstall the Stable release without losing anything.
    Resource Monitor is built into Windows and can be used to monitor a variety of metrics, including file access/throughput.
  9. Thanks
    TPham reacted to gtaus in Removing drive from pool   
    Have you determined what speed your TV streaming device pulls movies from your Storage Spaces or DrivePool? For example, when I watch my DrivePool GUI, I can see that my Fire TV Stick is pulling about ~4 MB/s tops for streaming 1080p movies. I don't suffer any stuttering or caching on my system. If I try to stream movies >16GB, then I start to see problems and caching issues. But, at that point, I know I have reached the limits of my Fire TV Stick with limited memory storage and its low power processor. It is not a limit of how fast DrivePool can send data over my wifi.
    Well, there is how many bars are available to indicate how strong the connection is, but bars does not equal speed. On my old 56K router, I would also have 4 or 5 bars indicating a strong connection, but I was constantly fighting buffering issues while streaming. I upgraded to a 1 gigabit router, which is much faster, and that took care of my buffering problems.
    Well, good questions but beyond my level of tech expertise with that equipment. I get my internet service from a local telephone company, and they have a computer support team on staff to answer questions and help customers with their equipment. If you are leasing your equipment from ATT, then they might have a support team you could contact for assistance.
     
    At least you have something that is currently working for you, so it's not like you are in a panic. After years of running Storage Spaces on my system, and now with DrivePool for just less than 1 year, I don't yet understand why you are experiencing streaming issues with DrivePool. On my system, it made no difference at all in regards to streaming, which I have stated runs at about 4 MB/s tops and usually much less on my system.
  10. Thanks
    TPham reacted to gtaus in Removing drive from pool   
    I don't know why you saw so much memory usage on your first install of DrivePool. I have never seen anything like that. Is it possible that DrivePool was performing some intensive task at that moment? I am glad that Christopher was able to get you back on track and that things are going much better (normal) now.
    A long yellow bar usually means DrivePool activity or an error state - like duplication needs to be re-checked. The green bar usually means everything is OK with DrivePool and does not need any attention. If you have a green bar, then I don't understand why you would be looking for a cancellation option. Is there some task running in the background? Your screenshot of DrivePool looks all good to me.
    Speaking of background tasks, I have suggested that it would be nice to have an option to display more information about the background tasks running in DrivePool. Many of those tasks, and their status, seem to be hidden, maybe for a good reason. Most of the time, I'm just happy to let DrivePool do its thing in the background and I don't need to see the inner workings. But it would be good to have an option to get more info and status on those tasks for those few times I do care to understand what is going on behind the curtain.
    Under normal circumstances, DrivePool pretty much takes care of itself. However, I have noticed a few times that it appeared to me that a task like balancing might not be behaving properly. Don't know if it was a DrivePool issue, or a Windows issue, but I rebooted the computer and everything went back to "normal." That's pretty much my first step to take if I notice something fishy going on with my computer. Again, I don't know if it's a DrivePool issue, or a Windows issue. As you know, Windows can lock itself up for unknown reasons and a fresh reboot seems to correct the OS. And that certainly has nothing to do with DrivePool.
    I would also suggest keeping an informal log (note on piece of paper) of what was wrong and how you corrected it. If you see a pattern of misbehavior, you could share that with Christopher and maybe he could look for a bug to correct. One thing I appreciate with DrivePool is that the programmer(s) are looking for ways to improve the software and they issue new releases. 
  11. Thanks
    TPham reacted to gtaus in Removing drive from pool   
    @TPham
    You simply click the mouse and drag the cursor over the text you want to copy. The text should turn to blue background. When you get to the end of your selection, release the mouse and a popup box should appear "Quote selection". Click on the popup box and it will transfer that quote to your response box.
    Unlike Windows File Explorer, you don't need a drive letter for DrivePool to see and list the drive. I just added another HDD to DrivePool this morning, without a Drive Letter, and all I had to do was simply click the add button on the Non-Pooled drives list in the DrivePool GUI.
    I currently have 18 HDDs and 1 SSD in my DrivePool. So I wanted to remove all drive letters from my pool drives and free those Drive Letters for flash drives and/or other devices I might want to temporarily add to my system.
    If you ever need to work with any drive in Windows, you can go into Disk Management and add a Drive Letter to any pool drive you want. DrivePool does not care if you add/remove a Drive Letter to the drive, it identifies the drive by the hidden PoolPart folder it writes to the drive itself.
    That is one way of organizing your duplication folders that I have also considered. It certainly would clean up the root directory to only 3 folders.
    I really noticed a performance boost in DrivePool when I added a front end SSD. DrivePool is limited to the speed of the drive it is writing data to at that time. If it is writing to a SSD cache, then it's very fast. When I transfer large amounts of data, I am glad I have that SSD front end cache.
    DrivePool can use a SSD as a front end cache for writes. That works good enough for me. If you are a power user and need more speed in your server, I would suggest you take a look at PrimoCache. PrimoCache can use both your system RAM and your SSD as cache/buffers for reads/writes. I believe PrimoCache still offers a free trial period to use their software to see if it works for you.
    Years ago I tried PrimoCache, but I only had 4GB RAM and no SSD on my system. Writing/reading from RAM was lightning fast with PrimoCache, but with limited RAM, the buffer filled up in no time with large data transfers (>3GB at that time) and dropped back down to the slow write speed of my archive HDD. So it was not worth it for me at the time to buy the program. Your situation is different with 16GB RAM and 2 SSDs. I'd recommend checking out the program at least for the free trial period.
    I use DrivePool primarily as my home media storage server, so using the SSD as a front end cache is enough for me. Even without using the SSD, DrivePool was just fine for my needs and I had no complaints. The SSD just caches the writes faster on large data transfers. Sometimes that comes in handy.
    Yeah, I had a hard time trying to figure out what drive/serial number I was working with in Storage Spaces. If you buy a few of the same HDD models, only the serial number may be different, which was a real pain for me.
    Anyway, when I set up DrivePool, I just named my HDDs DP01, DP02, DP03, etc... I placed them in physical order on my shelf and I also tagged each drive/case with the drive name. The drives I have in my ProBox enclosures are also in physical order, so I know exactly which drive is in the slot. My life was easier when I got away from using serial numbers to identify the pool drives.
    I don't have any magic way to identify your HDDs by serial number in your enclosures. Sometimes the software will have a feature to "identify" your drive, which might flash a light on the drive slot if your enclosure has that feature. Mine did not. If you have a USB disk caddy, you could pull your enclosure drives and identify them in the caddy. I'd label the drives at that time, if needed, before returning them to the enclosure. But, like I said, with DrivePool, I just tagged and named the drives in the physical order that they are placed in the enclosure and/or on the shelf. Anything to make life simpler works better for me.
  12. Thanks
    TPham reacted to gtaus in Removing drive from pool   
    If setting the drive volume label to match the drive's serial number works for you, then stay with it.
    My approach is different and works better for me. First of all, I currently have 18 USB HDDs in my DrivePool. I don't use Drive Letters on any of the pool drives. There are just too many drives in the pool and DrivePool does not need Drive Letters anyway. I preferred to clean up my Windows File Explorer listing, so the Drive Letters had to go.
    What I do is just name the drives in logical order as they sit on the shelf. Being not too creative, my pool drives are labeled as DP01, DP02, DP03, etc.... I also put a label on each drive case. I have DrivePool GUI sort the pool list by name for easy reading. If there is any problem with a HDD, I immediately know which drive is affected.
    I have lots of unduplicated home media files in my DrivePool. But I also have a few folders that I want 2X duplication. Not only do I find the duplication options better in DrivePool than Storage Spaces, but the net result is that I am saving lots of money by not duplicating my entire pool when it is not required for about 85% of my stored media files.
    Also, I have had a couple HDD failures in the past month, and I have been able to recover almost all my data off the drives. In the meantime, DrivePool was still serving up all my other files like nothing happened. When my Storage Spaces crashed, it could take weeks to rebuild. I don't miss Storage Spaces....
     
     
  13. Like
    TPham reacted to Christopher (Drashna) in Removing drive from pool   
    I've also been bad about checking the forums. It can get overwhelming, and more difficult to do. 
    But that's my resolution this year: to make a big effort to keep up with the forum. 
  14. Like
    TPham reacted to Shane in Removing drive from pool   
    I've done it. It's literally plug-and-play. So long as DrivePool is installed on a computer and you have the physical ability to plug in your pooled drives, it'll automatically recognise the pool on those drives so you can play with your files straight away. Doesn't matter whether it's Windows 7/8/8.1/10/2008/2011/2012/2016/2019, Home/Pro/Enterprise/Server.
    In fact, even if every single copy of both Windows and DrivePool somehow disappeared overnight (cue spooky music), you would still be able to access your data with any other OS that can read standard NTFS volumes (they just wouldn't be conveniently pooled anymore).
    Also, if a drive in your pool fails, it'll be marked in DrivePool's GUI as missing and still displaying the drive letter (if any) and volume label (I set the latter to match the drive's serial number), and if you hover the cursor over the missing drive it'll show you the model (edit: and serial number if possible), and the pool will still be accessible (albeit read-only until the problem is resolved and any files unique to that drive missing, so if you've duplication enabled for the entire pool you won't have lost anything).
  15. Like
    TPham reacted to gtaus in Removing drive from pool   
    Yes, DrivePool, like many of my Windows programs, sometimes hangs and requires a reboot. Most of the time DrivePool works without any problems, but I have run into some circumstances where DrivePool misbehaves and does not correct itself until after a reboot.
    I ran Windows Storage Spaces for ~7 years, and the small problems I occasionally experience with DrivePool are nothing compared to the problems I had with Storage Spaces trying to manage the same size pool (currently 70TB).
    IF I had any real complaint about DrivePool, it would be that it really keeps you in the dark on background tasks it performs. I personally would like more status info displayed in the DrivePool GUI in those cases, because background tasks might have a really low priority and it may look like nothing is happening. Well, it may be happening, but not very fast. Or maybe the task got hung up and needs a reboot. Sometimes you can go into Task Manager and check for disk activity there. I have done that on occasion to verify activity was really going on in the background tasks. It would be nice to have the option of seeing some of that background activity in the DrivePool GUI. Most of the time I don't care, as long as it gets done. Sometimes I want to verify work is actually going on.
    FWIW, I had the same idea as you when I was testing out DrivePool. I did a couple disk removals, but they worked just fine on my system, so I started off with a positive experience.
  16. Like
    TPham reacted to Shane in Removing drive from pool   
    Stablebit does not have a large support department; as I understand it, your support request is going directly to the developers. As much as we'd all like personal 24/7 valet service for our USD$29.95, they do need to do other things from time to time. 
    That said however, DrivePool simply should not need to balance first to remove a drive (caveat: so long as there's room), and certainly shouldn't sit there for 24+ hours doing nothing obvious and then require a reboot to get it unstuck! I hope that Stablebit is able to find the cause and fix it. Please keep us updated?
  17. Like
    TPham reacted to Christopher (Drashna) in Another Cannot Remove Disk issue   
    This can happen if the PoolPart folder itself is damaged/corrupted. Part of the removal process is marking the drive as "removed", but it also has to be able to read from the folder. File system corruption can stop both.
    While it's removing, yes, it should. Afterwards, no.  But you can use the Drive Usage Limiter to prevent files from being placed on the drive, and to move files off of the drive.
    Even better is dp-cmd's "ignore-poolpart" command. This immediately ejects the drive from the pool. It doesn't move the drive's contents, but only marks the disk as "removed" in the driver. The drive will show up as "missing" in the UI, and then you can remove it from the UI. 
    And from there, you can absolutely move the contents of the "poolpart" folder on that drive back into the pool. And ideally, using the "skip existing files" option to speed things up. 
     
     
  18. Like
    TPham reacted to gtaus in Another Cannot Remove Disk issue   
    Last week I had a 5TB HDD failure and tried to remove the disk via DrivePool's GUI. It was unsuccessful. I did get an error message to run chkdsk on the HDD to correct the problem. Unfortunately, chkdsk corrupted the directory and I was left with no data on that drive. Total loss.
    Today, I have another 6TB HDD that is misbehaving according to DrivePool. So, once again, I tried to remove the HDD using the DrivePool GUI. Again, it was unsuccessful and gave me an error message to run chkdsk on the HDD. NO WAY am I going to fall for that trap again. So I am currently moving data off that drive manually to other drives using TeraCopy. What I have discovered is that there are a few corrupt files on that HDD that cannot be moved. At least with TeraCopy, it will automatically skip over those corrupt files and continue to move the rest of the files on the list - and then give you an error report of the failed files at the end of the task. So I don't have to babysit the transfer of files over the next ~10 hours.
    Questions:
    1) Why is DrivePool unable to remove the drive when I have all 3 boxes checked to remove the drive and just leave the failed files on the HDD? It appears to me that the remove task errors out when it hits the first corrupt file and does not try anymore.
    2) If I have a HDD checked for removal, and it errors out, does DrivePool lock out that drive from the pool and not allow any new files to be written to it? I have ordered a couple new drives to replace these drives, but can I continue to use DrivePool without worrying about data being written to the drive marked for removal?
    3) Since I am unable to remove the drive from within DrivePool's GUI, can I physically pull that drive out of the pool, then when DrivePool lists it as a missing disk, can I remove it then? After that, am I able to manually transfer files off that pulled disk back into the pool as is, or do I have to rename the PoolPart directory on that drive so DrivePool does not see it as part of the pool again?
    I don't believe there is anything physically wrong with this current 6TB HDD. It reports 100% Health, 100% Performance, and no error reports in SMART. Maybe, related to the 5TB HDD failure of last week, for some reason I got a few corrupt files thrown on this 6TB HDD. My intention is to vacate all data on this drive, run a few integrity tests on it for good measure, and assuming it passes, I will reformat the drive and put it back into DrivePool.
    Let me end with saying something positive about DrivePool. Although I am having a few issues with a failed drive and now some corrupt files on (what appears to be) a good drive, with DrivePool I am still able to manually vacate the files off the drive, leaving only a few corrupt files on the drive that are causing me a problem. When I had problems with my old RAID and Storage Spaces pools, it was game over and I lost everything. This is another case that confirms my decision to move to DrivePool, and that is when you do have a HDD problem, chances are good with DrivePool that you might be able to minimize your loss and recover almost all your data.
     