Leaderboard

Popular Content

Showing content with the highest reputation since 01/29/21 in all areas

  1. My backup pool has a 1TB SSD that I use to speed up my backups over a 10Gb/s link. Works great, as I can quickly back up my deltas from the main pool --> backup pool, and in slow time it pushes the files to the large set of spinning rust (100TB). However, when I try to copy a file that is larger than the 1TB SSD, I get a message saying there is not enough available space. Ideally, the SSD Optimizer should just push the file directly to an HDD in such cases (feature request?), but for now what would be the best way of copying this file into the pool? - Manually copy it directly to one of the HDDs behind DrivePool, then rescan? - Turn off the SSD Optimizer, then copy the file? or, - Some other method? Thanks Nathan
    2 points
  2. Not currently, but I definitely do keep on bringing it up.
    2 points
  3. I mean, Alex is alive and well, but yeah, there are some issues that don't directly involve Covecube. If Alex wants to explain exactly what's going on, he will. But stuff is still happening.
    2 points
  4. Just a heads up: yes, it does continue to attempt to upload, but we use an exponential backoff when the software gets throttling responses like this. However, a daily limit or scheduler is something that has been requested and is on our "to do"/feature request list. I just don't have an ETA for when it would be considered or implemented.
    2 points
  5. Support suggested that I reset all settings. I did, and while it did not fix everything, at least it was a good start. The original pool reappeared, but only one of the 3 original drives was randomly listed in Pooled, and the other two were listed in Non-pooled. Trying to add these non-pooled drives would produce some kind of "cannot add drives again" error. In the end, I had to remove every pooled drive from the original pool by restarting the service, seeing a new random drive listed in Pooled, removing it, and restarting the service again until all drives were removed from the original pool. Finally, I created a new pool and moved all files from the original pool folders into the new pool folders. What a pain, but it was not hard, and at least I got all my files back without losing anything, which was shocking. Please give me a like if this post has helped or will help anyone who runs into similar issues.
    1 point
  6. Any hints in the gear icon (upper right) -> Troubleshooting -> Service Log? Check right after a system restart and scroll to the very top, to see the part where DrivePool detects existing pools. (Maybe temporarily deactivate Service tracing / auto-scroll right after opening the log, to prevent the interesting lines from moving out of view.) And, just out of curiosity, are there any other pools shown in the UI? (Click your pool's name in the top center of the main window.) And - also just another wild guess - are the two missing drives shown if you start your computer without the working drive plugged in? (Disclaimer: do at your own risk. I'm just an ordinary user!)
    1 point
  7. I would also like to see dark mode added to all StableBit products.
    1 point
  8. Recently I had an issue that caused me to move my array over to another disk. I think the issue caused a problem, as 5 disks now show up with damage warnings. In each case the damaged block is shown to be the very last one on the disk. I assume this isn't some weird visual artifact where blocks are re-ordered to be grouped? Anyway, clicking on the message about the bad disk gives a menu option to do a file scan. I decided to go ahead and do this, and of the 5 disks, I think only one actually appears to do anything - you can see it start trying to read all the files on the disk. For the others, the option just seems to return, pretty much instantly, which makes no sense to me. They don't error or fail, or otherwise indicate a problem; they all just act as if there are no files on those disks to scan (there are). The disks vary in size: 2x 4TB and the others are 10TB. One of the 4TB disks scanned fine and indicated no problems, but it's not clear why it would work and the others wouldn't. Ideas? Thanks!
    1 point
  9. I figured out shortly after this what the best method probably is... I'm just going to copy the files to a temp folder on the drive, then move them into the PoolPart folder after that, like the guide above. Turning off the service for a few minutes while I move the files sounds a lot easier than the other way.
    1 point
  10. I'm having significant problems with the DrivePool hanging for minutes on basic things like showing the contents of a folder in Explorer. So far I cannot figure out how to locate the source. For instance, today I opened a command prompt and simply tried to switch to the DrivePool's drive (i.e. I typed X: into the command prompt). That caused the command prompt to freeze for several minutes. While it was frozen, I opened up a second command prompt and switched to each of the underlying drives, and even listed out a hundred thousand or so files and folders on each of the source drives. So it feels like the DrivePool drive is hanging on something independent of the source drives. On the other hand, I don't know what that something else could be, as in my quest to troubleshoot this I have set up a brand new computer with a clean, fresh install of everything including DrivePool. The only thing I transferred was the source drives. Yet the problem followed the drives to the new computer. StableBit Scanner reports that all the underlying drives are in great shape, and so far I can't find any logs that report problems. How can I root out the source of this problem? Oh, and I did also try the StableBit Troubleshooter. However, it hung indefinitely at collecting system information ("indefinitely" defined as well over 80 hours of trying). Also, the hanging seems to be intermittent. I might go a day or so without it, or it might happen every few minutes. I have utterly failed to identify triggers so far.
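    One quick way to put numbers on a hang like this is a rough timing comparison from PowerShell; a minimal sketch, assuming X: is the pool drive and D: is one of the underlying disks (both are placeholders):
        # Time the same shallow directory listing on the pool drive vs. an underlying disk.
        # A large gap between the two points at the pool layer rather than the physical drives.
        Measure-Command { Get-ChildItem X:\ -Recurse -Depth 1 | Out-Null }
        Measure-Command { Get-ChildItem D:\ -Recurse -Depth 1 | Out-Null }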
    1 point
  11. You posted on another thread too, but you may need to enable the "Unsafe" Direct IO option to get the SMART data off of these drives, due to the controller. https://wiki.covecube.com/StableBit_Scanner_Advanced_Settings#Advanced_Settings If that doesn't work, enable the "NoWmi" option for "Smart" as well.
    1 point
  12. We have a number of people using Windows 11 already, but we won't say that it's officially supported until Windows 11 is no longer a beta. This is because the OS is in too much flux until it's RTM'ed, and it takes a lot of work to officially support it before then.
    1 point
  13. Christopher (at support) came back on my ticket. He re-enabled my ID and all is now good. I can write to the DrivePool again. Just in case someone else encounters this problem: use a fresh trial ID to get things going while waiting for support to re-enable the paid ID.
    1 point
  14. I don't really know. I speculate that MB-SATA may not be designed to be optimal. For instance, I do not know whether they can actually deliver 600MB/s on all ports simultaneously. I guess it could be found through Yahoo, but I'm not sure. As for PCIe SATA cards, I have read that they can deliver lackluster performance - shared bandwidth dependent on the actual number of PCIe lanes the card uses - but never that they'd interfere with MB-SATA. Again, I don't actually know, and I am sure there are different qualities out there. But really, I think your PCIe SATA card will be fine and give no issues. It should work for your transition. I'd leave the card in the PC once done so that, as and when you need it, you have two ports readily available. The SAS HBA route is one I would recommend if you expect storage to grow as measured by number of drives. For me, it works like a charm, and as these are, AFAIK, all made with enterprise servers in mind, I am pretty comfortable about performance, compatibility and endurance.
    1 point
  15. OK. That you have multiple PoolPart.* folders on H and K is a clear issue. That you don't have them on P and M is weird. And then there are the PoolPart/* files, which shouldn't be there. Not sure what to do here. Transferring files, removing drives from the pool through the GUI, then reformatting and re-adding is a possibility, but it takes a long time. Perhaps better to contact support (https://stablebit.com/Support) or wait for a better volunteer here. Another scenario, though I am not sure it would work well, is: 1. Remove the suspect drives from the pool through the GUI. 2. From each PoolPart.* folder on those drives, check whether they have any contents. If they don't, delete them. If they do, rename the folders. 3. Add the drives to the pool. 4. You will now see a new PoolPart.* folder. For each of the four drives, move the contents from the renamed PoolPart.* folder(s) to the new PoolPart.* folder according to this: StableBit DrivePool Q4142489 (follow this closely; you will need to stop the DrivePool service and start it when done). 5. Do a re-measure. I *think* this will work but....
    1 point
  16. The OS writes files by going (approximately) "create/open entry for file on drive's index, stream data to file from program, write details (e.g. blocks used so far) to drive's index, repeat previous two steps until program says it's done or the program says it's encountered an error or the drive runs out of room or insert-other-condition-here, write final details in the index and close entry for file". Or in even simpler terms: at the system level all files are written one block at a time, no matter how many blocks they'll eventually involve. Now a workaround for programs that deal with fixed file sizes is to ask the OS in advance "how much free space is on drive X" so that they can know whether there's going to be room before they start writing (well, "know" as in "guess" because other programs might also write to the drive and then it becomes a competition). But the catch there is that when the OS in turn asks the drive pool "how much free space do you have", DrivePool reports its total free space rather than the free space of any particular physical drive making up the pool. This is because it can't know why it's being asked how much free space it has (DP: "am I being asked because a user wants to know or because a program wants to write one big file or because a program wants to write multiple small files or because some other reason, and oh also if I've got any placement rules those might affect my answer too?" OS: "I don't know").
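    To make the "ask the OS in advance" step concrete, here is a minimal sketch (the X: drive letter is an assumption) of the free-space query a program would make before writing a fixed-size file, and which DrivePool answers with the pool's total free space rather than any single disk's:
        # Free space as reported for drive X: (against a pool, this is the whole
        # pool's free space, not the free space of any one underlying disk).
        Get-PSDrive -Name X | Select-Object Used, Free
        # The same query via .NET, which is effectively what many programs call:
        [System.IO.DriveInfo]::new("X:\").AvailableFreeSpace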
    1 point
  17. Whenever I have problems with duplication, I go to the Settings Cog>Troubleshooting>Recheck Duplication and let DrivePool try to figure it out. Honestly, if there are duplication problems with DrivePool (like after removal of a failed HDD), it takes me a couple times running the Recheck Duplication and Balancing tasks. Last time that happened to me, it literally took a few days for DrivePool to clean itself up, but I have 80TB in my DrivePool. To be fair to DrivePool, it did fix itself given time. I only have a few folders set for duplication in my DrivePool, so out of my 80TB pool, only about 20TB are duplicated. Also, DrivePool duplication is good for some things, but it does not ensure that your files are actually intact and complete. It is possible to have a corrupted file/folder and DrivePool is happy to duplicate the corruption. If there is a mismatch between the original and the copy, DrivePool cannot tell you which file/folder is true and which may have been corrupted. For example, my DrivePool is mainly used as my home media storage. If I have an album folder with 15 tracks, and one or two tracks gets deleted or corrupted, DrivePool cannot tell me if the original directory is complete, if the duplicate directory is complete, or if neither copy is complete. Because of this, I now add 10% .par2 files to my folders for verification and possible rebuild. With the .par2 files, I can quickly determine if the folder is complete, if any missing or corrupted files can be rebuilt from the .par2 files in that folder, or if I have to take out my backup HDDs from the closet to rebuild the corrupted data in DrivePool. Unfortunately, DrivePool duplication does not ensure that your data has not been corrupted. For this reason, I don't consider DrivePool duplication in any way a backup solution. It lacks the ability to verify if the original or the duplicate copy is complete and intact and cannot resolve mismatches between copies. In theory, from what I understand, duplication is mainly good for rebuilding your DrivePool if you have a HDD failure and the bad drive is a complete loss. Then, DrivePool will still have a copy of the files on other drives in DrivePool and can rebuild the failed data. That may be a great option, and of course I said I do use duplication, but DrivePool duplication still lacks any ability to verify if the files are complete and uncorrupted. For that reason, I have gone to using those .par2 files for file verification. My backup HDDs are stored in my closet. It's not the best solution to my backup needs, but it is the best I have found for me at this point. In a more perfect world, DrivePool would have the ability to duplicate folders for faster pool recovery, and there would also be some way to verify and rebuild lost data like the .par2 files. In your case, if you have good backups of your data, I might consider turning off duplication in DrivePool, Rebalancing the pool and/or forcing a Recheck Duplication to clean up the data, and then turning duplication back on for the folders/pool as you want. But before I did that, I think I would contact the programmer directly for support and ask him for his recommendation(s). DrivePool is a great program and data recovery is much better than other methods I have used such as RAID systems and Windows Storage Spaces. But I do run into errors like you are experiencing and I cannot always understand the corrective action to take. 
Mostly, I have found that DrivePool is able to correct itself with its various troubleshooting tasks, but it might take a long time on a large pool.
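    For reference, the par2 workflow described above looks roughly like this on the command line; this is a sketch that assumes the par2cmdline tool is installed and on the PATH, and the folder contents and the 10% redundancy figure are only examples:
        par2 create -r10 recovery.par2 *.flac   # build ~10% recovery data for the folder's files
        par2 verify recovery.par2               # later: check that nothing is missing or corrupted
        par2 repair recovery.par2               # rebuild damaged/missing files from the recovery set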
    1 point
  18. fluffybunnyuk

    Defrag layouts

    I found when copying to the pool that it fragments files quite badly. My solution is UltimateDefrag, although any folder/file layout defragger will do just fine. I'm old school and obsessive about track-to-track seeks, power usage and head flying hours. One of the things I've often read mentioned here is how old your drive is. Really the question should be: is the drive under 40,000 flying hours? That's my benchmark for an old drive, where failure rates increase to a significant level. One of the useful things I use is, in the power settings, the ability to ignore burst traffic for 1 minute, plus HIPM/DIPM/DEVSLP. It keeps the drives spun down until I actually need them. I tend to use a pair of Seagate PowerBalance SAS drives for frequent data, as I love the timer control and running Idle_C for extended periods. Copying 8TB over to a pair of 12TB drives, I had about 3 million file fragments (mostly film/TV). So I sort to end of disk - File/Folder Descending. It takes 2 days on a new drive, but it has the advantage of the free space being at the front of the disk, and the head doesn't have to seek any further than 33% into the platters unless it's a film (in which case it's a sequential read, and even on the final sectors not a lot of work, even for a 4K film). It should be pointed out that I do this yearly at most, doing a consolidation defrag every 3 months to push the new films etc. on the outer tracks to backfill from the new end of free space at 4TB. It also has the advantage of being a sequential read to duplicate the disk. Of course you could defrag to the outside of the disk, but that has the disadvantage of the free space being at the end of the platter, which is a full stroke any time you want to write data. If you really needed file recovery, and can read the platters, sequential data is much easier too. Another issue is rated workload. I tend to prefer SAS drives, so I can get away with sustained use in heavy periods. But a way to age your drives fast is to scan them on a 30-day basis for sector errors. That's 12x per year, or a workload of 144TB on a 12TB drive (IronWolf drives are rated for 155TB/year). A better solution is to get the new drive and allow it to warm up for a couple of hours. Then do the SMART tests, quick then extended, then use a sector regenerator to scan the disk (there's usually a point on the platter where there's a slowdown, at least on the drives I buy, 12TB+ NAS or SAS). Scans don't need to happen more often than every 180 days, less 10 days per year of drive age (minimum 90 days). Also, as an old timer, I like the sound of noisy drives; I can hear what they're doing better. But mostly the measures herein reduce most of the seek noise, so usually all I hear is parking, or a long seek (and the occasional crunching over an area of weak sectors I forgot to regen). I'm sure I forgot to mention other stuff, but it's enough for an intro to me and my personal choices in how I organize my data. Of course I welcome any thoughts.
    1 point
  19. Are you going to head back to StableBit DrivePool and make some updates and add some of the posted suggestions? This is fantastic software and I hate to see it stagnating.
    1 point
  20. Deduplication, ntfs compression, and "slack space" can all account/contribute to this.
    1 point
  21. Hello, I've noticed recently that DrivePool deals with "Size" vs "Size on disk" differently inside and outside of the pool, which causes DrivePool to think the drive is bigger than it is. From what I can tell: when files are outside of the pool, it uses the "Size on disk" (i.e. the "Used space" reported by Windows) + "Free space" to calculate the size of the drive. When files are in the pool, it uses the "Size", not "Size on disk", as the used space + "Free space" to calculate the size of the drive. This causes a discrepancy in the size it believes the drive to be, and can also trigger file moves before they are required (e.g. moving files when it calculates that 100GB needs to be moved). It seems like it's always done this; it just caught my attention now because files were being moved when I knew it wasn't needed. See the examples below: 1: Folder outside of the pool. Notice the size, size on disk and the DrivePool measured size. 2: Now I've moved that into the pool; notice what DrivePool (top right) sees as the size (Size + Free space).
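    A rough sketch of the difference being described; the folder path and the 4KB cluster size are assumptions, and NTFS compression or deduplication would make the real on-disk figure smaller still:
        # Compare logical size ("Size") with a cluster-rounded estimate of "Size on disk".
        $folder      = 'D:\SomeFolder'   # placeholder path
        $clusterSize = 4KB               # assumed NTFS cluster size
        $files = Get-ChildItem $folder -Recurse -File
        $size       = ($files | Measure-Object Length -Sum).Sum
        $sizeOnDisk = ($files | ForEach-Object { [math]::Ceiling($_.Length / $clusterSize) * $clusterSize } |
                       Measure-Object -Sum).Sum
        "{0:N0} bytes logical vs ~{1:N0} bytes on disk" -f $size, $sizeOnDisk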
    1 point
  22. Rob Platt

    Reparse.covefs.* files

    So far so good. Looks like my issues boiled down to reparse files and/or something wrong with DropBox (which I've since found is no longer really supported in DrivePool). Thanks for the help @Shane
    1 point
  23. Hi! I had the same problem (currently evaluating DrivePool version 2.2.5.1237). After watching drive activity with filespy it seems like some interaction between DrivePool and the WMI Provider Host (WmiPrvSE.exe) is keeping the drives awake. To circumvent this, I changed the config file (C:\ProgramData\StableBit DrivePool\Service\Settings.json) to disable BitLocker support, like this: "BitLocker_PoolPartUnlockDetect": { "Default": true, "Override": false }, Now my drives sleep the way they should. Ciao! PS: Just noticed that this was quite the necro. It is however one of the top search results when looking for 'drivepool spin down', so I don't feel too bad.
    1 point
  24. Nicely done. Looks like Stablebit Scanner has more sorting options than Hard Disk Sentinel.
    1 point
  25. gtaus

    How old are your drives?

    I try to buy my Amazon renewed drives from the seller GoHardDrive. I have purchased drives from GoHardDrive for years and their customer support has always been first-rate. I feel confident they would back their warranty.
    1 point
  26. Yeah, it's ... manufacturer fun. SMART really isn't a standard; it's more of a guideline that a lot of manufacturers take a LOT of liberty with. NVMe health is a *lot* better in this regard (it's an actual standard).
    1 point
  27. Yup. This is an issue we see from time to time, and it's a known issue with Windows (not our software). Specifically, the issue is that ... sometimes, Windows will not mount a volume if it doesn't have a letter (or path) assigned to it. Fortunately, this is a dead simple fix: https://wiki.covecube.com/StableBit_DrivePool_F3540
    1 point
  28. Fortunately, no. I have 18 USB drives and was very happy when I discovered that DrivePool did not require Drive Letters to work. I have auto update on Windows 10, so I imagine I have the latest updates. Everything is working fine here. Although I do not use Drive Letter assignments, I have all my pool drives named. Since I am so clever, I named them DP01, DP02, DP03, etc... and they sit on a shelf in that order, too. I also labeled the drives with a small tag. Makes it easy for me to pull a drive if it has any problems. I hope you find out what happened to your DrivePool and why you now need Drive Letters assigned. I'll be reading to see what develops. Hope you get a fix soon.
    1 point
  29. Saphir

    DrivePool and SSD combinations

    Thanks for your answers. These are the answers from the customer service. I post them here because others may have the same questions and will find these answers helpful too:
    Question 1: Does the duplication work with having one SSD drive set for cache? Does DrivePool write the files to the cache SSD drive and a duplicate to the HDDs in the archive?
    Answer: Yes, but if you have duplication enabled and don't have two SSDs, then yeah, it's falling back to using one of the HDDs.
    Question 2: How is the reading of these files - does DrivePool read the files on the cache SSD drive first if they are still there, or will the duplicate on the HDD slow it down?
    Answer: If the files are duplicated, then it has a number of checks to optimize the reads, if read striping is enabled. You can read about this here: https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Performance Options#Read Striping
    Question 3: Would it be better to have two SSDs enabled in the SSD Optimizer as SSDs, so that files which are still in the cache have their duplicates in the cache too? Would this work?
    Answer: Yes. Ideally.
    Question 4: If I use two SSDs in the cache, how do I set them up? Together in a subpool? Or simply add them to the main pool and set them both as SSD?
    Answer: Just add both to the pool, and enable them as "SSD"s in the SSD Optimizer balancer.
    Question 5: Would it maybe be a better solution to have one SSD in the pool and not enable it as an SSD drive in the SSD Optimizer, using it as an archive drive with file placement rules for my most used files?
    Answer: Honestly, either works, but it depends on what you want in your setup.
    Question 6: If I try it like in 5, how does it work with duplication - would DrivePool read the files from the fast SSD or the slower HDD? Maybe the slow HDD makes the SSD slow too?
    Answer: It should generally use the SSD, unless it's busy, basically.
    The nice thing is DrivePool is working how I thought it should be working :-) I have now ordered another 1TB SSD and will set both of my 1TB SSDs in the optimizer as SSDs. With my enabled duplication that should work best - like the answer to question 3 says, this is ideal. I will double my RAM to 16GB too and keep my old CPU and mainboard.
    1 point
  30. I don't think the SSD Optimizer works that way. When it hits the trigger point and flushes data, it appears to flush all the data in the cache. There is nothing left in the cache after the flush. I think there are some SSDs with learning software that will keep your most used programs/data on the SSD. Those are typically system SSDs. The DrivePool SSD Optimizer is not designed for keeping your most used files on the SSD. However, DrivePool can use your SSD as both a pool drive and a system drive. You could simply install the programs and/or put those files you use most often directly on the SSD and not as part of DrivePool. For example, my DrivePool SSD is drive Z:. I can install, store, or locate any files I want on my Z: drive as normal, or I can also use that SSD in DrivePool which adds the hidden DrivePool PoolPart directory on my Z: drive. One of the big advantages to DrivePool over other systems such as RAID or Windows Storage Spaces is that you are able to add and use an existing drive to DrivePool and, at the same time, also use that drive normally on your system. My older RAID and Storage Spaces took full control over my drives and those pool drives could not be used for anything else.
    1 point
  31. gtaus

    SSD Optimizer problem

    Welcome to the forum. As to stealing this topic.... on the top of this thread it states that the issue was solved by @Christopher (Drashna) way back in 2016. Given that DrivePool has been updated many times since then, if you have a similar issue as this old thread, then it might be better to start a new thread with your concerns. There was just another recent thread on the SSD Optimizer. You might also find that discussion helpful. When I was first setting up my SSD cache, I noticed that sometimes it would not flush properly until I rebooted the computer. After making changes to the SSD cache settings, it appeared to hang on the flushing. Through trial and error, I discovered everything with the SSD cache flushing to archive drives worked as expected after a reboot. I have not changed my SSD cache settings for some time now, and it flushes as expected without any problems. If rebooting your computer after you change your SSD cache settings does not work for you, I'd suggest starting a new thread because this thread is already marked as solved. I think you would get more responses that way. Take care.
    1 point
  32. Tiru

    Removing a nested pool

    For anybody who comes across this thread in the future with a similar issue: I wasn't able to ultimately figure out how this occurred but I decided to remove the E:\ pool from the D:\ pool using the default removal settings and that removed the extra drive letter. A SnapRAID scan of all the constituent drives that comprise my E: pool didn't find any unexpected changes, so it appears to have been a safe operation given limited testing.
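    If you want to double-check before and after an operation like that, SnapRAID's read-only commands are handy; this sketch assumes snapraid.exe is on the PATH and already has a working config:
        snapraid diff      # list files added, removed or moved since the last sync
        snapraid status    # overall array health and scrub age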
    1 point
  33. gtaus

    DrivePool and SSD combinations

    DrivePool has a feature called Read Striping, which you can turn on. If the folder is duplicated, DrivePool will read from both drives, and that can speed up the read task. If you expand the performance tab, you can see both drives that DrivePool is reading from and the speed at which it is reading. You can almost double the read speed in some transfers. What I see on my system is that DrivePool will read from both my SSD and an archive HDD at the same time, but since the SSD is just so much faster, most of the read data comes from the SSD. I don't know if DrivePool is specifically programmed to recognize the faster drive and use that pipeline first, but in effect that is what I see happening on my system. My read speed with Read Striping turned on is, in those cases, almost exactly the same as the read speed of my SSD. FYI, I only have 1 SSD, so I do not know if DrivePool would read-stripe both of your SSDs and almost double your read speed from the SSDs. Also, in my case, the bottleneck on my transfers is not DrivePool's read speed; it is the speed at which I can transfer the data over my home network ethernet, the wifi, or maybe to a destination USB HDD. In most cases, transfers on my system are slowed down by lots of things, but DrivePool is not one of them.
    1 point
  34. bejhan

    S.M.A.R.T. - 78.4%

    After a bit of research, I've discovered that even though the maximum value of an attribute is 255, manufacturers choose a maximum value that suits them (usually a nice round number). In WD's case, they have chosen 200 (200 / 255 = 78.4%). I have an HGST drive that seems to be using 100 instead which results in 39.2%.
    1 point
  35. Well, it may be worth testing on the latest beta versions, as there are some changes that may affect this, due to driver changes. If you're already on the beta, or the beta doesn't help, please open a ticket at https://stablebit.com/Contact so we can see if we can help fix this issue.
    1 point
  36. I'd take a wild stab and say probably a negligible difference, so long as Windows is idle and behaving; checking task manager, my fileserver's boot SSD is currently averaging ~1% active time, ~450 KB/s transfer rate, ~1ms response time and <0.01 queue length over 60 seconds at idle, and it's a rather old SATA SSD. I think you're much more likely to run into other bottlenecks first. But YMMV, as it depends on how many simultaneous streams you are planning for, what other background programs you have running, etc. You could test it? Open Task Manager / Resource Monitor, set it to show the drive's active time, transfer rate and queue length, then open some streams while you run an antivirus scan or create a restore point and so on.
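    One way to capture the same numbers from PowerShell over a 60-second window (a sketch; the _Total instance aggregates all physical disks - swap in a specific instance from Get-Counter -ListSet PhysicalDisk if you want just one drive):
        Get-Counter -Counter '\PhysicalDisk(_Total)\% Disk Time',
                             '\PhysicalDisk(_Total)\Disk Bytes/sec',
                             '\PhysicalDisk(_Total)\Avg. Disk Queue Length' -SampleInterval 1 -MaxSamples 60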
    1 point
  37. Another option to consider may be a product like PrimoCache, which can use both your system RAM as a level 1 cache and your SSD as a level 2 cache for all your computer's reads/writes - so, not just limited to DrivePool. DrivePool will use the SSD cache for writes only; it does not use it for caching reads. PrimoCache is try-before-you-buy software, and I think they offer a 30-day trial period. It worked well for me, but I chose to just use the DrivePool SSD plugin, which is all I needed for my DrivePool home media server. If you decide to go with something like PrimoCache, then you would not have to partition your SSD for multiple pools in DrivePool. In theory, you would have access to the entire SSD for every drive in your system and nothing wasted sitting in a partition that is not being used. Additionally, it is possible to overrun your DrivePool SSD cache if your data transfer is larger than the available space on the SSD. That has never happened to me, but evidently it is possible in DrivePool. With PrimoCache, on the other hand, it is more like a true buffer and you would not be able to overfill it. If you ever filled the PrimoCache buffer, the data would slow down/stop until more buffer space was available.
    1 point
  38. This is a topic that comes up from time to time. Yes, it is possible to display the SMART data from the underlying drives in Storage Spaces. However, displaying those drives in a meaningful way in the UI, and maintaining the surface and file system scans at the same time is NOT simple. At best, it will require a drastic change, if not outright rewrite of the UI. And that's not a small undertaking. So, can we? Yes. But do we have the resources to do so? not as much (we are a very small company)
    1 point
  39. Do we have an update on Mega.nz? I see under the list it's still being worked on, but it has been like this for a few years. Mega.nz offers unlimited storage and does have an API, so it shouldn't be too hard to set up. I know there are mount programs now, like NetDrive and RaiDrive, that have support for it. It would be nice to have this as an option. https://stablebit.com/Admin/IssueAnalysis/15659
    1 point
  40. OK. So, there is a lot here, so let's unpack this one step at a time. I'm reading some fundamental confusion here, so I want to make sure to clear it up before you take any additional steps forward. Starting here, which is very important: It's critical that you understand the distinction in methodology between something like Netdrive and CloudDrive, as a solution. Netdrive and rClone and their cousins are file-based solutions that effectively operate as frontends for Google's Drive API. They upload local files to Drive as files on Drive, and those files are then accessible from your Drive--whether online via the web interface, or via the tools themselves. That means that if you use Netdrive to upload a 100MB File1.bin, you'll have a 100MB file called File1.bin on your Google drive that is identical to the one you uploaded. Some solutions, like rClone, may upload the file with an obfuscated file name like Xf7f3g.bin, and even apply encryption to the file as it's being uploaded, and decryption when it is retrieved. But they are still uploading the entire file, as a file, using Google's API. If you understand all of that, then understand that CloudDrive does not operate the same way. CloudDrive is not a frontend for Google's API. CloudDrive creates a drive image, breaks that up into hundreds, thousands, or even millions of chunks, and then uses Google's infrastructure and API to upload those chunks to your cloud storage. This means that if you use CloudDrive to store a 100MB file called File1.bin, you'll actually have some number of chunks (depending on your configured chunk size) called something like XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-1, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-2, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-3, etc, as well as a bunch of metadata that CloudDrive uses to access and modify the data on your drive. Note that these "files" do not correspond to the file size, type, or name that you uploaded, and cannot be accessed outside of CloudDrive in any meaningful way. CloudDrive actually stores your content as blocks, just like a physical hard drive, and then stores chunks of those blocks on your cloud storage. Though it can accomplish similar ends to Netdrive, rClone, or any related piece of software, its actual method of doing so is very different in important ways for you to understand. So what, exactly, does this mean for you? It means, for starters, that you cannot simply use CloudDrive to access information that is already located in your cloud storage. CloudDrive only accesses information that has been converted to the format that it uses to store data, and CloudDrive's format cannot be accessed by other applications (or Google themselves). Any data that you'd like to migrate from your existing cloud storage to a CloudDrive volume must be downloaded and moved to the CloudDrive volume just as it would need to be if you were to migrate the data to a new physical drive on your machine--for the same reasons. It may be helpful to think of CloudDrive as a virtual machine drive image. It's the same general concept. Just as you would have to copy data within the virtual machine in order to move data to the VM image, you'll have to copy data within CloudDrive to move it to your CloudDrive volume. There are both benefits and drawbacks to using this approach: Benefits CloudDrive is, in my experience, faster than rClone and its cousins. 
Particularly around the area of jumping to granular data locations, as you would for, say, jumping to a specific location in a media file. CloudDrive stores an actual file system in the cloud, and that file system can be repaired and maintained just like one located on a physical drive. Tools like chkdsk and windows' own in-built indexing systems function on a CloudDrive volume just as they will on your local drive volumes. In your case this means that Plex's library scans will take seconds, and will not lock you out of Google's API limitations. CloudDrive's block-based storage means that it can modify portions of files in-place, without downloading the entire file and reuploading it. CloudDrive's cache is vastly more intelligent than those implemented by file-based solutions, and is capable of, for example, storing the most frequently accessed chunks of data, such as those containing the metadata information in media files, rather than whole media files. This, like the above, also translates to faster access times and searches. CloudDrive's block-based solution allows for a level of encryption and data security that other solutions simply cannot match. Data is completely AES encrypted before it is ever even written to the cache, and not even Covecube themselves can access the data without your key. Neither your cloud provider, nor unauthorized users and administrators for your organization, can access your data without consent. Drawbacks (read carefully) CloudDrive's use of an actual file system also introduces vulnerabilities that file-based solutions do not have. If the file system data itself becomes corrupted on your storage, it can affect your ability to access the entire drive--in the same way that a corrupted file system can cause data loss on a physical drive as well. The most common sorts of corruption can be repaired with tools like chkdsk, but there have been incidents caused by Google's infrastructure that have caused massive data loss for CloudDrive users in the past--and there may be more in the future, though CloudDrive has implemented redundancies and checks to prevent them going forward. Note that tools like testdisk and recuva can be used on a CloudDrive volume just as they can on a physical volume in order to recover corrupt data, but this process is very tedious and generally only worth using for genuinely critical and irreplaceable data. I don't personally consider media files to be critical or irreplaceable, but each user must consider their own risk tolerance. A CloudDrive volume is not accessible without CloudDrive. Your data will be locked into this ecosystem if you convert to CloudDrive as a solution. Your data will also only be accessible from one machine at a time. CloudDrive's caching system means that corruption can occur if multiple machines could access your data at once, and, as such, it will not permit the volume to be mounted by multiple instances simultaneously. And, as mentioned, all data must be uploaded within the CloudDrive infrastructure to be used with CloudDrive. Your existing data will not work. So, having said all of that, before I move on to helping you with your other questions, let me know that you're still interested in moving forward with this process. I can help you with the other questions, but I'm not sure that you were on the right page with the project you were signing up for here. 
rClone and NetDrive both also make fine solutions for media storage, but they're actually very different beasts than CloudDrive, and it's really important to understand the distinction. Many people are not interested in the additional limitations.
    1 point
  41. OK, the solution is that you need to manually create the virtual disk in PowerShell after making the pool: 1) Create a storage pool in the GUI, but hit cancel when it asks to create a storage space. 2) Rename the pool to something that identifies this RAID set. 3) Run the following command in PowerShell (run as administrator), editing as needed: New-VirtualDisk -FriendlyName VirtualDriveName -StoragePoolFriendlyName NameOfPoolToUse -NumberOfColumns 2 -ResiliencySettingName simple -UseMaximumSize
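    A possible follow-up, assuming the command above succeeds (the friendly name and the GPT/NTFS choices here are just examples): the new virtual disk still has to be initialized, partitioned and formatted before it gets a drive letter, e.g.:
        Get-VirtualDisk -FriendlyName VirtualDriveName |
            Get-Disk |
            Initialize-Disk -PartitionStyle GPT -PassThru |
            New-Partition -AssignDriveLetter -UseMaximumSize |
            Format-Volume -FileSystem NTFS -NewFileSystemLabel VirtualDriveName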
    1 point
  42. Alex

    Cloud Providers

    As I was writing the various providers for StableBit CloudDrive, I got a sense for how well each one performs / scales and the various quirks of some cloud services. I'm going to use this thread to describe my observations. As StableBit CloudDrive grows, I'm sure that my understanding of the various providers will improve as well. Also remember, this is from a developer's point of view.
    Google Cloud Storage - http://cloud.google.com/storage
    Pros: Reasonably fast. Simple API. Reliable. Scales very well.
    Cons: Not the fastest provider. Especially slow when deleting existing chunks (e.g. when destroying a drive). Difficult to use and bloated SDK (a development issue really).
    This was the first cloud service that I wrote a StableBit CloudDrive provider for, and initially I started writing the code against their SDK, which I later realized was a mistake. I replaced the SDK entirely with my own API, so that improved the reliability of this provider and solved a lot of the issues that the SDK was causing. Another noteworthy thing about this provider is that it's not as fast as some of the other providers (Amazon S3 / Microsoft Azure Storage).
    Amazon S3 - http://aws.amazon.com/s3/
    Pros: Very fast. Reliable. Scales very well. Beautiful, compact and functional SDK.
    Cons: Configuration is a bit confusing.
    Here the SDK is uniquely good: it's a single DLL and super simple to use. Most importantly, it's reliable. It handles multi-threading correctly and its error handling logic is straightforward. It is one of the few SDKs that StableBit CloudDrive uses out of the box. All of the other providers (except Microsoft Azure Storage) utilize custom written SDKs. This is a great place to store your mission critical data; I back up all of my code to this provider.
    Microsoft Azure Storage - http://azure.microsoft.com/en-us/services/storage/
    Pros: Very fast. Reliable. Scales very well. Easy to configure.
    Cons: No reasonably priced support option that makes sense.
    This is a great cloud service. It's definitely on par with Amazon S3 in terms of speed and seems to be very reliable from my testing. Having used Microsoft Azure services for almost all of our web sites and the database back-end, I can tell you that there is one major issue with Microsoft Azure: there is no one that you can contact when something goes wrong (and things seem to go wrong quite often) without paying a huge sum of money. For example, take a look at their support prices: http://azure.microsoft.com/en-us/support/plans/ If you want someone from Microsoft to take a look at an issue that you're having within 2 hours, that will cost you $300 / month. Other than that, it's a great service to use.
    OneDrive for Business - https://onedrive.live.com/about/en-US/business/
    Pros: Reasonable throttling limits in place.
    Cons: Slow. API is lacking, leading to reliability issues. Does not scale well, so you are limited in the amount of data that you can store before everything grinds to a halt. Especially slow when deleting existing chunks (e.g. when destroying a drive).
    This service is actually a rebranded version of Microsoft SharePoint hosted in the cloud for you. It has absolutely nothing to do with the "regular" OneDrive other than the naming similarity. This service does not scale well at all, and this is really a huge issue. The more data that you upload to this service, the slower it gets. After uploading about 200 GB, it really starts to slow down. It seems to be sensitive to the number of files that you have, and for that reason StableBit CloudDrive sets the chunk size to 1MB by default, in order to minimize the number of files that it creates. By default, Microsoft SharePoint expects each folder to contain no more than 5000 files, or else certain features simply stop working (including deleting said files). This is by design, and here's a page that explains in detail why this limit is there and how to work around it: https://support.office.com/en-us/article/Manage-lists-and-libraries-with-many-items-11ecc804-2284-4978-8273-4842471fafb7 If you're going to use this provider to store large amounts of data, then I recommend following the instructions on the page linked above. Although, for me, it didn't really help much at all. I've worked hard to try and resolve this by utilizing a nested directory structure in order to limit the number of files in each directory, but nothing seems to make any difference. If there are any SharePoint experts out there that can figure out what we can do to speed this provider up, please let me know.
    OneDrive - https://onedrive.live.com/ (Experimental)
    Pros: Clean API.
    Cons: Heavily and unreasonably throttled.
    From afar, OneDrive looks like the perfect storage provider. It's very fast, reliable, easy to use and has an inexpensive unlimited storage option. But after you upload / download some data you start hitting the throttling limits. The throttling limits are excessive and unreasonable, so much so that using this provider with StableBit CloudDrive is dangerous. For this reason, the OneDrive provider is currently disabled in StableBit CloudDrive by default. What makes the throttling limits unreasonable is the amount of time that OneDrive expects you to wait before making another request. In my experience that can be as high as 20 minutes to 1 hour. Can you imagine, when trying to open a document in Microsoft Windows, hitting an error that reads "I see that you've opened too many documents today, please come back in 1 hour"? Not only is this unreasonable, it's also technically infeasible to implement this kind of a delay on a real-time disk.
    Box - https://www.box.com/
    At this point I haven't used this provider for an extended period of time to render an opinion on how it behaves with large amounts of data. One thing that I can say is that the API is a bit quirky in how it's designed, necessitating some extra HTTP traffic that other providers don't require.
    Dropbox - http://www.dropbox.com/
    Again, I haven't used this provider much, so I can't speak to how well it scales or how well it performs. The API here is very robust and very easy to use. One very cool feature that they have is an "App Folder". When you authorize StableBit CloudDrive to use your Dropbox account, Dropbox creates an isolated container for StableBit CloudDrive and all of the data is stored there. This is nice because you don't see the StableBit CloudDrive data in your regular Dropbox folder, and StableBit CloudDrive has no way to access any other data that's in your Dropbox or any data in some other app folder.
    Amazon Cloud Drive - https://www.amazon.com/clouddrive/home
    Pros: Fast. Scales well. Unlimited storage option. Reasonable throttling limits.
    Cons: Data integrity issues.
    I know how important it is for StableBit CloudDrive to support this service, and so I've spent many hours and days trying to make a reliable provider that works. This single provider delayed the initial public BETA of StableBit CloudDrive by at least 2 weeks. The initial issue that I had with Amazon Cloud Drive is that it returns various errors as a response to I/O requests. These errors range from 500 Internal Server Error to 400 Bad Request. Reissuing the same request seems to work, so there doesn't appear to be a problem with the actual request, but rather with the server. I later discovered a more serious issue with this service: apparently, after uploading a file, sometimes (very rarely) that file cannot be downloaded, which means that the file's data gets permanently lost (as far as I can tell). This is very rare and hard to reproduce. My test case scenario needs to run for one whole day before it can reproduce the problem. I finally solved this issue by forcing Upload Verification to be enabled in StableBit CloudDrive. When this issue occurs, upload verification will detect this scenario, delete the corrupt file and retry the upload. That apparently fixed this particular issue. The next thing that I discovered with this service (after I released the public BETA) is that some 400 Bad Request errors spawn at a later time, long after the initial upload / verification step is complete. After extensive debugging, I was able to confirm this with the Amazon Cloud Drive web interface as well, so this is not a provider code issue; rather, the problem actually occurs on the server. If a file gets into this state, a 400 Bad Request error is issued, and if you examine that request, the error message in the response says 404 Not Found. Apparently, the file metadata is there, but the file's contents are gone. The short story is that this service has data integrity issues that are not limited to StableBit CloudDrive in particular, and I'm trying to identify exactly what they are, how they are triggered, and apply possible workarounds. I've already applied another possible workaround in the latest internal BETA (1.0.0.284), but I'm still testing whether the fix is effective. I am considering disabling this provider in future builds and moving it into the experimental category.
    Local Disk / File Share
    These providers don't use the cloud, so there's really nothing to say here.
    1 point
  43. It says Gsuite in the title. I'm assuming that means Google Drive. Correct me if all of the following is wrong, Middge. Hey Middge, I use a CloudDrive for Plex with Google Drive myself. I can frequently do 5-6 remux quality streams at once. I haven't noticed a drop in capacity aside from Google's relatively new upload limits. Yes. But this is going to require that you remake the drive. My server also has a fast pipe, and I've also raised the minimum download to 20MB as well. I really haven't noticed any slowdown in responsiveness because the connection is so fast, and it keeps the overall throughput up. This is fine. You can play with it a bit. Some people like higher numbers like 5 or 10 MB triggers, but I've tried those and I keep going back to 1 MB as well, because I've just found it to be the most consistent, performance-wise, and I really want it to grab a chunk of *all* streaming media immediately. This is way too low, for a lot of media. I would raise this to somewhere between 150-300MB. Think of the prefetch as a rolling buffer. It will continue to fill the prefetch as the data in the prefetch is used. The higher the number, then, the more tolerant your stream will be to periodic network hiccups. The only real danger is that if you make it too large (and I'm talking like more than 500MB here) it will basically always be prefetching, and you'll congest your connection if you hit like 4 or 5 streams. I would drop this to 30, no matter whether you want to go with a 1MB, 5MB, or 10MB trigger. 240 seconds almost makes the trigger amount pointless anyway--you're going to hit all of those benchmarks in 4 minutes if you're streaming most modern media files. A 30 second window should be fine. WAAAAY too many. You're almost certainly throttling yourself with this. Particularly with the smaller than maximum chunk size, since it already has to make more requests than if you were using 20MB chunks. I use three clouddrives in a pool (a legacy arrangement from before I understood things better, don't do it. Just expand a single clouddrive with additional volumes), and I keep them all at 5 upload and 5 download threads. Even if I had a single drive, I'd probably not exceed 5 upload, 10 download. 20 and 20 is *way* too high and entirely unnecessary with a 1gbps connection. These are all fine. If you can afford a larger cache, bigger is *always* better. But it isn't necessary. The server I use only has 3x 140GB SSDs, so my caches are even smaller than yours and still function great. The fast connection goes a long way toward making up for a small cache size...but if I could have a 500GB cache I definitely would.
    1 point
  44. First, thank you for your interest in our product(s)! The default file placement strategy is to place files on the drive(s) with the most available free space (measured absolutely, rather than as a percentage). This happens regardless of the balancing status. In fact, it's the balancers themselves that can (will) change the placement strategy of new files. For what you want, that isn't ideal.... and before I get to the solution: the first issue here is that there is a misconception about how the balancing engine works (or more specifically, how frequent or aggressive it is). For the most part, the balancing engine DOES NOT move files around. For a new, empty pool, balancing will rarely, if ever, move files around, partly because it will proactively control where files are placed in the first place. That said, each balancer does have exceptions here. But just so you understand how and why each balancer works and when it would actually move files, let me enumerate each one and give a brief description:
    StableBit Scanner (the balancer). This balancer only works if you have StableBit Scanner installed on the same system. By default, it is only configured to move contents off of a disk if "damaged sectors" (aka unreadable sectors) are detected during the surface scan. This is done in an attempt to prevent data loss from file corruption. Optionally, you can do this for SMART warnings as well, and to avoid usage if the drive has "overheated". If you're using SnapRAID, then it may be worth turning this balancer off, as it isn't really needed.
    Volume Equalization. This only affects drives that are using multiple volumes/partitions on the same physical disk. It will equalize the usage and help prevent duplicates from residing on the same physical disk. Chances are that this balancer will never do anything on your system.
    Drive Usage Limiter. This balancer controls what type of data (duplicated or unduplicated) can reside on a disk. For the most part, most people won't need this. We recommend using it for drive removal or "special configurations" (e.g. my gaming system uses it to store only duplicated data, aka games, on the SSD, and all unduplicated data on the hard drive). Unless configured manually, this balancer will not move data around.
    Prevent Drive Overfill. This balancer specifically will move files around, and will do so only if the drive is 90+% full by default. This can be configured, based on your needs. However, it will only move files out of the drive until the drive is 85% filled. This is one of the balancers that is likely to move data, but that will only happen on a very full pool. It can be disabled, but that may lead to situations where the drives are too full.
    Duplication Space Optimizer. This balancer's sole job is to rebalance the data in such a way that removes the "Unusable for duplication" space on the pool. If you're not using duplication at all, you can absolutely disable this balancer.
    So, for the most part, there is no real reason to disable balancing. Yes, I understand that it can cause issues for SnapRAID. But depending on the system, it is very unlikely to, and the benefits you gain by disabling it may be outweighed by what the balancers do. Especially because of the balancer plugins: specifically, you may want to look at the "Ordered File Placement" balancer plugin. This specifically fills up one drive at a time. Once the pool fills up the disk to the preset threshold, it will move on to the next disk.
    This may help keep the contents of specific folders together, meaning that it may help keep the SRT file in the same folder as the AVI file - or at least do better at it than the default placement strategy. This won't guarantee the folder placement, but it significantly increases the odds. That said, you can use file placement rules to help with this, either to micromanage placement, or ... you can set up an SSD dedicated for metadata like this, so that all of the SRT and other files end up on the SSD. That way, the access is fast and the power consumption is minimal.
    1 point
  45. Yes, you can do that, too. However, I generally prefer and recommend mounting to a folder, for ease of access. It's much easier to run "chkdsk c:\drives\pool1\disk5" than "chkdsk \\?\Volume{GUID}"... and easier to identify. Yes. The actual pool is handled by a kernel-mode driver, meaning that the activity is passed on directly to the disks, basically. Meaning you don't see it listed in Task Manager like a normal program.
    1 point
  46. StableBit DrivePool doesn't care about drive letters. It uses the Volume ID (which Windows mounts to a drive letter or folder path). So you can remove the drive letters, if you want, or mount to a folder path. http://wiki.covecube.com/StableBit_DrivePool_Q6811286
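    For anyone who prefers doing this from PowerShell instead of Disk Management, a sketch; the disk/partition numbers, the E: letter and the mount folder are all placeholders, so check Get-Partition first:
        New-Item -ItemType Directory -Path C:\Mounts\Bay1 -Force | Out-Null
        Remove-PartitionAccessPath -DiskNumber 3 -PartitionNumber 2 -AccessPath 'E:'              # drop the drive letter
        Add-PartitionAccessPath    -DiskNumber 3 -PartitionNumber 2 -AccessPath 'C:\Mounts\Bay1'  # mount to a folder instead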
    1 point
  47. Thanks. Have mounted them and removed the drive letters, and DrivePool picked up the changes.
    1 point
  48. Hi, the individual drive letters are not needed. Most people give the drives a name like Bay 1, Bay 2, etc., purely to help identify the drives, then remove the letters under Disk Management.
    1 point
  49. I've got a few different setups (NASes, Storage Pools, a conventional JBOD server and a couple of DrivePools). My largest DrivePool is currently configured at 88.2TB usable, and I've got a couple of 6TB drives not in that figure used for parity (SnapRAID). This pool is strictly for media, mostly used by Plex. I've got 4 Windows 2012 R2 servers and two are currently dedicated to multimedia. 2 are more generic servers and hold a lot of VHDX images using Windows de-duplication. Then I've got a few smaller NAS boxes used to hold typical family stuff. But getting back to DrivePool: I'll be increasing the storage space of the 88.2TB pool in about a month (guessing) when I add the next 8 to 15 bay enclosure for more movies and especially TV shows. Currently stored on that pool: 160 - 3D Movies; 6,200 - Movies; 1,150 - Educational Video; 18,700 - Music Videos; 850 - NFL Games; 10,100 - TV Episodes (132 Shows, 613 Seasons); Music: 4,400 Artists, 12,900 Albums, 105,000 Tracks. Carlo
    1 point
  50. Well, Alex has covered the development side of this, I've been looking into the pricing of the different providers. While "unlimited" is clearly the best option for many here, I want focus on the "big" providers (Amazon S3, Azure Storage, and Google Cloud Storage). Unfortunately, for many users, the high cost associated with these providers may immediately put them out of range. But we still think it is a good idea to compare the pricing (at least for reference sake). All three of these providers include a storage pricing (how much data you're storing), "Request" pricing (how many API requests you make), and data transfer pricing (how much data you've transferred to and from. And all prices are listed for the US Standard region, ATM. I've tried to reorder lists, so that each provider is shown using the same layout for their different tiers. Amazon S3 Storage Pricing (Amount stored) Reduced Redundancy Standard Glacier Up to 1TB / Month $0.0240 per GB $0.0300 per GB $0.0100 per GB Up to 50TB / Month $0.0236 per GB $0.0295 per GB $0.0100 per GB Up to 500TB / Month $0.0232 per GB $0.0290 per GB $0.0100 per GB Up to 1PB / Month $0.0228 per GB $0.0285 per GB $0.0100 per GB Up to 4PB / Month $0.0224 per GB $0.0280 per GB $0.0100 per GB Over 5PB / Month $0.0220 per GB $0.0275 per GB $0.0100 per GB Specifically, Amazon lists "for the next" pricing, so the pricing may be cumulative. Also, "reduced Redundancy" means that they're using mostly only local servers to you, and not redundant throughout various regions. And this is ~$25 per TB per month of storage for the Reduced Redundancy, about $30 per TB per month for Standard and $10.24 per TB per month for Glacier. This may seem like a deal, but lets look at the data transfer pricing. Transfer Pricing Data Transfer In to S3 (upload) $0.000 per GB Data Transfer OUT to the internet (download) First 1GB / Month $0.000 per GB Up to 10TB / Month $0.090 per GB "Next" 40TB / Month (50TB) $0.085 per GB "Next" 100TB / Month (150TB) $0.070 per GB "Next" 350TB / Month (500TB) $0.050 per GB "Next" 524TB / Month (1024TB) Contact Amazon S3 for special consideration That's $92 per TB per month, up to 10TBs Chances are, that unless you have a very good speed, that's where you're going to be "stuck" at So, that boils down to $115/month to store and access 1TB per month. Your usage may vary, but this may get very expensive, very quickly (fortunately, upload is free, so getting the storage there isn't that expensive, it's getting it back that will be). Additionally, Amazon S3 charges you per transaction (API call), as well. Request Pricing (API Calls) PUT, COPY, POST, LIST Requests $0.005 per 1000 requests Glacier Archive and Restore Requests $0.050 per 1000 requests DELETE Requests Free (caveat for Glacier) GET and other requests $0.004 per 10,000 requests Glacier Data Restores Free (due to infrequent usage expected, can restore up to 5% monthly for free) Needless to say, that every time you list contents, you may be making multiple requests (we minimize this as much as possible with the caching/prefetching options, but that only limits it to a degree). This one is hard to quantify without actual usage. 
Microsoft Azure Storage

Storage Pricing (Amount stored) for Block Blobs
                                 LRS              ZRS              GRS              RA-GRS
First 1TB / Month                $0.0240 per GB   $0.0300 per GB   $0.0480 per GB   $0.0610 per GB
"Next" 49TB / Month (50TB)       $0.0236 per GB   $0.0295 per GB   $0.0472 per GB   $0.0599 per GB
"Next" 450TB / Month (500TB)     $0.0232 per GB   $0.0290 per GB   $0.0464 per GB   $0.0589 per GB
"Next" 500TB / Month (1000TB)    $0.0228 per GB   $0.0285 per GB   $0.0456 per GB   $0.0579 per GB
"Next" 4000TB / Month (5000TB)   $0.0224 per GB   $0.0280 per GB   $0.0448 per GB   $0.0569 per GB
Over 5000TB / Month              Contact Microsoft Azure for special consideration

The LRS and ZRS tiers are priced identically to Amazon S3 here. However, let's explain these terms:
LRS: Multiple copies of the data on different physical servers, at the same datacenter (one location).
ZRS: Three copies at different datacenters within a region, or in different regions. For block blobs only.
GRS: Same as LRS, but with additional (asynchronous) copies at another datacenter.
RA-GRS: Same as GRS, but with read access to the secondary datacenter.

That works out to ~$25 per TB per month of storage for LRS, about $30 per TB per month for ZRS, about $50 per TB per month for GRS, and about $60 per TB per month for RA-GRS. Microsoft Azure offers other storage types, but they get much more expensive very quickly (double what's listed for Blob storage, or higher).

Transfer Pricing
Unfortunately, Microsoft isn't as forthcoming about its transfer rates. They're tucked away on another page, so they're harder to find. However, they are:
Data Transfer IN (upload)                        $0.000 per GB
Data Transfer OUT to the internet (download):
  First 5GB / Month                              $0.000 per GB
  Up to 10TB / Month                             $0.087 per GB
  "Next" 40TB / Month (50TB)                     $0.083 per GB
  "Next" 100TB / Month (150TB)                   $0.070 per GB
  "Next" 350TB / Month (500TB)                   $0.050 per GB
  "Next" 524TB / Month (1024TB)                  Contact Microsoft Azure for special consideration

That's $89 per TB per month transferred, up to 10TB. Chances are, unless you have a very fast connection, that's the tier you're going to be "stuck" in. This is slightly cheaper than Amazon S3, but not by a whole lot, and it heavily depends on the level of redundancy and storage type you use.

Request Pricing (API Calls)
Any Request             $0.0036 per 10,000 requests
Import/Export (HDDs)    $80 per drive (may not be suitable for CloudDrive)

This is definitely much cheaper than Amazon S3's request pricing. It's still going to run you around $100 per TB per month to store and transfer, but it's a bit better than Amazon S3. And that's not counting the request/transaction pricing.
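As a side note on how these "for the next N TB" brackets accumulate, here's a rough sketch of a tiered egress calculation using the Azure outbound-transfer rates quoted above. The bracket sizes and rates are taken from this post, and the exact tier boundaries are my assumption, so treat it as an illustration rather than an official calculator.

```python
# A minimal sketch of tiered ("for the next N TB") egress pricing, using the
# Azure outbound transfer brackets quoted above. Brackets above 500TB are
# "contact the provider" pricing and are not modeled here.

GB = 1
TB = 1024 * GB

# (bracket size in GB, $ per GB), applied in order
AZURE_EGRESS_TIERS = [
    (5 * GB,   0.000),   # first 5GB free
    (10 * TB,  0.087),   # up to 10TB
    (40 * TB,  0.083),   # "next" 40TB
    (100 * TB, 0.070),   # "next" 100TB
    (350 * TB, 0.050),   # "next" 350TB
]

def egress_cost(gb_downloaded: float) -> float:
    """Walk the brackets, charging each slice of traffic at its own rate."""
    remaining, total = gb_downloaded, 0.0
    for size, rate in AZURE_EGRESS_TIERS:
        slice_gb = min(remaining, size)
        total += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return total

print(f"1TB out:  ${egress_cost(1 * TB):.2f}")    # ~ $88.65, roughly the ~$89/TB quoted above
print(f"50TB out: ${egress_cost(50 * TB):.2f}")
```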
Google Cloud Storage

Storage Pricing (Amount stored)
DRA Storage        Standard           Cloud Storage Nearline
$0.0200 per GB     $0.0260 per GB     $0.0100 per GB

DRA (Durability Reduced Availability) means that the data is not always available. While this is the cheapest option, it will definitely cause latency issues (or worse). Cloud Storage Nearline is cheaper still, with reduced performance and lower availability. However, these are flat rates, so it's very simple to figure out what your cost will be here.

That works out to ~$20.48 per TB per month of storage for DRA Storage, $26.62 per TB per month for Standard, and $10.24 per TB per month for Cloud Storage Nearline. Now let's look at the transfer pricing.

Transfer Pricing
Data Transfer IN to Google (upload)              $0.000 per GB
Data Transfer OUT to the internet (download):
  First 1GB / Month                              $0.120 per GB
  "Next" 10TB / Month                            $0.110 per GB
  Over 40TB / Month (50TB)                       $0.080 per GB

That's about $122 per TB per month transferred, up to 10TB. So, that boils down to roughly $140/month to store and access 1TB per month. This is definitely more expensive than either Amazon S3 or Azure Storage.

Additionally, Google Cloud Storage charges you per API call as well.

Request Pricing (API Calls)
LIST, PUT, COPY, POST Requests    $0.010 per 1,000 requests
GET and other Requests            $0.001 per 1,000 requests
DELETE Requests                   Free

Google is significantly more expensive when it comes to API calls.

Backblaze B2

Storage Pricing (Amount stored)
Flat Storage Rate    $0.005 per GB

The first 10GB is free, but that's a small amount, so we won't even bother computing it (it's a $0.05 difference, specifically). That's basically $5.12 per TB per month for storage.

Transfer Pricing
Data Transfer IN to B2 (upload)                  $0.000 per GB
Data Transfer OUT to the internet (download):
  First 1GB / Month                              $0.000 per GB
  Past 1GB / Month                               $0.050 per GB

That's $51 per TB per month transferred, which is by far the cheapest option here. So, that boils down to roughly $56/month to store and access 1TB per month. Your usage may vary (fortunately, upload is free, so getting the data there doesn't cost anything; it's getting it back that will).

Additionally, Backblaze B2 charges you per transaction (API call) as well.

Request Pricing (API Calls)
DELETE bucket/file version, HIDE, UPLOAD Requests    Free
GET, DOWNLOAD file by ID/Name Requests               $0.004 per 10,000 requests
Authorize, CREATE, GET, LIST, UPDATE Requests        $0.004 per 1,000 requests

The first 2,500 requests each day are free, which is different from the other providers. However, as above, it's hard to predict this cost without actual usage.

Is there a clear winner here? No. It depends on availability requirements, the amount of data and traffic, and how you want to use the provider. In regards to pricing, though, Backblaze is clearly the winner here. But given other issues with Backblaze (e.g., sourcing, reporting statistically insignificant findings, etc.), the question is "Will they be able to maintain their B2 business?" And that is a significant one. Only time will tell.
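Finally, here's a quick sketch putting the "store 1TB and download 1TB in a month" estimates from this post side by side, using each provider's cheapest first-tier rates quoted above. It ignores request/API charges and tier boundaries, so it's just a rough comparison, not a pricing tool.

```python
# Side-by-side of the rough "store 1TB + download 1TB per month" figures
# quoted in this post, using each provider's cheapest first-tier rates.
# Request/API charges are ignored.

GB_PER_TB = 1024

providers = {
    #                      ($/GB stored, $/GB downloaded)
    "Amazon S3 (RRS)":     (0.0240, 0.090),
    "Azure (LRS)":         (0.0240, 0.087),
    "Google Cloud (DRA)":  (0.0200, 0.120),
    "Backblaze B2":        (0.0050, 0.050),
}

for name, (store_rate, egress_rate) in providers.items():
    monthly = (store_rate + egress_rate) * GB_PER_TB
    print(f"{name:20s} ~${monthly:7.2f}/month")
```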
    1 point