
Leaderboard

Popular Content

Showing content with the highest reputation since 07/01/21 in all areas

  1. My backup pool has a 1TB SSD that I use to speed up my backups over a 10Gb/s link. It works great, as I can quickly back up my deltas from the main pool --> backup pool, and in slow time it pushes the files to the large set of spinning rust (100TB). However, when I try to copy a file that is larger than the 1TB SSD, I get a message saying there is not enough available space. Ideally, the SSD Optimiser should just push the file directly to an HDD in such cases (feature request?), but for now, what would be the best way of copying this file into the pool? - Manually copy it directly to one of the HDDs behind DrivePool, then rescan? - Turn off the SSD Optimiser, then copy the file? or, - Some other method? Thanks Nathan
    2 points
  2. Not currently, but I definitely do keep on bringing it up.
    2 points
  3. I mean, Alex is alive and well. But yeah, there are some issues that don't directly involve Covecube. If Alex wants to explain what exactly is going on, he will. But stuff is still happening.
    2 points
  4. https://stablebit.com/DrivePool/Download Once you click the checkbox, there is a "what's new?" link that will take you to here: https://stablebit.com/DrivePool/ChangeLog?Platform=win Or you can see the beta changelog here: https://dl.covecube.com/DrivePoolWindows/beta/download/changes.txt
    1 point
  5. Ac3M

    Replacing drive with same data ?

    I think DrivePool will ignore folders that it hasn't created itself. Never actually tried this, though. I'm not using any backup solution myself, just JBOD bundled together in DrivePool as a single drive, and I've had my fair share of disks that have failed (FUBAR cables, for the most part), and also disks that were completely full (like 0 bytes left), which has forced me to move data. I found that it works to move data outside of the PoolPart folder made by DrivePool, and then between source and destination. But remember: never move data directly from one PoolPart folder to another, since DrivePool tends to pull a David Copperfield on you and make stuff disappear, like magic. I have yet to try recovery software to see if removed data can be restored, come to think of it. Not that it has anything to do with this, but it should work, I think. I find it very simple to work with the PoolPart folder structure: anything within it can just be moved up one level and it's taken out of the pool; then you can merge that data with another folder on another disk or whatever, and then just put it back and, et voilà, it's back in the pool again.
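    For anyone who prefers the command line, here is a minimal PowerShell sketch of the "move up one level" trick described above. The drive letter, PoolPart GUID, and folder names are placeholders (not real paths from the post), and the pool should be remeasured afterwards from the DrivePool UI.

      # Placeholder paths: substitute your own drive letter and PoolPart GUID.
      $poolPart = 'E:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

      # Take a folder out of the pool: move it up one level on the same volume (no data is copied).
      Move-Item -Path "$poolPart\Media\Movies" -Destination 'E:\Movies'

      # Put it back later by reversing the move, then remeasure the pool from the DrivePool UI.
      Move-Item -Path 'E:\Movies' -Destination "$poolPart\Media\Movies"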
    1 point
  6. I'm on version 2.2.2.934 and have "Check for New Versions" selected. Obviously, there have been several new versions, but I don't receive any notifications to update. I've tried "Force update check now" which says if a new version is found, I'll get a notification to update. I never get a notification. StableBit DrivePool Notifications is running and enabled at Startup (per Task Manager). I don't have "Focus Time" enabled or anything I can see that would suppress notifications. Yes, I can manually update the version, but I was wondering if anyone had any ideas on how I could get it back to checking for new versions automatically.
    1 point
  7. minden02

    Drive(s) dropping from pool

    Ah, that's a tough one. If the disks' SMART readings are OK, then my first guess would be the JBOD chassis power supply or other hardware failing. Disk failures are usually slower moving, in that you start to see errors and corruption and will get some warning from the pool. If a disk completely fails suddenly, then a restart wouldn't bring it back.
    1 point
  8. Any update on WebDAV? Is there a specific issue preventing it being added? It's been a while...
    1 point
  9. phhowe17

    Any word on pricing?

    Looks like there will be a Cloud+ subscription at $1/device/month. https://blog.covecube.com/2022/03/stablebit-cloud-plus/ As a 3xDrivePool only customer, I don't see a use case.
    1 point
  10. There are some hidden by default already, but that's because they're experimental. But I don't see a reason why this couldn't be added. StableBit Scanner uses Telerik controls, which we stopped supporting/using, since they come with a bunch of their own issues. StableBit DrivePool and StableBit CloudDrive use entirely custom implementations, and all native WPF controls, because of some of the issues we've had with Telerik controls. So, pretty much the opposite. E.g., StableBit Scanner is the older product (the first, actually).
    1 point
  11. As long as DrivePool can see the drives, it'll know exactly which pool they belong to. I move drives around regularly between my server and an external disk shelf, and they always reconnect to the correct pools.
    1 point
  12. Correct, internal operations (balancing, duplication, measuring) are not reported there. The only activity it reports is what happens through the pool drive.
    1 point
  13. For balancing and duplication operations, the files are copied to a temp folder and then renamed when the operation completes. This ensures that even if an interruption occurs, the files are intact, whole, and not corrupted. And yeah, that's probably a good way to set things up.
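    As an illustration of that copy-then-rename pattern (handy if you script your own moves between pool disks), here is a minimal PowerShell sketch; the paths are hypothetical, and this is the same idea rather than how DrivePool itself is invoked.

      # Hypothetical paths. Copy to a temporary name on the destination first, then rename,
      # so an interrupted transfer never leaves a half-written file under the final name.
      $source = 'D:\PoolPart.xxxx\Media\movie.mkv'
      $dest   = 'E:\PoolPart.yyyy\Media\movie.mkv'
      $temp   = "$dest.partial"

      Copy-Item -Path $source -Destination $temp
      # Only give the file its final name once the copy completed without error.
      Move-Item -Path $temp -Destination $dest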
    1 point
  14. Yep. Also, even with automatic balancing enabled, your pool will become unbalanced from time to time. For example, if you've set up the "Balance Immediately -> No more often than ..." rule, or if you've set up a trigger like "If the balance ratio falls below...". As soon as there is 1 byte to move, the arrow should appear. But automatic balancing will not start until all the requirements you've set up for automatic balancing are fulfilled. Then you can force an early start of balancing with the arrow.
    1 point
  15. Nope. The surface scan is a sector-by-sector scan. The file system scan is an API call to CHKDSK. So unless you're using the file recovery option, it should not be reading files.
    1 point
  16. It follows the normal balancing rules. But you can use the Drive Usage Limiter if you want to clear out a specific disk or two.
    1 point
  17. Mostly, this. You can use utilities like WinDirStat to view the drives. But it's very possible/likely that you won't find anything specifically. Most likely, the difference is due to the size vs size on disk differences (slack space), as well as the System Volume Information folder on each disk.
    1 point
  18. You will need to manually update the location information. In StableBit DrivePool, disable the BitLocker detection; that helps from that end. https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings For StableBit Scanner, throttle the SMART queries, as that usually helps. https://stablebit.com/Support/Scanner/2.X/Manual?Section=SMART As for the work window, that's defined in the "General" tab. So if you don't have a window set, then this setting doesn't do you much good. Meets or exceeds it. Also, there is a threshold setting too; it will warn you as it approaches that maximum (the default is a 15C window). It's one disk per controller. So if you have multiple controllers, such as onboard and a controller card, it can scan multiple disks. It checks the disks and sees if there are any disks with sections that haven't been scanned in "x" days, with x being the setting for how frequently to scan. There are some other factors, such as disk activity, etc. It means that the scanning is done in such a way that any normal activity takes priority over the scans, and may stop the scanning from occurring. This way, it minimizes the performance impact for the drives, ideally.
    1 point
  19. You are very welcome! Correct. At least for that system. Reconnecting it should automatically re-activate, though.
    1 point
  20. klepp0906

    Drive Sleep

    Did you ever figure this out? I mean, you figured out it was DrivePool, but beyond that, a fix? I've been dealing with this ever since I installed DrivePool. I installed it just after installing my HBA and new disk chassis, so I assumed it was the old LSI card not being able to pass sleep commands. Everything I found said it should, so I decided to start shutting things off. I turned off the 3 DrivePool services (well, 6, but that's for the 3 pieces of software, as I have all 3 installed). Drives are all sleeping like a baby now. Obviously I need to be able to enable the DrivePool service before I set up my pools, but I'd love to figure this out first. I did try Procmon myself and saw the same thing: the drives didn't pick up activity there, but it's repeatable via testing. I've been at it almost all day.
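    For reference, that kind of test can be done from an elevated PowerShell prompt; the wildcard is used here because the exact service names vary by product and version, so treat this as a sketch rather than the exact procedure from the post.

      # List the StableBit services first (names and display names vary by product/version).
      Get-Service -DisplayName '*StableBit*' | Format-Table Name, DisplayName, Status

      # Temporarily stop the running ones to test whether the drives will spin down.
      Get-Service -DisplayName '*StableBit*' | Where-Object Status -eq 'Running' | Stop-Service

      # Start them again once the test is done.
      Get-Service -DisplayName '*StableBit*' | Start-Service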
    1 point
  21. Went with a WD Ultrastar DC HC550. I had done some research many, many years ago and found that the Ultrastar 8TB seemed like a pretty solid drive with low failure rates, based on the data available from storage companies. I've more or less stuck with them since then. I think Western Digital bought them years ago, and that's where the HGST brand name comes from, if I remember correctly. I think (but could be wrong) that WD didn't actually change anything and still has Hitachi producing these drives?
    1 point
  22. Hey Ben, good you found it! Yeah, that balancer plugin was exactly what I was referring to! @Christopher (Drashna) , the balancing plugins manual page seems to be outdated here. It says "File Placement Limiter" while this balancer is now actually renamed to "Drive Usage Limiter".
    1 point
  23. Jonibhoni

    TrueCrypt and DrivePool

    Yes, I guess it should work. Keep in mind that DrivePool cannot split single files into chunks, so the physical drives that make up the pool must be large enough to host those 4 TB files (DrivePool works on the file level, not on the sector level!). But if you have, for example, two 10 TB drives, I guess it should work well. The performance characteristics will differ from RAID, dynamic disks, Storage Spaces and whatever else, but if in doubt, maybe just give it a try. DrivePool doesn't need freshly formatted drives; any existing partitions will do fine, and creating a pool on them will not harm the existing data. For completeness, I can maybe add that there is even another possibility to have encrypted data stored on a DrivePool for redundancy: instead of VeraCrypt, you *could* also use StableBit CloudDrive to create a "Local Disk" cloud drive inside your pool, which will act as if your local DrivePool drive were a cloud provider. As CloudDrive (as opposed to DrivePool) creates an encrypted virtual drive that is based on chunk files, you will have both encryption (CloudDrive) and redundancy (the underlying DrivePool) in a nicely clean virtual NTFS drive, with just software from the StableBit bundle. But honestly, while I'm using that solution because I thought it was clever (it is more friendly to file-based backup than having 4 TB files), and although it works conceptually well, perceived performance is unfortunately an order of magnitude slower than just using a VeraCrypt container. I'm hoping for some optimization by the devs there... but I guess, as CloudDrive was made for, well, the cloud, it will never be as fast.
    1 point
  24. Any hint in the gear icon in the upper right -> Troubleshooting -> Service Log? Check right after a system restart and scroll to the very top, to see the part where DrivePool detects existing pools. (Maybe temporarily deactivate Service tracing / auto-scroll right after opening the log, to prevent the interesting lines from moving out of view.) And, just out of curiosity, are there any other pools shown in the UI? (Click your pool's name in the top center of the main window.) And - also just another wild guess - are the two missing drives shown if you start your computer without the working drive plugged in? (Disclaimer: Do at your own risk. I'm just an ordinary user!)
    1 point
  25. There are file placement rules that may help, but I have never used them. I do use hierarchical pooling to ensure one duplicate is on one group of disks and the other on another group, so that I only need to back up one set of disks. Maybe that can help you. But AFAIK, within one pool, there is no way to ensure that one and only one duplicate ends up on a specific subset of HDDs and the other duplicate on another subset.
    1 point
  26. Yep - DrivePool really doesn't care how the drive is connected - just so long as it is connected. I've moved drives from internal bays on my R710 to an external disk shelf and to USB enclosures (and back again), and they're always found.
    1 point
  27. I think you're looking for this: https://wiki.covecube.com/StableBit_DrivePool_Q4142489
    1 point
  28. I would also like to see dark mode added to all StableBit products.
    1 point
  29. AFAIK, copying, or even cloning, does not work. The easiest way is: 1. Install/connect the new HDD. 2. Add the HDD to the pool. 3. Now you can either click on Remove for the 6TB HDD, or use Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> uncheck Duplicated and Unduplicated -> Save -> Remeasure and Rebalance. 4. Wait. Until. It. Is. Done. (Though you can reboot normally if you want/need to, and it'll continue after boot. Not entirely sure about that if you go the Remove route.)
    1 point
  30. I've been pretty confused about how Scanner actually works. Here are some of my observations:
    - Scanner will scan a random drive without regard to when others were last scanned.
    - Scanner doesn't pay attention to the setting that says to scan every 30 days (i.e. don't scan for another 30 days).
    - Scanner will stop mid-scan and move to another drive.
    - Scanner will scan a drive, but not update Last Scanned, and will scan it again later.
    - Scanner will scan a drive even if the settings to NOT Surface or File System scan are checked.
    Some things I learned:
    - Scanner will scan one drive per "interface". So if I have 2 HBAs, it would scan one drive per HBA. This accounts for some of the seemingly random drive selections.
    - There are two scans (Surface and File System), and they are not done at the same time. Scanner does not tell you which scan it is doing, and I haven't found a way to know which scan was started/completed per drive.
    I have a drive that has been formatting for 14 hours, and another that I just fixed the file system on. I want to rescan the second one so it can hopefully confirm it's fixed and wipe out the File System Damaged error. I went through all the drives and checked the boxes to NOT surface or file system scan them. When I hit Start Check, it wants to scan the drive that's in the middle of formatting, and another drive. So the ONLY drive that it should be scanning, it doesn't scan. First off, that just makes no sense to me. Why is it scanning drives with those options checked? Why is it scanning a drive that's in the middle of formatting? How can I manually start a scan on this one drive by itself?
    1 point
  31. I figured out shortly after this what the best method probably is... I'm just going to copy the files to a temp folder on the drive, then move them into the PoolPart folder after that, like the guide above describes. Turning off the service for a few minutes while I move the files sounds a lot easier than the other way.
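    A rough PowerShell sketch of that seeding approach, with placeholder paths (the PoolPart GUID is unique per disk) and the service matched by display name rather than an exact service name:

      # Placeholder paths; substitute the real disk letter and PoolPart GUID.
      $staging  = 'F:\_incoming'
      $poolPart = 'F:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

      # Stop the DrivePool service while moving files directly into the hidden PoolPart folder.
      Get-Service -DisplayName '*DrivePool*' | Stop-Service

      # A same-volume move completes almost instantly, regardless of file size.
      Move-Item -Path "$staging\*" -Destination "$poolPart\Backups"

      Get-Service -DisplayName '*DrivePool*' | Start-Service
      # Finally, remeasure the pool from the DrivePool UI so it picks up the new files.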
    1 point
  32. You posted on another thread too. But you may need to enable the "Unsafe" "Direct IO" option to get the SMART data off of these drives, due to the controller. https://wiki.covecube.com/StableBit_Scanner_Advanced_Settings#Advanced_Settings If that doesn't work, enable the "NoWmi" option for "Smart" as well.
    1 point
  33. Christopher (at support) came back on my ticket. He re-enabled my ID and all is now good. I can write to the DrivePool again. Just in case someone else encounters this problem: use a fresh trial ID to get things going while waiting for support to re-enable the paid ID.
    1 point
  34. Whenever I have problems with duplication, I go to the Settings Cog>Troubleshooting>Recheck Duplication and let DrivePool try to figure it out. Honestly, if there are duplication problems with DrivePool (like after removal of a failed HDD), it takes me a couple times running the Recheck Duplication and Balancing tasks. Last time that happened to me, it literally took a few days for DrivePool to clean itself up, but I have 80TB in my DrivePool. To be fair to DrivePool, it did fix itself given time. I only have a few folders set for duplication in my DrivePool, so out of my 80TB pool, only about 20TB are duplicated. Also, DrivePool duplication is good for some things, but it does not ensure that your files are actually intact and complete. It is possible to have a corrupted file/folder and DrivePool is happy to duplicate the corruption. If there is a mismatch between the original and the copy, DrivePool cannot tell you which file/folder is true and which may have been corrupted. For example, my DrivePool is mainly used as my home media storage. If I have an album folder with 15 tracks, and one or two tracks gets deleted or corrupted, DrivePool cannot tell me if the original directory is complete, if the duplicate directory is complete, or if neither copy is complete. Because of this, I now add 10% .par2 files to my folders for verification and possible rebuild. With the .par2 files, I can quickly determine if the folder is complete, if any missing or corrupted files can be rebuilt from the .par2 files in that folder, or if I have to take out my backup HDDs from the closet to rebuild the corrupted data in DrivePool. Unfortunately, DrivePool duplication does not ensure that your data has not been corrupted. For this reason, I don't consider DrivePool duplication in any way a backup solution. It lacks the ability to verify if the original or the duplicate copy is complete and intact and cannot resolve mismatches between copies. In theory, from what I understand, duplication is mainly good for rebuilding your DrivePool if you have a HDD failure and the bad drive is a complete loss. Then, DrivePool will still have a copy of the files on other drives in DrivePool and can rebuild the failed data. That may be a great option, and of course I said I do use duplication, but DrivePool duplication still lacks any ability to verify if the files are complete and uncorrupted. For that reason, I have gone to using those .par2 files for file verification. My backup HDDs are stored in my closet. It's not the best solution to my backup needs, but it is the best I have found for me at this point. In a more perfect world, DrivePool would have the ability to duplicate folders for faster pool recovery, and there would also be some way to verify and rebuild lost data like the .par2 files. In your case, if you have good backups of your data, I might consider turning off duplication in DrivePool, Rebalancing the pool and/or forcing a Recheck Duplication to clean up the data, and then turning duplication back on for the folders/pool as you want. But before I did that, I think I would contact the programmer directly for support and ask him for his recommendation(s). DrivePool is a great program and data recovery is much better than other methods I have used such as RAID systems and Windows Storage Spaces. But I do run into errors like you are experiencing and I cannot always understand the corrective action to take. 
Mostly, I have found that DrivePool is able to correct itself with its various troubleshooting tasks, but it might take a long time on a large pool.
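    For reference, the .par2 workflow described above looks roughly like this with the open-source par2cmdline tool (the folder and file names are just examples, and the tool is a separate third-party download, not part of DrivePool):

      # Assumes par2.exe (par2cmdline) is on the PATH; the album folder is an example.
      Set-Location 'Z:\Music\SomeAlbum'

      # Create recovery files with 10% redundancy for the audio files in the folder
      # (par2cmdline expands the wildcard itself on Windows builds).
      par2 create -r10 SomeAlbum.par2 *.flac

      # Later: verify that nothing is missing or corrupted...
      par2 verify SomeAlbum.par2

      # ...and repair in place if enough recovery data exists.
      par2 repair SomeAlbum.par2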
    1 point
  35. Yeah, it's ... manufacturer fun. SMART really isn't a standard; it's more of a guideline that a lot of manufacturers take a LOT of liberty with. NVMe health is a *lot* better in this regard (it's an actual standard).
    1 point
  36. I don't think the SSD Optimizer works that way. When it hits the trigger point and flushes data, it appears to flush all the data in the cache. There is nothing left in the cache after the flush. I think there are some SSDs with learning software that will keep your most used programs/data on the SSD. Those are typically system SSDs. The DrivePool SSD Optimizer is not designed for keeping your most used files on the SSD. However, DrivePool can use your SSD as both a pool drive and a system drive. You could simply install the programs and/or put those files you use most often directly on the SSD and not as part of DrivePool. For example, my DrivePool SSD is drive Z:. I can install, store, or locate any files I want on my Z: drive as normal, or I can also use that SSD in DrivePool which adds the hidden DrivePool PoolPart directory on my Z: drive. One of the big advantages to DrivePool over other systems such as RAID or Windows Storage Spaces is that you are able to add and use an existing drive to DrivePool and, at the same time, also use that drive normally on your system. My older RAID and Storage Spaces took full control over my drives and those pool drives could not be used for anything else.
    1 point
  37. Diodato

    Drive Sleep

    Hello again Christopher, On the machine in question, only DrivePool and Scanner are installed. I have CloudDrive, but I use it on a different PC on a different LAN. I have verified thrice just to be sure, and here are my setup and observations:
    - PC setup: Win10 1909 build 18363.815, DrivePool version 2.2.3.1019, Scanner version 2.5.6.3336
    - Three USB disks attached (drive letters D, E and F) (TerraMaster D5-300C enclosure)
    - Scanner has SMART throttling set to 60 minutes, "only query during the work window or if scanning" set, and "do not query if the disk has spun down" also set.
    - DrivePool is not configured for any pool disk, as if just installed. DrivePool has "BitLocker_PoolPartUnlockDetect" set to "false" (both Default and Override)
    - The DrivePool service is stopped
    - The active power plan of Windows has disk spindown set to 3 minutes. All three disks are not spinning.
    - Procmon64 shows absolutely no processes accessing the D, E or F disks
    - Resmon (disk tab) shows absolutely no activity for disks D, E and F
    Now, as soon as I enable the DrivePool service, the disks wake up and never again go to sleep. You can see in Resmon that every second there is access to all three disks. In the image in my previous post, accesses to disk D are obvious, but if you look closely, every second there are very subtle marks also for disks E and F (very small peaks). Procmon continues to show that nothing is accessing disks D, E and F. If I stop the DrivePool service, I stop seeing disk access for these three disks in Resmon, and after three minutes (whatever is set in the power plan) they go to sleep. Once again, if I restart the DrivePool service, the disks wake up and I start seeing the same every-second disk access activity in Resmon. Again, the disks will never spin down, not until I stop the DrivePool service. Now, I do not want to say that the DrivePool service is the culprit, but it took me a very long time to narrow down what is causing my external USB drives not to sleep. Because Procmon was useless (no process was accessing the disks), I started stopping different services and observed the above-described behavior with the DrivePool service. As the authors of DrivePool, I am kindly asking for suggestions on what to do to have my external disks spin down, or how to continue investigating. It might be a Microsoft Windows or USB driver problem or something else, but I would like some help or directions with the next steps. Thanks in advance. Cheers.
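    As a possible next step for this kind of investigation (not from the original post), performance counters can show which physical disks are being touched each second without keeping Resmon open; the instance names depend on the system, so this is only a sketch.

      # List the PhysicalDisk counter instances first, to see how your disks are named.
      Get-Counter -ListSet PhysicalDisk | Select-Object -ExpandProperty PathsWithInstances

      # Sample disk transfers once per second for 30 seconds and show only the disks with activity.
      Get-Counter -Counter '\PhysicalDisk(*)\Disk Transfers/sec' -SampleInterval 1 -MaxSamples 30 |
          ForEach-Object { $_.CounterSamples | Where-Object CookedValue -gt 0 |
              Select-Object InstanceName, CookedValue }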
    1 point
  38. OK. So, there is a lot here, so let's unpack this one step at a time. I'm reading some fundamental confusion here, so I want to make sure to clear it up before you take any additional steps forward. Starting here, which is very important: It's critical that you understand the distinction in methodology between something like Netdrive and CloudDrive, as a solution. Netdrive and rClone and their cousins are file-based solutions that effectively operate as frontends for Google's Drive API. They upload local files to Drive as files on Drive, and those files are then accessible from your Drive--whether online via the web interface, or via the tools themselves. That means that if you use Netdrive to upload a 100MB File1.bin, you'll have a 100MB file called File1.bin on your Google Drive that is identical to the one you uploaded. Some solutions, like rClone, may upload the file with an obfuscated file name like Xf7f3g.bin, and even apply encryption to the file as it's being uploaded, and decryption when it is retrieved. But they are still uploading the entire file, as a file, using Google's API. If you understand all of that, then understand that CloudDrive does not operate the same way. CloudDrive is not a frontend for Google's API. CloudDrive creates a drive image, breaks that up into hundreds, thousands, or even millions of chunks, and then uses Google's infrastructure and API to upload those chunks to your cloud storage. This means that if you use CloudDrive to store a 100MB file called File1.bin, you'll actually have some number of chunks (depending on your configured chunk size) called something like XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-1, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-2, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-3, etc., as well as a bunch of metadata that CloudDrive uses to access and modify the data on your drive. Note that these "files" do not correspond to the file size, type, or name that you uploaded, and cannot be accessed outside of CloudDrive in any meaningful way. CloudDrive actually stores your content as blocks, just like a physical hard drive, and then stores chunks of those blocks on your cloud storage. Though it can accomplish similar ends to Netdrive, rClone, or any related piece of software, its actual method of doing so is very different in important ways for you to understand. So what, exactly, does this mean for you? It means, for starters, that you cannot simply use CloudDrive to access information that is already located in your cloud storage. CloudDrive only accesses information that has been converted to the format that it uses to store data, and CloudDrive's format cannot be accessed by other applications (or Google themselves). Any data that you'd like to migrate from your existing cloud storage to a CloudDrive volume must be downloaded and moved to the CloudDrive volume just as it would need to be if you were to migrate the data to a new physical drive on your machine--for the same reasons. It may be helpful to think of CloudDrive as a virtual machine drive image. It's the same general concept. Just as you would have to copy data within the virtual machine in order to move data to the VM image, you'll have to copy data within CloudDrive to move it to your CloudDrive volume. There are both benefits and drawbacks to using this approach:
    Benefits
    - CloudDrive is, in my experience, faster than rClone and its cousins. Particularly around the area of jumping to granular data locations, as you would for, say, jumping to a specific location in a media file.
    - CloudDrive stores an actual file system in the cloud, and that file system can be repaired and maintained just like one located on a physical drive. Tools like chkdsk and Windows' own in-built indexing systems function on a CloudDrive volume just as they will on your local drive volumes. In your case this means that Plex's library scans will take seconds, and will not lock you out of Google's API limitations.
    - CloudDrive's block-based storage means that it can modify portions of files in-place, without downloading the entire file and reuploading it.
    - CloudDrive's cache is vastly more intelligent than those implemented by file-based solutions, and is capable of, for example, storing the most frequently accessed chunks of data, such as those containing the metadata information in media files, rather than whole media files. This, like the above, also translates to faster access times and searches.
    - CloudDrive's block-based solution allows for a level of encryption and data security that other solutions simply cannot match. Data is completely AES encrypted before it is ever even written to the cache, and not even Covecube themselves can access the data without your key. Neither your cloud provider, nor unauthorized users and administrators for your organization, can access your data without consent.
    Drawbacks (read carefully)
    - CloudDrive's use of an actual file system also introduces vulnerabilities that file-based solutions do not have. If the file system data itself becomes corrupted on your storage, it can affect your ability to access the entire drive--in the same way that a corrupted file system can cause data loss on a physical drive as well. The most common sorts of corruption can be repaired with tools like chkdsk, but there have been incidents caused by Google's infrastructure that have caused massive data loss for CloudDrive users in the past--and there may be more in the future, though CloudDrive has implemented redundancies and checks to prevent them going forward. Note that tools like testdisk and recuva can be used on a CloudDrive volume just as they can on a physical volume in order to recover corrupt data, but this process is very tedious and generally only worth using for genuinely critical and irreplaceable data. I don't personally consider media files to be critical or irreplaceable, but each user must consider their own risk tolerance.
    - A CloudDrive volume is not accessible without CloudDrive. Your data will be locked into this ecosystem if you convert to CloudDrive as a solution.
    - Your data will also only be accessible from one machine at a time. CloudDrive's caching system means that corruption can occur if multiple machines could access your data at once, and, as such, it will not permit the volume to be mounted by multiple instances simultaneously.
    - And, as mentioned, all data must be uploaded within the CloudDrive infrastructure to be used with CloudDrive. Your existing data will not work.
    So, having said all of that, before I move on to helping you with your other questions, let me know that you're still interested in moving forward with this process. I can help you with the other questions, but I'm not sure that you were on the right page with the project you were signing up for here.
rClone and NetDrive both also make fine solutions for media storage, but they're actually very different beasts than CloudDrive, and it's really important to understand the distinction. Many people are not interested in the additional limitations.
    1 point
  39. OK, the solution is that you need to manually create the virtual disk in PowerShell after making the pool: 1) Create a storage pool in the GUI, but hit Cancel when it asks to create a storage space. 2) Rename the pool to something that identifies this RAID set. 3) Run the following command in PowerShell (run as administrator), editing as needed: New-VirtualDisk -FriendlyName VirtualDriveName -StoragePoolFriendlyName NameOfPoolToUse -NumberOfColumns 2 -ResiliencySettingName simple -UseMaximumSize
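    To round out step 3, here is a hedged sketch of the same command plus the usual follow-up of initializing and formatting the resulting disk (the friendly names and NTFS label are placeholders to edit):

      # Step 3 from above, with placeholder names.
      New-VirtualDisk -FriendlyName 'VirtualDriveName' -StoragePoolFriendlyName 'NameOfPoolToUse' `
          -NumberOfColumns 2 -ResiliencySettingName Simple -UseMaximumSize

      # Bring the new virtual disk online with a partition and an NTFS volume.
      Get-VirtualDisk -FriendlyName 'VirtualDriveName' | Get-Disk |
          Initialize-Disk -PartitionStyle GPT -PassThru |
          New-Partition -AssignDriveLetter -UseMaximumSize |
          Format-Volume -FileSystem NTFS -NewFileSystemLabel 'VirtualDriveName'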
    1 point
  40. It says Gsuite in the title. I'm assuming that means Google Drive. Correct me if all of the following is wrong, Middge. Hey Middge, I use a CloudDrive for Plex with Google Drive myself. I can frequently do 5-6 remux quality streams at once. I haven't noticed a drop in capacity aside from Google's relatively new upload limits. Yes. But this is going to require that you remake the drive. My server also has a fast pipe, and I've also raised the minimum download to 20MB as well. I really haven't noticed any slowdown in responsiveness because the connection is so fast, and it keeps the overall throughput up. This is fine. You can play with it a bit. Some people like higher numbers like 5 or 10 MB triggers, but I've tried those and I keep going back to 1 MB as well, because I've just found it to be the most consistent, performance-wise, and I really want it to grab a chunk of *all* streaming media immediately. This is way too low, for a lot of media. I would raise this to somewhere between 150-300MB. Think of the prefetch as a rolling buffer. It will continue to fill the prefetch as the data in the prefetch is used. The higher the number, then, the more tolerant your stream will be to periodic network hiccups. The only real danger is that if you make it too large (and I'm talking like more than 500MB here) it will basically always be prefetching, and you'll congest your connection if you hit like 4 or 5 streams. I would drop this to 30, no matter whether you want to go with a 1MB, 5MB, or 10MB trigger. 240 seconds almost makes the trigger amount pointless anyway--you're going to hit all of those benchmarks in 4 minutes if you're streaming most modern media files. A 30 second window should be fine. WAAAAY too many. You're almost certainly throttling yourself with this. Particularly with the smaller than maximum chunk size, since it already has to make more requests than if you were using 20MB chunks. I use three clouddrives in a pool (a legacy arrangement from before I understood things better, don't do it. Just expand a single clouddrive with additional volumes), and I keep them all at 5 upload and 5 download threads. Even if I had a single drive, I'd probably not exceed 5 upload, 10 download. 20 and 20 is *way* too high and entirely unnecessary with a 1gbps connection. These are all fine. If you can afford a larger cache, bigger is *always* better. But it isn't necessary. The server I use only has 3x 140GB SSDs, so my caches are even smaller than yours and still function great. The fast connection goes a long way toward making up for a small cache size...but if I could have a 500GB cache I definitely would.
    1 point
  41. I have mine set to not scan removable drives - and I don't have any issues with adding or removing USB drives/sticks - this is on Win 10 FCU.
    1 point
  42. Because we've already had a couple of questions about this: in their current forms, StableBit DrivePool works VERY well with StableBit CloudDrive already. The StableBit CloudDrive disks appear as normal, physical disks. This means that you can add them to a pool without any issues or workarounds. Why is this important, and how does it affect your pool? You can use the File Placement Rules to control which files end up on which drive. This means that you can place specific files on a specific CloudDrive. You can use the "Disk Usage Limiter" to only allow duplicated data to be placed on specific drives, which means you can place only duplicated files on a specific CloudDrive disk. These are some very useful tricks to integrate the products already. And if anyone else finds some neat tips or tricks, we'll add them here as well.
    1 point
  43. First, thank you for your interest in our product(s)! The default file placement strategy is to place files on the drive(s) with the most available free space (measured absolutely, rather than as a percentage). This happens regardless of the balancing status. In fact, it's the balancers themselves that can (and will) change the placement strategy of new files. For what you want, that isn't ideal... and before I get to the solution: the first issue here is that there is a misconception about how the balancing engine works (or more specifically, how frequent or aggressive it is). For the most part, the balancing engine DOES NOT move files around. For a new, empty pool, balancing will rarely, if ever, move files around, partly because it proactively controls where files are placed in the first place. That said, each balancer does have exceptions here. But just so you understand how and why each balancer works and when it would actually move files, let me enumerate each one and give a brief description:
    - StableBit Scanner (the balancer). This balancer only works if you have StableBit Scanner installed on the same system. By default, it is only configured to move contents off of a disk if "damaged sectors" (aka "unreadable sectors") are detected during the surface scan. This is done in an attempt to prevent data loss from file corruption. Optionally, you can do this for SMART warnings as well, and to avoid usage if the drive has "overheated". If you're using SnapRAID, then it may be worth turning this balancer off, as it isn't really needed.
    - Volume Equalization. This only affects drives that are using multiple volumes/partitions on the same physical disk. It will equalize the usage, and help prevent duplicates from residing on the same physical disk. Chances are that this balancer will never do anything on your system.
    - Drive Usage Limiter. This balancer controls what type of data (duplicated or unduplicated) can reside on a disk. For the most part, most people won't need this. We recommend using it for drive removal or "special configurations" (e.g., my gaming system uses it to store only duplicated data, aka games, on the SSD, and store all unduplicated data on the hard drive). Unless configured manually, this balancer will not move data around.
    - Prevent Drive Overfill. This balancer specifically will move files around, and will do so only if the drive is 90+% full by default. This can be configured based on your needs. However, it will only move files out of the drive until the drive is 85% filled. This is one of the balancers that is likely to move data, but this will only happen on a very full pool. It can be disabled, but that may lead to situations where the drives are too full.
    - Duplication Space Optimizer. This balancer's sole job is to rebalance the data in such a way that removes the "Unusable for duplication" space on the pool. If you're not using duplication at all, you can absolutely disable this balancer.
    So, for the most part, there is no real reason to disable balancing. Yes, I understand that it can cause issues for SnapRAID. But depending on the system, it is very unlikely to, and the benefits you gain by disabling it may be outweighed by what the balancers do. Especially because of the balancer plugins. Specifically, you may want to look at the "Ordered File Placement" balancer plugin. This specifically fills up one drive at a time. Once the pool fills a disk to the preset threshold, it moves on to the next disk.
This may help keep the contents of specific folders together, meaning that it may help keep the SRT file in the same folder as the AVI file, or at least do better about it than the default placement strategy. This won't guarantee the folder placement, but it significantly increases the odds. That said, you can use file placement rules to help with this: either to micromanage placement, or ... you can set up an SSD dedicated to metadata like this, so that all of the SRT and other files end up on the SSD. That way, the access is fast and the power consumption is minimal.
    1 point
  44. jtc

    Unremovable nested folders

    Solved (knock on wood). Thanks to everyone for the prompt suggestions. For future reference, the list below contains commonly advised methods to delete stuck files when a space has been appended to the file names:
    - Use quotes that enclose the trailing space: rd "my folder "
    - Same as above, but add a trailing [back]slash: rd "my folder \"
    - Try using <Shift>+Delete to bypass the recycle bin, as suggested by otispresley
    - Specify the UNC path: rd "\\.\C:\temp\my folder "
    - Try 8.3 names (if enabled): rd myfold~1
    - Finally, boot a Linux Live CD: rmdir "/media/blahblah/temp/my folder "
    What worked for me: I first uninstalled Dropbox, just in case there was some interaction, as Christopher suggested. (I have a vague recollection that the troublesome folders were, in fact, related to a Dropbox incident.) Next, I launched an administrative command prompt and traversed the PoolPart folder on each of the two drives, one drive containing the original and the other the duplicated copy of the intransigent folders. Finally, after trying the UNC path solution and the Shift+Delete method with no success, I tried the second of the ideas listed above, adding a trailing backslash to the "shellext " folder name that was causing all the trouble. ( RD "shellext \" ) It worked. Once done, the remainder of the folder structure was removed with RD /S FIXME, "fixme" being the first-level folder. The errant folders have also disappeared from the DrivePool virtual disk, which did not happen when I had tried a Linux Live CD on my previous attempt. Anyway, thanks again.
    1 point
  45. Hi, just trying out DrivePool, and the first question I have is about drive letters and what happens when you run out of them. I am building a large storage/backup server, and it's likely I will add more than 20 HDDs. I'm just experimenting with 4 HDDs plus the OS drive - I created a pool, which is assigned a drive letter, but the four drives that make up the pool are also assigned drive letters. The drives were not assigned drive letters before adding them to the pool (unformatted). So what happens when you want multiple pools and have a large number of drives - do you have to use mount points? Currently I am testing on Win7 64-bit. Any pointers or advice would be good. Thanks
    1 point
  46. Or not mounting them at all appears to work fine as well - I've swapped to Win 10 Pro 64-bit now; as it was a clean install, the mount points were lost, but DrivePool picked the drives up on reinstall without re-mounting them. I just have the OS drive and a test pool, working well so far - it was interesting watching the pool fill up for the first time, and I turned on duplication halfway through a backup job - handled very well by the app. It's also interesting to see that in Task Manager the "pool" has no disk activity reported while the underlying disks are running around moving data - I assume this is normal?
    1 point
  47. StableBit DrivePool doesn't care about drive letters. It uses the Volume ID (which Windows mounts to a drive letter or folder path). So you can remove the drive letters, if you want, or mount to a folder path. http://wiki.covecube.com/StableBit_DrivePool_Q6811286
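    For anyone doing this from PowerShell instead of Disk Management, a minimal sketch (the disk/partition numbers and mount folder are placeholders; check Get-Disk and Get-Partition for your own values first):

      # Create the folder that will act as the mount point (placeholder path).
      $mount = 'C:\DrivePool\Bay01'
      New-Item -ItemType Directory -Path $mount -Force | Out-Null

      # Remove the drive letter (here E:) and expose the volume under the folder instead.
      Remove-PartitionAccessPath -DiskNumber 3 -PartitionNumber 2 -AccessPath 'E:\'
      Add-PartitionAccessPath    -DiskNumber 3 -PartitionNumber 2 -AccessPath $mount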
    1 point
  48. Thanks. Have mounted them and removed the drive letters, and DrivePool picked up the changes.
    1 point
  49. Hi, the individual drive letters are not needed. Most people give the drives a name like "bay 1", "bay 2", etc. purely to help identify the drives, then remove the letters under Disk Management.
    1 point