Covecube Inc.


Popular Content

Showing content with the highest reputation since 10/22/19 in Posts

  1. 5 points

    WSL2 Support for drive mounting

    Hi, I'm using Windows 10 2004 with WSL2. I have 3 drives: C:\ (SSD), E:\ (NVME), D:\ (DrivePool of 2x 4TB HDD). When the drives are mounted on Ubuntu, I can run ls -al and it shows all the files and folders on the C and E drives. This is not possible on D. When I run ls -al on D, it returns 0 results, but strangely enough I can still cd into the directories on D. Is this an issue with DrivePool being mounted? It seems to be the only logical difference (aside from it being mechanical) between it and the other drives. They are all NTFS.
  2. 4 points

    WSL 2 support

    I tried to access my DrivePool drive via WSL 2 and got this. Any solution? I'm using the BETA.
    ➜ fludi cd /mnt/g
    ➜ g ls
    ls: reading directory '.': Input/output error
    Related thread: https://community.covecube.com/index.php?/topic/5207-wsl2-support-for-drive-mounting/#comment-31212
  3. 3 points
    My advice: contact support and send them Troubleshooter data. Christopher is very keen on resolving problems around the "new" Google way of handling folders and files.
  4. 2 points
    Managed to fix this today as my client was giving errors also.
    Install Beta version from here: http://dl.covecube.com/CloudDriveWindows/beta/download/ (I used 1344)
    Reboot. Don't start CloudDrive and/or service.
    Add the below to this file: C:\ProgramData\StableBit CloudDrive\Service\Settings.json
      "GoogleDrive_ForceUpgradeChunkOrganization": {
        "Default": true,
        "Override": true
      }
    Start Service & CloudDrive. Should kick in straight away. I have 42TB in GDrive and it went through immediately. Back to uploading as usual now. Hope this helps.
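For anyone scripting the edit above, it amounts to merging one key into the service's JSON settings file. Here's a minimal Python sketch, assuming Settings.json is a plain JSON object at the path quoted in the post (the exact file layout may differ between releases; stop the service before editing):

```python
import json
from pathlib import Path

# Path quoted in the post above; adjust for your install.
SETTINGS = Path(r"C:\ProgramData\StableBit CloudDrive\Service\Settings.json")

def force_upgrade_chunk_organization(settings_path: Path = SETTINGS) -> None:
    """Merge the chunk-organization override into Settings.json.

    Assumes the file is a flat JSON object; preserves any other keys.
    """
    data = json.loads(settings_path.read_text()) if settings_path.exists() else {}
    data["GoogleDrive_ForceUpgradeChunkOrganization"] = {
        "Default": True,
        "Override": True,
    }
    settings_path.write_text(json.dumps(data, indent=2))
```

After running it (with the service stopped), start the service and CloudDrive as described above.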
  5. 2 points
    I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this. The overview page on the web site actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to just test any provider you'd like. So you can just install it and look at the list that way, if you'd like. CloudDrive does not support Teamdrives/shared drives because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite drive accounts are supported. The primary tradeoff from a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is something you're looking for, this probably just isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different from CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends--for specific use cases. rClone's entire goal/philosophy is to make it easier to access your data from a variety of locations and contexts--but that's not CloudDrive's goal, which is to make your cloud storage function as much like a physical drive as possible. I don't work for Covecube/Stablebit, so I can't speak to any pricing they may offer you if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings to buying the bundle, if you want/need it. There is no file-based limitation.
The limitation on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations. Again, I don't work for the company, but, in previous conversations about the subject, it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe that any ports are on the short or even medium term agenda. Hope some of that helps.
  6. 2 points
    Unintentional Guinea Pig Diaries. Day 8 - Entry 1 I spent the rest of yesterday licking my wounds and contemplating a future without my data. I could probably write a horror movie script on those thoughts, but it would be too dark for the people in this world. I must learn from my transgressions. In an act of self-punishment, and in an effort to see the world from a different angle, I slept in the dog's bed last night. He never sleeps there anyway, but he would never have done the things I did. For that I have decided he holds more wisdom than his human. This must have pleased the Data Gods, because this morning I awoke with back pain and two of my drives mounted and functioning. The original drive which had completed the upgrade, had been working, and then went into "drive initializing"...is now working again. The drive that I had tried to mount, which said it was upgrading with no % given, has mounted (FYI: 15TB drive with 500GB on it). However, my largest drive is still sitting at "Drive queued to perform recovery". Maybe one more night in the dog's bed will complete the offering required by the Data Gods. End Diary entry. (P.S. Just in case you wondered...that spoiled dog has "The Big One" Lovesac as a dog bed. In a pretty ironic fashion, their website is down. #Offering)
  7. 2 points
    Unintentional Guinea Pig Diaries. Day 7 - Entry 1 OUR SUPREME LEADER HAS SPOKEN! I received a response to my support ticket, though it did not provide the wisdom I seek. You may call it an "Aladeen" response. My second drive is still stuck in "Drive Queued to perform recovery". I'm told this is a local process and does not involve the cloud, yet I don't know how to push it to do anything. The "Error saving data. Cloud drive not found" message at this time appears to be a UI glitch and is not correct, as any changes that I make do take effect regardless of the error window. This morning I discovered that playing with fire hurts. Our Supreme Leader has provided a new Beta. Since I have the issues listed above I decided to go ahead and upgrade. The Supreme Leader's change log says that it lowers the concurrent I/O requests in an effort to stop Google from closing the connections. Upon restart of my computer, the drive that was previously actually working is now stuck in "Drive Initializing - Stablebit CloudDrive is getting ready to mount your drive". My second drive is at the same "Queued" status as before. Also of note is that I had a third drive, created from a different machine, that I tried to mount yesterday and it refused to mount. Now it says "drive is upgrading" but there is no progress percentage shown. Seems that is progress, but the drive that was working is now not working. Seeking burn treatment. I hope help comes soon. While my data is replaceable, it will take a LONG TIME to do. Until then my Plex server is unusable and I have many angry entitled friends and family. End Diary entry.
  8. 2 points
    Unintentional Guinea Pig Diaries. Day 5 - Entry 2 *The Sound of Crickets* So I'm in the same spot as I last posted. My second drive is still at "Queued to perform Recovery". If I knew how to force a reauth right now I would, so I could get it onto my own API key, or at the very least get it out of "queued". Perhaps our leaders will come back to see us soon. Maybe this is a test of our ability to suffer more during COVID. We will soon find out. End Diary entry.
  9. 2 points

    2nd request for help

    I have only been using DrivePool for a short period, but if I understand your situation, you should be able to open the DrivePool UI and click on "Remove" for the drives you no longer want in the pool. I have done this in DrivePool and it did a good job of transferring the files from the "removed" drive to the other pool drives. However, given that nowadays we have large HDDs in our pools, the process takes a long time. Patience is a virtue. Another option is to simply view the hidden files on those HDDs you no longer want to keep in DrivePool, and then copy them all over to the one drive on which you want to consolidate all your information. Once you verify all your files have been successfully reassembled on that one drive, you could go back and format those other drives. The main advantage I see with using DrivePool is that the files are written to the HDD as standard NTFS files, and if you decide to leave the DrivePool environment, all those files are still accessible by simply viewing the hidden directory. I am coming from the Windows Storage Spaces system, where bits and pieces of files are written to the HDDs in the pool. When things go bad with Storage Spaces, there is no way to reassemble the broken files spread across a number of HDDs. At least with DrivePool, the entire file is written to a HDD as a standard file, so in theory you should be able to copy those files from the pool HDDs over to one HDD and have a complete directory. I use the Duplication feature of DrivePool for important directories. Again, I am still learning the benefits of DrivePool over Storage Spaces, but so far, I think DrivePool has the advantage of recovering data from a catastrophic failure, whereas I lost all my data in Storage Spaces. If there is a better way to transfer your DrivePool files to 1 HDD, I would like to know for my benefit as well.
  10. 2 points
    They are not comparable products. Both applications are more similar to the popular rClone solution for Linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data. You must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data because they do not use a genuine drive image; they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space. None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, it is more vulnerable because some of that data on your provider is the file system data for the drive. Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another. We don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks that corrupted the file system. This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all.
It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive, it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of the storage of their users is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever. Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like testdisk and recuva. But it simply wasn't worth the effort to rebuild the volumes rather than just discard the data and rebuild, because it was expendable data. But genuinely irreplaceable data could be recovered, so it isn't even really accurate to call it data loss. This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point? It is, frankly, not a problem that we will hopefully ever have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality and there isn't anything to be done about that. So, your options are to either a) only store data that you can afford to lose or b) take steps to backup your data to account for losses at the provider. 
    There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will. The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Contrarily, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption then you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow or shut down Drive completely, like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
  11. 2 points

    Optimal settings for Plex

    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
    20MB chunk size
    50+ GB expandable cache
    10 download threads
    5 upload threads, turn off background I/O
    Upload threshold: 1MB or 5 minutes
    Minimum download size: 20MB
    20MB prefetch trigger
    175MB prefetch forward
    10 second prefetch time window
  12. 1 point
    I don't have any info on this other than to say that I am not experiencing these issues, and that I haven't experienced any blue screens related to those settings. That user isn't suggesting a rollback; they're suggesting that you edit the advanced settings to force your drive to convert to the newer hierarchical format. I should also note that I do not work for Covecube--so aside from a lot of technical experience with the product, I'm probably not the person to consult about new issues. I think we might need to wait on Christopher here. My understanding, though, was that those errors were fixed with release .1314. It presumes that existing data is fine as-is, and begins using a hierarchical structure for any NEW data that you add to the drive. That should solve the problem. So make sure that you're on .1314 or later for sure. Relevant changelog:
    .1314
    * [Issue #28415] Created a new chunk organization for Google Drive called Hierarchical Pure.
      - All new drives will be Hierarchical Pure.
      - Flat upgraded drives will be Hierarchical, which is now a hybrid Flat / Hierarchical mode.
      - Upgrading from Flat -> Hierarchical is very fast and involves no file moves.
    * Tweaked Web UI object synchronization throttling rules.
    .1312
    * Added the drive status bar to the Web UI.
    .1310
    * Tuned statistics reporting intervals to enable additional statistics in the StableBit Cloud.
    .1307
    * Added detailed logging to the Google Drive migration process that is enabled by default.
    * Redesigned the Google Drive migration process to be quicker in most cases:
      - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
      - Is able to resume from where the old migration left off.
  13. 1 point
    Reid Rankin

    WSL 2 support

    I've been following up on this with some disassembly and debugging to try and figure out what precisely is going wrong. WSL2's "drvfs" is just a 9P2000.L file server implementation (yep, that's the protocol from Plan 9) exposed over a Hyper-V Socket. (On the Windows side, this is known as a VirtIO socket; on the Linux side, however, that means something different and they're called AF_VSOCK.) The 9P server itself is hard to find because it's not in WSL-specific code -- it's baked into the Hyper-V "VSMB" infrastructure for running Linux containers, which predates WSL entirely. The actual server code is in vp9fs.dll, which is loaded by both the WSL2 VM's vmwp.exe instance and a copy of dllhost.exe which it starts with the token of the user who started the WSL2 distro. Because the actual file system operations occur in the dllhost.exe instance, they can use the proper security token instead of doing everything as SYSTEM. The relevant ETW GUID is e13c8d52-b153-571f-78c5-1d4098af2a1e. This took way too long to find out, but allows you to build debugging logs of what the 9P server is doing by using the tracelog utility:
    tracelog -start p9trace -guid "#e13c8d52-b153-571f-78c5-1d4098af2a1e" -f R:\p9trace.etl
    <do the stuff>
    tracelog -stop p9trace
    The directory listing failure is reported with a "Reply_Rlerror" message with an error code of 5. Unfortunately, the server has conveniently translated the Windows-side NTSTATUS error code into a Linux-style error.h code, turning anything it doesn't recognize into a catch-all "I/O error" in the process. Luckily, debugging reveals that the underlying error in this case is an NTSTATUS of 0xC00000E5 (STATUS_INTERNAL_ERROR) returned by a call to ZwQueryDirectoryFile.
This ZwQueryDirectoryFile call requests the new-to-Win10 FileInformationClass of FileIdExtdDirectoryInformation (60), which is supposed to return a structure with an extra ReparsePointTag field -- which will be zero in almost all cases because most things aren't reparse points. Changing the FileInformationClass parameter to the older FileIdFullDirectoryInformation (38) prevents the error, though it results in several letters being chopped off of the front of each filename because the 9P server expects the larger struct and has the wrong offsets baked in. So things would probably work much better if CoveFs supported that newfangled FileIdExtdDirectoryInformation option and the associated FILE_ID_EXTD_DIR_INFO struct; it looks like that should be fairly simple. That's not to say that other WSL2-specific issues aren't also present, but being able to list directories would give us a fighting chance to work around other issues on the Linux side of things.
  14. 1 point

    how to determine what drive failed

    I read this as asking how to identify the actual physical drives in the case. I physically label my drives and stack them in the server according to their labels. Without something like that, I have no clue how you would be able to identify the physical drives...
  15. 1 point
    Umfriend is correct. The service should be stopped to prevent any chance of balancing occurring during the migration when using that method. And that method is fine so long as your existing arrangement is compatible with DrivePool's pooling structure. E.g. if you have:
    drive D:\FolderA\FileB moved to D:\PoolPart.someguid\FolderA\FileB
    drive E:\FolderA\FileB moved to E:\PoolPart.someguid\FolderA\FileB
    drive F:\FolderA\FileC moved to F:\PoolPart.someguid\FolderA\FileC
    then your drivepool drive (in this example the P: drive) will show:
    P:\FolderA\FileB
    P:\FolderA\FileC
    as DrivePool will presume that FileB is the same file duplicated on two drives. As Umfriend has warned, when it next performs consistency checking DrivePool will create/remove copies as necessary to match your chosen settings (e.g. "I want all files in FolderA to exist on three drives"), and will warn if it finds a "duplicated" file that does not match its duplicate(s) on the other drives. As to Snapraid, I'd follow Umfriend's advice there too.
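To see why the seeded layout above merges cleanly, here's a toy Python model of the pool view. The pool_view helper and the PoolPart.someguid name are illustrative only -- the real DrivePool does this inside its file system driver -- but it captures the idea: the pool presents the union of relative paths found under each member drive's hidden PoolPart.* folder, so FileB appears once even though two members hold a copy:

```python
import tempfile
from pathlib import Path

def pool_view(drives):
    """Union of relative paths under each drive's PoolPart.* folder --
    a rough model of how DrivePool merges member folders into one pool."""
    seen = {}
    for drive in drives:
        for poolpart in Path(drive).glob("PoolPart.*"):
            for f in poolpart.rglob("*"):
                if f.is_file():
                    rel = f.relative_to(poolpart)
                    seen.setdefault(rel, []).append(drive)
    return seen

# Simulate the three seeded member drives from the example above.
root = Path(tempfile.mkdtemp())
layout = {"D": "FileB", "E": "FileB", "F": "FileC"}
for drive, name in layout.items():
    folder = root / drive / "PoolPart.someguid" / "FolderA"
    folder.mkdir(parents=True)
    (folder / name).write_text("data")

view = pool_view([root / d for d in layout])
for rel, locations in sorted(view.items()):
    print(rel, "held by", len(locations), "drive(s)")
```

Files seen on multiple members are what DrivePool treats as duplicates during its consistency check, creating or removing copies to match your duplication settings.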
  16. 1 point
    Moving data to the Pool while retaining the data on the same drive is called seeding and it is advised to stop the service first (https://wiki.covecube.com/StableBit_DrivePool_Q4142489). I think this is because otherwise DP might start balancing while you are in the process of moving drive-by-drive. I am not sure, but I would think you would first set settings, then do the seeding. (I am pretty sure that) DP does not "index" the files. Whenever you query a folder, DP will read the drives on the spot and indeed show the "sum". Duplicate filenames will be an issue, I think. I think that when DP measures the Pool it will either delete one copy (if the name, size and timestamp are the same, I believe) or otherwise inform you of some sort of file conflict. This is something you could actually test before you do the real move (stop the service, create a spreadsheet "Test.xlsx", save it directly to a Poolpart.*/some folder on one of the drives, edit the file, save it directly to Poolpart.*/some folder on another drive, start the service and see what it does?). DP does not go mad with same folder names, some empty, some containing data. In fact, as a result of balancing, it can cause this to occur itself. I have no clue about snapraid. I would speculate that you first create and populate the Pool, let DP measure and rebalance, and then implement snapraid. Not sure though. You may have to read up on this a bit and there is plenty to find, e.g. https://community.covecube.com/index.php?/topic/1579-best-practice-for-drivepool-and-snapraid/.
  17. 1 point
    I'd suggest a tool called Everything, by Voidtools. It'll scan the disks (defaults to all NTFS volumes) then just type in a string (e.g. "exam 2020" or ".od3") and it shows all files (you can also set it to search folder names as well) that have that string in the name, with the complete path. Also useful for "I can't remember what I called that file or where I saved it, but I know I saved it on the 15th..." problems.
  18. 1 point

    eXtreme bottlenecks

    Have you checked Event Viewer, and what model is this exactly? And if the data you want in the Pool is already on the disks you want to add to the Pool, then there is a much faster way of getting them in the Pool.
  19. 1 point

    eXtreme bottlenecks

    In my experience Resource Monitor's reporting of read and write rates can lag behind what's actually happening, making it look like it's transferring more files at any given point than it really is - but that transfer graph is definitely a sign of hitting some kind of bottleneck. It's the sort of thing I'd expect to see from a large number of small files, a network drive over wireless, or a USB flash drive. Can you tell us more about what version of DrivePool you're using (is it the latest "stable" release?), what drives are involved (HDD, SSD, other), how they're hooked up (SATA, USB, other) and whether you've made any changes to the Manage Pool -> Performance options (the default is to have only Read striping and Real-time duplication ticked)? Examining the Performance indicators of DrivePool (to see them, maximise the DrivePool UI and click the right-pointing triangle to the left of the word Performance) and the Performance tab of Task Manager when the bottlenecking is happening might also be useful. Hmm. You might also want to try something other than the built-in Windows copier to see if that helps, e.g. FastCopy?
  20. 1 point
    Is it OK to fill non-primary HDDs, which would be accessed only to read files, to full capacity?
  21. 1 point
    I finally bit the bullet last night and converted my drives. I'd like to report that even in excess of 250TB, the new conversion process finished basically instantly and my drive is fully functional. If anyone else has been waiting, it would appear to be fine to upgrade to the new format now.
  22. 1 point

    Move data to a partition

    There is no appreciable performance impact by using multiple volumes in a pool.
  23. 1 point

    Move data to a partition

    Volumes each have their own file system. Moving data between volumes will require the data to be reuploaded. Only moves within the same file system can be made without reuploading the data, because only the file system data needs to be modified to make such a change.
  24. 1 point

    Mixed speed disks

    So I think it is a matter of use case and personal taste. IMHO, just use one Pool, especially if you're going to replace the 5900rpm drives over time anyway. I assume you run things over a network. As most are still running 1Gbit networks (or slower), even the 5900rpm drives won't slow you down. I've never used the SSD Optimizer plugin, but yeah, it is helpful for writing, not reading (except for the off-chance that the file you read is still on the SSD). But even then it would need a data path that is faster than 1Gbit all the way. What you could do is test a little by writing to the disks directly outside the Pool in a scenario that resembles your use case. If you experience no difference, just use one Pool; it makes management a lot easier. If anything, I would wonder more about duplication (do you use that?) and real backups.
  25. 1 point
    OK. My mistake, then. I haven't started this process, I just thought I remembered this fact from some rClone work I've done previously. I'll remove that comment.
  26. 1 point
    Same behaviour for me. I submitted a ticket today to support.
  27. 1 point
    I'm enjoying your diary entries as we are in the same boat... not laughing at you... but laughing/crying with you! I know that this must be done to protect our data long term... it's just a painful process.
  28. 1 point

    Win10 build broke DrivePool how to fix?

    Same here, on multiple PCs. Complete removal and reinstallation does not solve the problem.
  29. 1 point
    Just a heads up, this is a known issue, and we are looking into a solution for this issue (both to prevent it for new drives and for existing drives). I don't have an ETA, but we are actively working on it.
  30. 1 point
    So weird, this seems to make it work on my end also! Just go to https://console.developers.google.com/apis/library/drive.googleapis.com Press "Enable" and afterwards go disable it again - voila it works.
  31. 1 point
    In the DP GUI, see the two arrows to the right of the balancing status bar? If you press them, it will increase the I/O priority of DP. May help some. Other than that, ouch! Those are more like SMR speeds.
  32. 1 point

    My Rackmount Server

    So, nearly two and a half years down the line and a few small changes have been made:
    Main ESXi/Storage Server
    Case: LogicCase SC-4324S
    OS: VMWare ESXi 6.7
    CPU: Xeon E5-2650L v2 (deca-core)
    Mobo: Supermicro X9SRL-F
    RAM: 96GB (6 x 16GB) ECC RAM
    GFX: Onboard Matrox (+ Nvidia Quadro P400 passed through to Media Server VM for hardware encode/decode)
    LAN: Quad-port Gigabit PCIe NIC + dual on-board Gigabit NIC
    PSU: Corsair CX650
    OS Drive: 16GB USB Stick
    IBM M5015 SAS RAID Controller with 4 x Seagate Ironwolf 1TB RAID5 array for ESXi datastores (Bays 1-4)
    Dell H310 (IT Mode - passed through to Windows VM) + Intel RES2SV240 Expander for Drivepool drives (Bays 5-24)
    Onboard SATA Controller with 240GB SSD (passed through to Media Server VM)
    ESXi Server (test & tinker box)
    HP DL380 G6
    OS: VMWare ESXi 6.5 (custom HP image)
    CPU: 2 x Xeon L5520 (quad core)
    RAM: 44GB ECC DDR3
    2 x 750W Redundant PSU
    3 x 72GB + 3 x 300GB SAS drives (2 RAID5 arrays)
    Network Switch: TP-Link SG-1016D 16-port Gigabit switch
    UPS: APC SmartUPS SMT1000RMI2U
    Storage pools on the Windows storage VM now total 34TB (mixture of 1, 2 and 4TB drives) and there are still 6 bays free in the new 24 bay chassis for future expansion. There's always room for more tinkering and expansion but no more servers unless I get a bigger rack!
  33. 1 point
    You will never see the 1 for 1 files that you upload via the browser interface. CloudDrive does not provide a frontend for the provider APIs, and it does not store data in a format that can be accessed from outside of the CloudDrive service. If you are looking for a tool to simply upload files to your cloud provider, something like rClone or Netdrive might be a better fit. Both of those tools use the standard upload APIs and will store 1 for 1 files on your provider. See the following thread for a more in-depth explanation of what is going on here:
  34. 1 point
    Pools behave as if they are regular NTFS formatted volumes. However, any software that uses VSS (which many backup solutions do) is not supported. I don't know Crashplan, so I couldn't say. Having said that, you could back up the underlying drives. If you use duplication, then Hierarchical Pools can ensure that you only back up one instance of the duplicates.
  35. 1 point

    Setting Cache Drive destination

    There are some inherent flaws with USB storage protocols that would preclude it from being used as a cache for CloudDrive. You can see some discussion on the issue here: I don't believe they ever added the ability to use one. At least not yet.
  36. 1 point

    GSuite Drive. Migration\Extension

    A 4k (4096) cluster size supports a maximum volume size of 16TB. Thus, adding an additional 10TB to your existing 10TB with that cluster size exceeds the maximum limit for the file system, so that resize simply won't be possible. Volume size limits are as follows:
    Cluster Size | Maximum Partition Size
    4 KB | 16 TB
    8 KB | 32 TB
    16 KB | 64 TB
    32 KB | 128 TB
    64 KB | 256 TB
    This is unfortunately not possible, because of how Cloud Drive works. However, a simple option available to you is to simply partition your drive into multiple volumes (of a maximum 16TB apiece) and recombine them using DrivePool into one large aggregate volume of whatever size you require (CloudDrive's actual and technical maximum is 1PB per drive).
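The table follows from NTFS addressing clusters with 32-bit numbers: the maximum volume size is roughly cluster size × 2^32 (strictly 2^32 − 1 clusters; the table rounds to whole TB). A quick sketch reproducing the limits above:

```python
# NTFS uses 32-bit cluster numbers, so a volume holds at most ~2**32
# clusters; the maximum volume size therefore scales with cluster size.
def max_ntfs_volume_bytes(cluster_bytes: int) -> int:
    return cluster_bytes * 2**32

for kb in (4, 8, 16, 32, 64):
    tb = max_ntfs_volume_bytes(kb * 1024) // 2**40
    print(f"{kb} KB clusters -> {tb} TB maximum volume")
```

So a 20TB volume would need at least an 8 KB cluster size, which is why a 10TB volume formatted with 4 KB clusters can't be extended past 16TB.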
  37. 1 point
    Christopher (Drashna)

    Hiding Drives

    Welcome! And yeah, it's a really nice way to set up the system. It hides the drives and keeps them accessible at the same time.
  38. 1 point
    I guess that would really depend on the client/server/service, and how it handles uploaded files. However, it shouldn't be an issue, in most cases.
  39. 1 point

    2nd request for help

    Use remove. You can move through Explorer, but if you do that you need to stop the DrivePool service first. Moreover, once you start the DP service, it may try to rebalance files back to other drives, so you need to turn off balancing to prevent that from happening. Also, if you have duplication then you want to disable that first. Yes, it will all take some time but it has, AFAIK, never failed. Quick and dirty though... not that failsafe sometimes. And even cutting/pasting will take quite some time.
  40. 1 point
    Christopher (Drashna)

    Hiding Drives

    You can remove the drive letters and map to folder paths. We actually have a guide on how to do this: https://wiki.covecube.com/StableBit_DrivePool_Q4822624 It would be, but the only problem is that it would be too easy to break existing configurations. Which is why we don't have the option to do so.
  41. 1 point
    Thanks for the response. Turns out, I was clicking on the down arrow and that did not give the option of enable auto scanning. So after reading your response, I clicked on the "button" itself and it toggled to enabled. Problem solved. Auto scanning immediately started so I know that it is working. Thanks.
  42. 1 point

    Manualy delete duplicates?

    Inside each physical disk that's part of the pool is a hidden folder named with a unique identification ID. Inside these folders is the same folder structure as the pool itself. Your duplicated files/folders are simply stored on the appropriate number of disks; they're not actually denoted as duplicates in any way. If files are now duplicated that shouldn't be, it may be enough to simply re-check duplication.
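    Because duplicates are just identical relative paths inside each disk's hidden PoolPart folder, you can locate them yourself by scanning those folders. A rough sketch (Python; the PoolPart folder paths are illustrative assumptions, not anything DrivePool provides):

    ```python
    import os
    from collections import defaultdict

    def find_duplicated_paths(pool_part_dirs):
        """Map each relative file path to the PoolPart folders containing it.

        pool_part_dirs: list of hidden-folder paths, one per physical disk
        in the pool (e.g. r"D:\PoolPart.4a5d6340-...").
        """
        seen = defaultdict(list)
        for root_dir in pool_part_dirs:
            for dirpath, _dirnames, filenames in os.walk(root_dir):
                for name in filenames:
                    rel = os.path.relpath(os.path.join(dirpath, name), root_dir)
                    seen[rel].append(root_dir)
        # Paths present in two or more PoolPart folders are duplicated copies.
        return {rel: roots for rel, roots in seen.items() if len(roots) > 1}
    ```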
  43. 1 point
    Now I have an image of you having some sort of steampunk mechanical networking setup...
  44. 1 point
    In principle, yes. Not sure how to guarantee that they will stay there due to rebalancing, unless you use file placement rules.
  45. 1 point

    QNAP Hardware?

    Hello - I have been using WHS2011 and StableBit DrivePool on an HP ProLiant N54L for several years. I have been happy with it and don't really want to change, however it is 2020 now and, as I understand it, support for WHS2011 ended in 2016... So, I got my hands on a QNAP TVS-671 (Intel Core i5 based) NAS and was wondering if there is any way to still use Windows/StableBit DrivePool with this hardware? The NAS does support VMs, but it has to be set up in some way (either JBOD or RAID) before I can create/use VMs, so I'm not sure a VM running Windows/DrivePool would work... Would it? Or if there's a way to run Windows/StableBit DrivePool on a separate physical machine and then use the NAS as a 6-bay enclosure connected by a single cable, with DrivePool providing the fault tolerance, etc., I would be interested in that. Or any other suggestions for a way that I could use this QNAP NAS hardware with Windows/StableBit DrivePool... Any ideas/suggestions will be appreciated! Thanks!!
  46. 1 point

    Extension of a Plex Drive?

    Did you get this sorted? Seems to me you did everything correctly. So, to be clear:
    - You had a standalone 8TB drive that was getting full.
    - You bought a new 12TB drive.
    - You downloaded and installed DrivePool.
    - You created a brand new pool consisting of your old 8TB drive and the new 12TB drive, giving you a new virtual drive, G:

    Because G: is considered a new drive (since this is a new pool, it's empty), you are going to want to MOVE all of your files from E: to G:. That's all you should have to do. In the future when you add drives to the pool, you won't have to do anything; you should simply see the new free space from the new drive. ~RF
  47. 1 point

    Removing drive from pool

    Fully recognize that the current issue is not mine (but I'm the OP); however, I would highly appreciate it if: 1. someone could explain how to find out which files on the drives are unduplicated, and 2. this thread is updated with the recommended processes/commands to follow when a problem occurs, or a link to such processes/commands. cheers Edward
  48. 1 point
    Same here. Any update from the developers? This issue was opened a month ago and nothing... Not very good considering this is paid for software.
  49. 1 point

    Optimal settings for Plex

    Nope. No need to change anything at all. Just use DrivePool to create a pool using your existing CloudDrive drive, expand your CloudDrive using the CloudDrive UI, format the new volume with Windows Disk Management, and add the new volume to the pool. You'll want to MOVE (not copy) all of the data that exists on your CloudDrive to the hidden directory that DrivePool creates ON THE SAME DRIVE, and that will make the content immediately available within the pool. You will also want to disable most if not all of DrivePool's balancers because a) they don't matter, and b) you don't want DrivePool wasting bandwidth downloading and moving data around between the drives.

    So let's say you have an existing CloudDrive volume at E:. First you'll use DrivePool to create a new pool, D:, and add E:. Then you'll use the CloudDrive UI to expand the CloudDrive by 55TB. This will create 55TB of unmounted free space. Then you'll use Disk Management to create a new 55TB volume, F:, from the free space on your CloudDrive. Then you go back to DrivePool and add F: to your D: pool. The pool now contains both E: and F:.

    Now you'll want to navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool. Then just navigate to D: and all of your content will be there, as well as plenty of room for more. You can now point Plex and any other application at D: just like E: and it will work as normal. You could also replace the drive letter for the pool with whatever you used to use for your CloudDrive drive to make things easier.

    NOTE: Once your CloudDrive volumes are pooled, they do NOT need drive letters. You're free to remove them to clean things up, and you don't need to create volume labels for any future volumes you format either. My drive layout looks like this:
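    The "move everything into the hidden PoolPart folder" step can be sketched like this (Python; the drive path and folder-name pattern are illustrative assumptions; since source and destination are on the same volume, each move is a near-instant rename rather than a data copy):

    ```python
    import os
    import shutil

    def move_into_poolpart(drive_root):
        """Move everything on a drive into its hidden PoolPart.* folder.

        This is the manual step that makes pre-existing data on a pooled
        drive visible inside the pool.
        """
        pool_parts = [d for d in os.listdir(drive_root) if d.startswith("PoolPart.")]
        if len(pool_parts) != 1:
            raise RuntimeError(f"expected one PoolPart folder, found {pool_parts}")
        dest = os.path.join(drive_root, pool_parts[0])
        for entry in os.listdir(drive_root):
            # Skip the pool folder itself and NTFS system folders.
            if entry == pool_parts[0] or entry == "System Volume Information":
                continue
            shutil.move(os.path.join(drive_root, entry), os.path.join(dest, entry))
        return dest
    ```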
  50. 1 point
    Christopher (Drashna)

    SSD Optimizer problem

    This is part of the problem with the way that the SSD Optimizer balancer works. Specifically, it creates "real time placement limiters" to limit what disks new files can be placed on. I'm guessing that the SSD is below the threshold set for it (75% by default, so ~45-50GB). Increasing the limit on the SSD may help this (but lowering it may as well, since it would force the pool to place files on the other drives rather than on the SSD).

    Additionally, there are some configuration changes that may help make the software more aggressively move data off of the drive: http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing%20Settings

    On the main balancing settings page, set it to "Balance immediately", and uncheck the "No more often than every X hours" option, or set it to a low number like 1-2 hours. For the balancing ratio slider, set this to "100%", and check the "or if at least this much data needs to be moved" option and set it to a very low number (like 5GB). This should cause the balancing engine to rather aggressively move data out of the SSD drive and onto the archive drives, reducing the likelihood that this will happen. Also, it may not be a bad idea to use a larger sized SSD, as the free space on the drive is what gets reported when adding new files.
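    The placement-limiter behavior described above can be modeled roughly like this (Python; the 75% threshold and drive figures are illustrative assumptions, not StableBit's actual implementation):

    ```python
    def ssd_accepts_new_files(ssd_size_gb, ssd_used_gb, fill_limit=0.75):
        """Rough model of the SSD Optimizer's real-time placement limiter.

        New files are directed to the SSD only while its fill level is below
        the configured limit; past that, they land on the archive drives.
        """
        return ssd_used_gb / ssd_size_gb < fill_limit

    # A ~60GB SSD with the default 75% limit stops accepting new files
    # once roughly 45GB is used:
    print(ssd_accepts_new_files(60, 40))  # True  -> new files go to the SSD
    print(ssd_accepts_new_files(60, 46))  # False -> new files go to archive drives
    ```

    This is why more aggressive balancing (or a larger SSD) helps: keeping the SSD's used space below the limit keeps the fast tier available for new files.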

