Covecube Inc.

All Activity


  1. Today
  2. I wanted to add a new drive to my pool. I could only do this with a bit of cable management, so I had to unplug all the drives, including those which were part of the pool. To do this I shut down the computer, did the cable management and started it up again. Now the drives which are part of the pool no longer show up in the BIOS. I tried reseating the power connectors and SATA cables, as well as trying different SATA ports, and nothing. Have I broken something? How can I access my drives again?
  3. With the new hierarchical chunk organization, shouldn't this now be technically possible?
  4. Yesterday
  5. I don't have enough info to try to diagnose this, but remeasuring seems like a sensible thing to do. Screenshots illustrating what you describe would help.
  6. Last week
  7. I moved my files to new HDDs and DrivePool thinks I duplicated these files, listing half of my space as duplicated. Will a "recheck duplication" under settings solve this issue? I don't want any duplication in my pool and have it disabled, yet DrivePool was trying to duplicate something. Is this normal?
  8. Thank you srcrist for your input! You bring up some good points I had not considered. I was trying to pool several small clouds to get one big cloud, but now I understand that the overhead can be a problem... so that is not as efficient as simply paying for more cloud place on fewer drives. I very much appreciate you taking the time to answer in such detail. Your input is very helpful!
  9. I have active instances of both rClone and CloudDrive myself as well. I use an rClone mirror for my data, actually, just as insurance against data loss and to provide flexibility. CloudDrive is my daily driver storage solution, though. I've simply found that it's faster and more compatible, with fewer quirks. This is the main thread I can think of where the Team Drive issue has been addressed: the biggest technical issues mentioned are related to drive integrity, because Team Drives allow simultaneous access, and to the hard limits on files and folders.
  10. MrNerdHair

    WSL 2 support

    Just want to add my two cents as a loyal DrivePool user -- WSL2 support is important to me. I'd like to do my dev work in DrivePool so it's duplicated.
  11. Well you can delete the service folder, but I wouldn't recommend it. Simply choosing to reauthorize the drive, or removing the connection from your providers list in the UI and re-adding it, should bring up the window to choose your credentials again.
  12. That is what I am thinking. After the reboot and reinstall it appears to have cached the credentials somewhere. Do you know where, so I can delete it? The reinstall remembered all of the configuration.
  13. Sure thing. Also just doublecheck very carefully that you are actually using the same account that created the drive to authorize it now. You'll be using whatever Google account is logged in to your browser to authorize the service. As someone who uses multiple Google accounts for a few things, I have received this same error because I thought that I authorized with one account but didn't notice that I actually was signed in with another.
  14. Thank you for the prompt reply @srcrist. I have logged a ticket. I will quickly try uninstalling clouddrive -> reboot -> clear all files -> reinstall to see if that fixes it.
  15. The error, as you can probably deduce, is simply saying that it cannot locate the data for the volume when logging in with the provided credentials. But, if you're absolutely sure that you're using the same google credentials that the data was created with, and that the data is available when looking via other tools like the Google web UI, you might have just stumbled on a bug of some sort. Submit a ticket to the official support channel here: https://stablebit.com/Contact They'll get you sorted.
  16. When drives are first created there will be some amount of file system data that will be uploaded to your cloud provider. It should not be much more than a few hundred MB per drive, though, and should only take a few minutes to upload for most people. It's possible that something else, perhaps even Windows itself, is creating some sort of data on the drive. Make sure that Windows' indexing service, page file, and recycle bin settings are disabled on the CloudDrive volumes.

In any case, if you are just getting started, I would highly discourage you from using eight separate CloudDrive drives unless there is some very specific reason that you need individual drives instead of one large drive split into multiple volumes. CloudDrive can be surprisingly resource intensive with respect to caching and bandwidth, and the difficulty of managing those things compounds pretty quickly for each additional drive you tax your local resources with. If you have like eight separate SSDs to use as cache drives and a 1gbps connection wherein you have little concern for local bandwidth management then, by all means, feel free. Otherwise, though, you should be aware that a single CloudDrive can be resized up to 1PB total and can be partitioned into multiple volumes just like a local drive, and that CloudDrive itself provides bandwidth management tools within the UI to throttle a single drive.
  17. Switching from unRAID to the trio of StableBit products, and I set up a CloudDrive with my G Suite account to store some files while I migrate and before purchasing. With unRAID I was running a Win10 VM where I tested all the software. Powered off unRAID and booted into the VM (bare metal). CloudDrive wanted me to transfer the license as it was a change of hardware; did that, no problem. Then it wanted me to reauthorize my drive, and that is where I am stuck: I keep getting the below. None of my credentials changed. I have made a second drive that authorised fine with the same credentials. How do I go about troubleshooting this? When I log into the G Suite account I can see the encrypted data, so it is there and the credentials are 100% correct.
  18. We should clarify some terms here. A partition, or volume, is different than the size of the drive. You can expand a CloudDrive drive up to a maximum of 1PB irrespective of the sector size. But the sector size will limit the size of individual volumes (or partitions) on the drive. A CloudDrive can have multiple volumes. So if you're running out of space, you might just have to add a second volume to the drive. Note that multiple volumes can then be merged to recreate a single unified file system with tools like Stablebit's own DrivePool.
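To make the sector-size point above concrete, here is a rough illustration (a simplified sketch, not CloudDrive's exact internals): file systems address a volume in clusters, and if the cluster count is capped at 2^32 and the cluster size equals the sector size, the maximum volume size scales linearly with the sector size, while the drive itself can still grow to 1PB by holding several such volumes.

```python
# Simplified sketch: maximum volume size when a file system can address
# at most 2**32 clusters and the cluster size equals the sector size.
# (Real limits depend on the file system and the chosen cluster size;
# the 2**32 figure here is an illustrative assumption.)

def max_volume_bytes(sector_size: int, max_clusters: int = 2**32) -> int:
    """Upper bound on a single volume's size, in bytes."""
    return sector_size * max_clusters

for sector in (512, 4096):
    tib = max_volume_bytes(sector) / 2**40
    print(f"{sector}-byte sectors -> up to {tib:.0f} TiB per volume")
```

Under these assumptions, 512-byte sectors cap a volume at 2 TiB and 4096-byte sectors at 16 TiB, which is why adding a second volume (rather than resizing the existing one) is the way past the limit.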
  19. Hello everyone! First of all I want to say that I really love your products - using Scanner on 5 devices as well as DrivePool on now 3 devices. So far it's been working flawlessly. On my new machine though, which will function as a remote backup server, I have problems with DrivePool getting stuck while balancing at very low percentages, between 1% and 5% roughly. At first I had added 2 x 5 TB drives and copied the majority of my files to the pool without any problems. I then added a spare 2 TB drive to the pool, and this is when the balancing problems started. On top of that, when increasing the priority of the balancing, DrivePool becomes unresponsive and crashes. All of these drives are healthy and have been running in other pools without any problems, and I have never had balancing problems like this on other pools either. I've tried the recent stable release of DrivePool as well as the most recent (as of today) BETA release, to no avail. :-(
  20. I have about 12TB waiting to upload for one of my cloud drives, and I would like to take just the files that are cached to be uploaded and move just those files to another drive (leaving the existing files that have been uploaded already). Is there a way to know which files in the hidden drivepool folder are actually physically still in the cache waiting to be uploaded so I can just move those files?
  21. I'm trying out CloudDrive and DrivePool. I'm impressed with the software, but am seeing an odd situation. I have a pool of eight cloud drives, with nothing in any of the drives (or the pool), yet there is constant upload activity of about 4 Mbps total for all drives (which is all I'm allowing it to do by throttling with NetLimiter software). I turned off auto-balancing. This has been going on for several hours and sucking up my upload bandwidth. All the cloud drives are reporting small amounts to upload, and that isn't from any files in them. Is there some kind of indexing going on which will eventually be done? Is there some configuration to be considered?
  22. I have asked tons of important questions over the past couple of months and still no reply from anyone. Please, support, be more active in this forum; we need you!
  23. Scanner's surface scan is described as being data scrubbing. That would mean that there's a checksum somewhere. Is it stored on the disk itself? More practically, can Scanner replace ReFS's integrity streams feature?
  24. I keep getting a lot of warnings in the service log and I have no clue what they mean. I keep getting a bunch of ItemAdded warnings in a row, and every so often I see an ItemRemoved, which is somewhat concerning. Can anyone shed some light on this?
  25. Have you tried reading https://stablebit.com/? Anyway, DP combines volumes into a large NTFS volume. It can have redundancy if you want. Either for the entire "Pool" or just certain folders therein. If one drive fails, the most you can lose is the data on that one drive. The data on the other drives will still be available. If all was redundant (we call that x2 duplication) then the Pool will reduplicate the lost files, provided you have enough space. Each drive you add to the Pool is not exclusively for the Pool. It will create a hidden PoolPart.* folder within which the Pooled files will be stored. You can use the rest of the drive as well outside of the Pool. DP simulates a NTFS drive but it also uses plain NTFS on the drives that are part of the Pool so any recovery tool that works on NTFS will work. x2 duplication is a bit like RAID-1 in the sense that it requires twice the space to store data. Parity/checksums are not supported by DP. You might be able to learn a lot actually from just reading a few threads here on the forum.
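To make the x2 duplication trade-off above concrete, here is a minimal sketch (hypothetical numbers; DrivePool's real balancer places file copies dynamically across the hidden PoolPart.* folders) of how usable capacity halves when every file is stored twice:

```python
# Minimal sketch of x2 duplication space accounting.
# Hypothetical example; not DrivePool's actual placement algorithm.

def usable_capacity_gb(drive_sizes_gb: list[int], duplication: int = 2) -> float:
    """Approximate usable pool space when every file is stored
    `duplication` times, each copy on a different drive.
    Ignores placement constraints, so this is an upper bound."""
    return sum(drive_sizes_gb) / duplication

pool = [5000, 5000, 2000]            # three drives, sizes in GB
print(usable_capacity_gb(pool))      # with x2 duplication: 6000.0
print(usable_capacity_gb(pool, 1))   # without duplication: 12000.0
```

This also shows why losing one drive loses at most that drive's share: each copy lives as plain NTFS files inside that drive's PoolPart folder, and with x2 duplication the surviving copy on another drive remains readable.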
  26. I recently heard about StableBit DrivePool as a way to turn multiple volumes into a single volume in File Explorer. I wanted to ask how this was done; is the process that is used closer to a RAID-0 setup, with no redundancy, such that if one drive fails, all the data is lost, or is the setup such that if a drive fails the data on the failed drive isn't lost?
  27. Well, in this case, it did work. Last time, it didn't, as it was already activated. I got a support ticket in; I used Edge (aughhh..) since Brave doesn't like them (or the other way around??).

But what on earth is wrong with an account-based unlock function like any other modern software? This is so 90's-Microsoft-level stuff that created the cracker market to begin with. How can these folks think of so many nice features and robust functionality and so screw it up when it comes to DRM? Even Microsoft finally got this right.

Let me state this for what little it's worth: you cannot stop someone from cracking your software. For 20 years now companies have been trying, and it ultimately never works. The only thing that can be done is to keep honest people honest, and the best way to do that is to make licensing as easy as it can be without giving away the farm. The common method that is the most out of the way and still works is account-based: Microsoft does this with their Microsoft accounts; Steam lets you have multiple installations but won't let you run on both machines at the same time; etc. There are hundreds of other examples, and most people are used to it by now. But for backup software (which is a function of this software: data integrity), it is strange that such an old-school 90's/2000's draconian system is in place. It boggles the mind. At least the data isn't locked behind this, or it would have been borderline criminal.

The fact that the data is not stored in some crappy format that blows up your data the minute the software or hardware goes kaput is why I am continuing to fight with this. I find the decoupling of data from medium the most intriguing part of this software, and to have this crappy key-lock system in place... augh.... WHY! I've already bought 3 copies, and before I knew this all was an issue, I had gotten a friend to get a copy... this was before I realized what kind of land mine was built into the thing. "Crap happens" when it comes to computers, so why make the clean-up so frustrating when it otherwise would be a snap? For what it's worth.
  28. Why not use the 30-day trial period so that you have time to work this out with support?
