srcrist (Members) - Content Count: 403, Days Won: 30

Everything posted by srcrist

  1. Submitting a ticket was the right call, here. It may be a bug, it may be a conflict with another service. It may even be a hardware or Windows issue. So you'll need to work with support via the ticket system. In any case, you should go ahead and download and run the Troubleshooter (https://wiki.covecube.com/StableBit_Troubleshooter), which will gather a bunch of information and submit it for your ticket. Just enter the ticket number you were given in the requisite field when you run the tool. Christopher and Alex will let you know if they need anything else from there.
  2. I have active instances of both rClone and CloudDrive myself as well. I use an rClone mirror for my data, actually, just as insurance against data loss and to provide flexibility. CloudDrive is my daily driver storage solution, though. I've simply found that it's faster and more compatible, with fewer quirks. This is the main thread that I can think of where the Team Drive issue has been addressed: The biggest technical issues mentioned are drive integrity, since Team Drives allow simultaneous access, and the hard limits on files and folders.
  3. Well, you can delete the service folder, but I wouldn't recommend it. Simply choosing to reauthorize the drive, or removing the connection from your providers list in the UI and re-adding it, should bring up the window to choose your credentials again.
  4. Sure thing. Also, double-check very carefully that you are actually using the same account that created the drive to authorize it now. You'll be using whatever Google account is logged in to your browser to authorize the service. As someone who uses multiple Google accounts for a few things, I have received this same error because I thought I had authorized with one account but didn't notice that I was actually signed in with another.
  5. The error, as you can probably deduce, is simply saying that it cannot locate the data for the volume when logging in with the provided credentials. But if you're absolutely sure that you're using the same Google credentials that the data was created with, and that the data is available when looking via other tools like the Google web UI, you might have just stumbled on a bug of some sort. Submit a ticket to the official support channel here: https://stablebit.com/Contact They'll get you sorted.
  6. When drives are first created, there will be some amount of file system data that is uploaded to your cloud provider. It should not be much more than a few hundred MB per drive, though, and should only take a few minutes to upload for most people. It's possible that something else, perhaps even Windows itself, is creating some sort of data on the drive. Make sure that Windows' indexing service, page file, and recycle bin are disabled on the CloudDrive volumes (a quick way to spot-check for those is sketched after this list). In any case, if you are just getting started I would highly discourage you from using eight separate CloudDrive drives unless there is some very specific reason that you need individual drives instead of one large drive split into multiple volumes. CloudDrive can be surprisingly resource intensive with respect to caching and bandwidth, and the difficulty of managing those things compounds pretty quickly for each additional drive you tax your local resources with. If you have, say, eight separate SSDs to use as cache drives and a 1Gbps connection with little concern for local bandwidth management, then by all means, feel free. Otherwise, though, you should be aware that a single CloudDrive can be resized up to 1PB total and can be partitioned into multiple volumes just like a local drive, and that CloudDrive itself provides bandwidth management tools within the UI to throttle a single drive.
  7. We should clarify some terms here. A partition, or volume, is different from the size of the drive. You can expand a CloudDrive drive up to a maximum of 1PB irrespective of the sector size. But the sector size will limit the size of individual volumes (or partitions) on the drive (see the worked example after this list). A CloudDrive can have multiple volumes, so if you're running out of space, you might just have to add a second volume to the drive. Note that multiple volumes can then be merged to recreate a single unified file system with tools like StableBit's own DrivePool.
  8. And to be very clear: though you cannot add your drive to multiple pools, you can add your pool to another pool, which is the proposal here. Thus, you can have, say, your pool of local drives and your pool of cloud provider drives both together in a master pool which can be used to duplicate between both subpools.
  9. No. I'm sorry. I'm not trying to be difficult, but I'm just not understanding the problem. If you want to nest your existing pool, you can just add it to a new pool and move the data to the new PoolPart folder that will be created within the pool you already have. If you want to, for some reason, break your existing pool apart, delete the pool you have, and create a new pool to nest with another pool...you'd just move all of the data out of your PoolPart folder, remove the drive from your existing pool, create the new one, and move the data to the PoolPart folder that you want it to be in. The major point here is that since DrivePool works at the file system level, all of the data is accessible (and modifiable) as existing files and folders on your drive. You can move and shuffle that around however you choose simply using Windows Explorer or another file manager. No matter what structure you want your pools to have, it's just a matter of moving data from one folder to another. It's entirely up to you.
  10. I guess I'm really not sure what you mean. When you add any drive to a pool, a hidden folder called PoolPart-XXXXXXXXXX will be created on that drive. Any data on that drive that you want to be accessible from within the pool just needs to be placed in that folder. If you're trying to nest the pools, you'll have a hidden PoolPart folder within a hidden PoolPart folder, and you'll just need to move all of the data there (see the sketch after this list). In every case, whatever pool you want the data in simply requires that the data be moved to the respective PoolPart folder for that pool on that drive. Any possible change you want to make simply involves moving data at the file system level either into or out of the PoolPart folder for a given pool. You can restructure any pool you have completely at will.
  11. I use CloudDrive for a few things, but the largest drive, which is the one that I just gave you settings for, is for media, yes. It's constantly uploading as well, basically. I am not currently using my own API keys for that drive. There isn't really an "optimal" prefetcher setting--it just depends on your needs. It shouldn't really impact this.
  12. Submit a ticket via the support contact form. It sounds like this might be resolved as simply as force mounting the drive after clearing the service folder, but I don't want to tell you anything that might cause you data loss. The form is the official support channel. It's located here: https://stablebit.com/Contact
  13. I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this.

      The overview page on the website actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to test any provider you'd like. So you can just install it and look at the list that way, if you'd like. CloudDrive does not support Team Drives/shared drives because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite drive accounts are supported.

      The primary tradeoff versus a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is something you're looking for, this probably just isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different from CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends--for specific use cases. rClone's entire goal/philosophy is to make it easier to access your data from a variety of locations and contexts--but that's not CloudDrive's goal, which is to make your cloud storage function as much like a physical drive as possible.

      I don't work for Covecube/StableBit, so I can't speak to any pricing they may offer you if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings to buying the bundle, if you want/need it.

      There is no file-based limitation. The limitation on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations.

      Again, I don't work for the company, but, in previous conversations about the subject, it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe that any ports are on the short- or even medium-term agenda. Hope some of that helps.
  14. Sure. Presently using:
      • 12 down threads, 5 up threads
      • Background I/O disabled
      • Upload throttling at 70 mbits/s
      • Upload threshold at 1MB or 5 minutes
      • 20MB minimum download size
      • Prefetching enabled, with an 8MB trigger, 150MB forward, and a 15 second window
  15. DrivePool actually works at the file system level, while CloudDrive is a block-based tool. You can move all of your data into a DrivePool pool without reuploading anything at all. You just need to move the content to the directory structure created by DrivePool. You'd just move the content on your Box drive to the directory created on your Box drive, and the content on your Gsuite drive to the directory created on that drive. It should be as instantaneous as any other intra-filesystem data move.
  16. I'm still not seeing an issue here. Might want to open a ticket and submit a troubleshooter and see if Christopher and Alex can see a problem. It doesn't seem to be universal for this release.
  17. I haven't had any errors with 1316 either. It might have been a localized network issue.
  18. Remember that your cache drive will be used for both reads and writes. These will often be simultaneous. You should expect significantly reduced cache performance when compared to straight read or write scenarios. A lot of us suggest using an SSD as the cache, as it works much better. A smaller SSD of even 64GB or so will deliver far better CloudDrive performance than a larger but slower 2TB HDD.
  19. I finally bit the bullet last night and converted my drives. I'd like to report that even in excess of 250TB, the new conversion process finished basically instantly and my drive is fully functional. If anyone else has been waiting, it would appear to be fine to upgrade to the new format now.
  20. That's it, really. That's the only way. You'll just need to manually copy all of your data out of CloudDrive to another storage location that Linux can mount. Note that I am not suggesting that this process is efficient or even wise; it's just the project that you're proposing. It would likely take weeks or even months, if your drive is large. Unfortunately, CloudDrive's structure precludes any simpler or more expeditious option. The data would have to be downloaded from the CloudDrive and reuploaded to the new location, yes. There is no way to move the data on the provider. No other application can even access CloudDrive's provider data anyway; it's completely obfuscated even if it isn't encrypted. Honestly, you don't want to run Plex on a server to which you do not have root access. If you have root access, you could set up a VM. There really shouldn't be a significant performance impact beyond running an entire Windows VM in the background. Generally you pass drives to the host via a network share over the loopback interface, so it should be relatively fast. Faster, certainly, than a real network share over a real network--and those are perfectly capable of streaming media as well.
  21. You actually specify the cache drive when you create or mount the drive. CloudDrive doesn't just choose one (aside from populating the box). You can change it to whatever you want if you detach the drive and reattach it.
  22. You would either have to copy all of the content from your CloudDrive to something with Linux support, like an rClone mount, or run CloudDrive via a Windows VM and pass the drive out to the host OS.
  23. This is really the wrong section of the forums for this, but if you want to force duplication to the cloud, your pool structure is wrong. The issue you're running into is that your CloudDrive volume exists within (and at the same level of priority as) the rest of your pool. A nested pool setup, used to balance between the CloudDrive and the rest of your pool, will give you more granular control over balancing rules specifically for the CloudDrive volume. You need to create a higher-level pool containing the CloudDrive volume and your entire existing pool. Then you can control duplication to the CloudDrive volume and your local duplication independently of one another.
  24. Sure thing. Happy to help.
  25. There is no appreciable performance impact from using multiple volumes in a pool.
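
A few rough sketches to go with the posts above. First, for the point in item 6 about keeping Windows' own features off of a CloudDrive volume: here's one way you could spot-check a volume for the usual Windows artifacts. The drive letter and the list of names checked are assumptions for illustration only; nothing here is CloudDrive-specific.

    # Minimal sketch (hypothetical drive letter): spot-check a CloudDrive volume
    # for files and folders that Windows' own features typically create.
    from pathlib import Path

    ARTIFACTS = ["pagefile.sys", "swapfile.sys", "hiberfil.sys", "$RECYCLE.BIN"]

    def check_volume(root: str) -> None:
        for name in ARTIFACTS:
            path = Path(root) / name
            try:
                present = path.exists()
            except OSError:
                present = True  # an access-denied error usually means it is there
            print(f"{path}: {'found' if present else 'not found'}")

    check_volume("D:/")  # replace with the CloudDrive volume's letter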
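
Second, to put rough numbers to the per-volume ceiling mentioned in item 7: the arithmetic below assumes the usual NTFS limit of about 2^32 allocation units (clusters) per volume, which is the typical source of that ceiling; whether that maps exactly onto the sector size the post mentions is an assumption here, and the unit sizes shown are illustrative rather than CloudDrive defaults.

    # Rough arithmetic, assuming the standard NTFS ceiling of 2^32 clusters per
    # volume. The allocation-unit sizes below are illustrative, not CloudDrive
    # defaults; the drive as a whole can still grow to 1PB by adding volumes.
    MAX_CLUSTERS = 2 ** 32

    def max_volume_tib(cluster_bytes: int) -> float:
        """Approximate upper bound on a single NTFS volume for a given cluster size."""
        return MAX_CLUSTERS * cluster_bytes / 2 ** 40

    for kib in (4, 16, 64):
        print(f"{kib:>2} KiB clusters -> about {max_volume_tib(kib * 1024):,.0f} TiB per volume")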
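
Finally, a minimal sketch of the file-level move described in items 8 through 10: once your existing pool has been added to a new outer pool, the data already on the pool drive just needs to move into the hidden PoolPart folder that the outer pool creates there. The drive letter and folder name below are hypothetical placeholders; the real PoolPart folders carry long generated suffixes.

    # Minimal sketch with hypothetical paths. DrivePool works at the file system
    # level, so this is an ordinary same-volume move, not a copy or re-upload.
    import shutil
    from pathlib import Path

    pool_drive = Path("P:/")                              # existing pool's drive letter (hypothetical)
    nested_poolpart = pool_drive / "PoolPart.example-id"  # hidden folder created by the new outer pool

    for entry in pool_drive.iterdir():
        # Leave the PoolPart folder itself (and any system metadata) in place.
        if entry.name.startswith("PoolPart") or entry.name == "System Volume Information":
            continue
        shutil.move(str(entry), str(nested_poolpart / entry.name))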