Everything posted by srcrist

  1. I see this hasn't had an answer yet. Let me start off by noting that the forums are really intended for user-to-user discussion and advice; you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only check the forums occasionally, as time permits. But I'll help you out with some of this.

     The overview page on the web site actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days, so you can just install it and test any provider you'd like. CloudDrive does not support Teamdrives/shared drives, because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite accounts are supported.

     The primary tradeoff versus a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is what you're looking for, this probably isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different from CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends--for specific use cases. rClone's entire goal/philosophy is to make it easier to access your data from a variety of locations and contexts. That's not CloudDrive's goal, which is to make your cloud storage function as much like a physical drive as possible.

     I don't work for Covecube/Stablebit, so I can't speak to any pricing they may offer you if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings in buying the bundle, if you want or need it.

     There is no file-based limitation. The limit on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations (a rough illustration follows below).

     Again, I don't work for the company, but, in previous conversations on the subject, it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe any ports are on the short- or even medium-term agenda. Hope some of that helps.
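     To put rough numbers on the per-folder point above, here's a minimal back-of-the-envelope sketch. Both figures in it are assumptions for illustration only: Google's per-folder cap is reportedly 500,000 items, and 20MB is a typical CloudDrive chunk size, but neither number comes from this thread.

```python
# Back-of-the-envelope sketch: how many provider-side folders a chunked
# drive needs to stay under a per-folder item limit. The 500,000-item
# cap and the 20 MB chunk size are assumptions for illustration only.
import math

PER_FOLDER_LIMIT = 500_000          # assumed Google Drive items-per-folder cap
CHUNK_SIZE_BYTES = 20 * 1024**2     # assumed 20 MB chunk size

def folders_needed(drive_size_bytes: int) -> int:
    """Minimum folders required to keep every chunk under the cap."""
    chunks = math.ceil(drive_size_bytes / CHUNK_SIZE_BYTES)
    return math.ceil(chunks / PER_FOLDER_LIMIT)

if __name__ == "__main__":
    for size_tb in (10, 100, 1024):  # 1024 TB = the 1 PB drive cap
        size_bytes = size_tb * 1024**4
        print(f"{size_tb:>5} TB -> {folders_needed(size_bytes)} folder(s)")
```

     Even a full 1PB drive only needs on the order of a hundred folders under these assumptions, which is presumably why the per-folder limit is a non-issue for CloudDrive.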
  2. Sure. Presently using: 12 down threads, 5 up threads Background I/O Disabled Upload throttling at 70mbits/s Upload threshold at 1MB or 5mins 20MB Minimum download size Prefetching enabled with: 8MB trigger 150MB forward and a 15 second window
  3. DrivePool actually works at the file system level, while CloudDrive is a block-based tool. You can move all of your data into a DrivePool pool without reuploading anything at all. You just need to move the content into the directory structure created by DrivePool: move the content on your Box drive into the directory created on your Box drive, and the content on your GSuite drive into the directory created on that drive. It should be as instantaneous as any other intra-filesystem data move (a sketch follows below).
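     As a rough illustration of the point above, here's a minimal Python sketch of such an intra-filesystem move. DrivePool keeps pooled content in a hidden PoolPart.<GUID> folder at the root of each pooled drive, but the exact paths here are hypothetical placeholders; check the actual hidden folder name on your own drive.

```python
# Minimal sketch: move existing content into DrivePool's hidden PoolPart
# folder on the SAME drive. Because source and destination share one file
# system, only metadata changes -- nothing is rewritten or reuploaded.
# Paths are hypothetical placeholders; find the real PoolPart.<GUID>
# folder at the root of your own pooled drive.
import os
import shutil

SOURCE = r"E:\Media"                                            # existing content
POOLPART = r"E:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # DrivePool's hidden folder

for name in os.listdir(SOURCE):
    src = os.path.join(SOURCE, name)
    dst = os.path.join(POOLPART, "Media", name)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.move(src, dst)  # same-volume move: effectively a rename, near-instant
    print(f"moved {src} -> {dst}")
```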
  4. I'm still not seeing an issue here. You might want to open a ticket and submit a troubleshooter dump so that Christopher and Alex can look for a problem. It doesn't seem to be universal for this release.
  5. I haven't had any errors with 1316 either. It might have been a localized network issue.
  6. Remember that your cache drive will be used for both reads and writes, often simultaneously, so you should expect significantly reduced cache performance compared to straight read or write scenarios. A lot of us suggest using an SSD as the cache, as it works much better. A smaller SSD, even 64GB or so, will deliver far better CloudDrive performance than a larger but slower 2TB HDD.
  7. I finally bit the bullet last night and converted my drives. I'd like to report that even in excess of 250TB, the new conversion process finished basically instantly and my drive is fully functional. If anyone else has been waiting, it would appear to be fine to upgrade to the new format now.
  8. That's it, really. That's the only way. You'll just need to manually copy all of your data out of CloudDrive to another storage location that Linux can mount (see the sketch below). Note that I am not suggesting that this process is efficient or even wise; it's just the project that you're proposing. It would likely take weeks or even months if your drive is large. Unfortunately, CloudDrive's structure precludes any simpler or more expeditious option. The data would have to be downloaded from the CloudDrive and reuploaded to the new location, yes. There is no way to move the data on the provider side, and no other application can even access CloudDrive's provider data anyway--it's completely obfuscated even when it isn't encrypted. Honestly, you don't want to run Plex on a server to which you do not have root access. If you have root access, you could set up a VM. There really shouldn't be a significant performance impact beyond running an entire Windows VM in the background. Generally you pass drives to the host via a network share over the loopback interface, so it should be relatively fast--faster, certainly, than a real network share over a real network, and those are perfectly capable of streaming media as well.
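     If you do attempt a bulk copy like that, a minimal sketch of a resumable approach is below. It assumes the CloudDrive is mounted at a drive letter and the destination is something like an rClone mount or network share; the paths and retry count are illustrative assumptions, not anything recommended in this thread.

```python
# Minimal sketch of a resumable bulk copy out of a mounted CloudDrive.
# Skips files that already exist with the same size, so it can be
# re-run after interruptions. Paths and retry count are assumptions.
import os
import shutil
import time

SRC = "D:\\"             # hypothetical: mounted CloudDrive
DST = r"Z:\migrated"     # hypothetical: rClone mount / network share

for root, dirs, files in os.walk(SRC):
    rel = os.path.relpath(root, SRC)
    out_dir = os.path.join(DST, rel)
    os.makedirs(out_dir, exist_ok=True)
    for name in files:
        src, dst = os.path.join(root, name), os.path.join(out_dir, name)
        if os.path.exists(dst) and os.path.getsize(dst) == os.path.getsize(src):
            continue  # already copied on a previous run
        for attempt in range(3):  # retry transient download/provider errors
            try:
                shutil.copy2(src, dst)
                break
            except OSError as exc:
                print(f"retry {attempt + 1} for {src}: {exc}")
                time.sleep(30)
```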
  9. You actually specify the cache drive when you create or mount the drive. CloudDrive doesn't just choose one (aside from pre-populating the selection box with a default). You can change it to whatever you want by detaching the drive and reattaching it.
  10. You would either have to copy all of the content from your CloudDrive to something with Linux support, like an rClone mount, or run CloudDrive in a Windows VM and pass the drive out to the host OS.
  11. This is the wrong section of the forums for this, really, but if you want to force duplication to the cloud, your pool structure is wrong. The issue you're running into is that your CloudDrive volume exists within (and at the same level of priority as) the rest of your pool. You need to create a higher-level pool containing the CloudDrive volume and your entire existing pool; a nested setup like that gives you more granular control over balancing rules specifically for the CloudDrive volume. Then you can control duplication to the CloudDrive volume and your local duplication independently of one another.
  12. Sure thing. Happy to help.
  13. There is no appreciable performance impact from using multiple volumes in a pool.
  14. I think it is better to keep any single volume under the 60TB limit for Volume Shadow Copy and chkdsk. Any volume larger than that cannot be repaired when the inevitable minor file system corruption occurs, and chkdsk is more or less essential for maintaining an NTFS volume. So I would suggest splitting the volume (see the sketch below). Note that if the volume isn't actually full, you can shrink it to add additional volumes of the appropriate size.
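     Here's a worked example of that split as a minimal sketch. The 55TB per-volume target is an arbitrary assumption chosen to stay safely under the 60TB chkdsk/VSS ceiling, not a figure from this thread.

```python
# Minimal sketch: how many sub-60 TB volumes a large CloudDrive needs.
# The 55 TB per-volume target is an arbitrary safety margin under the
# 60 TB chkdsk/VSS ceiling discussed above.
import math

CHKDSK_CEILING_TB = 60
TARGET_TB = 55  # assumed safety margin

def split_plan(drive_tb: int) -> tuple[int, float]:
    """Return (volume count, size per volume in TB)."""
    count = math.ceil(drive_tb / TARGET_TB)
    return count, drive_tb / count

for drive_tb in (100, 256, 1024):
    count, each = split_plan(drive_tb)
    assert each < CHKDSK_CEILING_TB
    print(f"{drive_tb} TB drive -> {count} volumes of ~{each:.1f} TB")
```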
  15. Because CloudDrive is a block-based system, it has no real awareness of the files on your drive. Drastic changes would have to be made to allow pinning at the file level. I believe there is already a request in the tracker for a similar feature, but I doubt we'll see any action on it any time soon. Christopher will have to share details on that, if he has them. In any case, you can disable the thumbnail wait entirely by simply changing the folder type to "General Items." This can be done on an entire drive or folder tree as well (see the sketch below). Windows will no longer try to load or generate thumbnails if you do so.
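     One scriptable way to apply the "General Items" template is the standard desktop.ini mechanism Explorer uses for folder customization. Treat the snippet below as a sketch: the path is a hypothetical placeholder, it affects only the one folder it's written to (you'd walk the tree to cover subfolders), and the usual Explorer route is simply Properties > Customize.

```python
# Minimal sketch: set a folder's Explorer template to "General Items"
# via desktop.ini (FolderType=Generic), so Explorer stops probing media
# files for thumbnails. This is the standard Explorer customization
# mechanism, but the path here is a hypothetical placeholder.
import os
import subprocess

FOLDER = r"D:\Media"  # hypothetical placeholder

ini_path = os.path.join(FOLDER, "desktop.ini")
with open(ini_path, "w", encoding="utf-8") as f:
    f.write("[ViewState]\nMode=\nVid=\nFolderType=Generic\n")

# desktop.ini must be hidden+system, and the folder must carry the
# read-only attribute, or Explorer will ignore the customization.
subprocess.run(["attrib", "+h", "+s", ini_path], check=True)
subprocess.run(["attrib", "+r", FOLDER], check=True)
```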
  16. Volumes each have their own file system. Moving data between volumes will require the data to be reuploaded. Only moves within the same file system can be made without reuploading the data, because only the file system data needs to be modified to make such a change.
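     To see that distinction concretely, here's a tiny sketch. The drive letters and file names are hypothetical placeholders; the point is just that a rename only touches file system metadata, so it can never cross a volume boundary.

```python
# Tiny sketch of the same-file-system rule above: os.rename is a pure
# metadata operation and only works within one volume. Across volumes it
# fails, and the data must actually be copied (and, on a CloudDrive,
# reuploaded). Drive letters and paths are hypothetical placeholders.
import os
import shutil

# Same volume: a metadata-only rename, effectively instant.
os.rename(r"D:\pool\movie.mkv", r"D:\archive\movie.mkv")

# Different volume: the rename itself fails...
try:
    os.rename(r"D:\archive\movie.mkv", r"E:\archive\movie.mkv")
except OSError as exc:
    print(f"rename failed across volumes: {exc}")
    # ...and the move becomes a full copy + delete of the data:
    shutil.move(r"D:\archive\movie.mkv", r"E:\archive\movie.mkv")
```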
  17. Yep. The data duplication is self-healing.
  18. I'm not sure if there is any complication that I'm missing in what you're asking here, but, based on how I'm reading your question (and my read is that you are already using CloudDrive for your data), you should be able to simply detach the drive from the computer that it is presently mounted to and attach it to your remote server once CloudDrive and DrivePool are installed. Note that both applications are Windows only, and Windows Server can be expensive in a data center.
  19. It sounds like we might be stumbling on some buggy cache code, between this and the previous notes from Chase about deleting the cache. Make sure you guys are submitting actual tickets and troubleshooter dumps as well, so Alex and Christopher can take a look at the related code and your logs.
  20. The cache also includes information that has been modified but not yet uploaded. Everyone should be *very* careful before simply deleting the local cache. Any modifications that have not yet been uploaded to the cloud provider will be permanently lost, and you could potentially corrupt your drive as a result. I believe you are theoretically correct as long as everything in the cache has already been uploaded, but extreme caution should be used before following this example. Anyone who deletes their cache with data still in the "to upload" state will (a) definitely lose that data for good, and (b) potentially corrupt their drive, depending on what that data is (read: file system data, for example).
  21. OK. My mistake, then. I haven't started this process, I just thought I remembered this fact from some rClone work I've done previously. I'll remove that comment.
  22. I'm sorry, but this seems incorrect. A Google search returns precisely zero results for that API error prior to this month. The earliest mention I can find is from a Japanese site 20 days ago. I don't know why you seem so resistant to the notion that this is a change in Google's API that will need to be addressed by all of us sooner or later, but the objective information available suggests that this is the case. I am also not seeing errors at the moment (which might be related to the fact that my account is a legitimate account at a large institution, and not a personal GSuite Business account), but I anticipate that we will all, eventually, need to migrate to a structure conformant with this new policy. This kind of obscurity and suddenness is consistent with Google's API changes in the past as well; nothing about this should be particularly surprising. It's just the sloppy way in which Google typically does business. EDIT: Note also that this API error code (numChildrenInNonRootLimitExceeded) was added to Google's API reference sometime between April and June.
  23. It should probably just be noted that Stablebit is precisely two people. They have never had, to my knowledge, any more people than just Alex doing software development.
  24. Note that the forums are not the official support channel. If you need support, submit a ticket to support at this link: https://stablebit.com/Contact