srcrist
Members · Content Count: 466 · Days Won: 34
Everything posted by srcrist

  1. I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this. The overview page on the website actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to just test any provider you'd like. So you can just try it with your provider and see for yourself.
  2. Sure. Presently using:
     • 12 download threads, 5 upload threads
     • Background I/O disabled
     • Upload throttling at 70 Mbit/s
     • Upload threshold at 1 MB or 5 minutes
     • 20 MB minimum download size
     • Prefetching enabled, with an 8 MB trigger, 150 MB forward, and a 15-second window
  3. DrivePool actually works at the file system level, while CloudDrive is a block-based tool. You can move all of your data into a DrivePool pool without reuploading anything at all. You just need to move the content into the directory structure created by DrivePool: move the content on your Box drive into the directory created on your Box drive, and the content on your G Suite drive into the directory created on that drive. It should be as instantaneous as any other intra-filesystem data move (see the sketch below).
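     A minimal sketch of that kind of intra-volume move, assuming Python; the drive letter and PoolPart GUID below are placeholders, not real values:

```python
import os

# Hedged sketch: move existing data into DrivePool's hidden PoolPart
# directory on the SAME volume. A rename within one NTFS volume only
# rewrites file system metadata, so nothing is re-uploaded.
src = r"G:\Media"
dst = r"G:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\Media"

os.rename(src, dst)  # near-instant: same-volume, metadata-only move
```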
  4. I'm still not seeing an issue here. Might want to open a ticket and submit a troubleshooter and see if Christopher and Alex can see a problem. It doesn't seem to be universal for this release.
  5. I haven't had any errors with 1316 either. It might have been a localized network issue.
  6. Remember that your cache drive will be used for both reads and writes, often simultaneously, so you should expect significantly reduced cache performance compared to straight read or write scenarios. A lot of us suggest using an SSD as the cache; it works much better. Even a smaller SSD of 64GB or so will give far superior CloudDrive performance compared to a larger but slower 2TB HDD.
  7. I finally bit the bullet last night and converted my drives. I'd like to report that even in excess of 250TB, the new conversion process finished basically instantly and my drive is fully functional. If anyone else has been waiting, it would appear to be fine to upgrade to the new format now.
  8. That's it, really. That's the only way. You'll just need to manually copy all of your data out of CloudDrive to another storage location that Linux can mount. Note that I am not suggesting that this process is efficient or even wise. It's just the project that you're proposing. It would likely take weeks or even months if your drive is large. Unfortunately, CloudDrive's structure precludes any simpler or more expeditious option. The data would have to be downloaded from the CloudDrive and reuploaded to the new location, yes. There is no way to move the data on the provider, and no other application can read the data CloudDrive stores there.
  9. You actually specify the cache drive when you create or mount the drive. CloudDrive doesn't just choose one (aside from pre-populating the selection box with a default). You can change it to whatever you want if you detach the drive and reattach it.
  10. You would either have to copy all of the content from your CloudDrive to something with Linux support, like an rClone mount, or run CloudDrive in a Windows VM and pass the drive through to the host OS.
  11. This is really the wrong section of the forums for this, but if you want to force duplication to the cloud, your pool structure is wrong. The issue you're running into is that your CloudDrive volume exists within (and at the same level of priority as) the rest of your pool. A nested pool setup that is used to balance between the CloudDrive and the rest of your pool will allow you more granular control over balancing rules specifically for the CloudDrive volume. You need to create a higher-level pool containing the CloudDrive volume and your entire existing pool. Then you can control duplication to the cloud from that higher-level pool (sketched below).
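     A rough sketch of the hierarchy described above (pool names are illustrative):

     Top-Level Pool              <- enable duplication here
     ├── Existing Local Pool        (one copy stays local)
     └── CloudDrive Volume          (the duplicate goes to the cloud)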
  12. Sure thing. Happy to help.
  13. There is no appreciable performance impact from using multiple volumes in a pool.
  14. I think that it is better to keep any single volume under the 60TB limit for Volume Shadow Copy and chkdsk. Any volume larger than that cannot be repaired in the inevitable event of minor file system corruption, and chkdsk is more or less essential for maintaining an NTFS volume. So I would suggest splitting the volume. Note that if the volume isn't actually full, you can shrink it and add additional volumes of the appropriate size (a quick worked example follows).
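     As a rough worked example (a minimal Python sketch; the 60TB ceiling is the figure from the post above):

```python
import math

# Hedged sketch: how many sub-60TB NTFS volumes a large CloudDrive
# needs so that chkdsk can still repair each volume individually.
CHKDSK_SAFE_LIMIT_TB = 60

def volumes_needed(drive_size_tb: float) -> int:
    return math.ceil(drive_size_tb / CHKDSK_SAFE_LIMIT_TB)

# A 256TB drive, for instance, needs 5 volumes of ~51.2TB each.
print(volumes_needed(256))  # -> 5
```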
  15. Because CloudDrive is a block-based system, it has no real awareness of the files on your drive. Drastic changes would have to be made in order to allow for pinning at a file-based level. I believe there is already a request in the tracker for a similar feature, but I doubt we'll see any action on it any time soon. Christopher will have to share details on that, if he has them. In any case, you can disable the thumbnail wait entirely by simply changing the folder type to "General Items." This can be done on an entire drive or folder tree as well. Windows will no longer try to load thumbnails for the content in those folders (one way to script the change is sketched below).
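     A minimal sketch of that folder-template change, assuming Python on Windows. The desktop.ini [ViewState]/FolderType mechanism is standard Explorer behavior, but the path is illustrative, so test on a scratch folder first:

```python
import subprocess
from pathlib import Path

# Hedged sketch: force a folder to the "General Items" template so
# Explorer stops scanning it for media metadata and thumbnails.
def set_general_items(folder: str) -> None:
    ini = Path(folder) / "desktop.ini"
    ini.write_text("[ViewState]\nMode=\nVid=\nFolderType=Generic\n")
    # desktop.ini must be hidden+system, and the folder needs its
    # read-only bit set, before Explorer honors the template.
    subprocess.run(["attrib", "+h", "+s", str(ini)], check=True)
    subprocess.run(["attrib", "+r", folder], check=True)

set_general_items(r"D:\CloudDrive\Media")  # illustrative path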
  16. Volumes each have their own file system, so moving data between volumes will require the data to be reuploaded. Only moves within the same file system can be made without reuploading, because only file system metadata needs to be modified to make such a change.
  17. Yep. The data duplication is self-healing.
  18. I'm not sure if there is some complication that I'm missing in what you're asking here, but, based on how I'm reading your question (my read is that you are already using CloudDrive for your data), you should be able to simply detach the drive from the computer it is presently mounted to and attach it to your remote server once CloudDrive and DrivePool are installed. Note that both applications are Windows only, and Windows Server can be expensive in a data center.
  19. It sounds like we might be stumbling on some buggy cache code, between this and the previous notes from Chase about deleting the cache. Make sure you guys are submitting actual tickets and troubleshooter dumps as well, so Alex and Christopher can take a look at the related code and your logs.
  20. The cache also includes information that has been modified but not yet uploaded. Everyone should be *very* careful before simply deleting the local cache. Note that any modifications that have not yet been uploaded to the cloud provider will be permanently lost, and you could potentially corrupt your drive as a result. I believe that you are theoretically correct as long as nothing in the cache is still waiting to be uploaded, but extreme caution should be used before following this example. Anyone who deletes their cache with data in the "to upload" state will, A) definitely lose that data, and B) risk corrupting the drive outright.
  21. OK. My mistake, then. I haven't started this process, I just thought I remembered this fact from some rClone work I've done previously. I'll remove that comment.
  22. I'm sorry, but this seems incorrect. A Google search returns precisely zero results for that API error prior to this month. The earliest result that I can find was from a Japanese site 20 days ago. I don't know why you seem so resistant to the notion that this is a change in Google's API that will need to be addressed by all of us sooner or later, but the objective information available seems to suggest that this is the case. I am also not having errors at the moment (which might be related to the fact that my account is a legitimate account at a large institution).
  23. It should probably just be noted that Stablebit is precisely two people. They have never had, to my knowledge, any more people than just Alex doing software development.
  24. Note that the forums are not the official support channel. If you need support, submit a ticket to support at this link: https://stablebit.com/Contact