Posts posted by srcrist

  1. I see this hasn't had an answer yet. Let me start off by just noting that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this.

    On 7/19/2020 at 10:47 PM, TheStigma said:

    nitty-gritty details about exactly what services SBCD can interface with

    The overview page on the web site actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to just test any provider you'd like. So you can just install it and look at the list that way, if you'd like.

    On 7/19/2020 at 10:47 PM, TheStigma said:

    I currently use primarily Gdrive (and specifically including some "shared drives", formerly "Teamdrives").

    CloudDrive does not support Teamdrives/shared drives because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite drive accounts are supported.

    On 7/19/2020 at 10:47 PM, TheStigma said:

    Coming from rclone I am also spoiled for having a lot of flexibility - so it would be nice to get an idea of how SBCD compared in that department.

    The primary tradeoff versus a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is something you're looking for, this probably just isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different from CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends--for specific use cases. rClone's entire goal and philosophy is to make it easier to access your data from a variety of locations and contexts; CloudDrive's goal is to make your cloud storage function as much like a physical drive as possible.

    On 7/19/2020 at 10:47 PM, TheStigma said:

    (does it cost a lot more to buy it piecemeal later?).

    I don't work for Covecube/Stablebit, so I can't speak to any pricing they may offer you if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings to buying the bundle, if you want/need it.

    On 7/19/2020 at 10:47 PM, TheStigma said:

    What is SBCD's strategy when it comes to limitations in number of files?

    There is no file-based limitation. The limitation on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations.

    On 7/19/2020 at 10:47 PM, TheStigma said:

    Is Linux support something that is likely to ever happen?

    Again, I don't work for the company, but, in previous conversations on the subject, it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe any ports are on the short- or even medium-term agenda.

    Hope some of that helps.

  2. 14 minutes ago, darkly said:

    Might do that if it keeps happening. Out of curiosity, can you share your I/O Performance settings?

    Sure. Presently using:

    • 12 down threads, 5 up threads
    • Background I/O Disabled
    • Upload throttling at 70 Mbps
    • Upload threshold at 1MB or 5mins
    • 20MB Minimum download size
    • Prefetching enabled with:
      • 8MB trigger
      • 150MB forward
      • and a 15 second window
  3. DrivePool actually works at the file system level, while CloudDrive is a block-based tool. You can move all of your data into a DrivePool pool without reuploading anything at all. You just need to move the content to the directory structure created by DrivePool. You'd just move the content on your Box drive to the directory created on your Box drive, and the content on your Gsuite drive to the directory created on that drive. It should be as instantaneous as any other intra-filesystem data move.
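    To illustrate why the move is effectively instantaneous: an intra-filesystem move is just a rename, so only directory entries change and no file data is rewritten or reuploaded. Here's a minimal sketch of the idea, simulated on a scratch directory (the "PoolPart" folder name and paths are illustrative, not taken from a real DrivePool setup):

```python
import os
import shutil
import tempfile

# Simulate the move on a scratch directory. On a real setup the source
# would be something like D:\Media and the destination the hidden
# PoolPart.* folder DrivePool creates on the same drive (names here
# are illustrative).
root = tempfile.mkdtemp()
src = os.path.join(root, "Media")
dst_dir = os.path.join(root, "PoolPart.example")
os.makedirs(src)
os.makedirs(dst_dir)
with open(os.path.join(src, "movie.mkv"), "wb") as f:
    f.write(b"\0" * 1024)

# src and dst share a filesystem, so shutil.move is a rename under the
# hood: only directory entries change, and no file data is copied.
shutil.move(src, os.path.join(dst_dir, "Media"))
print(os.path.exists(os.path.join(dst_dir, "Media", "movie.mkv")))  # True
```

    The same rename-vs-copy distinction is why moving between two different drives (or two different CloudDrive volumes) is slow: the data itself has to be rewritten.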

  4. 16 hours ago, darkly said:

    still going on. Happens several times a day consistently since I upgraded to 1316. Never had the error before unless I was actually having a connection issue with my internet.

    I'm still not seeing an issue here. Might want to open a ticket and submit a troubleshooter and see if Christopher and Alex can see a problem. It doesn't seem to be universal for this release.

  5. Remember that your cache drive will be used for both reads and writes, often simultaneously. You should expect significantly reduced cache performance compared to straight read or write scenarios. A lot of us suggest using an SSD as the cache, as it works much better. Even a smaller SSD of 64GB or so will result in far superior CloudDrive performance compared to a larger but slower 2TB HDD.

  6. 6 hours ago, davidkain said:

    Do you have a recommendation for how best to handle the copy? My first thought would be to just open up my CloudDrive and an rClone mount on my machine, and then drag and drop. Is that correct?

    That's it, really. That's the only way. You'll just need to manually copy all of your data out of CloudDrive to another storage location that Linux can mount. Note that I am not suggesting that this process is efficient or even wise; that's just the nature of the project you're proposing. It would likely take weeks or even months if your drive is large. Unfortunately, CloudDrive's structure precludes any simpler or more expeditious option.

    6 hours ago, davidkain said:

    And in this case, would the data be moving from datacenter to datacenter, or does it come down to my machine first, and then back up to the cloud? If the latter, I'd be worried about data corruption and just general duration. I've only got 10Mbps up.

    The data would have to be downloaded from the CloudDrive and reuploaded to the new location, yes. There is no way to move the data on the provider. No other application can even access CloudDrive's provider data anyway. It's completely obfuscated even if it isn't encrypted.

    6 hours ago, davidkain said:

    As for the Windows VM, that sounds like a cool approach (assuming the provider offers this capability), but would there be performance implications to streaming that media?

    Honestly, you don't want to run Plex on a server to which you do not have root access. If you do have root access, you could set up a VM. There really shouldn't be a significant performance impact beyond the overhead of running an entire Windows VM in the background. Generally you'd pass the drive to the host via a network share over the loopback interface, so it should be relatively fast--faster, certainly, than a real network share over a real network, and those are perfectly capable of streaming media as well.

  7. 10 minutes ago, smitbret said:

    Currently, Cloud Drive has chosen one of the drives in my SnapRAID pool as the cache drive.  Because of this, about half of my SnapRAID parity synchs fail because CloudDrive writes data to one of the protected drives during the synch.

    I have another drive in my server that I use just for stuff like this but I don't see where I can assign/move the cache folder to this drive.

    Am I missing a setting here?

    You actually specify the cache drive when you create or mount a drive; CloudDrive doesn't choose one on its own (aside from pre-populating the selection box). You can change it to whatever you want by detaching the drive and reattaching it.

  8. 8 hours ago, davidkain said:

    @Chase - Wow, that sounds amazing! It's out of my price range, though, unfortunately.

    @Christopher (Drashna) - On the migration front, what would the process look like if I did find a Linux seedbox I liked and needed to migrate off from my StableBit Pool of Drives to something on GDrive that the seedbox could read? I'm working with ~20TB of data, so curious what that process would even look like (and/or if I'd need any special utilities to do it properly). I suspect I'm sticking with what I've got, but I'm curious.

    You would either have to copy all of the content from your CloudDrive to something with linux support like an rClone mount, or run CloudDrive via a Windows VM and pass the drive out to the host OS.

  9. This is the wrong section of the forums for this really, but if you want to force duplication to the cloud, your pool structure is wrong. The issue you're running into is that your CloudDrive volume exists within (and at the same level of priority as) the rest of your pool. A nested pool setup that is used to balance to the CloudDrive and the rest of your pool will allow you more granular control over balancing rules specifically for the CloudDrive volume.

    You need to create a higher level pool with the CloudDrive volume and your entire existing pool. Then you can control duplication to the CloudDrive volume and your local duplication independently of one another.

  10. 2 minutes ago, ventualle said:

    thanks, last thing, for plex if i use two units in a pool instead of a single one the performances are different or nothing changes?

    There is no appreciable performance impact by using multiple volumes in a pool.

  11. 1 hour ago, ventualle said:

    thanks for the answer, do you think it is better to have a single 250TB volume as it is already using or is it better to divide it into multiple partitions and then unify it in a pool?

    I think that it is better to keep any single volume under the 60TB limit for Volume Shadow Copy and chkdsk. Any volume larger than that cannot be repaired in the inevitable event of minor file system corruption, and chkdsk is essential for maintaining an NTFS volume. So I would suggest splitting the volume. Note that if the volume isn't actually full, you can shrink it to make room for additional volumes of the appropriate size.
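    As a quick back-of-the-envelope check (the 250TB figure comes from the question above; the 60TB ceiling is the chkdsk/VSS limit mentioned here):

```python
import math

total_tb = 250        # size of the existing single volume
max_volume_tb = 60    # practical chkdsk / Volume Shadow Copy ceiling

# Minimum number of equal-size volumes that keeps each one under the cap.
volumes_needed = math.ceil(total_tb / max_volume_tb)
size_per_volume = total_tb / volumes_needed

print(volumes_needed, size_per_volume)  # 5 volumes of 50.0 TB each
```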

  12. Because CloudDrive is a block-based system, it has no real awareness of the files on your drive. Drastic changes would have to be made in order to allow for pinning at a file-based level. I believe there is already a request in the tracker for a similar feature, but I doubt we'll see any action on it any time soon. Christopher will have to share details on that, if he has them.

    In any case, you can disable the thumbnail wait entirely by simply changing the folder type to "General Items." This can be done on an entire drive or folder tree as well. Windows will no longer try to load or generate thumbnails if you do so.

  13. Volumes each have their own file system. Moving data between volumes will require the data to be reuploaded. Only moves within the same file system can be made without reuploading the data, because only the file system data needs to be modified to make such a change.
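    In terms of the OS primitives involved, the distinction looks roughly like this: a same-filesystem move is a rename, while a cross-filesystem move forces a physical copy--which, for a CloudDrive volume, means a download and reupload. This is a hedged sketch; the helper name is illustrative:

```python
import errno
import os
import shutil

def move(src: str, dst: str) -> None:
    """Move src to dst, the way a file manager does."""
    try:
        # Same file system: only directory metadata changes --
        # effectively instantaneous, no data rewritten.
        os.rename(src, dst)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        # Different file system (EXDEV): the data itself must be copied,
        # which for a CloudDrive volume means downloading and reuploading
        # every block the file occupies.
        shutil.copy2(src, dst)
        os.remove(src)
```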

  14. I'm not sure if there is any complication that I'm missing in what you're asking here, but, based on how I'm reading your question (and my read is that you are already using CloudDrive for your data), you should be able to simply detach the drive from the computer that it is presently mounted to and attach it to your remote server once CloudDrive and DrivePool are installed. Note that both applications are Windows only, and Windows Server can be expensive in a data center.

  15. 4 minutes ago, trillex said:

    The 5 drives that were "fine" are now unable to be mounted and my cache drive is completely filled to the brim (44k free). This could be due to some automation filling the earlier mounted drives, but I have never seen it be filled below the 5 GB limit.

    It sounds like we might be stumbling on some buggy cache code, between this and the previous notes from Chase about deleting the cache. Make sure you guys are submitting actual tickets and troubleshooter dumps as well, so Alex and Christopher can take a look at the related code and your logs.

  16. 1 hour ago, Chase said:

    In my understanding of the system the "cache" is just a local file of your pinned data and a COPY of the last #GB of data that was received or transferred between your machine and the cloud drives.

    The cache also includes information that has been modified but not yet uploaded. Everyone should be *very* careful before simply deleting the local cache: any modifications that have not yet been uploaded to the cloud provider will be permanently lost, and you could potentially corrupt your drive as a result. I believe that you are theoretically correct as long as everything in the cache has already been uploaded, but extreme caution should be used before following this example. Anyone who deletes their cache with data in the "to upload" state will (a) definitely lose that data for good and (b) potentially corrupt their drive, depending on what that data is (file system data, for example).

  17. 17 minutes ago, kird said:

    Sorry, this rule since i have been paying my last gdrive, approximately 3 or 4 months, already existed as it was commented in telegram groups at the time. I don't invent anything, mate.

    I'm sorry, but this seems incorrect. A Google search returns precisely zero results for that API error prior to this month. The earliest result that I can find was this result from a Japanese site 20 days ago.

    I don't know why you seem so resistant to the notion that this is a change in Google's API that will need to be addressed by all of us sooner or later, but the objective information available seems to suggest that this is the case.

    I am also not having errors at the moment (which might be related to the fact that my account is a legitimate account at a large institution, and not a personal GSuite Business account) but I anticipate that we will all, eventually, need to migrate to a structure conformant with this new policy.

    Note that this obscurity and suddenness is consistent with Google's API changes in the past as well. Nothing about this should be particularly surprising. It's just the sloppy way in which Google typically does business.

    EDIT: Note also that this API error code (numChildrenInNonRootLimitExceeded) was added to Google's API reference sometime between April and June.

  18. 4 hours ago, Chase said:

    I imagine that Stablebit is limited on engineers and they were blindsided with googles new rules.

    It should probably just be noted that Stablebit is precisely two people. To my knowledge, Alex has always been their only software developer.

  19. 9 hours ago, oceanmasterza said:

    i was considering purchasing this software however with this level of support I will stay away. Sort out your support guys!

     

    9 hours ago, kird said:

    I don't know what's going on with the official support, it was pretty seamless before, I don't know what circumstances may be going on right now.

     

    Note that the forums are not the official support channel. If you need support, submit a ticket to support at this link: https://stablebit.com/Contact
