Posts posted by srcrist

  1. If you want to contact Christopher, do so here: https://stablebit.com/Contact.

    12 hours ago, Historybuff said:

    If all that is needed is to create a private API key, why not just make a new, updated version of this guide? Create a few tests that users could perform to validate this is OK again.

    I'm not sure what "updated version of this guide" means, but plenty of us have been using personal keys for a long time. It works fine. We have our own quotas. You can see it in the Google API dashboard. This feature was added over a year ago precisely in anticipation of this very issue. 

    12 hours ago, Historybuff said:

    The internet is buzzing and people are starting to scare people away from this service.

    There is a subset of the userbase that is using consumer-grade cloud storage for terabytes' worth of data that they do not consider expendable. And, honestly, those people should stop using this or any other cloud service. I don't think they were ever the target market, and there is nothing CloudDrive can do to guarantee the service level these people are expecting on the providers that they're using. Google's SLA (https://gsuite.google.com/terms/sla.html) does not even provide for that sort of data integrity on Drive (they do not have an SLA for Drive at all), and I have no idea what anyone thinks CloudDrive can do over and above the SLA of the provider you choose to use. Just a personal opinion, there. 

  2. 7 hours ago, Bowsa said:

    Haven't submitted a ticket; I'm used to suffering data loss at least 2-5 times a year with StableBit CloudDrive. Nothing I can do about it.

    I mean...you could stop using it, if it loses your data...or at least contact support to see if you can get your issue fixed...

  3. On 11/22/2019 at 7:26 AM, zhup said:

    Hello,

    How does "automatic self-healing" work?

    Are only the cloud files repaired, or also the files on the HDDs? Does de-duplication have to be turned on?

    Thank you in advance.

     

    Self-healing refers to the structure of the drive on your storage, not your local files or even the files stored on the drive. It simply means that it will use the duplicated data to repair corrupt chunks when it finds them. 

     

    22 hours ago, Bowsa said:

    It doesn't work. I've still suffered data loss multiple times with this enabled...

     

    Have you submitted a ticket to support? I haven't heard of data loss issues in about 6 months, even among those who aren't using data duplication. The self-healing works, in any case. I've seen it show up in the logs once or twice. 
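    For the sake of illustration, here's a rough conceptual sketch of what chunk-level self-healing with duplicated data looks like. To be clear, this is not CloudDrive's actual code or chunk layout--just the generic verify-and-repair pattern that the feature describes, with hypothetical paths and checksums:

    ```python
    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path: Path) -> str:
        """Compute a checksum for a chunk file."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    def heal_chunk(primary: Path, duplicate: Path, expected: str) -> None:
        """If the primary copy of a chunk fails verification, restore it from the duplicate."""
        if sha256(primary) == expected:
            return  # primary copy is intact, nothing to do
        if sha256(duplicate) != expected:
            raise RuntimeError(f"Both copies of {primary.name} are corrupt")
        shutil.copyfile(duplicate, primary)  # repair the bad chunk from the good copy
    ```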

  4. 8 hours ago, msq said:

    @srcrist @RG9400 @ksnell
    Thank you guys for bringing it up!
    One question - are those individual keys for all configured Google Drives? This entry in %PROGRAMDATA%\StableBit CloudDrive\ProviderSettings.json is a global entry, so what will happen if I set up the keys for Google Account A while in CloudDrive I have configured Google Drive A, Google Account B, and so on?

    Based on the fact that it has worked so far with StableBit's keys, I would assume this should work, but has anybody tried that? The last thing I want is to mess up my CD setup... :|

    An API key is for an application. It's what the app uses to contact Google and make requests of their service. So your new API key will be used by the application to request access to all of your drives--regardless of the Google account that they are on. The API key isn't the authorization to use a given account. It's just the key that the application uses to request access to the data on whatever account you sign in with. 

    As an obvious example, StableBit's default key for CloudDrive was created on their Google account, but you were still using it to access your drives before changing it to your own key just now.

    When you set it up, you'll see that you still have to sign in and approve your app. It'll even give you a warning since, unlike with StableBit's own key, Google can't vouch for the app requesting access with your key. 

    4 hours ago, Edwin999 said:

    So far I can't access my previously created Google Drive of 8TB, but I can create a new drive of 10GB in the account I've created the API key in.

    This just isn't how an API key works. Are you sure you're logging in with the correct account for each drive once you've added the new key? You don't log in with the account you used to create the key. You still log in with whatever credentials you used to create each drive. 
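    If it helps to see the same pattern outside of CloudDrive, here's a minimal sketch using Google's own Python OAuth libraries. CloudDrive isn't written in Python and doesn't use this code--it's just the generic flow. The client secrets file (the "API key") only identifies the application; whose data you can access is determined entirely by which Google account you sign in with during the browser step. The file name below is a placeholder:

    ```python
    # pip install google-auth-oauthlib google-api-python-client
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    # The client secrets identify the *application* (your personal API key/project).
    flow = InstalledAppFlow.from_client_secrets_file(
        "client_secrets.json",  # placeholder path to the key you created
        scopes=["https://www.googleapis.com/auth/drive"],
    )

    # The browser sign-in determines *whose* Drive data the app can access.
    # Sign in with Account A and you get Account A's drives; sign in with
    # Account B and the same key grants access to Account B's drives instead.
    credentials = flow.run_local_server(port=0)

    drive = build("drive", "v3", credentials=credentials)
    about = drive.about().get(fields="user(emailAddress)").execute()
    print("Authorized as:", about["user"]["emailAddress"])
    ```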

  5. 3 hours ago, ksnell said:

     

    @srcrist Followed the wiki you posted to get my key, modified the JSON file, etc., but how do I know it's working (using my new key)?

     

    The bottom of the guide says: "The next connection that you make to Google Drive from StableBit CloudDrive's New Drive tab will use your new API keys." so does that mean I need to: detach my drive, delete the connection and completely re-setup and then it will pick up the new key?

    I'm actually not 100% sure on that one myself. I'm pretty sure that simply using "reauthorize" would work as well, but I think we'd need to ask Christopher or Alex to be sure. Sounds like RG9400 has tested it and seen that it works, though. So that's a good indication that you can just do that. 

    Worst case scenario, yeah, detaching the drive and establishing the Google Drive connection from scratch should use the new API key. 
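    If you just want a rough sanity check that your key actually made it into the config, something like the sketch below scans ProviderSettings.json for anything that looks like a Google OAuth client ID (they end in apps.googleusercontent.com) so you can compare it against the key you created. I'm not assuming specific field names, since those come from the wiki guide and can change between versions. Keep in mind this only confirms the key is in the file--the definitive check is still seeing request traffic show up under your key in the Google API dashboard.

    ```python
    import json
    import os

    # Default location per the wiki guide; adjust if your install differs.
    path = os.path.expandvars(r"%PROGRAMDATA%\StableBit CloudDrive\ProviderSettings.json")

    def find_client_ids(node):
        """Recursively collect any values that look like Google OAuth client IDs."""
        if isinstance(node, dict):
            for value in node.values():
                yield from find_client_ids(value)
        elif isinstance(node, list):
            for value in node:
                yield from find_client_ids(value)
        elif isinstance(node, str) and node.endswith("apps.googleusercontent.com"):
            yield node

    with open(path, "r", encoding="utf-8-sig") as f:
        settings = json.load(f)

    for client_id in find_client_ids(settings):
        print("Client ID found in config:", client_id)
    ```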

  6. If you want a response directly from the developer, you'll almost certainly get one much quicker by submitting the question to the contact form (https://stablebit.com/Contact). Christopher only checks the forums occasionally. They're mostly for user-to-user help and advice. You are, of course, welcome to express whatever concerns you have--I am just providing the solution to your problem.

    I agree that the experimental provider classification was sudden, but I don't really agree that it's vague. It explains exactly what the problem is. The last time there was a problem with Google, they posted an explanation here on the forums and they were roundly criticized for not posting a warning in the code itself (https://old.reddit.com/r/PleX/comments/by4yc7/stablebit_clouddrive_warning_do_not_use_unless/). So they did that this time, and they're also being criticized for it. C'est la vie, I suppose.

    An API key from Google is free, yes. Anyone who uses rClone already has to supply their own API key, as rClone does not have global keys like CloudDrive to begin with. 

     

    EDIT:

    @Christopher (Drashna) - Just tagging you so you get a notification about this request. 

  7. My read of this is that Google's API has a per-key query limit, StableBit is approaching that limit on their key, and their requests to have the limit raised have thus far gone unanswered. There is quite a bit of overreaction here, though, since you can simply use your own API key regardless (see: http://wiki.covecube.com/StableBit_CloudDrive_Q7941147). If you're using your own key, the limit will not affect you. The worst case scenario here is that everyone will simply have to provide their own personal API keys. You won't lose access to your data. Note that anyone can get an API key for Google Cloud, and the default limits on personal keys are more than enough for a personal CloudDrive installation. 

    I do think it is a rather frustrating move on Google's part not to simply raise the limits when a developer's application becomes successful, though.

    In any case, unless I am mistaken here, this isn't a major problem. It just removes some ease of use. 

  8. 2 hours ago, sfg said:

    Thank you for the reply! I would like to try this, but I don't see "ordered placement" in the SSD optimizer... am I just overlooking it?

    The bottom third or so of your screenshot contains all of the ordered placement settings under the SSD optimizer. 

  9. You'll have to submit a ticket and attach the troubleshooter. I really can't help you further other than to say that your problems are not widely experienced, and the behavior you're experiencing should not be expected. Beyond that, support will have to help you. I've never seen any sort of crashing or "not found element" errors, so I don't know what might be responsible. 

  10. Submit an official support ticket here: https://stablebit.com/Contact

    Then download and run the troubleshooter and attach the support ticket number. The troubleshooter is located here: http://wiki.covecube.com/StableBit_Troubleshooter

    I also wouldn't say that your reasoning in the last sentence is sound. rClone and CloudDrive interact with Google's APIs in very different ways and with very different frequencies. The performance or reliability of one does not suggest much about the expected performance or reliability of the other.

    In any case, read errors generally indicate a connection issue and crashes are not normal at all. CloudDrive is generally very stable software. So submit a ticket and let them look at it and go from there. 

  11. I'm just not sure that you're asking a question with an answer. That's probably why people are slow to respond. What you're asking here is sort of subjective by nature, and there isn't a "correct" answer to give you. It's your call how you want to structure your pool, and there aren't any notable performance differences between the options, as far as I can think of.

    My first thought is that it honestly seems like overkill and an inefficient use of your time as it stands already, and that I certainly wouldn't put any more time into even *more* pools with *additional* duplication. If I'm reading you correctly, all of the data is backed up to your unRAID server as it is. I'd probably just use a single cloud drive to provide 1x redundancy to the unRAID and call it a day. Nothing that you truly cannot afford to lose should ever be trusted to consumer-grade cloud storage in the first place, and I don't think any number of accounts or any pool structure is really going to improve that--at least not with the same provider and same service. If anything, it's probably just making it more likely that Google will shut down your (or everyone's) accounts for abuse.

    But, again, that isn't a "correct" answer, it's just a personal opinion. The six separate Google accounts and the concern about getting them shut down suggest to me that they are not all legitimate, paid accounts for which the terms of service are being abided. And that, honestly, is a far bigger risk to your data than any advice anyone can give you here can mitigate. Google nukes resold accounts and illegitimate .edu accounts fairly regularly, and nothing we can say here will save your data from that. My advice would be to get one legitimate, paid Google account and use it to back up your unRAID 1x. 

  12. 4 minutes ago, Rob64 said:

    Yeah, that's my bad, this was meant for the other section. It's a 36 TB pool on an external; it's 5 8TB HDDs and I need to do recovery, I guess. Do you know if DrivePool splits files across disks or does it just equally spread them?

    That depends entirely on your DrivePool configuration settings. DrivePool will try to fill all drives in the pool equally by default. So you'd have files evenly distributed between all drives. 

  13. That can be caused by issues with any file system filter, and even Explorer shell extensions. You'll need to troubleshoot the exact cause. One example: if you open a directory of media files, Windows, by default, wants to read certain information from those files for the file properties in Explorer. That data is not pinned; it actually has to read some of the file. I believe that functionality can be disabled if it's a problem. 

    Also, pinned data will prevent the file system data from having to be retrieved from your cloud provider, yes. But pinned data can also be discarded if your cache is too full, so keep an eye on that. 

    You can test the actual responsiveness of the file system by using a command prompt, which will not try to load any extension or filter data when browsing the file system.
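    If you want to put a number on it, a quick way to compare raw file system responsiveness against what you see in Explorer is to time a plain directory enumeration, which doesn't trigger shell extensions or property handlers. A minimal sketch (the path is just a placeholder for a folder on your CloudDrive volume):

    ```python
    import os
    import time

    folder = r"D:\Media"  # placeholder: any directory on the CloudDrive volume

    start = time.perf_counter()
    with os.scandir(folder) as entries:
        names = [entry.name for entry in entries]  # enumerate without reading file contents
    elapsed = time.perf_counter() - start

    print(f"Listed {len(names)} entries in {elapsed:.3f}s")
    ```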

  14. To be clear: These are physical hard drives? Was there a CloudDrive drive in the pool? This is the CloudDrive forum.

    RAW indicates, in any case, that your filesystem has been corrupted. You'll likely want to vacate the data from the drive asap with a recovery tool, format it, and copy the data back, if you can. DrivePool will recognize the data as long as the directory structure remains intact. 

  15. Yeah, in that case, you'd probably have to uninstall DrivePool, reboot, and then the drive should mount immediately after reinstalling. I'm not sure if there is any other way to accomplish it. You might be able to take the pool offline using Disk Management to get the same effect, though. You can, in any case, submit a feature request. 

  16. I'm not sure what, exactly, you're trying to do. What sort of tests require the entire pool to be removed? Depending on what you need, you could remove the mount point using Windows Disk Management, uninstall DrivePool temporarily, or simply target the underlying drives in the pool by providing them a mount point. 

  17. On 10/10/2019 at 3:02 PM, buddyboy101 said:

    To accomplish this, I believe I need to create a tiered pool - one subpool that contains the 3 physical drives, and one subpool that is the CloudDrive.  I would then set duplication for the master pool, so that the physical drive's files are replicated to the DrivePool.

    This is correct. 

     

    On 10/10/2019 at 3:02 PM, buddyboy101 said:

    My second question: I've read that you should not upload more than 750GB of files per day to Google Drive, or else you will get a 24-hour ban from the service. Is there any way I can ensure the initial (and subsequent) duplications to CloudDrive are spread out in 750GB increments per day? 

    It isn't so much that you should not, it's that you cannot. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70 Mbps. As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. It'll upload at the pace it can, and DrivePool will duplicate data as it's able.
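    The ~70 Mbps figure is just the 750GB-per-day limit converted into a sustained upload rate; a quick back-of-the-envelope check:

    ```python
    # 750 GB/day expressed as a sustained upload rate (decimal GB)
    gb_per_day = 750
    seconds_per_day = 24 * 60 * 60     # 86,400
    megabits_per_day = gb_per_day * 1000 * 8
    print(megabits_per_day / seconds_per_day)  # ~69.4 Mbps, so ~70 Mbps stays under the cap
    ```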

    On 10/10/2019 at 3:02 PM, buddyboy101 said:

    Will DrivePool update the duplicated file to reflect the new file name?  And if so, is it only changing the file name, rather than deleting it and re-duplicating the file entirely?  

    Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously. 

     

    On 10/10/2019 at 3:02 PM, buddyboy101 said:

    My last question is how DrivePool handles the failure/removal/replacement of a drive.  In other words, how are the files that reside on the drive re-populated in the event of the drive failing and being replaced?  I imagine they would somehow be pushed from the duplicated versions? 

    This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so. 

     

    On 10/10/2019 at 3:02 PM, buddyboy101 said:

    And would intentionally (or unintentionally) deleting a file on the physical disk result in the duplicate being deleted as well?  How does DrivePool know that the removal of a file due to drive failure or intentional removal of a disk should not prompt deletion of the corresponding duplicate? 

    DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, it will be deleted from both locations if your balancer settings and file placement settings are configured to do so. It will pass the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate it on the next duplication pass. Obviously, files mysteriously disappearing is a worrying sign worthy of further investigation and attention. 

     

    On 10/10/2019 at 6:19 PM, buddyboy101 said:

    Just thought of something else - does the local cache size matter if I'm not streaming from the cloud drive?  I'm only using the cloud drive to house duplicate files.  

    It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional mode cache set to something like 75% write, 25% read. 

    Note that DrivePool will also read-stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed, and you'll want some read cache available. 

    On 10/10/2019 at 6:19 PM, buddyboy101 said:

    And StableBit's materials state that clusters over 4K can lead to less than optimal performance. Because I need to store at least 28TB of files, I'm leaning toward the 8 KB cluster size that supports 32TB. Does this deviation from 4K lead to noticeable performance issues?

    This isn't really relevant for your use case. The size of the files you are considering for storage will not be meaningfully influenced by a larger cluster size. Use the size you need for the volume size you require. 

    Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, Chkdsk. So you'll want to keep it below that. Relatedly, note that you can partition a single CloudDrive into multiple sub-60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary. 
