Everything posted by srcrist

  1. If you want to contact Christopher, do so here: https://stablebit.com/Contact. I'm not sure what "updated version of this guide" means, but plenty of us have been using personal keys for a long time. It works fine, we have our own quotas, and you can see the usage in the Google API dashboard. This feature was added over a year ago precisely in anticipation of this issue. There is a subset of the userbase using consumer-grade cloud storage for terabytes' worth of data that they do not consider expendable, and, honestly, those people should stop using this or any other cloud service. I don't think they were ever the target market, and there is nothing CloudDrive can do to guarantee the service level those people expect from the providers they're using. Google's SLA (https://gsuite.google.com/terms/sla.html) does not even provide for that sort of data integrity on Drive (they do not have an SLA for Drive at all), and I have no idea what anyone thinks CloudDrive can do over and above the SLA of the provider you choose to use. Just a personal opinion, there.
  2. I mean...you could stop using it, if it loses your data...or at least contact support to see if you can get your issue fixed...
  3. Self-healing refers to the structure of the drive on your storage, not your local files or even the files stored on the drive. It simply means that it will use the duplicated data to repair corrupt chunks when it finds them. Have you submitted a ticket to support? I haven't heard of data loss issues in about six months, even among those who aren't using data duplication. The self-healing works, in any case; I've seen it show up in the logs once or twice.
  4. So an API key is for an application. It's what the app uses to contact Google and make requests of their service. Your new API key will be used by the application to request access to all of your drives, regardless of the Google account they are on. The API key isn't the authorization to use a given account; it's just the key the application uses to request access to the data on whatever account you sign in with.

     As an obvious example, Stablebit's default key for CloudDrive was created on their Google account, but you were still using it to access your drives right up until you swapped in your own key. When you set it up, you'll see that you still have to sign in and approve your app. It'll even give you a warning, since, unlike the stock CloudDrive key, Google can't vouch for the app requesting access with your key. (See the sketch below for how this separation works.)

     So this just isn't how an API key works. Are you sure you're logging in with the correct account for each drive once you've added the new key? You don't log in with the account you used to create the key; you still log in with whatever credentials you used to create each drive.
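     To make that separation concrete, here is a minimal sketch of the standard Google OAuth flow in Python, using Google's google-auth-oauthlib library. This is a generic illustration of how any application key works, not CloudDrive's actual code, and the file name and scope here are assumptions for the example:

         # Generic sketch of the OAuth flow an API key enables -- not CloudDrive's code.
         from google_auth_oauthlib.flow import InstalledAppFlow

         # client_secret.json identifies the APPLICATION (your new API key).
         # It says nothing about which Google account's data will be accessed.
         flow = InstalledAppFlow.from_client_secrets_file(
             "client_secret.json",  # hypothetical path to the key you downloaded
             scopes=["https://www.googleapis.com/auth/drive"],
         )

         # The USER still signs in with whatever account actually owns the drive.
         # The key only tells Google which app is asking for access.
         credentials = flow.run_local_server(port=0)

     The point is that the key and the sign-in are separate steps, which is why one key can serve drives on any number of accounts.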
  5. I'm actually not 100% sure on that one myself. I'm pretty sure that simply using "reauthorize" would work as well, but I think we'd need to ask Christopher or Alex to be sure. Sounds like RG9400 has tested it and seen that it works, though. So that's a good indication that you can just do that. Worst case scenario, yeah, detaching the drive and establishing the Google Drive connection from scratch should use the new API key.
  6. If you want a response directly from the developer, you'll almost certainly get one much quicker by submitting the question to the contact form (https://stablebit.com/Contact). Christopher only checks the forums occasionally; they're mostly for user-to-user help and advice. You are, of course, welcome to express whatever concerns you have--I am just providing the solution to your problem.

     I agree that the experimental provider classification was sudden, but I don't really agree that it's vague. It explains exactly what the problem is. The last time there was a problem with Google, they posted an explanation here on the forums and were roundly criticized for not posting a warning in the code itself (https://old.reddit.com/r/PleX/comments/by4yc7/stablebit_clouddrive_warning_do_not_use_unless/). So they did that this time, and they're also being criticized for it. C'est la vie, I suppose.

     An API key from Google is free, yes. Anyone who uses rClone already has to supply their own API key, as rClone does not have global keys like CloudDrive to begin with.

     EDIT: @Christopher (Drashna) - Just tagging you so you get a notification about this request.
  7. My read of this is that Google's API keys have a per-key query limit, CloudDrive's global key is approaching that limit, and Stablebit's requests to have the limit raised have thus far gone unanswered. There is quite a bit of overreaction here, though, since you can simply use your own API key regardless (see: http://wiki.covecube.com/StableBit_CloudDrive_Q7941147, and the rough sketch below). If you're using your own key, the limit will not affect you. The worst case scenario here is that everyone will simply have to provide their own personal API key; you won't lose access to your data. Note that anyone can get an API key for Google Cloud, and the default limits on personal keys are more than enough for a personal CloudDrive installation. I do think it's a rather frustrating move on Google's part not to simply raise the limits when an application on their platform becomes successful, though. In any case, unless I am mistaken here, this isn't a major problem. It just removes some ease of use.
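     The wiki article above has the authoritative steps, but the gist is that you create a key in the Google API console and then hand CloudDrive your client ID and secret in its provider settings file. Illustratively, the relevant piece of configuration would look something like the JSON below; the key names here are assumptions for the sake of the example, so follow the wiki for the actual file and fields:

         {
             "GoogleDrive": {
                 "ClientId": "your-client-id.apps.googleusercontent.com",
                 "ClientSecret": "your-client-secret"
             }
         }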
  8. The bottom third or so of your screenshot contains all of the ordered placement settings under the SSD optimizer.
  9. You'll have to submit a ticket and attach the troubleshooter. I really can't help you further other than to say that your problems are not widely experienced, and the behavior you're experiencing should not be expected. Beyond that, support will have to help you. I've never seen any sort of crashing or "not found element" errors, so I don't know what might be responsible.
  10. Submit an official support ticket here: https://stablebit.com/Contact. Then download and run the troubleshooter, giving it your support ticket number so the logs get attached to your case. The troubleshooter is located here: http://wiki.covecube.com/StableBit_Troubleshooter

     I also wouldn't say that your reasoning in the last sentence is sound. rClone and CloudDrive interact with Google's APIs in very different ways and with very different frequencies, so the performance or reliability of one does not suggest much about the expected performance or reliability of the other. In any case, read errors generally indicate a connection issue, and crashes are not normal at all; CloudDrive is generally very stable software. So submit a ticket, let them look at it, and go from there.
  11. The SSD balancer plugin can effectively accomplish some of that. You can find it here: https://stablebit.com/DrivePool/Plugins But because DrivePool is a filesystem-based solution, as opposed to a block-based solution like CloudDrive, it can't dynamically cache in quite the way CloudDrive can.
  12. I'm just not sure that you're asking a question with an answer. That's probably why people are slow to respond. What you're asking here is sort of subjective by nature, and there isn't a "correct" answer to give you. It's your call how you want to structure your pool, and there aren't any notable performance differences between the options, as far as I can think of.

     My first thought is that it honestly seems like overkill and an inefficient use of your time as it stands already, and that I certainly wouldn't put any more time into even *more* pools with *additional* duplication. If I'm reading you correctly, all of the data is backed up to your Unraid server as it is. I'd probably just use a single cloud drive to provide 1x redundancy to the Unraid and call it a day. Nothing that you truly cannot afford to lose should ever be trusted to consumer-grade cloud storage in the first place, and I don't think any number of accounts or any pool structure is really going to improve that--at least not with the same provider and same service. If anything, it's probably just making it more likely that Google will shut down your (or everyone's) accounts for abuse. But, again, that isn't a "correct" answer; it's just a personal opinion.

     The six separate Google accounts and the concern about getting them shut down suggest to me that they are not all legitimate, paid accounts for which the terms of service are being abided. And that, honestly, is a far bigger risk to your data than any advice anyone can give you here can mitigate. Google nukes resold accounts and illegitimate .edu accounts fairly regularly, and nothing we can say here will save your data from that. My advice would be to get one legitimate, paid Google account and use it to back up your Unraid 1x.
  13. Before checking anything else, 1165 had known bugs. Try the current release candidate (1178): https://stablebit.com/CloudDrive/Download If the problem persists after upgrading, submit a ticket: https://stablebit.com/Contact
  14. That depends entirely on your DrivePool configuration settings. DrivePool will try to fill all drives in the pool equally by default. So you'd have files evenly distributed between all drives.
  15. That can be caused by issues with any file system filter, and even Explorer shell extensions. You'll need to troubleshoot the exact cause. One example: if you open a directory of media files, Windows, by default, wants to read certain information from those files for the file properties in Explorer. That data is not pinned; it actually has to read part of each file. I believe that functionality can be disabled if it's a problem. Also, pinned data will prevent the file system data from having to be retrieved from your cloud provider, yes. But pinned data can be discarded if your cache gets too full, so keep an eye on that. You can test the actual responsiveness of the file system itself by browsing from a command prompt, which will not try to load any extension or filter data (see the quick timing sketch below).
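     If you want to put a number on it, here's a quick way to time a raw directory listing without involving Explorer at all. This is just a generic Python sketch; the path is a hypothetical stand-in for your own pool or CloudDrive mount:

         # Time a raw directory listing, bypassing Explorer's shell
         # extensions and property handlers entirely.
         import os
         import time

         PATH = r"X:\Media"  # hypothetical: your pool or CloudDrive path

         start = time.perf_counter()
         entries = list(os.scandir(PATH))
         elapsed = time.perf_counter() - start

         print(f"Listed {len(entries)} entries in {elapsed:.3f} s")

     If that comes back fast while Explorer hangs, the bottleneck is a shell extension or property handler, not the drive itself.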
  16. To be clear: These are physical hard drives? Was there a CloudDrive drive in the pool? This is the CloudDrive forum. RAW indicates, in any case, that your filesystem has been corrupted. You'll likely want to vacate the data from the drive asap with a recovery tool, format it, and copy the data back, if you can. DrivePool will recognize the data as long as the directory structure remains intact.
  17. How to unmount Pool

     Yeah, in that case, you'd probably have to uninstall DrivePool, reboot, and then the drive should mount immediately after reinstalling. I'm not sure there is any other way to accomplish it. You might be able to take the pool offline using Disk Management to get the same effect, though. And you can always submit a feature request.
  18. How to unmount Pool

     I'm not sure what, exactly, you're trying to do. What sort of tests require the entire pool to be removed? Depending on what you need, you could remove the mount point using Windows Disk Management, uninstall DrivePool temporarily, or simply target the underlying drives in the pool by assigning them their own mount points.
  19. I haven't seen the issue since 1165. When it's happening, run the troubleshooter and submit a ticket so they can look at it.
  20. You cannot, unfortunately. You will have to recreate the drive. Encryption has to be enabled when the drive is created.
  21. CloudDrive had some significant bugs in the last major release that I believe have eaten up development time. I'm sure that DrivePool development has not stopped, it just hasn't been in need of much development of late. It's been comparatively stable. You can monitor development status here: http://wiki.covecube.com/Development_Status
  22. This is correct. It isn't so much that you should not, it's that you can not. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70mbps (see the arithmetic at the end of this post). As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. It'll upload at the pace it can, and DrivePool will duplicate data as it's able.

     Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously.

     This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so.

     DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, it will be deleted from both locations if your balancer and file placement settings are configured to do so. It will pass the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate it on the next duplication pass. Obviously, files mysteriously disappearing is a worrying sign worthy of further investigation and attention.

     It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional mode cache set to something like 75% write, 25% read. Note that DrivePool will also read-stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed. So you'll want some read cache available.

     This isn't really relevant for your use case. The size of the files you are considering for storage will not be meaningfully influenced by a larger cluster size. Use the size you need for the volume size you require. Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, Chkdsk, so you'll want to stay below that. Relatedly, note that you can partition a single CloudDrive into multiple sub-60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary.
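     For reference, the 70mbps figure falls straight out of the 750GB/day cap. A quick back-of-the-envelope check (simple arithmetic, nothing CloudDrive-specific):

         # Back-of-the-envelope: what sustained rate hits 750 GB/day?
         cap_gb_per_day = 750
         seconds_per_day = 24 * 60 * 60        # 86,400

         gb_per_second = cap_gb_per_day / seconds_per_day
         mbps = gb_per_second * 8 * 1000       # GB/s -> megabits/s

         print(f"{mbps:.1f} mbps")             # ~69.4 mbps

     So a sustained ~69.4mbps is exactly the cap; throttling to around 70mbps keeps you at or just under it in practice, since you'll never sustain the full rate every second of the day.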
  23. I've been on 1174 for about a week without issue, and was on 1173 before that without issue as well. If you're experiencing crashes, definitely run the troubleshooter (located here http://wiki.covecube.com/StableBit_Troubleshooter) and submit an official ticket (here https://stablebit.com/Contact) so they can take a look at it.