Gandi

Posts posted by Gandi

  1. Hi there

    Sadly, I am running into problems again.
    Every time I want to create a cloud drive, no matter which provider, I get a message that it can't format the new drive:

    There was an error formatting a new cloud drive. You can try to format it manually from Disk Management.

    I see that CloudDrive creates a new drive, but it can never be formatted :( and therefore no new drive appears in Windows Explorer. It also doesn't assign the drive a drive letter (I assume because it's unformatted). Is there anything I can do? Like this the software is unusable, since I can't create drives anymore...

    Another problem I suddenly ran into is that every few hours I get this error on my drive:

     

    [screenshot of the error]

    Is there something I can do about that?

    Thank you for your help - I will try anything to solve this problem and make CloudDrive work again :(


     


  2. 1 hour ago, srcrist said:

    10MB chunks may also impact your maximum throughput. The more chunks your data is stored in, the more overhead those chunks will add to the data speed if you are accessing more than that amount of data.

    The API limits are for your account, not the API key. So they will all share your allotment of API calls per user. It does not matter that they use different API keys to access your account.

    That may be, but note that Google Drive is not, regardless of whether or not it's a business account, an enterprise-grade cloud storage service. Google does offer such a service (Google Cloud Storage), but at a much higher price. This remains true regardless of what you consider their service to be. CloudDrive supports GCS APIs as well, if you actually need that level of service. Google Drive, even on a business account, is intended to provide consumer-grade storage for individual users--not enterprise-grade storage for high data volume, high data rate, or high availability purposes.

    See Google's own statement on the matter here: https://cloud.google.com/products/storage

    I think I understood what you were saying, actually. I don't really understand the use case that would necessitate what you're looking for, and I'm not entirely sure how you're testing the maximum throughput, but I do understand what you are seeing and what you are thinking is a problem. 110mbps is pretty low relative to my experience, assuming your system is actually requesting the amounts of data from the drive that would lead CloudDrive to max out the throughput.

    I am, for example, presently seeing these speeds:

    [screenshot of current transfer speeds]

    And that is with a largely idle workload. I believe upload verification is the only load on the drive at the moment.

    I don't, unfortunately, really know how to help you solve it, though. If your system is trying to pull a lot of data, and your I/O settings and your prefetcher settings are configured reasonably, speeds to Google should certainly be faster than what you are experiencing. Maybe not 100MB/s, but better than 110mbps. Have you considered a peering issue between your system and Google itself?

    To be clear: have you actually tested the performance in a real-world way? Trying to read large amounts of data off of the drive, as opposed to looking at the network metrics in the UI?  What speed does Windows report, for example, if you try to copy a large file off of the drive that isn't stored in the cache?

    And, on a related note, have you tested to see if your data is being pulled from the cache? 40GB every 12 hours is not a ton of data. If your cache is larger than that, it's probably all being cached locally. If that is the case, CloudDrive won't need to actually download the data from your provider in order to provide it to your system.
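    One way to run the real-world test suggested above is to time a raw sequential read of a large file on the cloud drive and compute the rate yourself, rather than trusting the UI metrics. A minimal Python sketch (the path is hypothetical; pick a file bigger than the local cache so the data actually comes from the provider):

```python
import time

def measure_read_mbps(path, chunk_size=1024 * 1024):
    """Read a file sequentially; return (bytes_read, megabits_per_second)."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    # bytes -> megabits, divided by elapsed seconds
    return total, (total * 8) / (elapsed * 1_000_000)

# Hypothetical usage against a file on the CloudDrive volume:
# total, mbps = measure_read_mbps(r"X:\some-large-file.bin")
# print(f"read {total / 1e6:.0f} MB at {mbps:.0f} mbps")
```

    Comparing that number against the UI's network throughput also tells you whether reads are being served from the local cache.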

    I agree with you about the enterprise thing. But I don't think my problem lies there, since it works more than fine with other products. Let's focus on my speed issues.

    Well, I can show you my settings:

    [screenshot of my I/O and prefetch settings]

    Maybe the prefetching is a bit too high? I actually played around with it and even turned it off - there was no change. The other drive on this account has 1 up / 1 down but only reads (at 50mbit/s on 1 thread).

    I really don't know why I am getting such low speeds.

    Btw, upload verification is off :)

  3. 20 minutes ago, srcrist said:

    Is this a legacy drive created on the beta? I believe support for chunk sizes larger than 20MB was removed from the Google Drive provider in .450, unless that change was reverted and I am unaware. 20MB should be the maximum for any Google Drive drive created since. 50mbps might be a little slow if you are, in fact, using 100MB chunks--but you should double-check that.

    Gotcha. Yeah, if you set additional limitations on the API key on Google's end, then you'd have to create a key without those restrictions.

    And CloudDrive is the only application accessing that Google Drive space? Nothing else is using the API or accessing the data on your Google Drive? Those yellow arrows indicate that Google is throttling you--and that is handled on a user by user basis--so that's the angle to explore.

    I think you might just be running into a difference in expectation. I'm not sure how many users would consider 50mbps * X threads to be slow. I think that if most users hit 500mbps they would probably consider that to be relatively fast. So the only people who are saying, "yeah, my drive is slow," are the people whose drives are bottlenecked by I/O, rather than network throughput.

    It really sounds like you were looking for CloudDrive to provide effectively enterprise grade throughput on a consumer data storage service, and I don't think that was necessarily a design goal. You might be able to get the sorts of data speeds you're looking for, somewhere in the 100s of MB/s, from rClone if you configure it to operate on the data in parallel--so maybe look into that.

    Sure thing. Good luck finding something that works for you!

    I thought I set the chunk size to 100 MB -- maybe I am wrong, since when I check the size on the drive it's 10 MB. That might explain my wrong expectations.

    I use other tools for this space too - but they each have their own Google API key (one for NetDrive, one for rclone).
     

    My Google space is actually for my business, so I consider it enterprise :) Maybe CloudDrive isn't.

    What I meant by 50mbit/s wasn't clear, I guess. The 2 drives always show only 1 thread, and the speed is between 30 and 50 mbit/s. Since it's just one, that is all the speed I get in total:

    [screenshot] as seen here.

     

    The one drive that often jumps over the set 5 threads never goes over 100mbit/s, even with more threads, as seen here:

    [screenshot]
     

     

    Those drives are on the same Account A and use 15 threads in total across all drives, upload and download combined.

    That is why I say "it is slow". I read that StableBit uses alternating API keys, but when I switched rclone and/or NetDrive to their own API keys, the speed increased massively; that is why I wanted to change that here too.

  4. 57 minutes ago, srcrist said:

    Sometimes the threads-in-use lags behind the actual total. As long as you aren't getting throttled anymore, you're fine.

    Netdrive and rClone are not appropriate comparisons. They are uploading/downloading several gigabytes of contiguous data at a time. The throughput of that sort of data stream will always measure larger. CloudDrive pulls smaller chunks of data as needed, and no data at all if it's using data that is stored in the local cache. You're not going to get a measured 50MB/s per thread with CloudDrive from Google. It just isn't going to happen. 50mbps for a single thread with a 20MB chunk size is about what I would expect, accounting for network and API overhead. That's about 500mbps with 10 download threads--or roughly 62MB/s in the aggregate.

    A larger chunk size could raise that throughput, but Google's chunk size is limited because it creates other problems.

    You are hitting the expected and typical performance threshold already. Your aggregate thread count would already exceed the 50MB/s that you get from the applications making larger contiguous data reads. You just have to understand how the two applications differ in functionality. CloudDrive is not accessing a 10GB file. It is accessing X number of 20MB files simultaneously. That has performance implications for a single API thread. The aggregate performance should be similar.
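    The aggregate arithmetic above can be sketched in a couple of lines, using the figures quoted in this thread (the 50 mbps per-thread number is an estimate, not a guarantee):

```python
def aggregate_throughput(per_thread_mbps, threads):
    """Total megabits/s across threads, and the equivalent MB/s (1 MB = 8 Mbit)."""
    total_mbps = per_thread_mbps * threads
    return total_mbps, total_mbps / 8

total_mbps, mb_per_s = aggregate_throughput(50, 10)
print(total_mbps, mb_per_s)  # 500 62.5 -> i.e. ~500mbps, roughly 62 MB/s
```

    So ten 50 mbps threads already match the single-stream 50 MB/s figure seen from tools reading large contiguous files.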

    I am not 100% sure what this means. But, if I am interpreting correctly, I think you are confusing the API key's functionality. The API key has nothing to do with the account that you access with said key. That is: an API key created with any Google account can (at least in theory) be used to access any other account's data, as long as a user with the appropriate credentials authorizes said access. Which is to say: you can use the API key you create to authorize CloudDrive to access any Google account to which you have access, as long as you sign in with the appropriate credentials when you authorize the drive. Note that Google does place limits on the number of accounts that the API key can access without submitting it to an approval process, but those limits are much higher than anyone using a key for genuine personal use would ever need to worry about. It's something like 100 accounts.

    Note that, by default, you were already using an API key created on Stablebit's Google account to access YOUR data.

    Sure. Ultimately, CloudDrive isn't really designed to provide the fastest possible throughput with respect to storing data on Google or any other provider. If your needs are genuinely to have multiple 50MB/s simultaneous downloads at once, on a regular basis, this probably just isn't the tool for you.

    The most efficient performance configuration for CloudDrive is to have one large drive per Google account, and give that one drive the maximum number of threads as discussed above, leaving you with that 60ish MB/s aggregate performance.

    Your present configuration is sort of self-defeating with respect to speed to begin with. You've got multiple drives all pulling chunks from the same provider account, and they all have to compete for API access and data throughput.

    Well, let me start from the beginning:

    I have 2 accounts, so 2 different drives. In total I have: Account A: 2 drives; Account B: 1 drive. Since I lost data multiple times due to corruption, I don't want one 256 TB drive. I set it to 100MB chunks to get better speeds, but it is still at 50 mbit/s using a single thread. But OK, I see that it works differently.

    About the API: I set it to Internal, which means only people from Account A's organization can use that API, so Account B, which is in another organization, can't use it. But StableBit only offers one API key in the settings. I might go back to the default settings if this wasn't the problem.

    The accounts receive around 40GB of data once every 12 hours, and it has to be moved quickly - that is why the 750GB limit doesn't come into play and why I set high upload threads. Anyway, I changed it to much lower values. The API limit shouldn't be hit anymore, since Account A now has a total of 12 threads and Account B has 5 in total across all drives. But it still shows the yellow flashes beside Downloads. I don't know why, but StableBit is just slow at uploading and downloading. I checked with other people who had the same issue; there it was said it's because the cache points to an HDD - in my case it's an NVMe SSD...

    I guess that is just the way it is designed. I will keep trying in the hope that I get better speeds.

    In fact, I wanted fast reads for tiny files (10 to 20 MB); that is why I went with StableBit - I simply still had the lifetime license I bought a few years back and had dropped it after 3 data losses due to Google outages (maybe you remember them, the RAW disk problem).

    I will play around more and see if it gets better; if not, I will look for some other solution.

    Thank you for your help anyway - I really appreciate it.

  5. 11 minutes ago, srcrist said:

    I mean, the short answer is probably not using your API key. But using your own key also isn't the solution to your real problem, so I'm just sorta setting it aside. It's likely an issue with the way you've configured the key. If you don't see whatever custom name you created in the Google API dashboard when you go to authorize your drive, then you're still using the default Stablebit keys. But figuring out exactly where the mistake is might require you to open a ticket with support.

    Again, though, the standard API keys really do not need to be changed. They are not, in any case, causing the performance and API issues that you're seeing.

    I changed the settings to what you told me. Even though I told it to use 5 download threads, it still goes up to 12... no idea why that happens. 1 thread is around 50mbit/s; with NetDrive 3 I get 50 MB/s. Same with rclone.

     

    One of my drives is on another account (different Google Drive user, different organization); I get the same result there. That account obviously can't use my own API key, since I only set it up in the other account and as internal. But there is no setting for that at all - is there?

    The speed issues are definitely from CloudDrive, since with other software I get top speeds to Google.

    Maybe I have to replace it, which would be pretty sad though. Btw, it is not media files, it is other data.

  6. 22 minutes ago, srcrist said:

    Any setting that is only available by editing the JSON is considered "advanced." See: https://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings

    That is more, in one drive, than Google's entire API limits will allow per account. You'll have to make adjustments. I'm not 100% sure the exact number, but the limit is somewhere around 10-15 simultaneous API connections at any given time. If you exceed that, Google will start requesting exponential back-off and CloudDrive will comply. This will significantly impact performance, and you will see those throttling errors that you are seeing. Note that Google's limits are across the entire google user, so, if your drives are all stored on the same account, your total number of upload and download threads across all of the drives on that account cannot exceed the 15 or so that Google will permit you to have.

    I use 10 down and 5 up, with the presumption that all 15 will rarely be used at once, and rarely run into throttling issues.

    It should not be. My server is only on a 1gbps connection, not 10gbps like yours, and I can still *easily* hit over 500mbps with 10 download threads. 20 upload threads is pointless, since Google will only allow you to upload 750GB per day, which averages out to around 70mbps or so, a rate that even two upload threads can easily saturate.
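    The "around 70mbps" figure follows directly from the 750 GB/day cap; as a quick sanity check of the arithmetic:

```python
def daily_cap_to_mbps(gb_per_day):
    """Average sustained rate (megabits/s) implied by a daily transfer cap."""
    megabits = gb_per_day * 1000 * 8  # GB -> Mbit (decimal units)
    return megabits / 86_400          # seconds per day

print(round(daily_cap_to_mbps(750), 1))  # 69.4 -> i.e. "around 70mbps"
```

    Which is why more than a couple of upload threads to Google Drive buys nothing over a full day.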

    Ultimately, though, your numbers are simply excessive. You're quite a bit over the recommended values. Drop it so that you're at less than 15 threads per Google account and you'll stop getting those errors. Then we can take a look at your throughput issues.

     

    Thank you, I will do that now and set upload/download threads to a total of 15 across all drives.

    But can you tell me why I don't see any API hits on my own API key?
     

    [screenshot of the Google API dashboard]

  7. Thank you srcrist

    I did indeed detach and reattach all drives... But you might be right: I use 10 download and 20 upload threads per cloud drive, and I have multiple cloud drives. I did that because the speed with fewer is abysmal...

    You are talking about the advanced settings? I opened the config with Notepad -- do I need to switch something else on?

  8. Dear all

     

    I would like to ask a question. I often get this message when downloading:

    [screenshot of the download error]

    I don't know what is causing it. I am far away from Google Drive's 10TB download limit. I thought it might be an API problem, so I set up my own OAuth client. The tutorial in the wiki is outdated, and I don't know if I did all the steps right...

    I got the Client ID and the Client Secret, as seen here, and added them to the CloudDrive settings:

    [screenshot of the Client ID and Client Secret]

     

    When I check the Google API dashboard, no API hits are shown. I set it up for rclone too, and there it shows that rclone makes API requests. I don't know where the problem is. It is a 10gbit connection, so bandwidth can't be the problem.

    I hope someone can help me with this. I think the Drive API is not working properly.

     

    Furthermore, I often get "internal error" on the drives; maybe that has nothing to do with this, though.

    Can someone help me with that?

    Thank you
