Covecube Inc.
Gandi

"server is throttling downloads" and own api problem

Question

Dear all

 

I would like to ask a question. I often get this message while downloading:

[screenshot: throttling message]

I don't know what is causing it. I am far away from the 10TB download limit for Google Drive. I thought it might be an API problem, so I set up my own OAuth. The tutorial in the wiki is outdated, and I don't know if I did all the steps right...

I got the Client ID and the Client Secret, as seen here, and added them to the CloudDrive settings:

[screenshot: Client ID and Client Secret in the CloudDrive settings]

 

When I check the Google API dashboard, no API hits are shown. I set it up for rclone too, and there it shows that rclone makes API requests. I don't know where the problem is. It is a 10 Gbit connection, so bandwidth can't be the problem.

I hope someone can help me with this. I think the Drive API is not working properly.

 

Furthermore, I often get "internal error" on the drives; maybe that has nothing to do with this, though.

Can someone help me with that?

Thank you


13 answers to this question


Once you change the API key in the advanced settings, you will need to reauthorize the drive (or detach it and reattach it). It will not immediately switch to the new key until it recreates the API authentication. Did you do that?

Also, how many upload and download threads are you using? 11 download threads is likely just hitting Google's API limit, and that is probably why you're getting throttled, especially since it looks like you're also allowing around 10 upload threads at the same time. In my experience, anything more than 10-15 threads, counting both directions, will lead to exponential backoff and API throttling on Google's part. Try something like 8-10 download threads and 3-5 upload threads and see if that reduces the throttling or makes it disappear.
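For reference, "exponential backoff" just means the client waits a growing amount of time between retries after each throttled request. A rough sketch of that retry pattern (my own illustration, not CloudDrive's actual code; the `RuntimeError` stands in for a 403/429 rate-limit response):

```python
import random
import time

def with_backoff(request, max_retries=5, base=1.0):
    """Retry `request` with exponentially growing delays plus jitter,
    the pattern Google asks clients to use when it throttles API calls."""
    for attempt in range(max_retries):
        try:
            return request()
        except RuntimeError:  # stand-in for a 403/429 rate-limit response
            time.sleep(base * (2 ** attempt) + random.random() * base)
    return request()  # final attempt; let any error propagate
```

With more threads in flight, more requests get throttled, so more time is spent sleeping instead of transferring data, which is why raising the thread count past the limit makes things slower, not faster.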


Thank you, srcrist.

I did indeed detach and reattach all drives... But you might be right: I use 10 download and 20 upload threads per cloud drive. And I have multiple cloud drives. I did that since the speed with fewer is abysmal.

You are talking about an advanced setting? I opened the config with Notepad; do I need to switch something else on?

3 hours ago, Gandi said:

You are talking about an advanced setting? I opened the config with Notepad; do I need to switch something else on?

Any setting that is only available by editing the JSON is considered "advanced." See: https://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings

3 hours ago, Gandi said:

I use 10 download and 20 upload threads per cloud drive.

That is more, for one drive, than Google's API limits will allow for an entire account. You'll have to make adjustments. I'm not 100% sure of the exact number, but the limit is somewhere around 10-15 simultaneous API connections at any given time. If you exceed that, Google will start requesting exponential back-off and CloudDrive will comply. This will significantly impact performance, and you will see the throttling errors that you are seeing. Note that Google's limits apply to the entire Google user, so, if your drives are all stored on the same account, your total number of upload and download threads across all of the drives on that account cannot exceed the 15 or so that Google will permit you to have.

I use 10 down and 5 up, with the presumption that all 15 will rarely be used at once, and rarely run into throttling issues.

3 hours ago, Gandi said:

I did that since the speed with fewer is abysmal.

It should not be. My server is only on a 1gbps connection, not 10gbps like yours, and I can still *easily* hit over 500mbps with 10 download threads. 20 upload threads is pointless, since Google will only allow you to upload 750GB per day, which averages out to around 70mbps, a rate that even two upload threads can easily saturate.
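The 70mbps figure follows directly from the daily cap; a quick back-of-the-envelope check:

```python
# Google's 750 GB/day upload cap expressed as an average line rate.
GB_PER_DAY = 750
SECONDS_PER_DAY = 24 * 60 * 60

mbps = GB_PER_DAY * 1000 * 8 / SECONDS_PER_DAY  # decimal GB -> megabits
print(round(mbps, 1))  # -> 69.4
```

So anything past roughly 70mbps of sustained upload just hits the quota sooner; more upload threads cannot buy you more uploaded data per day.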

Ultimately, though, your numbers are simply excessive. You're quite a bit over the recommended values. Drop them so that you're at fewer than 15 threads per Google account and you'll stop getting those errors. Then we can take a look at your throughput issues.

 

Thank you, I will do that now and set the upload/download threads to a total of 15 across all drives.

But can you tell me why I don't see any API hits on my own API key?
 

[screenshot: Google API dashboard showing no requests]

41 minutes ago, Gandi said:

But can you tell me why I don't see any API hits on my own API key?

I mean, the short answer is that it's probably not using your API key. But using your own key isn't the solution to your real problem either, so I'm just sort of setting it aside. It's likely an issue with the way you've configured the key. If you don't see whatever custom name you created in the Google API dashboard when you go to authorize your drive, then you're still using the default StableBit keys. But figuring out exactly where the mistake is might require you to open a ticket with support.

Again, though, the standard API keys really do not need to be changed. They are not, in any case, causing the performance and API issues that you're seeing.

I changed the settings to what you told me, too. Even though I told it to use 5 download threads, it still goes up to 12... No idea why that happens. One thread is around 50 Mbit/s; with NetDrive 3 I get 50 MB/s. Same with rclone.

 

One of my drives is on another account (a different Google Drive user, in a different organization), and I get the same there. That account obviously can't use my own API key, since I only set it up in the other account, and as internal. But there is no setting for that at all, is there?

The speed issues are definitely from CloudDrive, since I get top speeds to Google with other software.

Maybe I have to replace it, which would be pretty sad. By the way, it is not media files; it is other data.

26 minutes ago, Gandi said:

Even though I told it to use 5 download threads, it still goes up to 12... No idea why that happens.

Sometimes the threads-in-use lags behind the actual total. As long as you aren't getting throttled anymore, you're fine.

26 minutes ago, Gandi said:

One thread is around 50 Mbit/s; with NetDrive 3 I get 50 MB/s. Same with rclone.

NetDrive and rclone are not appropriate comparisons. They are uploading/downloading several gigabytes of contiguous data at a time. The throughput of that sort of data stream will always measure higher. CloudDrive pulls smaller chunks of data as needed, and no data at all if it's using data stored in the local cache. You're not going to get a measured 50MB/s per thread with CloudDrive from Google. It just isn't going to happen. 50mbps for a single thread with a 20MB chunk size is about what I would expect, accounting for network and API overhead. That's about 500mbps with 10 download threads, or roughly 62.5MB/s in aggregate.
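The aggregate math is just per-thread megabits times thread count, divided by 8 to get megabytes (a trivial check of the numbers above):

```python
def aggregate_mb_per_s(threads, mbps_per_thread):
    """N parallel transfer threads at X megabits/s each, in megabytes/s."""
    return threads * mbps_per_thread / 8

print(aggregate_mb_per_s(1, 50))   # one thread -> 6.25 MB/s
print(aggregate_mb_per_s(10, 50))  # ten threads -> 62.5 MB/s
```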

A larger chunk size could raise that throughput, but Google's chunk size is capped because larger chunks create other problems.

26 minutes ago, Gandi said:

One of my drives is on another account (a different Google Drive user, in a different organization), and I get the same there.

You are hitting the expected and typical performance threshold already. Your aggregate thread count would already exceed the 50MB/s that you get from applications making larger contiguous reads. You just have to understand how the two applications differ in functionality. CloudDrive is not accessing a 10GB file. It is accessing X number of 20MB files simultaneously. That has performance implications for a single API thread. The aggregate performance should be similar.

26 minutes ago, Gandi said:

That account obviously can't use my own API key, since I only set it up in the other account, and as internal. But there is no setting for that at all, is there?

I am not 100% sure what this means. But, if I am interpreting correctly, I think you are confusing the API key's functionality. The API key has nothing to do with the account that you access with that key. That is: an API key created with any Google account can (at least in theory) be used to access any other account's data, as long as a user with the appropriate credentials authorizes said access. Which is to say that you can use the API key you create to authorize CloudDrive against any Google account to which you have access, as long as you sign in with the appropriate credentials when you authorize the drive. Note that Google does place limits on the number of accounts that an API key can access without submitting it to an approval process, but those limits are much higher than anyone using a key for genuine personal use would ever need to worry about. It's something like 100 accounts.
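The separation shows up directly in Google's standard OAuth 2.0 refresh flow: the client credentials identify the application, while the refresh token identifies the user who authorized it. A hedged sketch (the endpoint and field names are the standard OAuth 2.0 token request; all values here are placeholders):

```python
# Sketch of an OAuth 2.0 refresh-token request. The client_id/client_secret
# identify the *application* (the API key pair you created); the
# refresh_token identifies the *user* who authorized it. The same app
# credentials can be paired with tokens from different Google accounts.
def build_refresh_request(client_id, client_secret, refresh_token):
    return {
        "url": "https://oauth2.googleapis.com/token",
        "data": {
            "client_id": client_id,          # your app, from the API console
            "client_secret": client_secret,  # your app
            "refresh_token": refresh_token,  # the authorizing user's account
            "grant_type": "refresh_token",
        },
    }
```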

Note that, by default, you were already using an API key created on StableBit's Google account to access YOUR data.

26 minutes ago, Gandi said:

Maybe I have to replace it, which would be pretty sad. By the way, it is not media files; it is other data.

Sure. Ultimately, CloudDrive isn't really designed to provide the fastest possible throughput when storing data on Google or any other provider. If you genuinely need multiple 50MB/s simultaneous downloads, on a regular basis, this probably just isn't the tool for you.

The most efficient performance configuration for CloudDrive is to have one large drive per Google account, and give that one drive the maximum number of threads as discussed above, leaving you with that 60-ish MB/s aggregate performance.

Your present configuration is sort of self-defeating with respect to speed to begin with. You've got multiple drives all pulling chunks from the same provider account, and they all have to compete for API access and data throughput.

Well, let me start from the beginning:

I have 2 accounts, so 2 different Google Drives. In total, Account A has 2 drives and Account B has 1 drive. Since I have lost data multiple times due to corruption, I don't want one 256 TB drive. I set it to 100MB chunks to get better speeds, but it is still at 50 Mbit/s using a single thread. But OK, I see that it works differently.

About the API: I set it to internal, which means only people from Account A's organization can use that API, so Account B, which is in another organization, can't use it. But StableBit only offers a single API key in the settings. I might go back to the normal settings, though, if this wasn't the problem.

The accounts receive around 40GB of data once every 12 hours, and it has to be moved quickly; that is why the 750GB limit doesn't come into play and why I set high upload threads. Anyway, I have changed it to much lower values. The API limit shouldn't be hit anymore, since Account A now has a total of 12 threads and Account B has 5 in total across all drives. But there are still the yellow flashes beside downloads. I don't know why, but StableBit is just slow at uploading and downloading. I checked with other people who had the same issue; there it was said it's because the cache points to an HDD, but in my case it's an NVMe SSD...

I guess it is just the way it is designed. I will keep trying in the hope that I get better speeds.

In fact, I wanted fast reads for tiny files (10 to 20 MB); that is why I went with StableBit. I simply still had the lifetime license I bought a few years back and had dropped after 3 data losses due to Google outages (maybe you remember them, the RAW disk problem).

I will play around more and see if it gets better; if not, well, I will look for some other solution.

Thank you for your help anyway; I really appreciate it.

56 minutes ago, Gandi said:

I set it to 100MB chunks to get better speeds, but it is still at 50 Mbit/s using a single thread.

Is this a legacy drive created on the beta? I believe support for chunk sizes larger than 20MB was removed from the Google Drive provider in .450, unless that change was reverted and I am unaware. 20MB should be the maximum for any Google Drive volume created since. 50mbps might be a little slow if you are, in fact, using 100MB chunks, but you should double-check that.

56 minutes ago, Gandi said:

About the API: I set it to internal, which means only people from Account A's organization can use that API, so Account B, which is in another organization, can't use it.

Gotcha. Yeah, if you set additional limitations on the API key on Google's end, then you'd have to create a key without those restrictions.

56 minutes ago, Gandi said:

The API limit shouldn't be hit anymore, since Account A now has a total of 12 threads and Account B has 5 in total across all drives. But there are still the yellow flashes beside downloads.

And CloudDrive is the only application accessing that Google Drive space? Nothing else is using the API or accessing the data on your Google Drive? Those yellow arrows indicate that Google is throttling you, and that is handled on a per-user basis, so that's the angle to explore.

56 minutes ago, Gandi said:

I don't know why, but StableBit is just slow at uploading and downloading. I checked with other people who had the same issue; there it was said it's because the cache points to an HDD, but in my case it's an NVMe SSD...

I think you might just be running into a difference in expectation. I'm not sure how many users would consider 50mbps * X threads to be slow. I think that if most users hit 500mbps they would probably consider that to be relatively fast. So the only people who are saying, "yeah, my drive is slow," are the people whose drives are bottlenecked by I/O, rather than network throughput.

It really sounds like you were looking for CloudDrive to provide effectively enterprise-grade throughput on a consumer data storage service, and I don't think that was necessarily a design goal. You might be able to get the sorts of speeds you're looking for, somewhere in the hundreds of MB/s, from rclone if you configure it to operate on the data in parallel, so maybe look into that.

56 minutes ago, Gandi said:

Thank you for your help anyway; I really appreciate it.

Sure thing. Good luck finding something that works for you!

I thought I had set the chunk size to 100 MB; maybe I am wrong, since when I check it on the drive, it's 10 MB. That might explain my wrong expectations.

I use other tools for this space too, but they each have their own Google API key (one for NetDrive, one for rclone, separately).
 

My Google space is actually for my business, so I consider it enterprise :) Maybe CloudDrive isn't.

What I meant with 50 Mbit/s wasn't clear, I guess. The two drives always show only 1 thread, and the speed is between 30 and 50 Mbit/s. Since it's just one, that is the total speed I get:

[screenshot: a single download thread at 30-50 Mbit/s]

 

The one drive that often jumps over the set 5 threads never goes over 100 Mbit/s, even with more threads, as seen here:

[screenshot: multiple threads, still under 100 Mbit/s]
 

 

Those drives are on the same Account A and use 15 threads in total across all drives, upload and download combined.

That is why I say "it is slow". I read that StableBit uses alternating API keys, but when I switched rclone and NetDrive to my own API key, the speed increased massively; that is why I wanted to change it here too.

43 minutes ago, Gandi said:

I thought I had set the chunk size to 100 MB; maybe I am wrong, since when I check it on the drive, it's 10 MB. That might explain my wrong expectations.

10MB chunks may also impact your maximum throughput. The more chunks your data is stored in, the more per-chunk overhead is added whenever you access more data than a single chunk holds.
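To put rough numbers on that overhead (a simplified model of my own: each chunk touched costs roughly one API request plus its negotiation overhead):

```python
import math

def requests_for_read(read_mb, chunk_mb):
    """Approximate API requests for a contiguous read: one per chunk touched."""
    return math.ceil(read_mb / chunk_mb)

print(requests_for_read(100, 10))  # 10 MB chunks -> 10 requests
print(requests_for_read(100, 20))  # 20 MB chunks -> 5 requests
```

Halving the chunk count per read halves the per-request overhead paid for the same amount of data, which is why smaller chunks cost throughput.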

43 minutes ago, Gandi said:

I use other tools for this space too, but they each have their own Google API key (one for NetDrive, one for rclone, separately).

The API limits are per account, not per API key. So those tools will all share your allotment of API calls per user. It does not matter that they use different API keys to access your account.

43 minutes ago, Gandi said:

My Google space is actually for my business, so I consider it enterprise :) Maybe CloudDrive isn't.

That may be, but note that Google Drive is not an enterprise-grade cloud storage service, regardless of whether or not it's a business account. Google does offer such a service (Google Cloud Storage), but at a much higher price. This remains true regardless of what you consider their service to be. CloudDrive supports the GCS API as well, if you actually need that level of service. Google Drive, even on a business account, is intended to provide consumer-grade storage for individual users, not enterprise-grade storage for high data volume, high data rate, or high availability purposes.

See Google's own statement on the matter here: https://cloud.google.com/products/storage

43 minutes ago, Gandi said:

That is why I say "it is slow".

I think I understood what you were saying, actually. I don't really understand the use case that would necessitate what you're looking for, and I'm not entirely sure how you're testing the maximum throughput, but I do understand what you are seeing and what you think is a problem. 110mbps is pretty low relative to my experience, assuming your system is actually requesting enough data from the drive to make CloudDrive max out its throughput.

I am, for example, presently seeing these speeds:

[screenshot: srcrist's current transfer speeds]

And that is with a largely idle workload. I believe upload verification is the only load on the drive at the moment.

I don't, unfortunately, really know how to help you solve it, though. If your system is trying to pull a lot of data, and your I/O and prefetcher settings are configured reasonably, speeds to Google should certainly be faster than what you are experiencing. Maybe not 100MB/s, but better than 110mbps. Have you considered a peering issue between your system and Google?

To be clear: have you actually tested the performance in a real-world way, by reading large amounts of data off of the drive, as opposed to looking at the network metrics in the UI? What speed does Windows report, for example, if you copy a large file off of the drive that isn't stored in the cache?

And, on a related note, have you checked whether your data is being pulled from the cache? 40GB every 12 hours is not a ton of data. If your cache is larger than that, it's probably all being cached locally. If that is the case, CloudDrive won't need to actually download the data from your provider in order to serve it to your system.

Share this post


Link to post
Share on other sites
  • 0
I agree with you about the enterprise thing. But I don't think my problem lies there, since it works more than fine with other products. Let's focus on my speed issues.

Well, I can show you my settings:

[screenshot: I/O performance settings]

Maybe the prefetching is a bit too high? I actually played around with it and even turned it off. There was no change. The other drive on this account has 1 upload and 1 download thread but only does reads (at 50 Mbit/s on 1 thread).

I really don't know why I get such low speeds.

By the way, upload verification is off :)

6 hours ago, Gandi said:

Maybe the prefetching is a bit too high?

I actually see multiple problems here, related to prompt uploads and data rate. The first is that, yes, I'm not sure your prefetcher settings make much sense for the use case you've described. The second is that you've actually configured a delay on uploads, contrary to your objective of uploading data as fast as possible. But, maybe most importantly, you've also disabled the minimum download, which is the most important setting for increasing throughput.

So let's tackle that delay first. Your "upload threshold" setting should be lower. That setting says "start uploading when there is X amount of modified data, or when it has been Y minutes since the last upload." You've actually raised the values here from their lower defaults of 1MB and 5 minutes, so your drive will be more delayed than drives created with the default settings.

Your minimum download should be some reasonable amount of data that roughly corresponds to the amount of data you expect the system to need in short order whenever any amount of data is requested. So, for example, if you're dealing with files of 10-20MB, and you need the data asap, you'd probably want a minimum download of around 15MB so that any time the system makes a request of the drive, it pulls all of the data that might be quickly needed to the cache immediately.

Note that if you let it, CloudDrive will only pull the amount of data that your system requests for a particular read. That means, say, a few hundred KB at a time of a music file, a few dozen megabits of a high-quality video file, or the data for the first few pages of a document. Just like a hard drive. But, unlike a hard drive, CloudDrive has to renegotiate an API connection and request data every time your system requests any amount. You absolutely cannot let it do that. That is tremendous overhead, and very slow. You need to figure out what your actual data needs are, and force CloudDrive to pull that data asap when a request is made. I would suggest (and this is based on the still somewhat cryptic use case you've described thus far) 10 or 20MB for your minimum download.
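The sizing rule above can be sketched as a simple heuristic (my own illustration, not a CloudDrive setting name): pick a minimum download near the top of your typical file-size range, so one request pulls a whole file into the cache instead of renegotiating per small read:

```python
def suggest_minimum_download_mb(typical_file_sizes_mb):
    """Heuristic: the minimum download should cover a typical whole file,
    so one API request fetches everything the system is about to need."""
    return max(typical_file_sizes_mb)

print(suggest_minimum_download_mb([10, 12, 18, 20]))  # files of 10-20 MB -> 20
```

For a workload of 10-20MB files this lands in the 10-20MB range suggested above; for very mixed workloads a high percentile would be a less aggressive choice than the maximum.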

You'll notice that the recommended speed for your current minimum download is presently only 14mbps. Compare that to the estimated data rate for these settings:

[screenshot: estimated data rate with the suggested settings]

With respect to your prefetcher, I'd probably need to know a bit more about both your intended purpose for the drive and your actual drive settings to give you better advice. For now, suffice it to say that, based on your intention to load 10-20MB files as quickly as possible, the prefetcher probably doesn't need to be enabled at all, as long as your drive is structured correctly and the other settings are correct. If you want more help here, you're going to have to provide a more detailed account of what you're actually doing with the drive, and preferably all of the structural information about the drive, such as this:

[screenshot: the drive's structural information]

I'd also note that if you want to prioritize writes to the drive over read speed, you'd probably want to enable background I/O as well. Turning it off makes it easier for drive reads to slow the write process.

And, relatedly, another thing that isn't clear to me: since data is available from the CloudDrive drive as soon as it's added, what is the need for the rapid upload to the provider? That is, the data is already accessible, and the speed with which it is uploaded to the provider has no bearing on that. So why the need for many MB/s to move what effectively amounts to duplicate data off of your host? Just trying to understand your actual objective here.

And upload verification is not a relevant setting with respect to drive throughput, but, if data integrity is something you care about, I do suggest turning it on. It should effectively eliminate the potential for Google issues to ever cause data loss again, since every byte uploaded is verified to exist on Google's storage before CloudDrive will move on to new data and remove the uploaded data from the cache.
