
slow file transfers to virtual drive


ffj

Question

Hi,

 

I had been using the .463 beta for quite some time & decided to update to the latest .723 beta.

After updating, my file transfers to the virtual drive were really slow & sometimes even caused the system to hang or crash.

System is running Windows Server 2012 Standard 64-bit.

 

I went back to the .463 beta for now.

 

Thanks


17 answers to this question

Recommended Posts


By crashed, do you mean BSOD'ed? 

If so, could you upload the crash dump?

 

Go to "C:\Windows", and look for "MEMORY.DMP".  Move it to the desktop. Right click on the file, open "Send to" and select "Compressed (zipped) folder". 

Upload the zipped file here: 

http://wiki.covecube.com/StableBit_CloudDrive_Submit_Files
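If you'd rather script the copy-and-zip step, here's a minimal Python sketch (the paths are the defaults assumed above; run it from an elevated prompt, since MEMORY.DMP is normally only readable by administrators):

```python
# Minimal sketch: copy MEMORY.DMP to the desktop and zip it there.
# Run elevated; the paths assume a default Windows install.
import os
import shutil
import zipfile

dump = r"C:\Windows\MEMORY.DMP"
desktop = os.path.join(os.path.expanduser("~"), "Desktop")
copied = os.path.join(desktop, "MEMORY.DMP")
zipped = os.path.join(desktop, "MEMORY.zip")

shutil.copy2(dump, copied)  # copy rather than move, to leave the original in place

with zipfile.ZipFile(zipped, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(copied, arcname="MEMORY.DMP")

print("Upload this file:", zipped)
```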


Hi,

 

The BSOD is gone; it probably had another cause. I'm still having problems with the transfer speeds, though. I only get ~20mb/sec on my server. Today I tried the latest beta on my desktop, which runs multiple SSDs. I set one as cache & only get ~120mb/sec. Normal transfer speeds are 1gb/sec+ on the desktop & ~120mb/sec on my server running HDDs.

 

I don't have these speed issues running .463.

 

Any idea?

 

Thanks


To clarify, there is a distinct difference between MB (megabytes) and mb (megabits).

 

120MB/s is a good speed from an HDD, but an average-to-slow speed from an SSD. However, it is about the FASTEST that you'll see over the network regardless of the underlying drive. This is because the max theoretical speed of gigabit networking is 125MB/s, not counting protocol overhead or other activity.
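The arithmetic behind that 125MB/s figure, for anyone who wants to check it (nothing CloudDrive-specific here):

```python
# Gigabit Ethernet moves 1000 megabits per second; a byte is 8 bits.
link_megabits_per_sec = 1000
max_megabytes_per_sec = link_megabits_per_sec / 8
print(max_megabytes_per_sec)  # 125.0 MB/s, before protocol overhead
```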

 

 

 

The cache is sitting on a "spinning" drive (HDD), correct? 

If so, is the drive being used for anything else?

 

And are you using Google Drive for the provider? 


Hi,

 

Sorry if I was unclear.

 

I get only 25MB/s transfer speeds to the virtual drive on my server running only HDDs. 

Since it crippled my speeds that much, I then tried it on my home computer, which runs only SSDs, to see if it was a problem on my end. But it also reduced the transfer speeds there, from the usual 1GB/s to only ~125MB/s.

 

This is not a network issue; I'm talking about moving files to the virtual HDD created by CloudDrive.

 

Again, I did not have these issues running .463, which I am back on now, & everything is fine.

 

 

 


> The cache is sitting on a "spinning" drive (HDD), correct? If so, is the drive being used for anything else?

On my server, yes: everything is on HDDs, and they aren't used for anything else.

> And are you using Google Drive for the provider?

Yes.

Any updates on the Google Drive optimization? It's pretty bad right now. I've been keeping an average of my upload speed and it's around 16-20mbs. I've tried 10 threads and 20 threads; they just sit there. Right now, for example, I have 10 threads: 3 are uploading and the rest (write threads) are just sitting there with durations over 1 min. Every 5-10 min it'll have a "burst" of speed where all of the chunks in the queue upload at once and I'll hit respectable speeds, but then it goes back to 10-15mbs, uploading one or two chunks at a time. Speed tests to various servers all around are good, so I know it's not my bandwidth.


I don't know what the issue is, how to fix it, or if it's even fixable. I upgraded to 749, rebooted the server, and launched CloudDrive. It was blazing even at 5 threads (200mbs+). After about 16 hours it dropped to 15-30mbs and held there (with the majority of the time closer to 15mbs). The ONLY weird thing I've noticed is that when I started with the reboot + upgrade, the response time in Provider I/O was around 400ms. At this stage it's up to 13,800ms (it has slowly increased the entire time)... which is ridiculous. I don't know if I'm maybe just trying to feed it too much data without a break (is that even a thing?). I mean, I'm generally never under 40GB "to upload."

 

edit: Yep, just for fun I rebooted my server and launched CloudDrive... 250-350mbs upload to Google and back to 400-500ms latency. What could cause that sort of degradation over a period of time?



 

 

What OS are you on? 

There is a hard limit to the number of open connections that can be maintained, IIRC. Also, WSE has some weird issues with network connectivity. So the OS you're using *may* be a factor.

 

Also, check the logs for throttling errors. Or send them my way. 

As for throttling, it could absolutely cause this issue: we respect any and all throttling requests from the provider, and use an exponential backoff if a specific retry timeout isn't given to us.
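For anyone curious what "exponential backoff" means in practice, here is a rough illustrative sketch. It is not CloudDrive's actual code; the upload_chunk callable and ThrottledError type are made up for the example:

```python
# Illustrative retry loop with exponential backoff and jitter.
# upload_chunk() and ThrottledError are hypothetical stand-ins.
import random
import time

class ThrottledError(Exception):
    def __init__(self, retry_after=None):
        super().__init__("provider asked us to slow down")
        self.retry_after = retry_after  # seconds, if the provider specified one

def upload_with_backoff(upload_chunk, max_retries=8, base_delay=1.0, max_delay=60.0):
    for attempt in range(max_retries):
        try:
            return upload_chunk()
        except ThrottledError as err:
            if err.retry_after is not None:
                delay = err.retry_after            # honor an explicit timeout
            else:
                delay = min(max_delay, base_delay * (2 ** attempt))
                delay *= random.uniform(0.5, 1.0)  # jitter to avoid retry bursts
            time.sleep(delay)
    raise RuntimeError("still throttled after retries")
```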


Server 2016 Datacenter. It seems like any time I reset to default settings, things work great again for a while. Does CloudDrive "go through" StableBit servers at all, or is the response time directly to Google's servers? It's just weird how one minute it's 400ms and a few hours later it's 15,000ms. But then if I reset the settings to default, it generally drops down to a couple thousand ms and transfers work well again.


Reading @wid_sbdp's post prompted me to check my response times, and I am seeing a similar pattern, but for read operations against GCD with v749 on Win10.

 

I have 6 read threads enabled, and it seems that CloudDrive is always reading 1MB at a time (even though I have prefetching set to 20MB, as these are all sequential media files). Response times for individual 1MB reads from GCD are all in the 10,000-15,000 millisecond range. As you can imagine, reading 6 MB every ~12 seconds does not give very good overall throughput. I am not seeing any API throttling (no turtles).
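Putting rough numbers on that, based on the figures above:

```python
# Effective throughput if 6 parallel 1MB reads each take ~12 seconds.
threads = 6
read_size_mb = 1
response_time_s = 12

mb_per_sec = threads * read_size_mb / response_time_s  # 0.5 MB/s
mbit_per_sec = mb_per_sec * 8                          # 4 Mbit/s
print(mb_per_sec, mbit_per_sec)  # ~0.5 MB/s, i.e. ~4 mbps of a 150 mbps link
```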

 

Is this a Google thing? Should CloudDrive be performing larger reads, or is this dictated by the GCD API? Are there settings I can use to change this behavior? I'd be perfectly happy to always have a full chunk read in a single operation if it gave me improved read performance. As it is, I am only able to use less than 10% of my 150mbps download link.


I've done it three times now: resetting CloudDrive to default settings, re-entering my drive key, etc. fixes it 100% of the time, until it "gets messy" again 8-10+ hours down the road. So I know it's not my connection (a straight reboot doesn't always fix the issue, and that clears any dormant connections on the network side). I know it's not Google, because outside of dropping to 0 connections for the 45 seconds it takes to start CloudDrive back up and continue uploading, Google has no idea I reset CloudDrive's settings. It's something to do with CloudDrive. I've let it run at 2 upload/download threads and it still eventually "bogs down" (much like it did on the memory side of things a few versions ago).

 

I'm not saying it's unusable; even at the slow end I'm still averaging 15-25mbs upload (though it does seem to cause some issues on the download side if someone starts watching something that's not in the cache), which, running 24/7, is still plenty of bandwidth to upload a good 400-500GB a day or more. But it's still kind of depressing when, right after a settings reset, I'm getting 200-300mbs using only 3 upload threads.


It may just be a fluke (I only have about 9 hours of runtime so far to back it up), but you might want to try adding exceptions to Windows Defender (or any other antivirus/antimalware program you use) for the CloudDrive processes. I noticed the Windows antimalware process was always running pretty high in CPU and figured maybe there was something to it, so I added all of the CloudDrive processes to the exception list, and added all of my "virtual cloud drives" to the folder exceptions, as well as my cache drive. Seeing as my initial downloads hit my main drive and are then moved over to their respective locations, I figured they're already being scanned there, so there's no point in wasting resources scanning a bunch of drives, especially when they're technically network drives.
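If you want to script the exclusions instead of clicking through the Defender UI, something like the sketch below works via PowerShell's Add-MpPreference. Note that the cache path, drive letter, and process name here are placeholders, not necessarily the real names on your system:

```python
# Sketch: add Windows Defender exclusions for CloudDrive via Add-MpPreference.
# Run from an elevated prompt. Paths and process names are placeholders.
import subprocess

exclusion_paths = ["D:\\CloudDriveCache", "G:\\"]   # cache drive + virtual cloud drive (assumed)
exclusion_procs = ["CloudDrive.Service.exe"]        # assumed service process name

for path in exclusion_paths:
    subprocess.run(
        ["powershell", "-Command", f"Add-MpPreference -ExclusionPath '{path}'"],
        check=True)

for proc in exclusion_procs:
    subprocess.run(
        ["powershell", "-Command", f"Add-MpPreference -ExclusionProcess '{proc}'"],
        check=True)
```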

 

Anyway, after 9 hours it's still chugging along at ~600ms latency (compared to probably 7-8,000ms or more normally by this point) and getting great speeds. Maybe there is some bad interaction between scanning and all of these I/O operations?

 

Unfortunately, it's hard to say whether that's what helped, because I also loaded up DrivePool at the same time and it's going through its balancing routine... it'll upload a few GB on one CloudDrive, then stop and upload a few GB on the next CloudDrive. It's also possible that the steady, non-excessive transfer to one drive at a time is what's making it play nice.


Thanks for posting this @wid_sbdp. I added my G:\ drive to the AV exceptions list, but I'll try adding the processes as well and report back.  I'm currently stuck at under 10mbps average download across 6 threads.

 

EDIT: I made the changes to the AV configuration without seeing any effect. Since I needed to reboot for Windows updates anyway, I took the opportunity to install CloudDrive build 753. It didn't help; in fact, it appears to be a bit worse. I'm currently reading a 20GB file using 6 threads and seeing about 8 mbps aggregate read throughput, with individual 1MB read transaction response times in the 6-12 second range.

Edited by pocomo

None of the data goes through any of our servers.  

 

The only data that goes to us is the licensing info, and that's it. 

 

Everything else is directly between you and the cloud provider, with no middle man (I mean, unless you or your ISP has something installed). 

 

 

 

As for the variance in speeds to and from Google Drive, this may be a provider-specific issue. Different times of day are going to have different loads on the various services on the internet. Additionally, "local congestion" may occur depending on the usage in your area.

 

 

 

Furthermore, we do respect throttling requests from all providers, and try to back off aggressively if they're returning a lot of throttling responses.
