
+1 for Google Drive for Work support


Reptile

Question

Google Drive for Work means unlimited storage for about 40 dollars a month. And even normal Google Drive accounts could be pooled together with DrivePool. And nothing stops you from having multiple Google accounts, right?

Furthermore, Google has amazing speed. I get around 220 Mbit/s. Yes, on gigabit fiber Google allows syncing at up to 250 Mbit/s. It would be wonderful to have Google Drive support.

 

Fast, affordable, unlimited.

Is there a beta release supporting this provider already?

 

Yours sincerely 

 

Reptile

 

 

Edit:

Preliminary Google Drive support added. 

Download available here:

http://dl.covecube.com/CloudDriveWindows/beta/download/

Note: these are internal beta builds and may not be stable. Try at your own risk.

 

Google Drive support starts with build 1.0.0.408 (currently the recommended build for Google Drive).

Edited by Christopher (Drashna)
Google Drive info

Recommended Posts


It's not a numbered build, so keep in mind that this is very much an "In flux build". 

 

 

 

I doubt that it fixes it, as it has nothing to do with prefetching

 

I do believe it is related in this case, but I'd have to ask Alex.

 

But the remaining prefetching issues that you guys have been seeing are something he was looking into.


Yes, prefetching is getting more data now, so that is pretty awesome! Also seeing prefetching dropping in size according to the minimal download size! Got 20 threads running all the time now at lower CPU usage! So some magic happened with this build :)

 

BUT I'm seeing a massive increase in response times... sometimes above 20,000 ms for a single thread; in the early days I was sometimes down to 200 ms.

 

Also still hoping for a prefetching system that runs per file instead of against a total. That would allow a pretty much unlimited number of files to be opened, as long as your bandwidth could handle it with the available threads :)


Glad to hear that performance was greatly improved.  If you look at the changelog, there are some significant changes. 

 

 

 


 

Well, get some logs if the performance has dropped that much. Hopefully, it's something we can address.

 

 

As for the prefetching... change your values. :)

Try to align the prefetch values with the minimum download sizes you're using (e.g., 20 MB for the trigger, 200 MB for the forward, and 120 seconds for the time window). See if that helps with latency.

 

These are just "guesstimates", so you may need to fine-tune. But let me know what settings give you the best performance.
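To make that concrete, here is a tiny sketch of the alignment idea; the parameter names are just placeholders for the UI settings, not CloudDrive internals:

```python
def prefetch_plan(trigger_mb, forward_mb, min_download_mb):
    """Check that the prefetch forward maps cleanly onto whole minimum-download chunks."""
    chunks = forward_mb // min_download_mb          # how many chunk requests the forward covers
    aligned = forward_mb % min_download_mb == 0     # a leftover would mean a partial request
    return chunks, aligned

# e.g. the guesstimate above: 20 MB trigger, 200 MB forward, 20 MB minimum download
print(prefetch_plan(trigger_mb=20, forward_mb=200, min_download_mb=20))  # (10, True)
```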

 

 

But "per file" that's a lot more complicated. Especially when dealing with encrypted volumes (not our FDE, but with stuff like diskcryptor or bitlocker). Also, considering how NTFS stores data, ... I believe that it would essentially work the same anyhow. 



Regarding the latency, it is not wait time for the actual download. It is the "Response times" listed in the Technical window, which I guess is the time it took to get a response from the Google API. When the chunks download, they finish in like 1/10 of a second. So this problem would be the same whether running 1 MB chunks or 100 MB chunks - downloading larger chunks of data would help with the latency here ;)
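As a rough back-of-the-envelope illustration of that point (the 1 second of API latency and the 1 Gbit line are example figures, not measurements of Google's API):

```python
def effective_throughput_mb_s(chunk_mb, api_latency_s=1.0, line_mbit=1000):
    """Per-request latency is fixed, so bigger chunks amortize it over more data."""
    transfer_s = chunk_mb * 8 / line_mbit           # time spent actually moving bytes
    return chunk_mb / (api_latency_s + transfer_s)  # MB delivered per wall-clock second

for size in (1, 20, 100):
    print(f"{size:>3} MB chunks -> {effective_throughput_mb_s(size):.1f} MB/s effective")
# 1 MB chunks are dominated by the request latency; 100 MB chunks mostly by the line speed.
```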

 

But the issue with prefetching at the moment is that a single file can take all the bandwidth. So if I have a forward of 200 MB and one file is already at 180 MB, then it will only grab 20 MB of the new file (at least it seems so). Ideally you would want the program to prefetch just as much on the new file as on the first. Also, opening more files now results in a lower prefetch per file, which means you would have to set higher values, which will start massive prefetching if you only run a single file.

 

I.e. a 200 MB prefetch = 100 MB per file with 2 files open = 50 MB per file with 4 files open (but currently, due to the randomness, it could be: file #1: 100 MB, file #2: 40 MB, file #3: 40 MB and file #4: 20 MB).

 

 

I know it is a hassle, but the program would work a lot better with prefetching working per file instead of as a combined total, so that each file gets 200 MB in the above use case (see the sketch below).
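A small sketch contrasting the two policies described above (not how CloudDrive actually allocates its prefetcher, just the arithmetic):

```python
def shared_total(total_mb, files):
    """Current behaviour as described: one prefetch budget split across all open files."""
    return {f: total_mb / len(files) for f in files}

def per_file(forward_mb, files):
    """Requested behaviour: every open file gets the full forward amount."""
    return {f: forward_mb for f in files}

files = ["file1", "file2", "file3", "file4"]
print(shared_total(200, files))  # 50 MB each at best; in practice the split is uneven
print(per_file(200, files))      # 200 MB each, i.e. 800 MB of total forward
```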

 

I saw someone mentioning they use Plex - they won't have good success currently with 3 people streaming 3 different files, because prefetching is severely limited on each file, and some streams might lag because one file gets all the prefetching capacity.

 

Luckily I just use mine for data and will open 2 files at most, but I can see a lot of use cases which could run into problems, and even with two files I quite often get issues with data not being available on the second file :) I have bandwidth and threads available; the prefetching limit was just reached before I started the new file, so nothing was left to prefetch the new one :)

 

Oh, and by the way: while looking through the Technical window, I also noticed that chunks can sometimes sit at 100% finished for 5-10 seconds before actually going to the "completed" stage - could this be a Google API delay as well? When this happens I just see "speed" counting down, as the chunk has already finished and is at 100%. It is just waiting for the thread to move to completed.

 

I will try to gather logs for you soon, just been busy :)


I noticed that when trying to upload, a lot of threads could get stuck in the "ChecksumBlocksReadCopy" state, which starts using lots of CPU with no new threads running.

It often ends up with "The thread was cancelled" errors. Can this state by any means end up in a deadlock?

 

Also seeing a few issues where prefetching doesn't keep getting new chunks, but instead only starts again after a while - could this be the same issue?

 

Getting logs now and also taking a memory dump.


I'm confused about how prefetching works.

Let's say I'm prefetching 80 MB with a 20 MB minimal download size:

 

#1

#2

#3

#4

 

When #1 is read, shouldn't it instantly start on #5? Or does it wait until #4 is read before going to #5, #6, #7, #8?
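For what it's worth, a strictly sliding window would behave like the first option; this is purely a hypothetical model of the question, not CloudDrive's actual prefetcher:

```python
from collections import deque

def sliding_prefetch(reads, window_chunks=4):
    """Yield (chunk_read, chunk_started): reading a chunk immediately frees a window slot."""
    in_flight = deque(range(1, window_chunks + 1))  # #1..#4 prefetched up front (80 MB / 20 MB)
    next_chunk = window_chunks + 1
    for read in reads:
        in_flight.remove(read)        # the read chunk leaves the window...
        in_flight.append(next_chunk)  # ...so the next one starts right away
        yield read, next_chunk
        next_chunk += 1

for read, started in sliding_prefetch([1, 2, 3, 4]):
    print(f"read #{read} -> start #{started}")   # read #1 -> start #5, read #2 -> #6, ...
```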

 

Also, as mentioned, there is a huge issue with multiple files, as prefetching simply is not smart enough at the moment. One file can grab all the prefetching, while another is unable to read because it gets no prefetched data.

Is this something that can be fixed, or is it technically impossible due to your structure?

 

Also seeing some issues where download threads are blocked if you are writing data to the drive. While copying 100 GB to the drive, I was pretty much unable to start any download threads while it was copying, and upload wouldn't start until the copy was done. Instead I just get some "is having trouble downloading data from Google Drive" errors.


 


 

This has always been the case for me. However, it is because the HDD where my cache is hosted gets absolutely slammed, with super high queue lengths. I even see the exact same error you describe; it is a daily occurrence for me. But it has nothing to do with CloudDrive, it's a hardware limitation.

 

Is your HDD, or even SSD, too busy when this happens?


The drive I am copying to is a 512 GB SSD where nothing else happens, a dedicated cache drive. Would a copy to the drive hit limitations and create issues with threads? The only way to get a faster drive would be a RAID of SSDs instead....

 

I would rather they throttle copies to the drive than make the drive unavailable while you copy.


Thought I would give an update on .605 and the issues I still have:

 

#1 As mentioned above, when copying to the drive (dedicated 512 GB SSD, with nothing else running) the drive gets locked to running no or close to no threads. An option to throttle copies to the drive would be great, so we can limit the copy speed depending on our cache drive and still have an available drive in the meantime.

 

#2 When prefetching, it sometimes seems to stop in the middle and then resume. Example: 80 MB prefetch with 20 MB chunks, #1, #2, #3, #4. Initially it seems that as it reads #1 it starts on #5; however, it can suddenly reach #9 and "forget" to start on #13, waiting until #12 is read before starting on #13. This causes issues with data sometimes not being available and requires me to load my data package all over. I could imagine that if it were a video, it would cause lag.

 

#3 Opening multiple files has huge issues. Prefetching is currently set up to run against a total MB amount, which means that the first file could in theory have prefetched the total amount, leaving any new files unable to prefetch forward. To me the ideal solution would be to make the system work with a prefetch value per file (as you can identify files in the drive, you should know which chunks relate to which file), so that each file would prefetch the value set. So an 80 MB prefetch would be 160 MB with 2 files running, 240 MB with 3, and so on. For me at least, the issue is not the bandwidth, as the prefetch runs something like 1 thread at a time once it reaches its max, so there should be plenty of bandwidth available for most people (if not, it is because they are saturating their line anyway, which is not the fault of CloudDrive).

 

#4 CONSTANT throttling from Google due to userRateLimitExceeded (too many API calls, I guess). High-speed connections need way bigger chunks. I could imagine that some of these throttles are making some of the above issues worse, as they cause wait times for threads because you put a delay on the retry (see the backoff sketch after this list). The download time for a 20 MB vs a 100 MB chunk really doesn't differ much for me on a 1 Gbit line (previously, when we had 100 MB chunks, Google gave me around 600 Mbit download on a thread, making it possible to utilize my entire 1 Gbit line without throttling), so bigger chunks wouldn't increase the latency - all the latency I currently have is the API request time from Google, which is the same whether it is 1 MB or 100 MB chunks.

 

#5 You are not able to see your attached drives in the provider overview. Running with multiple accounts, it would be great to see which accounts I have drives mounted from (perhaps marked with a green color?) - you currently show attached drives in other places; marking the currently attached ones here would also be nice :)

 

#6 Had the issue with chunks being marked as success at 0% (see the last page). I don't know if it was fixed, but if not then it might still be an issue! Nothing would get uploaded in the meantime, and it required a reboot to fix.

 

#7 Chunks can sometimes sit stuck at 100% in technical details for 5-10 seconds before moving to the completed stage.

 

#8 I have seen threads being stuck in the "ChecksumBlocksReadCopy" state, using CPU and blocking new threads.

 

These are the current issues I am seeing, but as you know I will most likely find more :D But only in the spirit of making the application better :)
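On the throttling in #4: the generic client-side remedy for userRateLimitExceeded is exponential backoff with jitter on the retry. This is only a sketch of that pattern - the exception name is hypothetical, and it is not what CloudDrive does internally:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 403 userRateLimitExceeded response from the Google Drive API."""

def call_with_backoff(request, max_retries=6):
    """Retry a throttled call, roughly doubling the wait each time and adding jitter."""
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimitError:
            time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s, ... plus up to 1s jitter
    raise RuntimeError("still rate limited after retries")
```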


For #1, I use Ultracopier when I am doing large copies, to throttle the copy and keep the cache drive from getting overloaded. You could try that.

 

For #3, I think I noted earlier when you mentioned it, and Chris later confirmed, that CloudDrive has no concept of files, just raw data blocks, so I doubt per-file prefetching is possible.



 

I have used Ultracopier, but I hate having a program running as a service. I like to keep my server running with as little software installed as possible :) But of course, for now that is a workaround.

 

Regarding the files: he mentioned it was complicated, but never said it couldn't be done. The system must somehow know which chunks belong to which file - otherwise it wouldn't know where to begin and where to stop. My guess is that they keep info on this in the first chunks on the drive. If prefetching on the first file is possible, then they must have gotten the starting location from somewhere - when another file is opened they should have a second location and should be able to run them individually.


This has always been the case for me. However, it is because the HDD where my cache is hosted gets absolutely slammed, with super high queue lengths. I even see the exact same error you describe; it is a daily occurrence for me. But it has nothing to do with CloudDrive, it's a hardware limitation.

 

Is your HDD, or even SSD, too busy when this happens?

 

If you're seeing a consistently high queue length on the hard drive for the cache, then yes, the disk is definitely being overworked.

 

Generally, you don't want to see a queue length of more than one per platter for extended periods of time.
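If you want to check that yourself, one way (ordinary Windows tooling, nothing CloudDrive-specific) is to sample the PerfMon disk queue counter:

```python
import subprocess

# Take 5 one-second samples of the current disk queue length across all physical disks.
counter = r"\PhysicalDisk(_Total)\Current Disk Queue Length"
out = subprocess.run(["typeperf", counter, "-sc", "5"],
                     capture_output=True, text=True, check=True)
print(out.stdout)  # sustained values well above ~1 per platter suggest the cache disk is the bottleneck
```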

 

 

An SSD, a high-RPM drive, or even a RAID array may be a good choice for a cache drive (also, a fast drive/SSD will increase write speed to the cloud drive, but will fill up the cache faster).

 

Also, Windows tends to do weird things when there are I/O issues/delays.

 

The drive I am copying to is a 512 GB SSD where nothing else happens, a dedicated cache drive. Would a copy to the drive hit limitations and create issues with threads? The only way to get a faster drive would be a RAID of SSDs instead....

 

I would rather they throttle copies to the drive than make the drive unavailable while you copy.

 

If the drive fills up all the way, yes, it can cause issues. 

 

That said, if you have a dedicated drive, try using the "proportional" cache setting. You can set the max size to the SSD size (or however much you want), and set a percentage for read cache and for upload/write cache.  

 

This may help prevent the issues you're seeing.
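For illustration, the proportional split is just a percentage of the cache budget; the 75/25 figures below are made-up example values, not a recommendation:

```python
def proportional_cache(max_cache_gb, read_pct):
    """Split one fixed cache budget between read cache and upload/write cache."""
    read_gb = max_cache_gb * read_pct / 100
    return read_gb, max_cache_gb - read_gb

print(proportional_cache(512, read_pct=75))  # (384.0, 128.0) on a 512 GB cache SSD
```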

 

 

 

 

For the default caching method (expandable), yes.  Check the caching article. It outlines EXACTLY what happens when the drive gets full: 

http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/

 

But basically, if the upload cache has caused the system to dump the rest of the cache, then downloads *may* suffer (it should still work, but files won't remain cached for long, I believe).

 


 

Ugh, make me go through all of these issues.... :P

 

  1. Is this happening consistently, or under certain circumstances? 

     

  2. Definitely not normal.  Have you grabbed logs yet? 

    Though, the prefetching may not grab files linearly. And have you tried tweaking things? (The .605 build changes a lot with prefetching, so it may be worth experimenting some more.)

     

  3. Try reducing the prefetching settings then. Not by too much, but not by a tiny amount either. 

    Try 5 MB for the trigger and 40 MB for the forward, and see if that works better. 

     

  4. To be blunt, at this point this is more of an edge case; a lot of people don't have the gloriously high speeds that you are blessed with! 

    That said, I'll pass this along to Alex again, to see if we can help optimize this for you. 

     

  5. Noted, and I think we already have a request for this, but I've added a sticky and will mention it to Alex in our next meeting.

     

    Though, green sounds good. ... and thinking about it, a "detach" button here would be a good idea too. 

    https://stablebit.com/Admin/IssueAnalysis/27059

     

  6. Could you download this: 

    http://www.telerik.com/fiddler/fiddlercap

    Enable the "decrypt HTTPS" option (it will want to install a certificate, allow it to do so, you can remove it once you're done... let me know if you need help doing so. 

     

    Enable logging on StableBit CloudDrive, and start a capture with FiddlerCap.  Repro the behavior, stop logging, and then upload the logs.

     

    This may give some very detailed logs of what is going on.

     

Regarding the files: he mentioned it was complicated, but never said it couldn't be done. The system must somehow know which chunks belong to which file - otherwise it wouldn't know where to begin and where to stop. My guess is that they keep info on this in the first chunks on the drive. If prefetching on the first file is possible, then they must have gotten the starting location from somewhere - when another file is opened they should have a second location and should be able to run them individually.

 

 

Well, the software does parse some of the information - stuff like the MFT, directory entries, etc. But remember that any additional parsing adds more resource usage and latency. Indexing all of the files on the drive may be excessive....

 

That, and it wouldn't be a small change to the software.


#1 happens as soon as I copy maybe 100 GB, while the cache drive still has plenty of room.

 

#2 I will try to log it the next time I experience this.

 

#3 I doubt this low a prefetch will work, but I will try.

It seems to work a lot better! I can now get 3 files running at the same time with .606. I think larger chunks will be the final fix to get data down faster for me, but besides that everything is pretty much perfect! Awesome work!!

 

#4 I'm sad that it seems you won't prioritize optimizing for high speeds, when it won't affect users with lower speeds. Remember that in Europe it is normal to have 100 Mbit+, and over the next few years speeds will get higher and higher for everyone, making increased chunk sizes more important. I'm also pretty sure the throttling might be making some of the other issues worse, due to the extra wait time for a chunk. 100 MB chunks also showed perfect results earlier; back then we just didn't have the minimal download size and therefore got issues with too many downloads of the same chunk - that issue is gone now.

 

#5 Sounds great

 

#6 Will try that when/if I encounter the issue again.

 

 

Hopefully you don't get annoyed by my requests :D - I love CloudDrive, just trying to make it better :)


I don't know if it is possible, but it would be nice if you somehow could cache the header/file attribute data of files locally, so Plex could index nice and fast. Loading some videos takes time, as does the constant load of checking that the info is the same and that the file still exists.

Just tried the drive with Plex and everything runs fine, but indexing is slow, and you get lag while entering a title because Plex tries to read the header/file attributes of each file and asks the OS to confirm the file exists.

If it is possible, it would be a nice feature to be able to enable caching of the headers and/or file attributes of each file :-)

 

Besides helping in Plex, I also believe it would make folders load faster while browsing in Windows, as Windows uses the headers as well - especially with lots of files.

 

Perhaps using the GetFileAttributesEx function in the WinAPI
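For reference, a minimal ctypes sketch of that WinAPI call (the path is hypothetical, and this is not CloudDrive code) - it shows the kind of per-file metadata Plex/Explorer ask the filesystem for:

```python
import ctypes
from ctypes import wintypes  # Windows-only

class WIN32_FILE_ATTRIBUTE_DATA(ctypes.Structure):
    """Attributes, timestamps and size - the metadata queried per file."""
    _fields_ = [("dwFileAttributes", wintypes.DWORD),
                ("ftCreationTime", wintypes.FILETIME),
                ("ftLastAccessTime", wintypes.FILETIME),
                ("ftLastWriteTime", wintypes.FILETIME),
                ("nFileSizeHigh", wintypes.DWORD),
                ("nFileSizeLow", wintypes.DWORD)]

GetFileExInfoStandard = 0  # info level that fills WIN32_FILE_ATTRIBUTE_DATA

def file_metadata(path):
    data = WIN32_FILE_ATTRIBUTE_DATA()
    if not ctypes.windll.kernel32.GetFileAttributesExW(path, GetFileExInfoStandard,
                                                       ctypes.byref(data)):
        raise ctypes.WinError()
    return data

info = file_metadata(r"X:\Media\example.mkv")  # hypothetical file on a CloudDrive volume
print(hex(info.dwFileAttributes), (info.nFileSizeHigh << 32) | info.nFileSizeLow)
```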

 

 

#8 I have seen threads being stuck in the "ChecksumBlocksReadCopy" state, using CPU and blocking new threads.

 
I see this issue happening if you have upload pending, threads running for upload, and you start prefetching. It will then block both upload and download, and threads will sit stuck in the "ChecksumBlocksReadCopy" state until the prefetch timeframe runs out.


 

I'm also using Plex so I'd be happy if this was something that could be implemented.


 


 

 

Was this (the header/file attribute caching above) possible? :)


 

I don't know if it is possible, but it would be nice if you somehow could cache the header/file attribute data of files locally, so Plex could index nice and fast. Loading some videos takes time, as does the constant load of checking that the info is the same and that the file still exists.

 

 

 

You mean like pinning the $MFT records? AKA "Filesystem metadata"?

 

Also, a larger cache and frequent indexing may keep the relevant files/blocks stored in the cache (it is an adaptive cache, after all).

 

 

Come on, just a quick yes/no and make a man happy :)

 

If you haven't guessed by now, I'm hesitant to give answers if I'm not absolutely certain about the answer.  

 

Also, I think there was a feature request for this... If not,  I'll make one. 



 

Possibly the MFT records, yeah - I'm not sure if that is what Windows uses to check whether a file exists. As mentioned before, Plex does this for all video files when trying to access the media in the web player or a media player = massive load time per title.

 

The file headers would also be great, as you have to load data from all files when entering a folder. Having a folder with thousands of images means a massive load, which seems to just give up at some point, showing only data for some of the files.

 

Letting both these types of data get cached (by user choice) would be an awesome feature which could speed things up in several places!

 

If you could add it as a request, then Alex can look into whether it is possible and whether it is worth doing versus the time it takes.



 

Seconded. I know exactly what @steffenmand is talking about, and it can get quite annoying, though not unbearable.
