
+1 for GoogleDrive for Work support


Reptile

Question

Google Drive for Work offers unlimited storage for about 40 dollars a month. Even regular Google Drive accounts could be pooled together with DrivePool, and nothing stops you from having multiple Google accounts, right?

Furthermore, Google offers amazing speed. I get around 220 Mbit/s; on gigabit fiber, Google allows syncing at up to 250 Mbit/s. It would be wonderful to have Google Drive support.

 

Fast, affordable, unlimited.

Is there a beta release supporting this provider already?

 

Yours sincerely 

 

Reptile

 

 

Edit:

Preliminary Google Drive support added. 

Download available here:

http://dl.covecube.com/CloudDriveWindows/beta/download/

Note: these are internal beta builds and may not be stable. Try at your own risk.

 

Google Drive support starts with build 1.0.0.408 (currently the recommended build for Google Drive).

Edited by Christopher (Drashna): Google Drive info

Recommended Posts


Yes, and IIRC, it's the next issue to address.  But I'll mention it to Alex to get it expedited. 

 

Thanks. As you know from the ticket I posted, without this fix my CloudDrive is pretty much unusable for my purposes.

I think @steffenmand's guess is most likely correct: CloudDrive thinks it is still getting 1-2 MB chunks, since any minimum download setting above Auto has the same chunk downloading over and over.

I hope Alex gets to this soon, as this seems to be one of the most-replied-to threads on this forum.

I appreciate your efforts so far; this is a pretty slick piece of software.



 

 

Well, the issue is very reproducible, so Alex shouldn't have any issues tracking it down and resolving it.

It's just a matter of getting to it. 

 

 

 

And yeah, it really looks like an issue with the chunk system: it still thinks it's working with small chunks and attempts to redownload the same bits over and over. The fix should be simple (which means it will probably end up being a super complex issue... because that's the way things tend to go).


I haven't tested it yet, but I am glad to see that the minimum download size issue has been resolved in version .593.

 

(* When minimum read size is enabled, avoid downloading the same range of data multiple times.)

 

I am testing now and it looks to be fixed! Very exciting!


Just for the fun of it, I tried opening several files at the same time.

 

It seems that if you are prefetching one file, a new file will often fail to load, since it won't start reading that file properly. If it does load, it gives errors, as it doesn't seem to get any priority in prefetching. Do other people experience this?

And what about prefetching: does it prefetch per file, or is there a total limit? A total limit will result in the first file taking all the bandwidth. :)

It also seems that the CloudDrive index chunks can suddenly activate prefetching and disable it for the files in use until prefetching finishes on those.

In the technical details it looks as if it only prefetches the first file and only does normal read operations on the other. Only a very few times do I see prefetching on both files.

 

.593

Windows 10 x64

20 MB chunks

20 MB minimum download

Prefetch:

5 MB trigger

1000 MB forward

600-second time window

 

As another note: I can see that when prefetching runs, it downloads the same chunks multiple times, because it starts downloading as soon as 1-2 MB is ready. With a minimum download size set, prefetching should not download a piece before it reaches the minimum download size.
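To illustrate the behaviour I'd expect, here is a rough Python sketch (illustrative only, nothing from CloudDrive's actual code): queue up prefetch ranges and only issue a provider request once at least the minimum download size is pending.

# Rough sketch (illustrative only): accumulate contiguous prefetch requests and
# only issue a provider download once at least `min_download` bytes are pending.

class PrefetchCoalescer:
    def __init__(self, min_download):
        self.min_download = min_download
        self.start = None  # start offset of the pending range
        self.end = None    # end offset (exclusive) of the pending range

    def request(self, offset, length):
        """Queue a prefetch request; return a (start, end) range to download
        once enough bytes are pending, otherwise None."""
        if self.start is None:
            self.start, self.end = offset, offset + length
        elif offset == self.end:
            # Contiguous with the pending range, so just extend it.
            self.end = offset + length
        else:
            # Non-contiguous jump: emit whatever is pending and start over.
            emitted = (self.start, self.end)
            self.start, self.end = offset, offset + length
            return emitted

        if self.end - self.start >= self.min_download:
            emitted = (self.start, self.end)
            self.start = self.end = None
            return emitted
        return None


# Example: forty 1 MB prefetch requests against a 20 MB minimum produce only
# two downloads instead of forty small ones.
coalescer = PrefetchCoalescer(min_download=20 * 1024 * 1024)
for i in range(40):
    r = coalescer.request(i * 1024 * 1024, 1024 * 1024)
    if r:
        print("download bytes %d-%d" % r)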


The multiple concurrent stream issue is one of the issues I was experiencing before and was hoping this update would resolve. :(

 

 

 

 

 

There is a definite issue with this latest release. I am getting great speeds with it, but it seems it can't handle doing more than one thing at a time.

 

The speeds are good now for a single file, but when you want to prefetch two files at the same time, it fails to give both the same priority.

In my opinion, prefetching should work per file, so that a 1 GB prefetch would mean 2 GB when loading two different files. Currently it seems to be a total, which means one file could use 980 MB and the other 20 MB.
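Something along these lines is what I have in mind (purely a Python sketch of the suggestion, not how CloudDrive actually works): give each open stream its own prefetch window instead of drawing from one shared total.

# Sketch of the per-file suggestion above (illustrative only): each stream gets
# its own prefetch window, so a second file can never be starved by the first.

from collections import defaultdict

class PerStreamPrefetcher:
    def __init__(self, window_bytes):
        self.window = window_bytes            # e.g. 1 GB of prefetch per stream
        self.in_flight = defaultdict(int)     # bytes currently prefetched per stream

    def can_prefetch(self, stream_id, nbytes):
        # A stream is limited only by its own window, not by what other
        # streams have already consumed.
        return self.in_flight[stream_id] + nbytes <= self.window

    def on_prefetched(self, stream_id, nbytes):
        self.in_flight[stream_id] += nbytes

    def on_consumed(self, stream_id, nbytes):
        self.in_flight[stream_id] = max(0, self.in_flight[stream_id] - nbytes)

With a single shared window, the first stream can eat 980 MB of a 1 GB budget and leave 20 MB for the second, which is exactly the behaviour described above.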



 

Agreed, I am seeing this issue (with multiple files) as well. I would say this newest build is a step in the right direction, but the prefetcher still needs better logic, and I still see chunks being downloaded over and over at certain points, though not nearly as often.


The problem with the prefetch logic knowing about files is that I do not think the driver has any concept of files, only the underlying blocks, so I don't see how that could be done.

 

But somehow they have to make sure that files are prioritized equally, so that no reads are blocked because the prefetch allowance was fully used by another file. There must be a solution for this.



 

Right. I'm not saying I disagree with you; it's just that, based on how the current architecture has been described, I'm not sure it is possible. A solution that might help, if all of your files are written sequentially, is to prioritize block ranges equally. For example, if you are trying to read chunks 1012-1085 and chunks 8753-9125 at the same time, those would be considered separate "files" and prioritized equally. It seems like a logic headache from a code perspective, though, and if your drive has random writes, or a file's chunks get updated outside of its main "chunk range", this algorithm would fall apart quickly.
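To make the idea concrete, here is a rough Python sketch of that heuristic (purely illustrative; the gap threshold is an invented parameter and this is not how CloudDrive is implemented): cluster outstanding chunk requests by proximity, treat each cluster as a "file", and serve the clusters round-robin.

# Rough sketch of the heuristic described above (illustrative only): group chunk
# requests into clusters by proximity and serve the clusters round-robin so no
# single range hogs the queue.

def cluster_chunks(chunks, max_gap=64):
    """Group sorted chunk numbers into clusters wherever the gap between
    consecutive chunks exceeds max_gap."""
    clusters = []
    for c in sorted(set(chunks)):
        if clusters and c - clusters[-1][-1] <= max_gap:
            clusters[-1].append(c)
        else:
            clusters.append([c])
    return clusters

def round_robin(clusters):
    """Yield one chunk from each cluster in turn, giving each 'file' equal priority."""
    queues = [list(c) for c in clusters]
    while any(queues):
        for q in queues:
            if q:
                yield q.pop(0)

# The two ranges from the example end up as two clusters and alternate:
reads = list(range(1012, 1086)) + list(range(8753, 9126))
for chunk in round_robin(cluster_chunks(reads)):
    pass  # issue the download for `chunk` here

As noted, random writes would scatter a file across many small clusters, and the grouping would quickly stop matching real files.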


Man, I have some horrible timing, apparently.

 

Sorry for the delay, but I "contracted" the stomach flu for a couple of days. Just in time for Alex to get to the partial read issue. And 2-3 pages of posts. :)

 

 

The multiple concurrent stream issue is one of the issues I was experiencing before and was hoping this update would resolve. :(

 

Okay, thanks for letting me know. I've reflagged the issue for Alex. 

https://stablebit.com/Admin/IssueAnalysis/26038

 

Additional logs on the newer version may help, but may not be necessary. 

 

 

Seems to work perfectly! Now we just need the bigger chunks back so I can avoid constant throttling. :-)

You could enforce a minimum download size of a tenth of the chunk size to ensure we don't hit the "download limit exceeded" errors.

So a 20 MB chunk = 2 MB minimum, and a 100 MB chunk = 10 MB minimum.

 

Glad to hear it! 

 

And yeah, it was exactly the issue you guys thought it was (at least, that's what Alex said, IIRC; I wasn't at 100% at the time).

 

 

Right. I'm not saying I disagree with you; it's just that, based on how the current architecture has been described, I'm not sure it is possible. A solution that might help, if all of your files are written sequentially, is to prioritize block ranges equally. For example, if you are trying to read chunks 1012-1085 and chunks 8753-9125 at the same time, those would be considered separate "files" and prioritized equally. It seems like a logic headache from a code perspective, though, and if your drive has random writes, or a file's chunks get updated outside of its main "chunk range", this algorithm would fall apart quickly.

 

Well, this entire thing is crazy complex, and Alex has been doing a lot to tweak and improve it. 

 

Stuff like read-modify-write, paging I/O, prioritization, etc., all has to deal with this. And it's very complicated.

 

I'd try to explain it better, but to be honest, I couldn't do it justice. A lot of it is very technical and very complex. 


Unless .594 reverted the changes in .593, the issue of downloading the same parts over and over still exists. It doesn't seem to be as bad, but it's still very annoying and a big waste of bandwidth. It doesn't download them 20 times in a row anymore, maybe 10 now, and it doesn't seem to happen all the time either.



Could you grab the logging from when this happens? 

 

http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection


Christopher, if you reintroduce larger chunks, is it possible to make the size definable by the client instead of using fixed values? (Perhaps through an advanced-mode toggle.)

 

With 1 Gbit here, and even 10 Gbit being mentioned within a couple of years, I could see benefits in maybe 200 MB or 250 MB chunks. You could just enforce a minimum download of a tenth or more of the chunk size.
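For illustration, the rule of thumb I'm suggesting would be as simple as this (a Python sketch, not an actual CloudDrive setting):

# Suggested rule of thumb (sketch only): derive the minimum download size from
# the chunk size so large chunks never degrade into lots of tiny ranged requests.

def min_download_mb(chunk_size_mb, fraction=10):
    return max(1, chunk_size_mb // fraction)

for chunk in (20, 100, 200, 250):
    print("%d MB chunk -> %d MB minimum download" % (chunk, min_download_mb(chunk)))
# 20 MB -> 2 MB, 100 MB -> 10 MB, 200 MB -> 20 MB, 250 MB -> 25 MB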



 

The best answer I have here is "it's complicated".

 

As you may know or remember, with the Google Drive limit on the number of queries, we have to keep the drive at a certain number of units.

 

Additionally, we can't have units that are TOO large, as this can (will) create too much latency for the drive. This is because Windows does weird things while waiting on disk I/O, and this isn't something we can fix (it's at a very, very deep level of the OS). So we have to keep the units small enough to prevent that issue from arising as well.

Basically, we don't want the entire system locking up every time there is a request to the drive (it's a very real possibility, and we've done a shitload of work to ensure that it doesn't happen).

 

 

So, really, this depends on what Alex is comfortable doing, based on his great deal of hands-on experience with this stuff.

 

 

 

And latency is just as important as speed (more so, actually), or as saturating your awesome, awesome connection.


But it does depend on the connection, right? If I can finish a 200 MB chunk in 3 seconds, that is not really that different from running 10 MB chunks on a 30 Mbit line.

 

It all depends on speed, and as long as users are informed by a tooltip or the like, it should work.

 

 

It will also let me use fewer threads and thus limit CPU usage.
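To put rough numbers on that comparison (a quick Python check that assumes the line is fully saturated and ignores request overhead):

# Back-of-the-envelope latency check for the comparison above.

def seconds_per_chunk(chunk_mb, line_mbit_per_s):
    return chunk_mb * 8 / line_mbit_per_s

print(round(seconds_per_chunk(200, 533), 1))  # ~3.0 s for a 200 MB chunk at ~533 Mbit/s
print(round(seconds_per_chunk(10, 30), 1))    # ~2.7 s for a 10 MB chunk on a 30 Mbit/s line

So the per-request latency really is in the same ballpark; the difference is only how much data each request moves.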



 

 

Yes, exactly. And that would be latency, specifically. 

 

As I said, "it's complicated". And for the most part, we're trying to fine-tune for "most users".

 

 

 

Perhaps there should also be a selectable (auto?) configuration based on upload/download speeds: suggested settings for an optimal setup.

 

This is something that we've discussed at length (with me usually being the one to bring it up).

 

At best, this would be very, very bandwidth-intensive and would have to be an extended test run before creating the drive. And it may not help in a lot of cases (e.g., the defaults may still be better).
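Purely hypothetically, such a pre-creation test could look something like this Python sketch (nothing like this exists in the product; the function names and thresholds are invented): time a few probe transfers against the provider and suggest the largest chunk size that stays under a latency target.

# Hypothetical sketch of an "auto" pre-creation test (invented names and
# thresholds, not an existing feature).

import time

def probe_throughput(transfer, sizes_mb=(1, 5, 10, 20)):
    """`transfer(size_mb)` performs a real upload or download of that size;
    return the average measured throughput in Mbit/s."""
    rates = []
    for size in sizes_mb:
        start = time.monotonic()
        transfer(size)
        elapsed = time.monotonic() - start
        rates.append(size * 8 / elapsed)
    return sum(rates) / len(rates)

def suggest_chunk_size_mb(mbit_per_s, target_latency_s=3.0, choices=(1, 10, 20, 100)):
    """Pick the largest chunk size that still completes within the latency target."""
    ok = [c for c in choices if c * 8 / mbit_per_s <= target_latency_s]
    return max(ok) if ok else min(choices)

Even then, the defaults may still be the better choice for most people.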
