Posts posted by steffenmand

  1. I'm using encryption, yes, so I imagine that requires some power.

     

    It is CloudDrive.Service.exe that goes crazy.

     

    Currently I'm copying a file from an old drive with prefetching disabled, which means it only does one read thread at a time. That gives me around 60% average CPU use from CloudDrive.Service.exe.

     

    If I enable prefetching, it sits at pretty much 100% all the time.

     

    .593

    Windows 10 x64

    Google Drive

    encrypted drive with passphrase

    20 MB chunks

    20 MB minimal download

     

    32 GB DDR4

    i7 Skylake quad-core (i7-6700)

    2 x 500 GB SSD

     

    Pretty much no other services are running on the server; CloudDrive is essentially the only thing installed.

  2. I'm currently using a quad-core i7 Skylake CPU in my server, but CloudDrive is eating 100% with fewer than 10 download threads, so I'm guessing my speed is severely limited by the lack of CPU power.

     

    Has anyone got a CPU that seems to handle everything nicely?

     

    I'm also hoping they can find ways to optimize performance later on, as this is really hard on the CPU, though I guess that is due to the encryption on the drive (see the benchmark sketch at the end of this post).

     

    The perfect scenario would be being able to utilize all 20 threads.

     

    Hopefully you other guys can give some tips on better systems :)

     

    Usage:

     

    20 MB chunks

    20 MB minimal download

    primarily large files, 20 GB -> 400 GB

    encrypted drive

     

    Due to the huge file sizes, I would like to get all the speed I can :)
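
    Since the question is whether the encryption alone can max out a core, here is a minimal sketch for ballparking single-core AES throughput, in Python with the cryptography package. AES-256-CTR is my assumption for illustration; I don't know CloudDrive's exact cipher or mode:

        import os, time
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def aes_throughput_mb_s(total_mb=512, block_mb=16):
            """Encrypt total_mb of data with AES-256-CTR and report MB/s."""
            key, nonce = os.urandom(32), os.urandom(16)
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            block = os.urandom(block_mb * 2**20)
            start = time.perf_counter()
            for _ in range(total_mb // block_mb):
                enc.update(block)
            return total_mb / (time.perf_counter() - start)

        print(f"~{aes_throughput_mb_s():.0f} MB/s on one core")

    If that number comes out far above your link speed (a gigabit line is only 125 MB/s), encryption alone probably isn't the bottleneck.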

  3. The problem with the prefetch logic knowing about files is that I do not think the driver has any concept of files, only the underlying blocks, so I don't see how that could be done.

     

    But somehow they have to make sure files are prioritized equally, so no reads get blocked because the prefetch capacity was fully used by another file. There must be a solution for this.

  4. The speeds are good now for a single file, but when you prefetch two files at the same time, it fails to give both the same priority.

     

    In my opinion, prefetching should work per file, so that a 1 GB prefetch forward would mean 2 GB when loading two different files. Currently it seems to be one shared total, which means one file could use 980 MB and the other only 20 MB (see the sketch below).
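
    To make the idea concrete, here is a minimal sketch of per-file prefetch accounting in Python. All the names are mine, not CloudDrive's internals, and it assumes the service could attribute reads to a file or stream at all (per the point above, the driver may only see blocks):

        from collections import defaultdict

        PREFETCH_FORWARD = 1000 * 2**20  # the configured 1000 MB "prefetch forward"

        class PerFilePrefetchBudget:
            """Track in-flight prefetch bytes per file instead of one global total."""

            def __init__(self, budget_per_file=PREFETCH_FORWARD):
                self.budget_per_file = budget_per_file
                self.in_flight = defaultdict(int)  # file id -> bytes being prefetched

            def can_prefetch(self, file_id, nbytes):
                # The global-total behaviour would instead compare
                # sum(self.in_flight.values()) against one shared budget,
                # letting one hot file starve every other open file.
                return self.in_flight[file_id] + nbytes <= self.budget_per_file

            def started(self, file_id, nbytes):
                self.in_flight[file_id] += nbytes

            def finished(self, file_id, nbytes):
                self.in_flight[file_id] -= nbytes

    With two files open, each would get its own 1000 MB allowance, so a second file could never be squeezed down to a 20 MB share.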

  5. Just for the fun of it, I tried opening several files at the same time.

     

    It seems that if you are prefetching one file, a new file will often fail to load, since it won't start reading that file properly. If it does load, it gives errors, as it doesn't seem to get any priority in prefetching - do other people experience this?

     

    And what about prefetching - does it prefetch per file, or is it a total limit? A total limit will result in the first file taking all the bandwidth :)

     

    It also seems the CloudDrive index chunks can suddenly activate prefetching and disable it for the files in use until prefetching finishes on those.

     

    In the technical details it looks as if it only prefetches the first file and only does normal read operations on the other. Only a very few times do I see prefetching on both files.

     

    .593

    Windows 10 x64

    20 MB chunks

    20 MB minimal download

     

    Prefetch:

    5 MB trigger

    1000 MB forward

    600 second time window

     

    As another note - I can see that when prefetching finishes, it will download the chunks multiple times, because it starts downloading as soon as 1-2 MB are ready. With a minimal download size set, prefetching should not request a piece before it reaches the minimal download size (roughly like the sketch below).
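
    A rough sketch of the batching I would expect, in Python with invented names (this is my guess at the intended behaviour, not how the service actually works):

        MINIMAL_DOWNLOAD = 20 * 2**20   # the 20 MB "minimal download size" setting

        def batch_prefetch(read_queue, minimal=MINIMAL_DOWNLOAD):
            """Yield (offset, length) download requests, holding back contiguous
            prefetch reads until they have grown to the minimal download size."""
            start = length = None
            for off, n in read_queue:            # sorted (offset, nbytes) pairs
                if start is None:
                    start, length = off, n
                elif off == start + length:      # contiguous: keep accumulating
                    length += n
                else:                            # gap: flush the pending run
                    yield (start, length)
                    start, length = off, n
                if length >= minimal:            # big enough, issue the request
                    yield (start, length)
                    start = length = None
            if start is not None:                # trailing remainder
                yield (start, length)

    That way a 20 MB minimal download actually produces 20 MB requests, instead of the same chunk being re-fetched for every 1-2 MB slice.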

  6. Seems to work perfectly! Now we just need the bigger chunks back so I can avoid constant throttling :-)

     

    You could enforce a minimum download size of a tenth of the chunk size to ensure we don't get the download limit exceeded.

     

    So 20 MB = 2 MB, 100 MB = 10 MB.

  7. Hi...

     

    Just wanted to ask if we could get an update on whether you have noted the issue with the minimal download size. Prefetching still thinks it is working with 1-2 MB chunks, so even though we download more, it only uses a small part of it and then downloads the same chunk again. That means we are downloading up to 20 times as much for the same data, instead of getting all the data in up to 20 times fewer requests.

  8. Google is limiting you for making too many API calls per second, not CloudDrive. There is no way for them to get around this. You could raise your chunk size so that you make fewer API calls per second. Google does not seem to care how much bandwidth you are pushing/pulling, just how many API calls you make. With larger chunks you will obviously make fewer API calls to upload/download the same amount of data.
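
    To put rough numbers on that: moving 1 GB takes about 52 requests with 20 MB chunks, but only about 11 with 100 MB chunks - roughly a 5x cut in API calls per second at the same bandwidth.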

     

    And that is why I'm hoping for the return of 100 MB chunks :-) 20 MB simply finishes too fast and causes throttling! They should just add a minimal download size to it so we don't get the download quota exceeded.

  9. Recently I have been getting terrible performance with Google Drive at times.

     

    Each chunk goes to 100% completion in less than a second and then sits waiting, while the duration time counts up.

     

    I can see in the server log that Google is throwing an internalError, which seems to cause all of this. Tonight it is pretty much constantly failing :-(

     

    I also saw that I was getting 4,000-5,000 ms response times while downloading chunks from Google - anyone know if they are having issues? It looks like huge delays on responses from Google's side.

  10. Uploaded my logs showing the same chunk being prefetched 20 times at the same time. Running the newest version (.539); the drive was made on .537.

     

    Maybe it still believes it is getting 1-2 MB parts?

     

    20 MB chunks

    20 MB minimal download

    20 TB drive

     

    I also noticed that prefetching kicks in during attach - shouldn't this be disabled until the drive has been attached and the data is readable?

     

    The current result is that we are downloading the same chunk over and over, leaving only 1 unique chunk downloaded for every 20 threads (see the deduplication sketch at the end of this post).

     

    Hopefully you will also notice in the logs how often I am throttled, and bring the 100 MB chunks back :P

     

    Besides this issue it seems to work, but of course the speed is nowhere near what it reports, due to the fact that it keeps downloading the same data :)
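
    On the client side, the obvious fix would be to deduplicate in-flight chunk requests. A minimal sketch in Python (the names and structure are mine, not the actual service code):

        import threading

        class ChunkFetcher:
            """Collapse concurrent downloads of the same chunk: 20 threads
            asking for chunk 42 should trigger exactly one request."""

            def __init__(self, download_fn):
                self.download_fn = download_fn   # e.g. a ranged GET against the provider
                self.lock = threading.Lock()
                self.pending = {}                # chunk id -> {"done": Event, "data": ...}

            def fetch(self, chunk_id):
                with self.lock:
                    entry = self.pending.get(chunk_id)
                    owner = entry is None
                    if owner:                    # first requester does the download
                        entry = {"done": threading.Event(), "data": None}
                        self.pending[chunk_id] = entry
                if owner:
                    entry["data"] = self.download_fn(chunk_id)
                    entry["done"].set()
                    with self.lock:
                        del self.pending[chunk_id]
                else:                            # everyone else waits for the result
                    entry["done"].wait()
                return entry["data"]

    With that in place, 20 threads requesting the same chunk would cost one API call instead of 20, which would also help with the throttling.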

  11. Downloaded the new update 1.0.0.537

     

    I have noticed that I am getting a lot of I/O errors.

     

    I have tried different settings, going from 20 download and upload threads originally down to the default of 2 threads each, and I am still getting the errors. I am also on a gigabit connection and only seeing 50 to 100 Mbps, when before I was easily doing 300 to 600 Mbps.

     

    https://drive.google.com/file/d/0B55xGiCB0c8ndEJnbzNRM19HbG8/view

     

    I have also uploaded the logs; hopefully that will help you guys out. The image shows the error occurring twice in this instance, but I have seen the errors go into the hundreds (300 to 400 times).

     

    Normal Settings:

     

    20 Download Threads

    20 Upload Threads

     

    Enable prefetching

    Prefetch trigger - 2 MB

    Prefetch Forward - 40 MB

    Prefetch Window - 3600 seconds

     

    Testing Settings (Still with errors)

     

    2 Download Threads

    2 Upload Threads

     

    Enable prefetching

    Prefetch trigger - 1 MB

    Prefetch Forward - 10 MB

    Prefetch Window - 30 seconds

     

    I never see this issue, and I get around 700 Mbit up and down with 20 MB chunks. With 100 MB chunks I can max my connection at around 1,000 Mbit.

     

    However, the prefetch bug means it is fetching the same chunks tons of times, so the speed isn't actually being put to use yet - but I'm sure that will be fixed soon :)

  12. That *may* be normal. Is this at the same time, or one after the other?

     

    If it's one after the other, the checksums may be failing and the chunk is being redownloaded, which would be 100% normal.

     

    Otherwise, enable logging and repro, please. 

    http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection

     

     

     

    I wouldn't recommend it; the older version doesn't support the correct partial chunk size, meaning that you may run into the download quota issue with the drive.

     

    If (or when) we re-add the 100 MB chunks, at the very least we'll have to limit the minimum partial chunk size based on the chunk size, so that we don't run into this issue.

     

     

     

    As you can see above, it works on the same chunk several times at the same time. I will do a log tomorrow.

  13. Found a temporary workaround for 100 MB chunks! Create a drive on an old version, then detach and mount it on a new version - then I can even get my 100 MB minimum download :-) Almost constant 1 Gbit speed! Now the prefetching just needs to stop downloading the same chunk 10 times in a row :-)
