
Posts posted by steffenmand

  1. The drive I am copying to is a 512 GB SSD where nothing else happens, a dedicated cache drive. Would a copy to the drive hit limitations and create issues with threads? The only way to get a faster drive would be to get a RAID of SSDs instead...

     

    I would rather have them throttle copies to the drive than make the drive unavailable while you copy.

  2. I'm confused about how prefetching works.

    Let's say I'm prefetching 80 MB with a 20 MB minimal download, so four chunks are in flight:

     

    #1

    #2

    #3

    #4

     

    When #1 is read, shouldn't it instantly start on #5? Or does it wait until #4 is read before going to #5, #6, #7, #8?

     

    Also, as mentioned, there is a huge issue with multiple files, as prefetching simply is not smart enough at the moment. One file can grab all the prefetching, while another is unable to read because it gets no prefetched data.

    Is this something that can be fixed? Or is it technically impossible due to your structure?

     

    Also seeing some issues where download threads are blocked if you are writing data to the drive. While copying 100 GB to the drive, I was pretty much unable to start any download threads, and uploads wouldn't start until the copy was done. Instead I just got some "is having trouble downloading data from Google Drive" errors.
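
    To illustrate what I would expect from the prefetcher, here is a small Python sketch of a sliding window. This is purely my own guess at the behaviour, not how CloudDrive actually works internally; the 20 MB chunk size and 80 MB window are just the example numbers from above.

    ```python
    from collections import deque

    CHUNK_MB = 20                           # assumed minimal download size
    WINDOW_MB = 80                          # assumed prefetch forward amount
    WINDOW_CHUNKS = WINDOW_MB // CHUNK_MB   # 4 chunks in flight at a time

    def sliding_prefetch(total_chunks):
        """Keep the window full: as soon as chunk #1 is read, start chunk #5."""
        in_flight = deque(range(1, WINDOW_CHUNKS + 1))   # chunks #1..#4
        next_chunk = WINDOW_CHUNKS + 1

        while in_flight:
            done = in_flight.popleft()        # the application read this chunk
            print(f"read chunk #{done}")
            if next_chunk <= total_chunks:    # immediately top the window back up
                in_flight.append(next_chunk)
                print(f"  -> start prefetching chunk #{next_chunk}")
                next_chunk += 1

    sliding_prefetch(total_chunks=8)
    ```

    The alternative, which is what it sometimes feels like it is doing, would be to wait until #1-#4 are all read before queueing #5-#8 as a new batch.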

  3. I have 15-ish up, so not terrible, not amazing. I would be happy to just have it work consistently and not start all over again!

     

    I've upgraded now and the drive looks OK, but I still seem to be getting the same "thread was aborted, 42 times" error. Will see if it settles down this evening.

     

    The error is not critical; it will just retry the thread afterwards, so you are not losing data :)

  4. However, if the drive was made in .463, I believe there are some features which won't work, such as increasing the minimal download size. This is because the drive was made on a version from before this feature, which required some changes to the chunks, became available.

     

    So if you want to utilize that, you would have to make a new drive and reupload.

     

    But just from upgrading you may get a lot of speed improvements anyway, which might be enough for you, depending on the bandwidth you have available :)

  5. I noticed that when trying to upload, a lot of threads could get stuck in the "ChecksumBlocksReadCopy" state, which would start using lots of CPU with no new threads running.

    It often ends up with "The thread was cancelled" errors. Can this state by any means end up in a deadlock?

     

    Also seeing a few issues where prefetching doesn't keep fetching new chunks, but instead only starts again after a while - could this be the same issue?

     

    Getting logs now and also taking a memory dump.

  6. Glad to hear that performance was greatly improved. If you look at the changelog, there are some significant changes.

    Well, get some logs if the performance has dropped that much. Hopefully, it's something we can address.

     

     

    As for the prefetching... change your values. :)

    Try to align the prefetch values with the minimum download sizes you're using (e.g. 20 MB for the trigger, 200 MB for the forward, and 120 seconds for the time window). See if that helps with latency.

     

    These are just "guesstimates", so you may need to fine tune.  But let me know what settings give you the best performance.

     

     

    But "per file" that's a lot more complicated. Especially when dealing with encrypted volumes (not our FDE, but with stuff like diskcryptor or bitlocker). Also, considering how NTFS stores data, ... I believe that it would essentially work the same anyhow. 

     

    Regarding the latency, it is not wait time on the actual download. It is the "Response times" listed in the Technical window, which I guess is the time it took to get a response from the Google API. When the chunks download, they finish in like 1/10 of a second. So this problem would be the same whether running 1 MB chunks or 100 MB chunks - downloading larger chunks of data would help with the latency here ;)

     

    But the issue with prefetching at the moment is that a single file can take all the bandwidth. So if I have a forward of 200 MB and one file is already at 180 MB, then it will only grab 20 MB of the new file (at least it seems so). Ideally you would want the program to prefetch just as much on the new file as on the first. Also, opening more files results in a lower prefetch per file, which means you would have to set higher values, which will start massive prefetching if you only run a single file.

     

    I.e. a 200 MB prefetch = 100 MB per file with 2 files open = 50 MB per file with 4 files open (but currently, due to the randomness, it could be: file #1: 100 MB, file #2: 40 MB, file #3: 40 MB and file #4: 20 MB).

     

     

    I know it is a hassle, but the program would work a lot better with prefetching working per file instead of on a combined total, so each file would get 200 MB in the above use case - there is a rough sketch of what I mean at the end of this post.

     

    I saw someone mentioning they used Plex - they won't have good success currently with 3 people streaming 3 different files, since prefetching is severely limited on each file, and some streams might lag because one file gets all the prefetching capacity.

     

    Luckily I just use mine for data and will open 2 files at most, but I can see a lot of use cases which could run into problems, and even with two files I quite often get issues with data not being available on the second file :) I have bandwidth and threads available; the prefetching limits were just reached before I started the new file, so nothing was left to prefetch the new one :)

     

    Oh, and by the way, while looking through the Technical window I also noticed that sometimes chunks can sit at 100% finished for 5-10 seconds before actually going to the "completed" stage - could this be a Google API delay as well? When this happens I just see "speed" counting down, as the chunk already finished and is at 100%. It is just waiting for the thread to move to completed.

     

    I will try to gather logs for you soon, just been busy :)
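
    To make the per-file idea concrete, here is a rough Python sketch of the difference between one shared prefetch budget and a per-file budget. The 200 MB value matches the example above; the split logic and file names are made-up assumptions, not how CloudDrive actually divides the forward amount.

    ```python
    PREFETCH_FORWARD_MB = 200   # example forward value from above

    def shared_budget(open_files):
        """Roughly the current behaviour: one combined budget split across all open files."""
        share = PREFETCH_FORWARD_MB // len(open_files)
        return {name: share for name in open_files}

    def per_file_budget(open_files):
        """What I am asking for: every open file gets the full forward amount."""
        return {name: PREFETCH_FORWARD_MB for name in open_files}

    files = ["file1.mkv", "file2.mkv", "file3.mkv", "file4.mkv"]
    print(shared_budget(files))    # 50 MB each, and in practice the split is uneven
    print(per_file_budget(files))  # 200 MB each, regardless of how many files are open
    ```

    With the per-file version, opening a second or third file would not starve the first one, and a single open file would not trigger massive prefetching just because the values were set high enough for several files.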

  7. Yes, prefetching is getting more data now, so that is pretty awesome! I also see prefetching dropping in size according to the minimal download size! Got 20 threads running all the time now at lower CPU usage! So some magic happened with this build :)

     

    BUT I'm seeing a massive increase in response times... sometimes above 20,000 ms for a single thread; in the early days I was sometimes down to 200 ms.

     

    Also still hoping for a prefetching system that runs per file instead of on a combined total. That would allow for a pretty much unlimited number of open files - as long as your bandwidth and available threads could handle it :)

  8. You are using an old version!

     

    http://dl.covecube.com/CloudDriveWindows/beta/download/?C=M;O=D

     

    Follow this URL for the newest updates! The newest is .597 - LOTS of speed improvements since then!

     

    You can increase the minimum download size while creating/attaching to get it to download larger pieces and get throttled a lot less! I still get throttled a lot, but 100 MB chunks will improve that!

     

    Remember to make a new drive with the new version, since stuff changed.

     

    You can find their changelog here:

     

    http://dl.covecube.com/CloudDriveWindows/beta/download/changes.txt

  9. But it does depend on the connection, right? If I can finish a 200 MB chunk in 3 seconds, that is not really that different from running 10 MB chunks on a 30 Mbit line.

     

    It all depends on speed, and as long as users are informed by a tooltip or the like, it should work.

     

     

    It will also make me require fewer threads and thus limit the CPU usage.
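
    A quick back-of-the-envelope calculation in Python to show what I mean (the chunk sizes and line speeds are just the example numbers from this thread, and HTTP/API overhead is ignored):

    ```python
    def chunk_seconds(chunk_mb, line_mbit):
        """Seconds to transfer one chunk at a given line speed, ignoring request overhead."""
        return (chunk_mb * 8) / line_mbit

    # 200 MB chunk on a connection doing ~533 Mbit -> about 3 seconds
    print(round(chunk_seconds(200, 533), 1))   # ~3.0
    # 10 MB chunk on a 30 Mbit line -> about 2.7 seconds
    print(round(chunk_seconds(10, 30), 1))     # ~2.7
    ```

    So the wall time per chunk is roughly the same; a fast line with big chunks waits about as long per request as a slow line with small chunks.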

  10. Christoffer, if you reintroduce larger chunks, is it possible to make the size a client-definable amount instead of forced values? (Perhaps through an advanced-mode toggle.)

     

    With 1 Gbit here, and even 10 Gbit being mentioned in a couple of years, I could see benefits in maybe 200 MB or 250 MB chunks. You could just enforce the minimal download to always be a tenth or more of the chunk size - a rough sketch of that rule is below.
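
    A tiny sketch of the rule I mean; the function and parameter names are made up, it is just the constraint, not CloudDrive's actual options:

    ```python
    def validate_minimal_download(chunk_mb, minimal_download_mb):
        """Enforce that the minimal download size is at least a tenth of the chunk size."""
        if minimal_download_mb < chunk_mb / 10:
            raise ValueError(
                f"minimal download must be >= {chunk_mb / 10:.0f} MB for {chunk_mb} MB chunks"
            )
        return minimal_download_mb

    validate_minimal_download(chunk_mb=200, minimal_download_mb=20)    # fine
    # validate_minimal_download(chunk_mb=250, minimal_download_mb=10)  # would raise
    ```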

  11. I just think it is because my chunks finish so fast. They finish in like 0.2 seconds, but the HTTP delay is 1-5 seconds :-)

     

    So I'm hoping for 100 MB or maybe even 200 MB chunks, as I previously was able to download 100 MB chunks in 1-2 seconds with way better performance :-)

     

    2-4 seconds would be decent for a chunk as well, so I'm even hoping for the 200 MB chunks - speeds will only get faster later on anyway :-)

  12. My bottleneck currently is only the HTTP delay between requests, which can sometimes be several seconds! So I look forward to the day they reintroduce the 100 MB chunks :-) With those I was at a constant 1000 Mbit, because the HTTP delays didn't cause issues :-)

     

    That, plus getting prefetching to prioritise each file equally (or giving each file its own prefetching limit), and then we are really getting somewhere!

  13. With my 1 Gbit I can currently utilize around 700 Mbit, because the 20 MB chunks finish so fast that the HTTP delays eat the rest of the speed :) So I'm hoping for the return of 100 MB chunks, so 1 Gbit users can really utilize the full speed.

     

    Try increasing the maximum number of download threads in the drive I/O settings - if it is stuck at 2 (the default), that might be the reason it is slow, due to the HTTP delays between each download. A rough back-of-the-envelope calculation is below.
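
    Rough numbers behind that claim, as a small Python sketch. The 0.2 s transfer time and the delay are just the figures I have been seeing, the thread counts are examples, and per-thread pipelining is simplified away:

    ```python
    def effective_mbit(chunk_mb, transfer_s, http_delay_s, threads):
        """Approximate throughput when every chunk also pays a fixed HTTP/API delay."""
        per_chunk_s = transfer_s + http_delay_s   # time one thread spends per chunk
        chunks_per_s = threads / per_chunk_s      # chunks completed per second overall
        return chunks_per_s * chunk_mb * 8        # Mbit/s

    # 20 MB chunks, 0.2 s transfer + ~2 s delay: 2 threads (default) vs 10 threads
    print(round(effective_mbit(20, 0.2, 2.0, threads=2)))    # ~145 Mbit
    print(round(effective_mbit(20, 0.2, 2.0, threads=10)))   # ~727 Mbit
    # 100 MB chunks hide the same delay much better (capped by the line in reality)
    print(round(effective_mbit(100, 1.0, 2.0, threads=10)))  # ~2667 Mbit
    ```

    With the default 2 threads, the HTTP delay alone caps you far below 1 Gbit; more threads or bigger chunks hide the delay, which matches the ~700 Mbit I am seeing with 20 MB chunks.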
