Posts posted by raidz

  1. When I detached a drive last week, I thought I had accidentally hit Destroy instead of Detach, and wrote it off as my own stupidity.

     

    I just detached another drive today and CloudDrive destroyed it. I lost 40 TB of data. DO NOT detach your cloud drive until they figure this out.

     

    I am extremely pissed, to say the least. I have lost two drives in the past 7 days, some holding data I did not have backed up anywhere else.

     

    This is extremely unacceptable, and I warn all of you against putting anything worth your time on these drives until they make this software more reliable.

     

    I guess the software is listed as beta, so it's my own fault for thinking it wouldn't ruin a drive in one fell swoop.

     

    The data actually isn't gone, but the files it was written to have no previous versions, so it looks like I cannot recover it.

  2. Hi all,

     

    With 20 threads on a fresh start of the server I can push out ~700 Mbps, but that quickly degrades to ~100 Mbps after a few hours and ~10 Mbps after a day or so.

     

    Looking at the technical details over this time, the response time of each write keeps increasing: from under 100 ms at top speed, to 20,000 ms at 100 Mbps, and 175,000 ms at 10 Mbps.

     

    Is the exponential backoff for throttling just increasing forever and never resetting once a write completes?

     

    It seems the decay in speed comes from the write threads just sitting there waiting to write.

     

    Thanks

     

    It looks like we are actually discussing this same issue here: http://community.covecube.com/index.php?/topic/2334-poor-performance-google-drive-not-rate-limit/
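
    To illustrate the question, here is a rough sketch in Python of the difference between backoff that resets after a successful write and backoff that only ever grows. This is not CloudDrive's actual code - the names and structure are just my assumption about the pattern:

        import time

        class UploadThrottle:
            """Toy model of exponential backoff for throttled upload retries."""

            def __init__(self, base_delay=0.1, max_delay=180.0, reset_on_success=True):
                self.base_delay = base_delay          # initial wait in seconds
                self.max_delay = max_delay            # cap on the wait
                self.reset_on_success = reset_on_success
                self.current_delay = base_delay

            def on_throttled(self):
                # The provider said "slow down": double the wait, up to the cap.
                self.current_delay = min(self.current_delay * 2, self.max_delay)

            def on_write_complete(self):
                # If this reset never happens, every later write inherits the
                # inflated delay and throughput keeps decaying.
                if self.reset_on_success:
                    self.current_delay = self.base_delay

            def wait_before_write(self):
                time.sleep(self.current_delay)

    If on_write_complete never resets current_delay, the delay only ratchets upward, which would match the <100 ms -> 20,000 ms -> 175,000 ms response times above.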

  3. My chunk size is 100 MB, minimum download 10 MB, download threads 12, and upload threads 6. No matter how many threads I set (more or fewer) it is still slow as molasses (on a 1 Gbit/1 Gbit line). Download speeds have decreased as well due to the delay in thread response time. This started happening to me after .726 and has not changed since.

     

    I do believe this might have to do with Google limiting API requests, but I also think something they changed in the software is triggering it, unless Google just made a big change to how they handle requests.

  4. Possibly the MFT records, yeah - not sure if that is what Windows uses to check whether a file exists. As mentioned before, Plex does this to all video files when trying to access the media in the web player or a media player, which means a massive load time per title.

     

    Caching the file headers would also be great, as you have to load data from all files when entering a folder. A folder with thousands of images means a massive load, which seems to just give up at some point, showing data for only some of the files.

     

    Letting both of these types of data be cached (by user choice) would be an awesome feature that could speed things up in several places!

     

    If you could add it as a request, then Alex can look into whether it is possible and whether it is worth doing versus the time it takes.

     

    Seconded. I know exactly what @steffenmand is talking about and it can get quite annoying but not unbearable.

  5. Unless .594 reverted the changes in .593, the issue of downloading the same parts over and over still exists. It doesn't seem to be as bad, but it is still very annoying and a big waste of bandwidth. It doesn't download them 20 times in a row anymore, maybe 10 now, and it doesn't seem to happen all the time either.

  6. The speeds are good now for a single file, but when you want prefetching on two files at the same time, it fails to give both the same priority.

     

    In my opinion prefetching should work per file, so that a 1 GB prefetch would mean 2 GB when loading two different files. Currently it seems to be a shared total, which means one file could use 980 MB and the other only 20 MB.

     

    Agreed, I am seeing this issue (with multiple files) as well. I would say this newest build is a step in the right direction, but the prefetcher still needs better logic, and I still see multiple chunks being downloaded over and over at certain points, though not nearly as often.
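
    To illustrate the per-file idea, here is a minimal sketch in Python of keeping a separate prefetch window per open file instead of one shared total. The class and field names are made up, not the real prefetcher:

        from collections import defaultdict

        PREFETCH_WINDOW_BYTES = 1 * 1024 ** 3   # 1 GB per file (example value)

        class PerFilePrefetcher:
            """Tracks how much data is prefetched ahead for each open file separately."""

            def __init__(self, window_bytes=PREFETCH_WINDOW_BYTES):
                self.window_bytes = window_bytes
                self.prefetched = defaultdict(int)   # file_id -> bytes currently buffered

            def can_prefetch(self, file_id, chunk_size):
                # Each file gets its own budget, so two files read at once
                # get 1 GB + 1 GB instead of splitting a single shared 1 GB window.
                return self.prefetched[file_id] + chunk_size <= self.window_bytes

            def on_prefetched(self, file_id, chunk_size):
                self.prefetched[file_id] += chunk_size

            def on_consumed(self, file_id, chunk_size):
                self.prefetched[file_id] = max(0, self.prefetched[file_id] - chunk_size)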

  7. I haven't tested it yet, but I am glad to see that the minimum download size issue has been resolved in version .593.

     

    (* When minimum read size is enabled, avoid downloading the same range of data multiple times.)

     

    I am testing now and it looks to be fixed! Very exciting!

  8. Yes, and IIRC, it's the next issue to address.  But I'll mention it to Alex to get it expedited. 

     

    Thanks. As you know from the ticket I posted, without this fixed my CloudDrive is pretty much unusable for my purposes.

     

    I think @steffenmand's guess is most likely the correct one: CloudDrive thinks it is still getting 1-2 MB chunks, since anything above Auto for minimum download has the same chunk downloading over and over.

     

    I hope Alex gets going on this soon, as this seems like one of the most replied-to threads on this forum.

     

    Appreciate your efforts so far, this is a pretty slick piece of software.

  9. I never see this issue and get around 700 Mbit up and down with 20 MB chunks. With 100 MB chunks I can max my connection at around 1000 Mbit.

     

    However, the prefetch bug means that it is fetching the same chunks tons of times, so the speed is not yet being put to any real benefit - but I'm sure that will be fixed soon :)

     

    Is this still being looked into? It is happening to me all the time, with lots of wasted bandwidth and no real benefit.

  10. I uploaded my logs showing the same chunk being prefetched 20 times at the same time. I am running the newest version (.539), and the drive was made on .537.

     

    Maybe it still believes it is getting 1-2 MB parts?

     

    20 MB chunks

    20 MB minimum download

    20 TB drive

     

    I also noticed that prefetching kicks in during attach - shouldn't this be disabled until the drive has been attached and the data is readable?

     

    The current result is that we are downloading the same chunk repeatedly, leaving only 1 unique chunk downloaded for every 20 threads.

     

    I am getting the exact same issues, very annoying.
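
    For what it's worth, here is a rough sketch in Python of coalescing duplicate requests so that 20 threads asking for the same chunk share one download instead of fetching it 20 times. This is purely hypothetical and not how CloudDrive is actually written:

        import threading
        from concurrent.futures import Future

        class ChunkDownloader:
            """Deduplicates concurrent downloads of the same chunk."""

            def __init__(self, fetch_chunk):
                self.fetch_chunk = fetch_chunk    # callable: chunk_id -> bytes
                self.lock = threading.Lock()
                self.in_flight = {}               # chunk_id -> Future holding the chunk data

            def get_chunk(self, chunk_id):
                with self.lock:
                    future = self.in_flight.get(chunk_id)
                    i_am_downloader = future is None
                    if i_am_downloader:
                        # First requester starts the download; later callers wait on it.
                        future = Future()
                        self.in_flight[chunk_id] = future
                if i_am_downloader:
                    try:
                        future.set_result(self.fetch_chunk(chunk_id))
                    except Exception as exc:
                        future.set_exception(exc)
                    finally:
                        with self.lock:
                            self.in_flight.pop(chunk_id, None)
                return future.result()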
