Posts posted by srcrist

  1.  

     

    I asked him if there is a limit for uploading XXX GB per 24 hours. This is the answer: “I have double checked and there is no limitations regarding that, as per this article : https://support.google.com/drive/answer/37603?visit_id=1-636381266873867728-2759172849&rd=1”

     

     

    Google does not publish their API limits, so you won't see a public statement on this. Many people have tested it, though, and the limit is around 750GB/day. This is the same way the API limits that rClone previously ran into were discovered. You can always test it yourself if you want to verify: stop uploading data for at least 24 hours, then track how much you can upload before you start getting 403 errors.
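
    If you want to script that test rather than eyeball it, here is a rough sketch of the idea in Python (not an official tool; the token file, chunk size, and file names are placeholders, and it assumes you already have OAuth credentials for the Drive v3 API):

    # Rough sketch: probe the daily upload cap by uploading dummy chunks until
    # Google starts returning 403s, and note how much data went up before that.
    import io
    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError
    from googleapiclient.http import MediaIoBaseUpload

    CHUNK_MB = 256  # size of each dummy upload

    creds = Credentials.from_authorized_user_file(
        "token.json", ["https://www.googleapis.com/auth/drive.file"])
    drive = build("drive", "v3", credentials=creds)

    uploaded_mb = 0
    while True:
        payload = io.BytesIO(b"\0" * (CHUNK_MB * 1024 * 1024))
        media = MediaIoBaseUpload(payload, mimetype="application/octet-stream",
                                  resumable=True)
        try:
            drive.files().create(body={"name": "quota-probe.bin"},
                                 media_body=media, fields="id").execute()
        except HttpError as e:
            if e.resp.status == 403:
                print(f"Hit a 403 after roughly {uploaded_mb / 1024:.1f} GB today.")
                break
            raise
        uploaded_mb += CHUNK_MB
        print(f"{uploaded_mb / 1024:.1f} GB uploaded so far...")

    (Obviously you'd want to delete the probe files afterward, and the cutoff you see will include whatever else the account uploaded in the same 24-hour window.)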

  2. To be clear: This is not going to go away by upgrading. The cutoff is server-side and transfer-based. Once you upload somewhere around 750-800GB of data in a 24-hour period, you will be API locked until the following day. CloudDrive can certainly add some features to help us manage that total, but the throttling will not go away unless Google decides to change their policies again.

     

    This simply appears to be a new limitation imposed by Google. Uploads are now limited, server-side, to around 750GB per day. As far as anyone can tell, reads are not limited--they seem to function as they always have.
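
    For a rough sense of scale (my own arithmetic, not anything Google has published): 750GB spread over 86,400 seconds works out to about 8.7 MB/s, or roughly 70 Mbps, of sustained upload. If your connection can't saturate that around the clock, you may never actually hit the cap.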

  3. It's fine to mention rClone. They aren't really competitors. They don't even achieve the same ends. I think there is room for both.

     

    In any case, it looks like Google implemented some new limits on writes per day (somewhere around 800GB/day). There is some discussion of it here: http://community.covecube.com/index.php?/topic/3050-901-google-drive-server-side-disconnectsthrottling/

     

    Honestly, depending on the cache setup, it will likely affect CloudDrive less than rClone, since CloudDrive can still operate and make changes (reads AND writes) to the cache when it cannot upload new data. It may present the opportunity for new tweaks, I would guess, but overall the drives should continue working fine--assuming we understand the changes that have been made.
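
    To illustrate what I mean by the cache absorbing writes, here is a toy sketch only (not Covecube's actual code; the names and the daily budget are placeholders): writes land locally right away, and a background step drains them to the provider only while the day's upload budget lasts.

    import time

    DAILY_BUDGET = 750 * 1024**3  # roughly the cap people are reporting, in bytes

    class WriteBackCache:
        def __init__(self):
            self.pending = []          # blocks written locally, not yet uploaded
            self.uploaded_today = 0
            self.window_start = time.time()

        def write(self, block: bytes):
            # Local writes always succeed, even while the provider throttles us.
            self.pending.append(block)

        def drain(self, upload):
            # Start a fresh budget once a new 24-hour window begins.
            if time.time() - self.window_start >= 86400:
                self.uploaded_today = 0
                self.window_start = time.time()
            # Upload queued blocks only while today's budget holds out.
            while self.pending and self.uploaded_today + len(self.pending[0]) <= DAILY_BUDGET:
                block = self.pending.pop(0)
                upload(block)          # caller-supplied provider upload function
                self.uploaded_today += len(block)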

  4. Reading through the rclone thread and some other forums, it looks like it might just be a new upload limit. I'm reading a lot of people say that it kicks in at around 800GB/day or so. That's a pretty reasonable limit imo, but it's probably going to change some things for CloudDrive and other solutions. We'll have to see what Christopher has to say when he has a chance to hop on the forums.

     

    Sadly, for me, I was in the middle of transferring data from one large drive to two smaller ones (to delete the large one), so that's going to take a lot longer now. 

  5. This sounds similar to the internal error resets. See the threads below for some additional discussion. They are aware of it and, I believe, are looking into a solution.

     

    http://community.covecube.com/index.php?/topic/2993-drive-mount-error/

     

    http://community.covecube.com/index.php?/topic/2967-reindexing-google-internal-server-errors/

     

    Are you getting internal server errors preceding your timeouts? 

     

    In any case, I would consider opening up a ticket as well, so you can submit logs and other troubleshooting information. 

  6. Okay.  Alex is busy working on code, ATM, and trying to get StableBit DrivePool "ready to ship a new version".  So it may be a bit before he can look into this.  But the issue is flagged as important, so it does have higher priority, so he should get to it sooner rather than later. 

     

     

    And please do let me know if the smaller size helps.

     

     

     

    Yeah, no worries. I have the drive mounted again right now. Google has just been exceptionally stable the last few days and I've been able to get it to remount with maybe one or two restarts (so about 24 hours).

     

    As for the lower drive sizes: now that I've realized chkdsk can't be used on anything larger, this is probably best for NTFS drives anyway--assuming I care about the data longer-term. DrivePool is in the process of migrating the data now, but it's going to take weeks.

  7. My ticket on this issue suggests that they still think the software attempts to retry after failures when indexing. It doesn't. This doesn't bode well for actually getting a fix.

     

     

    I think I agree with you that there seems to be some confusion on the part of the StableBit team about which problem we are actually addressing here. I tried to clarify in the other thread with some log snippets. Hopefully that will help.

  8. Christopher,

     

    To be clear: Are you saying that it is retrying multiple times even though the service log only records one error? Because I can see where it retries when the drive is actually mounted, but only a *single* server error needs to show up in the logs during the indexing process in order for it to completely restart the process. I'll paste you an example:

    CloudDrive.Service.exe	Information	0	[ChunkIdSQLiteStorage:26]   chunks IDs: 2067212,2067217,2067218,2067219,2067220,2067221,2067222,2067223,2067224,2067225,2067226,2067227,2067228,2067229,2067230,2067231,2067232,2067233,2067234,2067235,2067236,2067237,2067238,2067239,2067240,2067241,2067242,2067243,2067244,2067245,2067246,2067247,2067248,2067249,2067250,2067251,2067252,2067253,2067254,2067255,2067256,2067257,2067258,2067259,2067260,2067261,2067262,2067263,2067264,2067265,2067266,2067267,2067268,2067269,2067270,2067271,2067272,2067273,2067274,2067275,2067276,2067277,2067278,2067279,2067280,2067281,2067282,2067283,2067284,2067285,2067286,2067287,2067288,2067289,2067290,2067291,2067292,2067293,2067294,2067295,2067296,2067297,2067298,2067299,2067300,2067301,2067302,2067303,2067304,2067305,2067306,2067307,2067308,2067309,2067310,2067311,2067312,2067313,2067314,2067315	2017-07-04 07:50:02Z	209022985613
    CloudDrive.Service.exe	Warning	0	[ApiGoogleDrive:26] Google Drive returned error (internalError): Internal Error	2017-07-04 07:50:07Z	209039853973
    CloudDrive.Service.exe	Warning	0	[ApiHttp:26] HTTP protocol exception (Code=InternalServerError).	2017-07-04 07:50:07Z	209039867036
    CloudDrive.Service.exe	Warning	0	[CloudDrives] Cannot start I/O manager for cloud part 62179051-cb2c-46b8-818d-4ee2a186e3ec. Internal Error	2017-07-04 07:50:07Z	209039897780
    CloudDrive.Service.exe	Information	0	[CloudDrives] Synchronizing cloud drives...	2017-07-04 07:50:10Z	209048101740
    CloudDrive.Service.exe	Information	0	[Disks] Got Disk_Modify (Disk ID: a439b0c4-9e85-42a7-bccc-01ed3ff606dd, Name: INTEL SSDSC2BB160G4, Adaptor: Standard SATA AHCI Controller, Device path: \\?\scsi#disk&ven_intel&prod_ssdsc2bb160g4#4&15828421&0&020000#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})...	2017-07-04 07:50:13Z	209058284693
    CloudDrive.Service.exe	Information	0	[Disks] Got Disk_Modify (Disk ID: c5c2798c-a82e-420b-97c2-0bd1f6740b75, Name: INTEL SSDSC2BB160G4, Adaptor: Standard SATA AHCI Controller, Device path: \\?\scsi#disk&ven_intel&prod_ssdsc2bb160g4#4&15828421&0&030000#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})...	2017-07-04 07:50:13Z	209058286325
    CloudDrive.Service.exe	Information	0	[Disks] Got Disk_Modify (Disk ID: be624dce-5691-42a7-b43a-0ebc2693e4c7, Name: INTEL SSDSC2BB160G4, Adaptor: Standard SATA AHCI Controller, Device path: \\?\scsi#disk&ven_intel&prod_ssdsc2bb160g4#4&15828421&0&040000#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})...	2017-07-04 07:50:13Z	209058286790
    CloudDrive.Service.exe	Information	0	[Disks] Got Disk_Modify (Disk ID: 57d7cdea-d18a-47e2-a3d1-40012094e450, Name: Covecube Virtual Disk, Adaptor: Covecube Cloud Disk Enumerator, Device path: \\?\SCSI#DiskCOVECUBECloudFs_________0001#1&2afd7d61&0&{53883972-f0d3-45bf-897d-8d8cb951cbb2}#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})...	2017-07-04 07:50:13Z	209058287247
    CloudDrive.Service.exe	Warning	0	[ChunkIdHelper:26] Chunk ID storage engine does not have all the chunk IDs. Enumerating chunks...	2017-07-04 07:50:13Z	209059088095
    CloudDrive.Service.exe	Information	0	[ChunkIdSQLiteStorage:26] Clear all chunk IDs...	2017-07-04 07:50:13Z	209059088767
    CloudDrive.Service.exe	Information	0	[Disks] Got Disk_Modify (Disk ID: a439b0c4-9e85-42a7-bccc-01ed3ff606dd, Name: INTEL SSDSC2BB160G4, Adaptor: Standard SATA AHCI Controller, Device path: \\?\scsi#disk&ven_intel&prod_ssdsc2bb160g4#4&15828421&0&020000#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})...	2017-07-04 07:50:14Z	209059949614
    CloudDrive.Service.exe	Information	0	[Disks] Got Disk_Modify (Disk ID: c5c2798c-a82e-420b-97c2-0bd1f6740b75, Name: INTEL SSDSC2BB160G4, Adaptor: Standard SATA AHCI Controller, Device path: \\?\scsi#disk&ven_intel&prod_ssdsc2bb160g4#4&15828421&0&030000#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})...	2017-07-04 07:50:14Z	209059951785
    CloudDrive.Service.exe	Information	0	[Disks] Got Disk_Modify (Disk ID: be624dce-5691-42a7-b43a-0ebc2693e4c7, Name: INTEL SSDSC2BB160G4, Adaptor: Standard SATA AHCI Controller, Device path: \\?\scsi#disk&ven_intel&prod_ssdsc2bb160g4#4&15828421&0&040000#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})...	2017-07-04 07:50:14Z	209059953983
    CloudDrive.Service.exe	Information	0	[Disks] Got Disk_Modify (Disk ID: 57d7cdea-d18a-47e2-a3d1-40012094e450, Name: Covecube Virtual Disk, Adaptor: Covecube Cloud Disk Enumerator, Device path: \\?\SCSI#DiskCOVECUBECloudFs_________0001#1&2afd7d61&0&{53883972-f0d3-45bf-897d-8d8cb951cbb2}#{53f56307-b6bf-11d0-94f2-00a0c91efb8b})...	2017-07-04 07:50:14Z	209059955406
    CloudDrive.Service.exe	Information	0	[Disks] Updating disks / volumes...	2017-07-04 07:50:15Z	209063279396
    CloudDrive.Service.exe	Information	0	[ChunkIdSQLiteStorage:26] Set chunks IDs...	2017-07-04 07:50:19Z	209076555738
    CloudDrive.Service.exe	Information	0	[ChunkIdSQLiteStorage:26]   chunks IDs: 11,139,142,1856284,2797743,2797744,2797745,2797746,4210125,4210126,4210127,4210128,4210129,4210130,4210131,4210132,4210133,4210134,4210135,4210136,4210137,4210138,4210139,4210140,4210141,4210142,4210143,4210144,4210145,4210146,4210147,4210148,4210149,4210150,4210151,4210152,4210153,4210154,4210155,4210156,4210157,4210158,4210159,4210160,4210161,4210162,4210163,4210164,4210165,4210166,4210167,4210168,4210169,4210170,4210171,4210172,4210173,4210174,4210175,4210176,4210177,4210178,4210179,4210180,4210181,4210182,4210183,4210184,4210185,4210186,4210187,4210188,4210189,4210190,4210191,4210192,4210193,4210194,4210195,4210196,4210197,4210198,4210199,4210200,4210201,4210202,4210203,4210204,4210205,4210206,4210207,4210208,4210209,4210210,4210211,4210212,4210213,4210214,4210215,4210216	2017-07-04 07:50:19Z	209078418489
    CloudDrive.Service.exe	Warning	0	[ApiGoogleDrive:106] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded	2017-07-04 07:50:19Z	209078588436
    CloudDrive.Service.exe	Warning	0	[ApiHttp:106] HTTP protocol exception (Code=ServiceUnavailable).	2017-07-04 07:50:19Z	209078598915
    CloudDrive.Service.exe	Warning	0	[ApiHttp] Server is throttling us, waiting 1,236ms and retrying.	2017-07-04 07:50:19Z	209078599786
    

    You can see here that it only got a single server error before deciding to enumerate the chunks again. It was at roughly chunk number 2067212, it got the error, and then it started over again at chunk number 4210125. As far as I can tell, there are no successive server errors in this log. Just one. One was all it took to restart the process.

     

    Here is another, even simpler, log snippet from a different occasion of the same situation:

    CloudDrive.Service.exe	Information	0	[ChunkIdSQLiteStorage:26]   chunks IDs: 1008805,1008806,1008807,1008808,1008809,1008810,1008811,1008812,1008813,1008814,1008815,1008816,1008817,1008818,1008819,1008820,1008821,1008822,1008823,1008824,1008825,1008826,1008827,1008828,1008829,1008830,1008831,1008832,1008833,1008834,1008835,1008836,1008837,1008838,1008839,1008840,1008841,1008842,1008843,1008844,1008845,1008846,1008847,1008848,1008849,1008850,1008851,1008852,1008853,1008854,1008855,1008856,1008857,1008858,1008859,1008860,1008861,1008862,1008863,1008864,1008865,1008866,1008867,1008868,1008869,1008870,1008871,1008872,1008873,1008874,1008875,1008876,1008877,1008878,1008879,1008880,1008881,1008882,1008883,1008884,1008885,1008886,1008887,1008888,1008889,1008890,1008891,1008892,1008893,1008894,1008895,1008896,1008897,1008898,1008899,1008901,1008902,1008903,1008905,1008906	2017-07-05 05:37:37Z	179431003884
    CloudDrive.Service.exe	Warning	0	[ApiGoogleDrive:26] Google Drive returned error (internalError): Internal Error	2017-07-05 05:38:06Z	179525435228
    CloudDrive.Service.exe	Warning	0	[ApiHttp:26] HTTP protocol exception (Code=InternalServerError).	2017-07-05 05:38:06Z	179525436540
    CloudDrive.Service.exe	Warning	0	[CloudDrives] Cannot start I/O manager for cloud part 62179051-cb2c-46b8-818d-4ee2a186e3ec. Internal Error	2017-07-05 05:38:06Z	179525503627
    CloudDrive.Service.exe	Information	0	[CloudDrives] Synchronizing cloud drives...	2017-07-05 05:39:57Z	179895055905
    CloudDrive.Service.exe	Warning	0	[ChunkIdHelper:26] Chunk ID storage engine does not have all the chunk IDs. Enumerating chunks...	2017-07-05 05:39:58Z	179898074584
    CloudDrive.Service.exe	Information	0	[ChunkIdSQLiteStorage:26] Clear all chunk IDs...	2017-07-05 05:39:58Z	179898075210
    CloudDrive.Service.exe	Information	0	[ChunkIdSQLiteStorage:26] Set chunks IDs...	2017-07-05 05:40:06Z	179923605343
    CloudDrive.Service.exe	Information	0	[ChunkIdSQLiteStorage:26]   chunks IDs: 11,139,142,1856284,2797743,2797744,2797745,2797746,4210125,4210126,4210127,4210128,4210129,4210130,4210131,4210132,4210133,4210134,4210135,4210136,4210137,4210138,4210139,4210140,4210141,4210142,4210143,4210144,4210145,4210146,4210147,4210148,4210149,4210150,4210151,4210152,4210153,4210154,4210155,4210156,4210157,4210158,4210159,4210160,4210161,4210162,4210163,4210164,4210165,4210166,4210167,4210168,4210169,4210170,4210171,4210172,4210173,4210174,4210175,4210176,4210177,4210178,4210179,4210180,4210181,4210182,4210183,4210184,4210185,4210186,4210187,4210188,4210189,4210190,4210191,4210192,4210193,4210194,4210195,4210196,4210197,4210198,4210199,4210200,4210201,4210202,4210203,4210204,4210205,4210206,4210207,4210208,4210209,4210210,4210211,4210212,4210213,4210214,4210215,4210216	2017-07-05 05:40:06Z	179925052680
    

    Again, it was at chunk number 1008805, and after a SINGLE server error it was back at 4210125. It does not seem to retry chunk 1008805 at all. It does not, in fact, even wait for a second server error. It just starts over at the beginning.

     

    So, again, are there additional attempts to retry these reads that are not being recorded in the logs? Or are we still just failing to understand each other about the problem?

     

    Additionally, the drive is not unmounting at all--well, not most of the time. It's just restarting the indexing process. So the expected behavior for multiple errors within a 120-second window is not what's happening here, either. I can also tell you that I've literally been sitting here watching the service tracing logs when this happens, and it certainly isn't waiting 120 seconds between getting that error and starting over. It's a matter of seconds. Maybe one or two.

     

    I appreciate that Alex is already looking at it, but I also want to make sure that we're clear about this particular problem--and I'm not sure that we are. 

     

    Again: this is a separate, less tolerant behavior during the reindexing process than the 120-second threshold timeouts that you are describing. At least as far as I can tell. If there is something I'm missing, though, please let me know.

  9. Christopher, I don't know if any of the recent beta changes were a part of the more efficient code, but I am stuck (again) in a mounting loop with beta 894. Server went down over 24 hours ago (OVH had some sort of issue) and it's still mounting over and over again due to internal server errors. I'd really rather this not take a week every time it happens. Waiting this long to get access to my data again is honestly rendering CloudDrive an unusable storage option. 

  10. See the wiki about advanced settings: http://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings

     

    Specifically, this:

     

    • CloudFsDisk_MaximumConsecutiveIoFailures - This setting controls how many unrecoverable I/O failures can occur before the drive is forcefully unmounted from the system. Increasing or disabling this setting can cause issues where the system may hang/lock up, as Windows will wait on the I/O to finish. The window for these errors is 120 seconds, and if this time is exceeded without an error, the count is reset. The default value for this is "3".

     

    Adjust this to 8 or 10 or so and see if that fixes it. CloudDrive detaches your drive to prevent system lockups, but Google has temporary hiccups that are solved by raising this number a bit. Don't go too high, though: a genuine failure really will lock up your system, and you want CloudDrive to be able to protect you from those by detaching. The sketch below shows how I understand the threshold to behave.
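
    Here is that behavior as a toy sketch (my reading of the wiki text above, not Covecube's actual code; the function names are made up for illustration): failures are counted, the count resets after 120 quiet seconds, and the drive is detached once the count reaches the limit.

    import time

    MAX_CONSECUTIVE_IO_FAILURES = 3   # the default; raising the setting raises this
    WINDOW_SECONDS = 120

    failure_count = 0
    last_failure_time = None

    def record_io_failure():
        """Call this whenever an unrecoverable I/O failure occurs."""
        global failure_count, last_failure_time
        now = time.time()
        # 120 seconds without an error resets the count.
        if last_failure_time is not None and now - last_failure_time > WINDOW_SECONDS:
            failure_count = 0
        failure_count += 1
        last_failure_time = now
        if failure_count >= MAX_CONSECUTIVE_IO_FAILURES:
            detach_drive()

    def detach_drive():
        # Stand-in for CloudDrive forcefully unmounting the drive.
        print("Too many consecutive I/O failures; detaching the drive.")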

  11. open a ticket at https://stablebit.com/Contact

     

    And if you could, do this: 

    http://wiki.covecube.com/StableBit_CloudDrive_Web_Logs

     

     

     

    That said, the "Internal Server Error" is an HTTP 500 error.  This means that there is a server side issue here.  As in, Google Drive is coming back with errors.   In this case, there isn't much that the software can do, other than wait and retry.

     

    To be clear, Christopher--and I mentioned this in the other thread--the problem here is that, during the reindexing process, it simply *doesn't* retry. Retry would mean that it puts in a request for chunk 2,345,234, gets a server error, waits, and puts in that request again. Instead, it puts in a request for chunk 2,345,234, gets a server error, and restarts at chunk 4,234,122--slowly working its way back down. It needs to retry, rather than abort and start over.
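
    In pseudocode terms, this is the shape of the per-chunk retry I'm asking for (a rough sketch; the fetch call and the error type are hypothetical stand-ins, not CloudDrive's real internals):

    import time

    class TransientServerError(Exception):
        """Stand-in for a Google 'Internal Error' (HTTP 500) response."""

    def index_chunks(chunk_ids, fetch_chunk, max_retries=5):
        index = {}
        for chunk_id in chunk_ids:
            for attempt in range(max_retries):
                try:
                    index[chunk_id] = fetch_chunk(chunk_id)
                    break                     # this chunk is done; move to the next
                except TransientServerError:
                    time.sleep(2 ** attempt)  # back off, then retry the SAME chunk
            else:
                raise RuntimeError(f"chunk {chunk_id} failed {max_retries} times")
        return index

    The point is just that a single 500 costs you one short backoff on one chunk, instead of throwing away hours of enumeration and starting again from the top.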

     

    In some cases, it doesn't even start over. It just fails, the UI gives a mounting error, and it sits there until you press the "retry" button. This behavior, of course, adds even MORE time to the process. 

     

    The good news, for the other folks in this thread, is that the data on my drive was, ultimately, completely fine when this was happening to me. It will EVENTUALLY remount, and all of your data will be there. It can just take an entirely unreasonable amount of time. 

  12.  

    I believe I did respond to this already.

     

     

    Right. We talked in the support ticket.

     

     

     

    As for re-indexing.... the only time this should happen is if the "chunk ID" database is lost.   If something happened to that file, then it would trigger this to occur. 

     

    In my case it was triggered when the CloudDrive service crashed. But that's fine. I understand that it has to reindex. The very big problem comes when:

     

     

    So, the only thing our software can do is "wait and retry", which really is the only thing you can do, as well. Unfortunately. 

     

     

     

    The problem is that it does not retry during the indexing process. It starts over. That means you have to go through the entire indexing process without a single error or it simply restarts the process over again. I imagine that this does not *have* to be handled this way, but maybe I'm wrong. I think you suggested, in the support ticket, that this was something for Alex to maybe take a look at.

     

    With very large drives, or drives that simply have a lot of chunks, this is a very real and very frustrating issue, though. When your reindexing takes 8 or 9 hours, you can neither expect to go that entire time without even a minor server error--nor afford to start the process over when it's halfway done. It took 5 DAYS (no exaggeration at all) to get my largest drive remounted when this happened. Meanwhile, my other, smaller drives mounted just fine because they were able to completely reindex in the time between server errors. Once the drives are mounted, these internal server errors do not cause downtime: CloudDrive simply tries the request again, Google complies (the errors are always temporary), and life moves on.

     

    But this problem during the reindexing process has to be fixed. Every hour someone has to go without any sort of server error whatsoever makes the process exponentially less likely to complete. Ever. It shouldn't take 5 days of crossing fingers just to get the drive to remount as it restarts over and over again. There needs to be more error tolerance than that.
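
    Just to put rough numbers on that (my own back-of-envelope, assuming errors arrive independently at random): if Google throws an internal server error about once every 3 hours on average, the chance of a 9-hour reindex getting through with zero errors is around e^(-9/3), or roughly 5%. That's something like 20 restarts, on average, before one run happens to make it all the way through.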

     

     

     

    @srcrist - this keeps happening to me today, did you manage to resolve it?

     

     

    As Christopher said, I didn't (and couldn't) do anything to resolve it. I did, however, eventually get the drive to remount. It just took a very long and very frustrating amount of time. I just had to wait until I didn't get any errors during the reindexing process.

     

     

     

    In any case, this seems like a really critical problem to me right now, and I'm dreading the next time my drive gets uncleanly dismounted. Who knows how long it will take to get it back. There just isn't anything that can be done about the occasional bad server response from Google. So we've got to be able to work through those, rather than completely aborting the process and starting over again.

  13. It appears to be normalized now. Fingers crossed for you! You might want to try editing the Program Files/StableBit/CloudDrive/CloudDrive.service.default.config file to raise the timeout and retry threshold/tolerance counts, then restart the service.

     

    As far as I can tell, they do not affect the indexing process. It just doesn't pay attention to those values until the drive is actually mounted. Also, as far as I can tell, it doesn't even try a second time before restarting the indexing process.

     

    Frankly, I think it's just something that Alex is going to have to revisit if larger drives are going to be usable. We just can't expect a drive to get through an 8 or 9 hour index without a single connectivity error. I've had three days of downtime from a single service crash. The Google errors are obviously not CloudDrive's fault, but the way it handles them seems poor to me, as it stands.

     

    In any case, I'm down to about 1.5 million chunks to go as we speak, which is the lowest it's been able to get in three days. So I'll keep my fingers crossed.

     

    EDIT: Finally got it remounted. 

  14. Does CloudDrive retry to reconnect automatically?

     

    Yes, to a point--and depending on the context as well. If your drive is mounted, it will try again up to the limit set in the advanced settings file. Once it meets that limit of consecutive failures, it will disconnect the drive to prevent data loss or system instability. 
