Covecube Inc.

borez

Members
  • Content Count: 20

  1. BTW, is the 750GB/day cap enforced on a per-user-account basis, or across the whole user group?
  2. To recap on my issues: I've been hitting the API call limits even though I haven't hit the proverbial 750GB cap. Some context: I'm trying to create hierarchical pools in DrivePool (backing up my local DrivePool to the cloud). I manually replicated my data in the cloud using GoodSync and relocated the files into the PoolPart folder, then created a combined pool and ran a duplication check. This triggered a series of reads and partial writes (and re-uploads) in CD.

    Based on the technical details, I see the following: partial re-writes are extremely slow (< 130kbps), and I'm not sure whether it's the same chunk being written (and re-written) again - see the screenshot and error logs below, and the read-modify-write cost sketch after this list. Once that partial write clears, the upload speeds are back up to 100-200mbps. Still not as fast as what I used to get, but I'll leave that for another day.

    So apart from the bandwidth caps, it seems there are also caps on the number of calls you can make for a specific file. It's my gut intuition, but I'd appreciate some thoughts. Pulling my hair out here!

    2:09:05.8: Warning: 0 : [ReadModifyWriteRecoveryImplementation:96] [W] Failed write (Chunk:34168, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [TemporaryWritesImplementation:96] Error performing read-modify-write, marking as failed (Chunk=34168, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:4] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:27] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:104] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [IoManager:4] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [IoManager:27] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [IoManager:104] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
  3. That's what I was seeing in the error logs. The errors kicked in after around 20-30 minutes of operation, so I'm definitely way below the bandwidth caps. A couple of observations: the issues arose when CD was working on partial re-read/write chunk operations, and in that instance the GUI showed I was running far more threads than I had stipulated in the settings - for example, downloading with 6-7 threads where my settings indicated 4. The errors seemed to stop when I was doing full writes to CD.
  4. BTW, I'm hitting the errors on startup, so it seems to be independent of the upload caps. And I've only managed to upload at speeds below 200mbps. So something has changed with Google, but without a clear trend.
  5. I've been getting lots of throttling/server-side disconnect messages from Google Drive recently; see below. I've been using the same settings for some time, so it seems there have been some changes to their throttling mechanism? (See the backoff sketch after this list for the retry pattern the log shows.) If it helps, some comments:

    1) I had an unsafe shutdown, and as a result I need to re-upload 18GB of data. From the tech page, it looks like I'm re-downloading chunks for partial re-writes, which seems fairly intensive versus a straight upload.
    2) The errors started after I upgraded to the .900/.901 builds.
    3) I typically run 6 threads with a max upload speed of 400mbps. Scaling this down to 3-4 threads doesn't help.

    0:10:29.5: Warning: 0 : [IoManager:32] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.5: Warning: 0 : [ReadModifyWriteRecoveryImplementation:62] [W] Failed write (Chunk:564, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [TemporaryWritesImplementation:62] Error performing read-modify-write, marking as failed (Chunk=564, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:61] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [IoManager:62] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [IoManager:61] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.3: Warning: 0 : [ReadModifyWriteRecoveryImplementation:95] [W] Failed write (Chunk:2743, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [TemporaryWritesImplementation:95] Error performing read-modify-write, marking as failed (Chunk=2743, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [IoManager:95] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:56.1: Warning: 0 : [ApiGoogleDrive:102] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:56.1: Warning: 0 : [ApiHttp:102] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:56.1: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,517ms and retrying.
    0:16:56.2: Warning: 0 : [ApiGoogleDrive:79] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:56.2: Warning: 0 : [ApiHttp:79] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:56.2: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,873ms and retrying.
    0:16:56.3: Warning: 0 : [ApiGoogleDrive:100] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:56.3: Warning: 0 : [ApiHttp:100] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,561ms and retrying.
    0:16:56.3: Warning: 0 : [ApiGoogleDrive:84] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:56.3: Warning: 0 : [ApiHttp:84] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,664ms and retrying.
    0:16:57.3: Warning: 0 : [ApiGoogleDrive:96] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:57.3: Warning: 0 : [ApiHttp:96] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:57.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,546ms and retrying.
  6. borez

    Drive mount Error

    Sadly, I'm back with more problems. I've updated to the latest beta, and I need to reauthorize the drive every time it starts. The logs show a "Cannot start I/O manager for cloud part xxx (security exception). Security error." message. When I reauthorize (and if I'm lucky), CD comes back online without re-indexing; if I'm not, it re-indexes everything all over again. Any idea?

    0:00:30.3: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:00:30.4: Information: 0 : [Main] Cleaning up cloud drives...
    0:00:30.5: Information: 0 : [CloudDrives] Valid encryption key specified for cloud part xxx.
    0:00:30.6: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:00:30.9: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:00:30.9: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:00:31.6: Information: 0 : [ChunkIdSQLiteStorage:4] Cleaning up drives...
    0:00:31.6: Information: 0 : [Main] Enumerating disks...
    0:00:31.6: Information: 0 : [Disks] Updating disks / volumes...
    0:00:33.1: Information: 0 : [Main] Starting disk metadata...
    0:00:33.1: Information: 0 : [Main] Updating free space...
    0:00:33.1: Information: 0 : [Main] Service started.
    0:12:44.2: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:44.2: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:12:46.0: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:46.0: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxxx (security exception). Security error.
    0:13:14.4: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:18.9: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:18.9: Information: 0 : [Disks] Got Pack_Arrive (pack ID: yyy)...
    0:13:18.9: Information: 0 : [Disks] Got Disk_Arrive (disk ID: yyy)...
    0:13:19.9: Information: 0 : [Disks] Updating disks / volumes...
    0:13:20.4: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:20.9: Information: 0 : [Disks] Got Volume_Arrive (volume ID: xxxx, plex ID: 00000000-0000-0000-0000-000000000000, %: 0)...
    0:13:20.9: Information: 0 : [Disks] Got drive letter assign (volume ID: xxx)...
    0:13:22.5: Information: 0 : [Disks] Updating disks / volumes...
    0:13:24.9: Information: 0 : [Disks] Got Disk_Modify (Disk ID: xxx
    0:13:24.9: Information: 0 : [Disks] Got Disk_Modify (Disk ID: xxx and more...
  7. BTW, apologies if my earlier post sounded harsh - that was definitely not my intent, and I know it's a small team of two managing everything. To recollect my problems: everything was fine and dandy when I upgraded to the release version (0.870); the issue started to kick in between builds 870-880. I'm not entirely sure whether the break was server-led (I'm using Google Drive, BTW), but I can confirm that the issue was solved when I re-created the drive with the release version. My old drive was created way back in the early beta builds. And kudos on a great product: I've been an early beta user and it's amazing to see how polished this product has become. Looking forward to the product's deeper integration with DrivePool.
  8. Is the team working on this? I'm getting the same issue, which is pretty annoying: the software re-indexes the drive every time I boot up. I waited till it was fully indexed and rebooted, but it re-indexed all over again. I tried re-mounting the drive, to no avail. I've uploaded my troubleshooter logs. EDIT: I decided to re-create a new drive (given the comments on pre-release drives in the other thread), and the issue is resolved. So the issue might be related to drives created with pre-release versions.
  9. Unfortunately I've already removed the CloudDrive, but the prior logs should have the errors. I've uploaded the files. Thanks for following up!
  10. Unfortunately I tried this, and no go. This is a deal breaker, so I'll be removing the CloudDrive from the DrivePool and going back to manual backups - I hope future iterations of DP can fix this integration.
  11. Unfortunately no go. I tried both options, including the Automatic (delayed) service. The service does get delayed, but the pool still re-measures. After that I get a "duplication inconsistent" message and need to re-check all over.
  12. Hi there. First and foremost, congrats on the stable release of CloudDrive - a milestone indeed! I gave CloudDrive a re-spin, and it's much better integrated with DrivePool. I'm successfully running 3x duplication (one copy offline, one to the cloud via CloudDrive). However, I need to remeasure the pool upon every startup, because DrivePool kicks in before CloudDrive has started. I've tried the solutions below to no avail; my virtual disk dependency is correctly set for both CD and DP. http://community.covecube.com/index.php?/topic/1085-how-to-delay-drivepool-service-after-reboot/

    My observations on the startup sequence (see the startup-wait sketch after this list):
    1) The DrivePool gets recognized almost instantaneously upon startup, but without the added CloudDrive.
    2) CD starts and gets mounted, then the pool is updated immediately with the new capacity - but by then the pool is dirty and I need to re-measure it.

    EDIT: I also keep getting a "Duplication inconsistent" error, although I believe that's down to my placement rules more than anything else. Is there any way to restrict the CloudDrive to store only files with 3x duplication enabled? My backup priority would be for 2x duplicates to be fully offline, and 3x duplicates to have both offline and online copies.
  13. borez

    Unable to Mount Drive

    Yep, that fixes the issue!
  14. borez

    Unable to Mount Drive

    Same problem here. Downgrading to .821 didn't help, and there are no duplicate folders within Google Drive.
  15. Thanks again for your comments, as always. Specifically on the part below: I get what you mean and understand the logic. However, the issue was that it was taking forever to remove the CD disk, even when I initiated the "force detach" option. No idea what the bottleneck was (slow download speeds?). I was comfortable with this because the CloudDrive held duplicated data, and it would have been faster to re-duplicate the files from the offline storage pool. There might already be an option for this, but could DrivePool focus on evacuating non-duplicated files rather than the whole drive? That would be a lifesaver, particularly when you're trying to salvage data from a dying drive with limited read cycles left.
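
Read-modify-write cost sketch (referenced from post 2). Each failed write in that log covers a whole chunk of Length 0x01400500 (roughly 20 MiB), so a partial write plausibly has to pull down and re-upload the entire chunk just to change a small range. A back-of-the-envelope sketch in Python; the 64 KiB "modified bytes" figure is a made-up example, and the assumption that the full chunk is transferred in both directions is inferred from the log, not confirmed from CloudDrive's internals:

# Rough cost of a read-modify-write against a whole cloud chunk.
# Assumption: changing any byte range means downloading the full chunk,
# patching it locally, and re-uploading the full chunk.

CHUNK_BYTES = 0x01400500      # chunk length seen in the log (~20 MiB)
MODIFIED_BYTES = 64 * 1024    # hypothetical size of the actual change

total_transfer = CHUNK_BYTES * 2          # one download + one upload
amplification = total_transfer / MODIFIED_BYTES

print(f"chunk size         : {CHUNK_BYTES / 2**20:.1f} MiB")
print(f"bytes changed      : {MODIFIED_BYTES / 2**10:.0f} KiB")
print(f"bytes transferred  : {total_transfer / 2**20:.1f} MiB")
print(f"write amplification: ~{amplification:.0f}x")

Under that assumption, every retry of a failed chunk repeats both transfers plus the associated API calls, which would be consistent with the low effective throughput on partial writes and the rate-limit errors once several threads retry at once.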
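Backoff sketch (referenced from post 5). The "Server is throttling us, waiting 1,517ms and retrying" lines show the standard response to Google Drive's userRateLimitExceeded errors: back off and retry. A minimal sketch of that pattern in Python, purely to illustrate the behaviour the log describes (it is not CloudDrive's actual code); RateLimited and do_request are hypothetical stand-ins for the provider call:

import random
import time

class RateLimited(Exception):
    """Hypothetical marker for a userRateLimitExceeded / ServiceUnavailable response."""

def call_with_backoff(do_request, max_attempts=8, base_delay=1.0, max_delay=64.0):
    """Retry a provider call with exponential backoff plus jitter.

    do_request is a zero-argument callable that performs one upload or
    download attempt and raises RateLimited when the server throttles us.
    """
    for attempt in range(max_attempts):
        try:
            return do_request()
        except RateLimited:
            if attempt == max_attempts - 1:
                raise
            # Sleep 2^attempt seconds (capped), plus up to 1 s of random
            # jitter so parallel threads don't all retry at the same instant.
            delay = min(base_delay * (2 ** attempt), max_delay) + random.random()
            time.sleep(delay)

# Example (hypothetical provider call):
#   call_with_backoff(lambda: upload_chunk(chunk_id, data))

The jitter matters when many threads are active: without it, a burst of simultaneous retries can trip the rate limit again the moment the wait expires.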
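Startup-wait sketch (referenced from post 12). One user-side workaround for the ordering problem is to leave the DrivePool service set to Manual and only start it once the CloudDrive volume is actually visible, so the pool is never measured without its cloud member. A minimal sketch of that idea as a Python script run from a scheduled task at boot; the mount point X:\ and the service name "StableBit DrivePool Service" are assumptions to adjust for your machine, and this is a workaround sketch, not an official Covecube recommendation:

import os
import subprocess
import sys
import time

CLOUD_DRIVE_PATH = "X:\\"                          # assumed CloudDrive mount point
DRIVEPOOL_SERVICE = "StableBit DrivePool Service"  # assumed service display name
TIMEOUT_SECONDS = 600

def wait_for_volume(path, timeout):
    """Poll until the CloudDrive volume is visible to the OS, or give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(5)
    return False

if __name__ == "__main__":
    if wait_for_volume(CLOUD_DRIVE_PATH, TIMEOUT_SECONDS):
        # Start DrivePool only after the cloud disk is present, so the pool
        # is never brought up (and measured) without its CloudDrive member.
        subprocess.run(["net", "start", DRIVEPOOL_SERVICE], check=False)
    else:
        sys.exit("CloudDrive volume never appeared; leaving DrivePool stopped.")

If the volume never shows up within the timeout, the script exits without starting DrivePool, which avoids remeasuring a pool that is missing its CloudDrive disk.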