
borez

Members
  • Posts

    23
  • Joined

  • Last visited


  1. Hi all, Got a problem with slow download speeds on Google Drive. The issue arose when my hybrid pool in DrivePool started showing tremendously slow read activity, which I traced to the cloud drive's slow read speed; detaching the cloud drive from the hybrid pool resolved the problem. On CloudDrive I can hit close to 400mbps upload, but download speeds are slow at < 50mbps (in most cases 1-2 threads @ 10mbps), despite increasing download threads and tweaking prefetch settings. The issue persists across multiple accounts and across alternative read paths (via the hybrid pool drive, or via the mounted cloud drive in Explorer). It's also not my ISP: downloading a file directly via the web browser and via the official Google Drive client was fast. From what I see, CD is still pinning data - not sure if this is the reason? I also realised my cache size was set to 1GB (Expandable); I've increased that to 30GB and am monitoring. Any suggestions on improving read speeds for the cloud drive? StableBit CloudDrive 1.20.1485 BETA, StableBit DrivePool 2.30.1234 BETA
  2. Apologies for hijacking this thread, but I've got a similar query. I have a hybrid pool consisting of 1) an online Google Drive and 2) an offline drive pool (4x HDDs). The hybrid pool has become very slow to check/measure, dragged down by the online drive. CloudDrive shows 1x download thread at 2-3mbps, and it is trying to pin data again despite data pinning having been enabled earlier. This happened after the NAS experienced an improper shutdown. @Christopher (Drashna) Is there any way to accelerate data pinning (or force multi-threaded downloads)? Thank you!
  3. Hi, +1 to this request. I use CloudDrive as part of an offline+online pool in DrivePool, and it's a challenge to coordinate backups while navigating the 750GB upload cap. It would be great if there were improved scheduling communication between DrivePool and CloudDrive, i.e. for DrivePool to scale back when cache drives are full or uploads are rate-limited, and to increase priority when there's sufficient bandwidth and no backlogged writes.
  4. BTW, is the 750GB/day cap enforced on a per-user-account basis, or across the whole user group?
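The kind of pacing asked for above can be sketched as a rolling upload budget. This is purely illustrative - the class, the 24-hour rolling window, and treating 750GB as a per-account daily cap are my assumptions, not documented CloudDrive or Google Drive API behaviour:

```python
import time

DAILY_CAP_BYTES = 750 * 1024**3  # assumed per-account daily upload cap

class UploadBudget:
    """Track bytes uploaded in a rolling 24h window and say when to pause.

    Hypothetical sketch of the scheduling logic requested above --
    CloudDrive does its own scheduling internally.
    """

    def __init__(self, cap=DAILY_CAP_BYTES):
        self.cap = cap
        self.events = []  # (timestamp, bytes) pairs

    def record(self, nbytes, now=None):
        """Log a completed upload of `nbytes`."""
        now = time.time() if now is None else now
        self.events.append((now, nbytes))

    def used(self, now=None):
        """Bytes uploaded in the last 24 hours (prunes older events)."""
        now = time.time() if now is None else now
        cutoff = now - 24 * 3600
        self.events = [(t, b) for t, b in self.events if t >= cutoff]
        return sum(b for _, b in self.events)

    def can_upload(self, nbytes, now=None):
        """True if another `nbytes` upload would stay under the cap."""
        return self.used(now) + nbytes <= self.cap
```

A scheduler built on this would simply defer writes while `can_upload()` is False and resume once old events age out of the window.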
  5. To recap on my issues: I have been hitting the API call limits even though I've not hit the proverbial 750GB cap. To provide some context, I'm trying to create hierarchical pools in DrivePool (backing up my local DrivePool to the cloud). I manually replicated my data in the cloud using GoodSync, and relocated the files into the PoolPart folder. I then created a combined pool and did a duplication check, which triggered a series of reads and partial writes (and re-uploads) on CD. Based on the technical details, I see the following: partial re-writes are extremely slow (< 130kbps), and I'm not sure if it's the same chunk being written (and re-written) again - see below for the screenshot and error logs. Once that partial write clears, upload speeds are back up to 100-200mbps. Still not as fast as what I used to get, but I'll leave that for another day. So apart from the bandwidth caps, it seems there are caps on the number of calls you can make for a specific file. It's my gut intuition, but I'd appreciate some thoughts. Pulling my hair out here!
     2:09:05.8: Warning: 0 : [ReadModifyWriteRecoveryImplementation:96] [W] Failed write (Chunk:34168, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [TemporaryWritesImplementation:96] Error performing read-modify-write, marking as failed (Chunk=34168, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:4] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:27] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:104] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [IoManager:4] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [IoManager:27] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [IoManager:104] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
  6. That was what I was seeing in the error logs. The errors kicked in after around 20-30 minutes of operation, so I'm definitely way below the bandwidth caps. A couple of observations: the issues arose when CD was working on partial re-read/write chunk operations. In that instance the GUI showed I was running far more threads than I had stipulated in the settings - for example, I was downloading with 6-7 threads where my settings indicated 4. The errors seemed to stop when I was doing full writes to CD.
  7. BTW, I'm hitting the errors right upon startup, so it seems to be independent of the upload caps. And I've only managed to upload at speeds < 200mbps. So something has changed with Google, but without a clear trend.
  8. I have been getting lots of throttling/server-side disconnect messages from Google Drive recently; see below. I've been using the same settings for some time, so it seems something has changed in their throttling mechanism? If this helps, some comments: 1) I had an unsafe shutdown, and as a result I need to re-upload 18GB of data. From the tech page, it seems I'm re-downloading chunks for partial re-writes, which appears to be fairly intensive compared with a straight upload. 2) The errors have started since I upgraded to the .900/901 builds. 3) I typically run 6 threads, with a max upload speed of 400mbps; scaling this down to 3/4 threads doesn't help.
     0:10:29.5: Warning: 0 : [IoManager:32] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.5: Warning: 0 : [ReadModifyWriteRecoveryImplementation:62] [W] Failed write (Chunk:564, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [TemporaryWritesImplementation:62] Error performing read-modify-write, marking as failed (Chunk=564, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:61] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [IoManager:62] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [IoManager:61] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.3: Warning: 0 : [ReadModifyWriteRecoveryImplementation:95] [W] Failed write (Chunk:2743, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [TemporaryWritesImplementation:95] Error performing read-modify-write, marking as failed (Chunk=2743, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [IoManager:95] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:56.1: Warning: 0 : [ApiGoogleDrive:102] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:56.1: Warning: 0 : [ApiHttp:102] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:56.1: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,517ms and retrying.
     0:16:56.2: Warning: 0 : [ApiGoogleDrive:79] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:56.2: Warning: 0 : [ApiHttp:79] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:56.2: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,873ms and retrying.
     0:16:56.3: Warning: 0 : [ApiGoogleDrive:100] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:56.3: Warning: 0 : [ApiHttp:100] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,561ms and retrying.
     0:16:56.3: Warning: 0 : [ApiGoogleDrive:84] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:56.3: Warning: 0 : [ApiHttp:84] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,664ms and retrying.
     0:16:57.3: Warning: 0 : [ApiGoogleDrive:96] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:57.3: Warning: 0 : [ApiHttp:96] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:57.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,546ms and retrying.
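The "Server is throttling us, waiting N ms and retrying" lines show the client already backing off on userRateLimitExceeded. For anyone scripting their own Google Drive transfers around the same limits, the usual remedy is exponential backoff with jitter. A minimal sketch - the helper and the `RateLimitError` class are hypothetical, not CloudDrive internals:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a userRateLimitExceeded / 503 throttling response."""

def with_backoff(call, max_retries=6, base=1.0, cap=64.0):
    """Retry `call`, sleeping base * 2**attempt plus jitter on throttling.

    `call` is any zero-argument function that raises RateLimitError
    when the server throttles, mirroring the waits in the log above.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            # Exponential delay, capped, plus jitter to de-synchronize threads.
            delay = min(cap, base * 2 ** attempt) + random.uniform(0, base)
            time.sleep(delay)
    return call()  # final attempt: let any error propagate to the caller
```

The jitter matters when several upload threads hit the limit at once, so they don't all retry in lockstep and get throttled again together.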
  9. borez

    Drive mount Error

    Sadly, I'm back with more problems. I've updated to the latest beta, and I need to reauthorize the drive every time it starts. The logs show a "Cannot start I/O manager for cloud part xxx (security exception). Security error." problem. When I reauthorize (and if I'm lucky), CD comes back online without re-indexing; if I'm not, it re-indexes everything all over again. Any ideas?
    0:00:30.3: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:00:30.4: Information: 0 : [Main] Cleaning up cloud drives...
    0:00:30.5: Information: 0 : [CloudDrives] Valid encryption key specified for cloud part xxx.
    0:00:30.6: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:00:30.9: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:00:30.9: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:00:31.6: Information: 0 : [ChunkIdSQLiteStorage:4] Cleaning up drives...
    0:00:31.6: Information: 0 : [Main] Enumerating disks...
    0:00:31.6: Information: 0 : [Disks] Updating disks / volumes...
    0:00:33.1: Information: 0 : [Main] Starting disk metadata...
    0:00:33.1: Information: 0 : [Main] Updating free space...
    0:00:33.1: Information: 0 : [Main] Service started.
    0:12:44.2: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:44.2: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:12:46.0: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:46.0: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxxx (security exception). Security error.
    0:13:14.4: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:18.9: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:18.9: Information: 0 : [Disks] Got Pack_Arrive (pack ID: yyy)...
    0:13:18.9: Information: 0 : [Disks] Got Disk_Arrive (disk ID: yyy)...
    0:13:19.9: Information: 0 : [Disks] Updating disks / volumes...
    0:13:20.4: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:20.9: Information: 0 : [Disks] Got Volume_Arrive (volume ID: xxxx, plex ID: 00000000-0000-0000-0000-000000000000, %: 0)...
    0:13:20.9: Information: 0 : [Disks] Got drive letter assign (volume ID: xxx)...
    0:13:22.5: Information: 0 : [Disks] Updating disks / volumes...
    0:13:24.9: Information: 0 : [Disks] Got Disk_Modify (Disk ID: xxx
    0:13:24.9: Information: 0 : [Disks] Got Disk_Modify (Disk ID: xxx and more...
  10. BTW, apologies if my earlier post sounded harsh - that was definitely not my intent, and I know it's a small team of two managing everything. To recollect my problems: it was all fine and dandy when I upgraded to the release version (0.870). The issue started to kick in between builds 870-880. I'm not entirely sure if the break was server-led (I'm using Google Drive, BTW), but I can confirm that the issue is solved when I re-created the drive with the release version; my old drive was created way back in the early beta builds. And kudos on a great product: I've been an early beta user, and it's amazing to see how polished this product has become. Looking forward to the product's deeper integration with DrivePool.
  11. Is the team working on this? I'm getting the same issue, which is pretty annoying: the software re-indexes the drive every time I boot up. I waited till it was fully indexed and rebooted - but it re-indexed all over again. Tried re-mounting the drive, but to no avail. I've uploaded my troubleshooter logs though. EDIT: I decided to re-create a new drive (given the comments on pre-release drives in the other thread), and the issue's resolved. So the issue might be related to drives created with pre-release versions.
  12. Unfortunately I've removed the clouddrive, but prior logs should have the errors. Have uploaded the files. Thanks for following up!
  13. Unfortunately tried this, and no go. This is a deal breaker, and I'll be removing the CloudDrive from the DrivePool. I'm going back to manual backups - I hope future iterations of DP can fix this integration.
  14. Unfortunately no go. Tried both options, including the Automatic (Delayed Start) service. The service does get delayed, but it still re-measures. After that I get a "duplication inconsistent" message and need to re-check all over.
  15. Hi there, First and foremost, congrats on the stable release of CloudDrive. A milestone indeed! I gave CloudDrive a re-spin, and it's much better integrated with DrivePool. I'm successfully running 3x duplication (one copy offline, one to the cloud via CloudDrive). However, I need to re-measure the pool upon every startup - DrivePool kicks in before CloudDrive is started. I've tried these solutions to no avail; my virtual disk dependency is correctly set for both CD and DP. http://community.covecube.com/index.php?/topic/1085-how-to-delay-drivepool-service-after-reboot/ My observations on the startup sequence: 1) The drive pool gets recognized almost instantaneously upon startup, but without the added cloud drive. 2) CD starts and gets mounted - then the drive pool is updated immediately with the new capacity, but by then the pool is dirty and I need to re-measure it. EDIT: I also keep getting a "Duplication inconsistent" error, although I believe that's more my placement rules than anything else. Is there any way to restrict the cloud drive to store only files with 3x duplication enabled? My backup priority would be for 2x dups to be fully offline, and 3x dups to have copies both offline and online.
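The startup race in observations 1)-2) can at least be worked around in a script: refuse to start dependent work (a backup job, a measure pass) until the cloud drive's mount point actually exists. A minimal sketch - the path, timings, and the whole gating idea are my own workaround, not a DrivePool or CloudDrive feature:

```python
import os
import time

def wait_for_mount(path, timeout=120.0, poll=2.0):
    """Block until `path` exists, or give up after `timeout` seconds.

    Intended use: `path` is the cloud drive's mount point (a drive
    letter or folder). Dependent jobs call this first, so they never
    race ahead of CloudDrive the way DrivePool does at boot. Returns
    True if the path appeared in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll)  # cheap polling; mount events need no watching API
    return False
```

A scheduled-task wrapper would call `wait_for_mount(r"X:\\")` (hypothetical drive letter) and skip the job on False rather than run against an incomplete pool.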