Everything posted by borez

  1. WebDAV support

    +1 for WebDAV support, especially with provider support dwindling after Google Drive. I've tried a Hetzner Storage Box via FTP with CloudDrive, but I get rate-limited by the provider very quickly due to the constant opening and closing of FTP connections.
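     For illustration, a minimal Python sketch of the connection reuse I'm missing (host, user and password below are placeholders, not my real box):

         from ftplib import FTP

         # One persistent control connection reused for every transfer, instead
         # of reconnecting per file -- the reconnect churn is what seems to trip
         # the provider-side rate limiting.
         ftp = FTP('uXXXXX.your-storagebox.de')   # placeholder Hetzner host
         ftp.login('uXXXXX', 'secret')            # placeholder credentials
         for name in ('a.bin', 'b.bin'):
             with open(name, 'rb') as f:
                 ftp.storbinary(f'STOR {name}', f)
         ftp.quit()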
  2. Hi all, I've got a problem with slow download speeds on Google Drive. The issue arose when my hybrid pool in DrivePool was experiencing tremendously slow read activity, which I traced to the cloud drive's slow read speed. Detaching the cloud drive from the hybrid pool resolved the problem.

     On CloudDrive, I can hit close to 400mbps upload, but download speeds are slow at < 50mbps (in most cases 1-2 threads @ 10mbps), despite increasing download threads and tweaking prefetch settings. This issue persists across multiple accounts and across alternative download methods (either via the hybrid pool drive or the mounted cloud drive in Explorer). It's also not an issue with my ISP: downloading a file directly via the web browser or the official Google Drive client was fast.

     From what I see, CD is still pinning data - not sure if this is the reason? I also realised my cache size was set to 1GB (Expandable); I've increased that to 30GB and am monitoring.

     Any suggestions on improving read speeds for the cloud drive?

     StableBit CloudDrive 1.2.0.1485 BETA
     StableBit DrivePool 2.3.0.1234 BETA
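     In case it helps anyone reproduce the numbers, this is roughly how I measure raw sequential read throughput off the mounted drive (the path is a placeholder for any large file on the cloud drive):

         import time

         CHUNK = 1024 * 1024                 # 1 MiB sequential reads
         path = r'G:\test\large_file.bin'    # placeholder: big file on the cloud drive

         start = time.monotonic()
         total = 0
         with open(path, 'rb') as f:
             while True:
                 data = f.read(CHUNK)
                 if not data:
                     break
                 total += len(data)
         elapsed = time.monotonic() - start
         print(f'{total * 8 / elapsed / 1e6:.0f} mbps')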
  3. Apologies for hijacking this thread, but I've got a similar query. I have a hybrid pool consisting of 1) an online Google Drive and 2) an offline drive pool (4x HDDs). The hybrid pool has become very slow to check/measure, dragged down by the online drive. CloudDrive shows 1x download thread at 2-3mbps, and it is trying to pin data again despite data pinning having been enabled earlier. This happened after the NAS experienced an improper shutdown. @Christopher (Drashna) Is there any way to accelerate data pinning (or force multi-threaded downloads)? Thank you!
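     As a stopgap, this is the kind of thing I mean by forcing parallel downloads - a rough workaround sketch, not a CloudDrive feature: read everything back through the mount with a thread pool so the chunks get fetched into the local cache (mount point and thread count are placeholders):

         import os
         from concurrent.futures import ThreadPoolExecutor

         ROOT = 'G:\\'                     # placeholder: cloud drive mount point

         def read_back(path, block=1024 * 1024):
             # Sequentially read the file so its chunks land in the local cache.
             with open(path, 'rb') as f:
                 while f.read(block):
                     pass

         paths = [os.path.join(dp, name)
                  for dp, _, names in os.walk(ROOT) for name in names]
         with ThreadPoolExecutor(max_workers=8) as pool:
             list(pool.map(read_back, paths))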
  4. Hi, +1 to this request. I use CloudDrive as part of an offline+online pool in DrivePool, and it's a challenge to coordinate backups while navigating the 750GB upload cap. It would be great to have improved scheduling communication between DrivePool and CloudDrive, i.e. for DrivePool to scale back when cache drives are full or uploads are rate-limited, and to increase priority when there's sufficient bandwidth and no backlogged writes (see the sketch below).
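     Conceptually, something like this hedged sketch is all I'm asking for - the producer only hands out more copy work while the cache drive has headroom (the drive letter and threshold are made up):

         import shutil
         import time

         CACHE_DRIVE = 'D:\\'             # placeholder: CloudDrive cache drive
         MIN_FREE = 50 * 1000**3          # pause when less than ~50 GB free

         def cache_has_room():
             return shutil.disk_usage(CACHE_DRIVE).free > MIN_FREE

         def copy_with_backpressure(jobs, copy_one):
             # Throttle the producer so CloudDrive can drain its upload queue.
             for job in jobs:
                 while not cache_has_room():
                     time.sleep(60)       # let CloudDrive flush uploads
                 copy_one(job)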
  5. BTW, is the 750GB/day cap enforced on a per-user-account basis, or across the whole user group?
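     For context, the cap works out to a surprisingly modest sustained rate:

         # Back-of-envelope: the 750GB/day cap as a sustained upload rate.
         cap_bytes = 750 * 1000**3        # 750 GB per day
         per_second = cap_bytes / 86400   # seconds in a day
         print(f'{per_second / 1e6:.1f} MB/s = {per_second * 8 / 1e6:.0f} mbps')
         # -> about 8.7 MB/s, i.e. roughly 69 mbps sustained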
  6. To recap my issues: I have been hitting the API call limits even though I've not hit the proverbial 750GB cap. To provide some context: I'm trying to create hierarchical pools in DrivePool (backing up my local drivepool to the cloud). I manually replicated my data in the cloud using GoodSync and relocated the files into the PoolPart folder, then created a combined pool and ran a duplication check. This triggered a series of reads and partial writes (and re-uploads) on CD.

     Based on the technical details, I see that partial re-writes are extremely slow (< 130kbps). I'm not sure if it's the same chunk that's being written (and re-written) again; see below for the screenshot and error logs. Once a partial write clears, upload speeds are back up to 100-200mbps - still not as fast as what I used to get, but I'll leave that for another day.

     So apart from the bandwidth caps, it seems there are also caps on the number of calls you can make for a specific file. It's my gut intuition, but I'd appreciate some thoughts. Pulling my hair out here!

     2:09:05.8: Warning: 0 : [ReadModifyWriteRecoveryImplementation:96] [W] Failed write (Chunk:34168, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [TemporaryWritesImplementation:96] Error performing read-modify-write, marking as failed (Chunk=34168, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:4] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:27] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:104] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [IoManager:4] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [IoManager:27] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     2:09:06.0: Warning: 0 : [IoManager:104] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
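     Judging by the log, CD already retries these itself; for anyone scripting around the same connection resets, the usual pattern is exponential backoff with jitter - a minimal sketch (attempt counts and delays are arbitrary):

         import random
         import time

         def with_retries(op, attempts=5, base=1.0):
             # Retry a transient "connection forcibly closed" failure with
             # exponentially growing, jittered waits between attempts.
             for i in range(attempts):
                 try:
                     return op()
                 except ConnectionResetError:
                     if i == attempts - 1:
                         raise
                     time.sleep(base * 2**i + random.uniform(0, 0.5))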
  7. That was what I was seeing in the error logs. The errors kicked in after around 20-30 mins of operation, so I'm definitely way below the bandwidth caps. A couple of observations: the issues arose when CD was working on partial re-read/write chunk operations. In those instances the GUI showed I was running way more threads than I stipulated in the settings - for example, I was downloading with 6-7 threads where my settings indicated 4. The errors seemed to stop when I was doing full writes to CD.
  8. BTW, I'm hitting the errors upon startup, so it seems to be independent of the upload caps. And I've only managed to upload at speeds < 200mbps. So something has changed with Google, but without a clear trend.
  9. I've been getting lots of throttling/server-side disconnect messages from Google Drive recently; see below. I've been using the same settings for some time, so it seems there have been some changes to their throttling mechanism? If this helps, some comments:

     1) I had an unsafe shutdown, and as a result I need to re-upload 18GB of data. From the tech page, it seems I'm re-downloading chunks for partial re-writes. That seems to be fairly intensive vs. a straight upload?
     2) The errors started after I upgraded to the .900/.901 builds.
     3) I typically run 6 threads, with a max upload speed of 400mbps. Scaling this down to 3/4 threads doesn't help.

     0:10:29.5: Warning: 0 : [IoManager:32] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.5: Warning: 0 : [ReadModifyWriteRecoveryImplementation:62] [W] Failed write (Chunk:564, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [TemporaryWritesImplementation:62] Error performing read-modify-write, marking as failed (Chunk=564, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:61] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [IoManager:62] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:15:35.9: Warning: 0 : [IoManager:61] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.3: Warning: 0 : [ReadModifyWriteRecoveryImplementation:95] [W] Failed write (Chunk:2743, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [TemporaryWritesImplementation:95] Error performing read-modify-write, marking as failed (Chunk=2743, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [IoManager:95] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:51.6: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
     0:16:56.1: Warning: 0 : [ApiGoogleDrive:102] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:56.1: Warning: 0 : [ApiHttp:102] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:56.1: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,517ms and retrying.
     0:16:56.2: Warning: 0 : [ApiGoogleDrive:79] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:56.2: Warning: 0 : [ApiHttp:79] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:56.2: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,873ms and retrying.
     0:16:56.3: Warning: 0 : [ApiGoogleDrive:100] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:56.3: Warning: 0 : [ApiHttp:100] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,561ms and retrying.
     0:16:56.3: Warning: 0 : [ApiGoogleDrive:84] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:56.3: Warning: 0 : [ApiHttp:84] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,664ms and retrying.
     0:16:57.3: Warning: 0 : [ApiGoogleDrive:96] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
     0:16:57.3: Warning: 0 : [ApiHttp:96] HTTP protocol exception (Code=ServiceUnavailable).
     0:16:57.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,546ms and retrying.
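     To see how often I'm actually being throttled, I bucket the userRateLimitExceeded lines per minute with a quick script (the log file name is a placeholder for wherever your service log lives):

         import re
         from collections import Counter

         throttled = re.compile(r'^(\d+:\d+):\d+\.\d+: .*userRateLimitExceeded')

         counts = Counter()
         with open('CloudDrive.Service.log') as log:   # placeholder file name
             for line in log:
                 m = throttled.match(line)
                 if m:
                     counts[m.group(1)] += 1           # bucket per h:mm
         for minute, n in sorted(counts.items()):
             print(minute, n)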
  10. Drive mount Error

     Sadly, I'm back with more problems. I've updated to the latest beta, and I need to reauthorize the drive every time it starts. Logs show a "Cannot start I/O manager for cloud part xxx (security exception). Security error." problem. When I reauthorize (and if I'm lucky), CD comes back online without re-indexing; if I'm not, it re-indexes everything all over again. Any ideas?

     0:00:30.3: Information: 0 : [CloudDrives] Synchronizing cloud drives...
     0:00:30.4: Information: 0 : [Main] Cleaning up cloud drives...
     0:00:30.5: Information: 0 : [CloudDrives] Valid encryption key specified for cloud part xxx.
     0:00:30.6: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
     0:00:30.9: Information: 0 : [CloudDrives] Synchronizing cloud drives...
     0:00:30.9: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
     0:00:31.6: Information: 0 : [ChunkIdSQLiteStorage:4] Cleaning up drives...
     0:00:31.6: Information: 0 : [Main] Enumerating disks...
     0:00:31.6: Information: 0 : [Disks] Updating disks / volumes...
     0:00:33.1: Information: 0 : [Main] Starting disk metadata...
     0:00:33.1: Information: 0 : [Main] Updating free space...
     0:00:33.1: Information: 0 : [Main] Service started.
     0:12:44.2: Information: 0 : [CloudDrives] Synchronizing cloud drives...
     0:12:44.2: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
     0:12:46.0: Information: 0 : [CloudDrives] Synchronizing cloud drives...
     0:12:46.0: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxxx (security exception). Security error.
     0:13:14.4: Information: 0 : [CloudDrives] Synchronizing cloud drives...
     0:13:18.9: Information: 0 : [CloudDrives] Synchronizing cloud drives...
     0:13:18.9: Information: 0 : [Disks] Got Pack_Arrive (pack ID: yyy)...
     0:13:18.9: Information: 0 : [Disks] Got Disk_Arrive (disk ID: yyy)...
     0:13:19.9: Information: 0 : [Disks] Updating disks / volumes...
     0:13:20.4: Information: 0 : [CloudDrives] Synchronizing cloud drives...
     0:13:20.9: Information: 0 : [Disks] Got Volume_Arrive (volume ID: xxxx, plex ID: 00000000-0000-0000-0000-000000000000, %: 0)...
     0:13:20.9: Information: 0 : [Disks] Got drive letter assign (volume ID: xxx)...
     0:13:22.5: Information: 0 : [Disks] Updating disks / volumes...
     0:13:24.9: Information: 0 : [Disks] Got Disk_Modify (Disk ID: xxx
     0:13:24.9: Information: 0 : [Disks] Got Disk_Modify (Disk ID: xxx and more...
  11. BTW, apologies if my earlier post sounded harsh - that was definitely not my intent, and I know it's a small team of 2 managing everything. To recollect my problems: everything was fine and dandy when I upgraded to the release version (0.870); the issue started to kick in between builds 870-880. I'm not entirely sure if the break was server-led (I'm using Google Drive, BTW), but I can confirm that the issue was solved when I re-created the drive with the release version. My old drive was created way back in the early beta builds. And kudos on a great product: I've been an early beta user, and it's amazing to see how polished this product has become. Looking forward to the product's deeper integration with DrivePool.
  12. Is the team working on this? I'm getting the same issue, which is pretty annoying: the software re-indexes the drive every time I boot up. I waited till it was fully indexed and rebooted - but it re-indexed all over again. Tried re-mounting the drive, but to no avail. I've uploaded my troubleshooter logs. EDIT: I decided to re-create a new drive (given the comments on pre-release drives in the other thread), and the issue's resolved. So the issue might be related to drives created with pre-release versions.
  13. Unfortunately I've removed the CloudDrive, but the prior logs should have the errors. I've uploaded the files. Thanks for following up!
  14. Unfortunately I tried this, and no go. This is a deal-breaker, and I'll be removing the CloudDrive from the drivepool. I'm going back to manual backups - hope future iterations of DP can fix this integration.
  15. Unfortunately, no go. Tried both options, including the Automatic (Delayed Start) service setting. The service does get delayed, but it still re-measures. After that I get a duplication-inconsistent message and need to re-check all over.
  16. Hi there, First and foremost, congrats on the stable release of CloudDrive - a milestone indeed! I gave CloudDrive a re-spin, and it's much better integrated with DrivePool. I'm successfully running 3x duplication (one offline, one to the cloud via CloudDrive). However, I need to re-measure the pool upon every startup - DrivePool kicks in before CloudDrive is started. I've tried these solutions, but to no avail; my virtual disk dependency is correctly set for both CD and DP (see the sketch below). http://community.covecube.com/index.php?/topic/1085-how-to-delay-drivepool-service-after-reboot/

     My observations on the startup sequence: 1) The drivepool gets recognized almost instantaneously upon startup, but without the added cloud drive. 2) CD starts and gets mounted - then the drivepool gets updated immediately with the new capacity. But by then the pool is dirty and I need to re-measure it.

     EDIT: I also keep getting a "Duplication inconsistent" error, although I believe that's more my placement rules than anything else. Is there any way to restrict the cloud drive to store only files with 3x duplication enabled? My backup priority would be for 2x dups to be fully offline, and 3x dups to have both offline and online copies.
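     For reference, what I tried (per the linked thread) amounts to making the DrivePool service depend on the CloudDrive service, e.g. via Python from an elevated prompt. The service names below are my guesses, not confirmed - check yours with `sc query` first:

         import subprocess

         # One-time setup (run elevated): tell Windows to start the DrivePool
         # service only after the CloudDrive service is up. Service names are
         # assumptions -- verify with `sc query` before running.
         subprocess.run(
             ['sc', 'config', 'DrivePoolService', 'depend=', 'CloudDriveService'],
             check=True,
         )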
  17. Yep, that fixes the issue!
  18. Same problem here as well. Downgrading to .821 didn't help. There are no duplicate folders within Google Drive.
  19. Thanks again for your comments, as always. Specifically on the part below: I get what you mean, and I understand the logic. However, the issue was that it was taking forever to remove the CD disk, even when I initiated the "force detach" option. No idea what the bottleneck was (slow download speeds?). I was comfortable with this as the cloud drive held duplicated data, and it would have been faster to re-duplicate the files from the offline storage pool. There might already be an option for this, but could DrivePool focus on evacuating non-duplicated files rather than the whole drive? That could be a lifesaver, particularly when you're trying to salvage data from a dying drive (with limited read cycles left).
  20. Thanks, very interesting. I tried CD separately by manually copying my files, and it worked perfectly. Apart from duplication grouping, I think what needs work is the integration with DrivePool, particularly read/write flow control:

     1) In my previous test (where DrivePool was managing the duplication), the cache drive (a 256GB SSD) filled up and became terribly slow. As a result, the whole dup process (~600GB) took more than one full day. In today's test (GoodSync x CD), I had full manual control of the copying process: I would flood the cache with 60-70GB of uploads, pause the copying (to allow CD to flush those uploads to the cloud), and repeat. This whole process took me no more than 6 hours. Perhaps DP could have better control of the copying process (see the sketch after this list).

     2) In my previous experience, DP would always have difficulty pulling the CD out of the system, and I always had to force a "missing disk" pull (e.g. by killing the CD service, triggering a missing disk in DP, and removing it). However, at the next reboot, DP would remember the cloud drive and just put it back. Strange.

     Looking forward to future builds though!
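     The manual rhythm I described, as a hedged Python sketch - the batch size is a guess, and drained() is a stand-in for me watching the "To upload" figure in the CD UI:

         import time

         BATCH = 65 * 1000**3     # flood ~65 GB of writes, then let CD flush

         def batched_copy(files, copy_one, size_of, drained):
             # Copy in bursts: fill the cache with about one batch of writes,
             # then wait for the upload backlog to drain before the next burst.
             pending = 0
             for f in files:
                 copy_one(f)
                 pending += size_of(f)
                 if pending >= BATCH:
                     while not drained():   # e.g. poll CloudDrive's upload queue
                         time.sleep(300)
                     pending = 0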
  21. Thanks for this. So are there any best practices for integrating CloudDrive with DrivePool? Specifically on the following: 1) CD for only duplicated data (Drive Usage Limiter) 2) Specific folders to have a duplicated copy on CD - set folder rules for 3x duplication (2x offline, 1x online?) 3) All other folders to have 2x duplication offline. Thanks again!
  22. Hi all, I need some tips on cloud backups. I'm running a drivepool with 2x duplication, and would like to create a secondary cloud backup for critical files. These files are in a specific folder, where I've turned on 3x duplication (i.e. 2x local, 1x cloud). Specifically, the cloud drive should only store files with 3x duplication. I've tried fiddling with the settings, but DrivePool keeps wanting to move files with 2x duplication over to the cloud - or rather, that's what I think, because it's proposing to move 1TB of data from my HDD pool to the cloud (my file sizes total below that).

     My approach: 1) Limit the cloud drive to only duplicated files (Balancing section). 2) In the Placement Rules section, set the root folders to not use the cloud drive, but create a separate rule that sets the specific sub-folder to use the cloud drive; the sub-folder rules are ordered above the root folder rules. 3) Turn on 3x duplication for the specific folders.

     Thanks again, and happy holidays!

     EDIT: I went ahead with the duplication, and it worked after all. What I realised was that the re-balancing wasn't correct; I get the same issue as below. http://community.covecube.com/index.php?/topic/2061-drivepool-not-balancing-correctly/

     CloudDrive: 1.0.0.784 BETA
     DrivePool: 2.2.0.737 BETA

     Furthermore, when I click the "increase priority" arrow in DrivePool, the duplication process speeds up but CloudDrive's uploads dramatically slow down. Any idea why?
  23. Interestingly, I now get error messages when I shut down the system. Apparently CloudDrive now gets terminated earlier than DrivePool, and I'm getting the drive-dropout emails upon shutdown. BTW, is there any way to relink the drive on a freshly formatted system? I plan to reformat my server in a couple of weeks' time.
  24. Hi there, I'm a DrivePool user, and am testing CloudDrive as a duplicating backup option to the cloud. The integration works perfectly with DrivePool, but I've got issues when rebooting/restarting the server: 1) Sometimes the shutdown takes a long time to complete (as if something's still running). 2) Whenever I boot the server up, DrivePool warns about the missing cloud drive (for a short while) before certifying that all drives are back online - I know this because of the email notifications. As a result, I often get duplication inconsistencies in my existing drivepool and have to rescan every time I restart. This issue goes away when I remove the cloud drive from the pool. I've installed the latest betas of both, but to no avail. Thanks!