Posts posted by borez

  1. +1 for WebDAV support, especially with provider support dwindling after Google Drive.

    I've tried Hetzner Storage Box via FTP with CloudDrive, but I get rate-limited by the provider very quickly due to the opening and closing of multiple FTP connections.

  2. Hi all,

    I've got a problem with slow download speeds on Google Drive. The issue surfaced when my hybrid pool in DrivePool started showing tremendously slow read activity, which I traced to the cloud drive's slow read speed. Detaching the cloud drive from the hybrid pool resolved the problem.

    On CloudDrive I can hit close to 400mbps upload, but download speeds are slow at < 50mbps (in most cases 1-2 threads @ 10mbps each), despite increasing download threads and tweaking prefetch settings. The issue persists across multiple accounts and across alternative download methods (either via the hybrid pool drive or via the mounted cloud drive in Explorer).

    It's also not an issue with my ISP: downloading a file directly via the web browser and via the official Google Drive client was fast. From what I see, CD is still pinning data - not sure if that's the reason? I also realised my cache size was set to 1GB (Expandable); I've increased that to 30GB and am monitoring.

    Any suggestions on improving read speeds for the cloud drive?

    StableBit CloudDrive 1.20.1485 BETA
    StableBit DrivePool 2.30.1234 BETA

     

  3. On 9/11/2021 at 2:25 AM, Christopher (Drashna) said:

    Tinkering with the I/O performance settings may help here, namely the download threads.  

    And the enumeration is dependent on the pinning and connection speed.

    Apologies for hijacking this thread, but I've got a similar query.

    I've got a hybrid pool consisting of 1) an online Google Drive and 2) an offline drive pool (4x HDDs). The hybrid pool has become very slow to check/measure, and it's being dragged down by the online drive. CloudDrive shows a single download thread at 2-3mbps, and it's trying to pin data despite data pinning having been enabled earlier. This started after the NAS experienced an improper shutdown.

    @Christopher (Drashna) Is there any way to accelerate data pinning (or force multi-threaded downloads)?

    Thank you!

  4. Hi,

    +1 to this request. I use CloudDrive as part of an offline+online pool in DrivePool, and it's a challenge to coordinate backups while navigating the 750GB/day upload cap.

    It would be great to have better scheduling communication between DrivePool and CloudDrive, e.g. for DrivePool to scale back when cache drives are full or uploads are rate-limited, and to increase priority when there's sufficient bandwidth and no backlogged writes.

  5. To be clear, Christopher: that actually is not the case here. When you hit the threshold it will begin to give you the 403 errors (User rate limit exceeded), but ONLY on uploads. It will still permit downloaded data to function as normal. It isn't actually an API limit, wholesale. They're just using the API to enforce an upload quota. Once you exceed the quota, all of your upload API calls fail but your download calls will succeed (unless you hit the 10TB/day quota on those as well). 

     

     

    Right. For the people who think they aren't hitting the 750GB/day limit, remember that it's a total limit across *every* application that is accessing that drive. So your aggregate cannot be more than that. But enough people have confirmed that the limit is somewhere right around 750GB/day, at this point, that I think we can call that settled. 

     

    BTW, is the 750GB/day cap enforced on a per-user-account basis, or across the whole user group?
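
    For anyone trying to gauge whether they're actually near the cap, here's a minimal back-of-the-envelope sketch (in Python) converting a sustained upload speed into GB per day. The ~750GB/day figure is the one reported in this thread, not an officially published number.

    # Rough estimate of how a sustained upload speed translates into the ~750GB/day
    # upload quota discussed in this thread (not an official Google figure).
    def daily_upload_gb(mbps: float, hours: float = 24.0) -> float:
        """Convert a sustained upload speed in megabits/s into GB uploaded."""
        megabits = mbps * hours * 3600   # total megabits transferred
        return megabits / 8 / 1000       # -> megabytes -> gigabytes (decimal)

    for speed in (50, 75, 100, 400):
        print(f"{speed} mbps sustained ~= {daily_upload_gb(speed):,.0f} GB/day")
    # 75 mbps sustained comes to ~810 GB/day, which is why it trips the cap.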

  6.  

    That was what I was seeing in the error logs. The errors kicked in after around 20-30 minutes of operation, so I'm definitely well below the bandwidth caps.

     

    A couple of observations:

    • The issues arose when CD was working on partial re-read/write chunk operations. In those instances the GUI showed I was running more threads than I had stipulated in the settings - for example, downloading with 6-7 threads when my settings indicated 4.
    • The errors seemed to stop when I was doing full writes to CD.

     

     

    To recap my issue: I have been hitting the API call limits even though I've not hit the proverbial 750GB cap.

     

    To provide some context: I'm trying to create hierarchical pools in DrivePool (backing up my local pool to the cloud). I manually replicated my data in the cloud using GoodSync and relocated the files into the PoolPart folder, then created a combined pool and ran a duplication check.

     

    This triggered a series of reads and partial writes (and re-uploads) on CD. Based on the technical details, I see the following:

     

    • Partial re-writes are extremely slow (< 130kbps). I'm not sure if it's the same chunk that's being written (and re-written) again. See below for the screenshot and error logs.
    • Once the partial write clears, upload speeds are back up to 100-200mbps. Still not as fast as I used to get, but I'll leave that for another day.

    So apart from the bandwidth caps, it seems there are also caps on the number of calls you can make for a specific file. That's just gut intuition, but I'd appreciate some thoughts. Pulling my hair out here!

    2:09:05.8: Warning: 0 : [ReadModifyWriteRecoveryImplementation:96] [W] Failed write (Chunk:34168, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [TemporaryWritesImplementation:96] Error performing read-modify-write, marking as failed (Chunk=34168, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:96] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:4] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:27] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [WholeChunkIoImplementation:104] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [IoManager:4] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [IoManager:27] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    2:09:06.0: Warning: 0 : [IoManager:104] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. 

    (screenshot: MS5e9Oz.png)
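
    As an aside, if it helps explain why the partial writes are so much slower than full writes: conceptually, a read-modify-write on a chunked cloud store has to pull the whole chunk down, patch it, and push the whole chunk back up. The sketch below is purely illustrative - it is not CloudDrive's actual code, and the chunk size and provider interface are assumptions.

    # Purely illustrative: why a "partial" write to a chunked cloud store is costly.
    # The whole chunk is downloaded, patched in memory, then re-uploaded in full.
    # This is NOT CloudDrive's actual implementation - just the general idea.
    class FakeProvider:
        """Stand-in for a cloud provider that stores whole chunks by id."""
        def __init__(self):
            self.chunks = {}
        def download_chunk(self, chunk_id: int) -> bytes:
            return self.chunks.get(chunk_id, bytes(20 * 1024 * 1024))  # assumed 20MB chunks
        def upload_chunk(self, chunk_id: int, data: bytes) -> None:
            self.chunks[chunk_id] = data

    def read_modify_write(provider, chunk_id: int, offset: int, data: bytes) -> None:
        chunk = provider.download_chunk(chunk_id)                    # full chunk down
        chunk = chunk[:offset] + data + chunk[offset + len(data):]   # patch the span
        provider.upload_chunk(chunk_id, chunk)                       # full chunk back up

    # Writing even one byte costs a full-chunk download plus a full-chunk upload.
    read_modify_write(FakeProvider(), chunk_id=34168, offset=0, data=b"x")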

  7. Are you sure it's the "User rate limit exceeded" issue?

     

    If it is, then ... that will be normal, as it takes time for this status to be removed from your account.  

     

     

    And for reference, 75mbps for 24 hours comes to around 800-900GB. So that may/will hit this limit. It's ... disturbingly low, IMO.

     

    That was what I was seeing in the error logs. The errors kicked in after around 20-30 minutes of operation, so I'm definitely well below the bandwidth caps.

     

    A couple of observations:

    • The issues arose when CD was working on partial re-read/write chunk operations. In those instances the GUI showed I was running more threads than I had stipulated in the settings - for example, downloading with 6-7 threads when my settings indicated 4.
    • The errors seemed to stop when I was doing full writes to CD.
  8. I've been getting lots of throttling/server-side disconnect messages from Google Drive recently (see below). I've been using the same settings for some time - have there been changes to their throttling mechanism?

     

    If this helps, some comments:

     

    1) I had an unsafe shutdown, and as a result I need to re-upload 18GB of data. From the tech page, it seems I'm re-downloading chunks for partial re-writes. That seems fairly intensive compared with a straight upload?

    2) The errors started after I upgraded to the .900/.901 builds.

    3) I typically run 6 threads, with a max upload speed of 400mbps. Scaling this down to 3-4 threads doesn't help.

    0:10:29.5: Warning: 0 : [IoManager:32] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.5: Warning: 0 : [ReadModifyWriteRecoveryImplementation:62] [W] Failed write (Chunk:564, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [TemporaryWritesImplementation:62] Error performing read-modify-write, marking as failed (Chunk=564, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:62] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [WholeChunkIoImplementation:61] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [IoManager:62] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:15:35.9: Warning: 0 : [IoManager:61] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.3: Warning: 0 : [ReadModifyWriteRecoveryImplementation:95] [W] Failed write (Chunk:2743, Offset:0x00000000 Length:0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [TemporaryWritesImplementation:95] Error performing read-modify-write, marking as failed (Chunk=2743, Offset=0x00000000, Length=0x01400500). Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error on write when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:95] Error when performing master partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [WholeChunkIoImplementation:96] Error on write when performing shared partial write. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [IoManager:95] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:51.6: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
    0:16:56.1: Warning: 0 : [ApiGoogleDrive:102] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:56.1: Warning: 0 : [ApiHttp:102] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:56.1: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,517ms and retrying.
    0:16:56.2: Warning: 0 : [ApiGoogleDrive:79] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:56.2: Warning: 0 : [ApiHttp:79] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:56.2: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,873ms and retrying.
    0:16:56.3: Warning: 0 : [ApiGoogleDrive:100] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:56.3: Warning: 0 : [ApiHttp:100] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,561ms and retrying.
    0:16:56.3: Warning: 0 : [ApiGoogleDrive:84] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:56.3: Warning: 0 : [ApiHttp:84] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:56.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,664ms and retrying.
    0:16:57.3: Warning: 0 : [ApiGoogleDrive:96] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded
    0:16:57.3: Warning: 0 : [ApiHttp:96] HTTP protocol exception (Code=ServiceUnavailable).
    0:16:57.3: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,546ms and retrying.
    
  9. Sadly, I'm back with more problems.

     

    I've updated to the latest beta, and I need to reauthorize the drive every time it starts. The logs show a "Cannot start I/O manager for cloud part xxx (security exception). Security error." problem.

     

    When I reauthorize (and if I'm lucky), CD comes back online without re-indexing. If I'm not, it re-indexes everything all over again.

     

    Any ideas?

    0:00:30.3: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:00:30.4: Information: 0 : [Main] Cleaning up cloud drives...
    0:00:30.5: Information: 0 : [CloudDrives] Valid encryption key specified for cloud part xxx.
    0:00:30.6: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:00:30.9: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:00:30.9: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:00:31.6: Information: 0 : [ChunkIdSQLiteStorage:4] Cleaning up drives...
    0:00:31.6: Information: 0 : [Main] Enumerating disks...
    0:00:31.6: Information: 0 : [Disks] Updating disks / volumes...
    0:00:33.1: Information: 0 : [Main] Starting disk metadata...
    0:00:33.1: Information: 0 : [Main] Updating free space...
    0:00:33.1: Information: 0 : [Main] Service started.
    0:12:44.2: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:44.2: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxx (security exception). Security error.
    0:12:46.0: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:12:46.0: Warning: 0 : [CloudDrives] Cannot start I/O manager for cloud part xxxx (security exception). Security error.
    0:13:14.4: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:18.9: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:18.9: Information: 0 : [Disks] Got Pack_Arrive (pack ID: yyy)...
    0:13:18.9: Information: 0 : [Disks] Got Disk_Arrive (disk ID: yyy)...
    0:13:19.9: Information: 0 : [Disks] Updating disks / volumes...
    0:13:20.4: Information: 0 : [CloudDrives] Synchronizing cloud drives...
    0:13:20.9: Information: 0 : [Disks] Got Volume_Arrive (volume ID: xxxx, plex ID: 00000000-0000-0000-0000-000000000000, %: 0)...
    0:13:20.9: Information: 0 : [Disks] Got drive letter assign (volume ID: xxx)...
    0:13:22.5: Information: 0 : [Disks] Updating disks / volumes...
    0:13:24.9: Information: 0 : [Disks] Got Disk_Modify (Disk ID: xxx
    0:13:24.9: Information: 0 : [Disks] Got Disk_Modify (Disk ID: xxx
     and more...
  10. It's in the queue. And as for "team", it's just Alex and myself. I do the customer service and tech support, while he handles the more in-depth tech support, development and everything else.

     

    That said, Alex is aware of the issue (I've brought it up to him directly), and I'm sure he's thinking it over.

     

    As for the release vs pre-release, that's entirely possible.  Depending on the provider, there may have been a lot of changes between when the drive was created and the RC and release versions. 

     

    BTW, apologies if my earlier post sounded harsh - that was definitely not my intent, and I know it's a small team of 2 managing everything.

     

    To recollect my problems: it was all fine and dandy when I upgraded to the release version (0.870). The issue started to kick in between builds 870-880. I'm not entirely sure if the break was server-led (I'm using Google Drive, BTW).

     

    But I can confirm that the issue was solved when I re-created the drive with the release version. My old drive was created way back in the early beta builds.

     

    And kudos to a great product: I've been an early beta user, and it's amazing to see how polished this product has become. Looking forward to deeper integration with DrivePool.

  11. Is the team working on this? I'm getting the same issue, which is pretty annoying. 

     

    The software re-indexes the drive every time I boot up. I waited till it was fully indexed and rebooted, but it re-indexed all over again. Tried re-mounting the drive, but to no avail.

     

    I've uploaded my troubleshooter logs though.

     

    EDIT: I decided to create a new drive (given the comments on pre-release drives in the other thread), and the issue is resolved. So it might be related to drives created with pre-release versions.

  12. The other option is to add the StableBit CloudDrive service as a dependency for StableBit DrivePool. However, this is a bit more complicated, and requires registry editing.

    Unfortunately I tried this, and no go. This is a deal breaker, and I'll be removing the CloudDrive from the pool. I'm going back to manual backups - I hope future iterations of DP can fix this integration.
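
    For reference, here's a minimal sketch of that registry-based dependency using Python's built-in winreg module. The service names below are assumptions - check services.msc for the exact names on your system, back up the key before changing it, and run as Administrator (a reboot is needed for the change to take effect).

    # Hypothetical sketch: make the DrivePool service depend on the CloudDrive
    # service by setting DependOnService in the registry. Run as Administrator.
    # NOTE: the service names below are assumptions - verify them in services.msc.
    import winreg

    DRIVEPOOL_SERVICE = "DrivePoolService"    # assumed DrivePool service name
    CLOUDDRIVE_SERVICE = "CloudDriveService"  # assumed CloudDrive service name

    key_path = rf"SYSTEM\CurrentControlSet\Services\{DRIVEPOOL_SERVICE}"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
        # DependOnService is a REG_MULTI_SZ list of services that must start first.
        winreg.SetValueEx(key, "DependOnService", 0, winreg.REG_MULTI_SZ, [CLOUDDRIVE_SERVICE])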

  13. Try this: 

    http://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings

    Set "CoveFs_WaitForVolumesOnMountMs" 50000, and see if that helps. 

     

     

    Otherwise, you could run "services.msc", find the "StableBit DrivePool Service", right click on it and select "Properties". Then set the "startup type" to "Automatic (Delayed)", and that may help with the issue. 

     

    Unfortunately no go. I tried both options, including the Automatic (Delayed) service startup. The service does get delayed, but the pool still re-measures. After that I get a duplication-inconsistent message and need to re-check all over again.
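
    (For anyone else following along: the Automatic (Delayed Start) change can also be scripted rather than set through services.msc. A minimal sketch, with the service name as an assumption - verify the real name in services.msc first.)

    # Hypothetical sketch: set the DrivePool service to Automatic (Delayed Start)
    # via sc.exe instead of services.msc. Run from an elevated prompt.
    import subprocess

    SERVICE_NAME = "DrivePoolService"  # assumed service name - check services.msc

    # sc.exe requires the space after "start=", hence the separate arguments.
    subprocess.run(["sc.exe", "config", SERVICE_NAME, "start=", "delayed-auto"], check=True)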

  14. Hi there,

     

    First and foremost, congrats on the stable release of CloudDrive. A milestone indeed!

     

    I gave CloudDrive a re-spin, and it's much better integrated with DrivePool. I'm successfully running 3x duplication (two copies offline, one in the cloud via CloudDrive).

     

    However, I need to re-measure the pool on every startup - DrivePool kicks in before CloudDrive has started. I've tried the solutions below, but to no avail; my virtual disk dependency is correctly set for both CD and DP.

     

    http://community.covecube.com/index.php?/topic/1085-how-to-delay-drivepool-service-after-reboot/

     

    My observations on the start-up sequence:

     

    1) The pool gets recognized almost instantaneously upon startup, but without the added cloud drive.

    2) CD starts and gets mounted - then the pool is updated immediately with the new capacity. But by then the pool is dirty and I need to re-measure it.

     

    EDIT: I also keep getting a "Duplication inconsistent" error, although I believe it's due more to my placement rules than anything else. Is there any way to restrict the cloud drive to store only files with 3x duplication enabled? My backup priority would be for 2x dups to be fully offline, and 3x dups to have both offline and online copies.

  15. Thanks again for your comments, as always. Specifically on the part below: 

     

     

    As for the removal issue... Well, the best way to put it is "you're doing it wrong". I don't mean to be overly blunt here, but from the sounds of it, what you're trying to do isn't really supported, not properly.

    In StableBit DrivePool, the removal process wants to move all of the data off of the drive. For the CloudDrive disk, that means it gets re-downloaded and then moved to a local disk. This can take hours or days, depending on the exact configuration (hardware, software, network, ISP, etc.).

     

    There is a "force detach" option when detaching the CloudDrive disk from the system. That closes "open files" and will detach the drive regardless.  This is most likely what you would want to do, but ...even still, probably not.

    This will cause the disk to show up as "missing" when it does disappear, and will reduplicate data that was on the CloudDrive disk. 

     

    See what I mean about "not supported". 

     

     

    I get what you mean, and I understand the logic. However, the issue was that it was taking forever to remove the CD disk, even when I initiated the "force detach" option. No idea what the bottleneck was (slow download speeds?). I was comfortable with this as the cloud drive held duplicated data, and it would have been faster to re-duplicate the files from the offline storage pool.

     

    There might already be an option for this, but could DrivePool focus on evacuating non-duplicated files rather than the whole drive? That could be a life saver, particularly when you're trying to salvage data from a dying drive (with limited read cycles left).

  16. http://community.covecube.com/index.php?/topic/1226-how-can-i-use-stablebit-drivepool-with-stablebit-clouddrive/

     

     

    Unfortunately, there isn't a good way to do this right now.

     

    You can set the "Drive Usage Limiter" to only have duplicated data on the CloudDrive disk.  Since it needs 2 valid disks, it WILL use this drive first, and then it will find a second drive for this as well.

     

    For x3 duplication, that will store one copy on the CloudDrive disk, and 2 on the local disks. 

     

    However, this degrades the pool condition, because the balancer settings have been "violated" (duplicated data on drives not marked for duplication). 

     

    However, we do plan on adding "duplication grouping" to handle this seamlessly. But there is no ETA on that.

     

    Thanks, very interesting. I tried CD separately by manually copying my files, and it worked perfectly.

     

    Apart from duplication grouping, I think what's needed is better integration with DrivePool, particularly on read/write flow control:

     

    1) In my previous test (where DrivePool was managing the duplication), the cache drive (a 256GB SSD) filled up and became terribly slow. As a result, the whole duplication process (~600GB) took more than one full day. In today's test (GoodSync + CD), I had full manual control of the copying process: I flooded the cache with 60-70GB of uploads, paused the copying (to allow CD to flush those uploads to the cloud) and repeated. The whole process took no more than 6 hours (a rough script of this loop is sketched at the end of this post). Perhaps DP could have better control of the copying process.

     

    2) In my previous experience, DP would always have difficulty pulling the CD out of the system, and I always had to force a "missing disk" removal (e.g. by killing the CD service, triggering a missing disk in DP, and removing it). However, at the next reboot, DP would remember the cloud drive and just put it back. Strange.

     

    Looking forward to future builds though!
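
    As mentioned in point 1 above, here's a rough, hypothetical sketch of that flood-and-pause copy loop in Python. The source/destination paths, the CloudDrive cache folder location and the ~70GB threshold are all assumptions for illustration - adjust to your own setup.

    # Hypothetical sketch of the manual "flood the cache, pause, let CloudDrive
    # flush, repeat" copy loop. All paths and the threshold are assumptions.
    import shutil, time
    from pathlib import Path

    SOURCE = Path(r"D:\Backup")              # assumed source folder
    DEST = Path(r"X:\Backup")                # assumed folder on the mounted CloudDrive
    CACHE_DIR = Path(r"C:\CloudDriveCache")  # assumed CloudDrive cache folder
    PAUSE_ABOVE_BYTES = 70 * 1024**3         # stop copying above ~70GB of cached data

    def cache_size(path: Path) -> int:
        """Total size of the files currently sitting in the cache folder."""
        return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        # Wait for CloudDrive to flush pending uploads before adding more data.
        while cache_size(CACHE_DIR) > PAUSE_ABOVE_BYTES:
            time.sleep(60)
        dst = DEST / src.relative_to(SOURCE)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)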

  17. Well, without more info, I can't really tell... But likely one or more settings are violating the others, so it's degrading the pool condition.

     

     

    As for the priority change and upload decrease, this is because more writing is occurring to the drive, and it adds some processing to optimize the upload, I believe.

     

    Once it's settled down, it should "normalize".

     

    Thanks for this. So are there any best practices on how to integrate CloudDrive with DrivePool? Specifically on the following:

     

    1) CD for only duplicated data (Drive Usage Limiter)

    2) Specific folders to have a duplicated copy on CD - set folder rules for 3x duplication (2x offline, 1x online?)

    3) All other folders to have 2x duplication offline

     

    Thanks again!

  18. Hi all,

     

    I need some tips on cloud backups: I'm running a pool with 2x duplication and would like to create a secondary cloud backup for critical files. These files are in a specific folder where I've turned on 3x duplication (i.e. 2x local, 1x cloud). Specifically, the cloud drive should only store files with 3x duplication.

     

    I've tried fiddling with the settings, but DrivePool keeps wanting to move files with 2x duplication over to the cloud. Or rather, that's what I think, because it's proposing to move 1TB of data from my HDD pool to the cloud (the files in question total less than that).

     

    My approach:

     

    1) Limit the cloud drive to only duplicated files (Balancing section)

    2) In the Placement Rules section, set the root folders to not use the cloud drive, but create a separate rule that sets the specific sub-folder to use the cloud drive. The sub-folder rules are ordered above the root folder rules.

    3) Turn on 3x duplication for the specific folders

     

    Thanks again, and happy holidays!

     

    EDIT: I went ahead with the duplication, and it worked after all. What I realised was that the re-balancing wasn't correct - I get the same issue as in the thread below.

    http://community.covecube.com/index.php?/topic/2061-drivepool-not-balancing-correctly/

     

    (screenshot: post-2257-0-38915000-1482735710_thumb.png)

     

    CloudDrive: 1.0.0.0.784 BETA

    DrivePool: 2.2.0.737 BETA

     

    Furthermore, when I click the "increase priority" arrow in DrivePool, the duplication process speeds up but CloudDrive's uploads slow down dramatically. Any idea why?

  19. Hi there,

     

    I'm a DrivePool user, and I'm testing CloudDrive as a duplicating backup option to the cloud.

     

    The integration with DrivePool works perfectly, but I've got issues when rebooting/restarting the server:

     

    1) Sometimes the shutdown takes a long time to complete (as if something's still running).

    2) Whenever I boot the server up, DrivePool warns about the missing cloud drive (for a short while) before confirming that all drives are back online. I know this from the email notifications.

     

    As a result, I often get duplication inconsistencies in my existing pool and have to rescan every time I restart. The issue goes away when I remove the CloudDrive from the pool.

     

    I've installed the latest betas of both, but to no avail.

     

    Thanks!
