Posts posted by jsbzoh6

  1. My concern is less about Google's end and more about my network's end, with a constant 100mbps (or 75mbps after I've throttled it down) stream going out. I just looked through today's logs, and it looks like the exponential backoff may not be working or configured properly: I'm seeing identical log entries every second, which coincide with my traffic logs.
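
    For comparison, this is roughly how I'd expect working exponential backoff to behave (a generic Python sketch of the technique, not CloudDrive's actual implementation; the upload_chunk callable and the delay values are placeholders):

    import random
    import time

    def upload_with_backoff(upload_chunk, max_retries=8, base_delay=1.0, cap=600.0):
        # Retry an upload, doubling the delay each attempt (1s, 2s, 4s, ...)
        # with full jitter, instead of retrying every second.
        for attempt in range(max_retries):
            try:
                return upload_chunk()
            except Exception:
                if attempt == max_retries - 1:
                    raise
                delay = random.uniform(0, min(cap, base_delay * 2 ** attempt))
                time.sleep(delay)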

  2. I have a very large library I'm trying to upload to Google Drive (~100TB worth). I obviously hit the 750GB limit pretty quickly. My issue is how CloudDrive handles this limit once it happens, which is to not handle it at all: it will continue to bombard Google with ~500mbps uploads until the limit lifts and it can successfully upload again. The pause button doesn't seem to work, so I'm finding myself either drastically lowering the throttle or killing the service until later in the day (if I remember to turn it back on). Is there a setting or a way for CloudDrive to keep uploading until it hits the limit, then pause or stop for a certain amount of time? I know it isn't smart enough to know when the limit is lifted, but even a periodic "test" file is better than hammering Google (and my network) with near-gigabit speeds into a brick wall.
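
    In the meantime I've been doing this by hand; something like the sketch below is what I have in mind for automating it (Python; the "CloudDriveService" service name is a guess on my part and may differ on your install, so check services.msc first):

    import subprocess
    import time

    SERVICE = "CloudDriveService"  # assumption: verify the actual name in services.msc

    def pause_uploads(hours=24):
        # Stop the CloudDrive service once the 750GB limit is hit,
        # then restart it after the daily quota should have reset.
        subprocess.run(["net", "stop", SERVICE], check=True)
        time.sleep(hours * 3600)
        subprocess.run(["net", "start", SERVICE], check=True)

    if __name__ == "__main__":
        pause_uploads()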

  3. Are there any recommended performance settings for CloudDrive and Google Drive? I'm on a gigabit up/down connection. My upload seems to be great, usually around 500-800mbps, but my download seems abysmal: usually around 30mbps, and the best I've seen was 100mbps. I've tried playing around with the I/O performance settings but I can't seem to improve anything.

     

    Current settings:

    Download threads: 10

    Upload threads: 10 (Background I/O checked)

    Upload threshold: 10 MB or 5 minutes

    Prefetch trigger: 20 MB

    Prefetch forward: 400 MB

    Prefetch time window: 1000 seconds

     

    There's a 50 GB local cache on an SSD that it's sharing with the host OS (Server 2016).
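
    To rule out my network, I've been using a quick parallel ranged-download test like the one below (a generic Python sketch against an arbitrary test URL, nothing CloudDrive-specific) to see whether throughput actually scales with thread count:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/testfile.bin"  # placeholder: any large file on a fast host
    CHUNK = 20 * 1024 * 1024                  # 20 MB per ranged request

    def fetch_range(start):
        # Download one byte range, like one download thread would.
        rng = "bytes=%d-%d" % (start, start + CHUNK - 1)
        req = urllib.request.Request(URL, headers={"Range": rng})
        with urllib.request.urlopen(req) as resp:
            return len(resp.read())

    def measure(threads):
        t0 = time.time()
        with ThreadPoolExecutor(max_workers=threads) as pool:
            total = sum(pool.map(fetch_range, range(0, threads * CHUNK, CHUNK)))
        print("%2d threads: %.0f mbps" % (threads, total * 8 / (time.time() - t0) / 1e6))

    for n in (1, 5, 10, 20):
        measure(n)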

  4. Now that that's out of the way, I'm having another issue. It appears most of my CloudDrive pool is "Unusable for duplication".

     

    CloudDrive_Pool (200TB total)

    --- CloudDrive_1

    --- CloudDrive_2

     

    Physical_Pool (100TB total)

    --- Disk_1

    --- ...

    --- Disk_21

     

    Pool_All (300TB total)

    --- CloudDrive_Pool

    --- Physical_Pool

     

    Both the Physical and CloudDrive pools have no issues/errors and don't display any noticeable "Unusable for duplication" (75 MB on the CloudDrive_Pool, for example). But when I add the CloudDrive pool to Pool_All, Pool_All shows 175 TB "Unusable for duplication".
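
    My working theory, and this is just my own arithmetic rather than anything from the documentation: if each duplicate pair has to land on two different child pools, then duplication capacity would be capped at twice the smaller child:

    # Assumption: duplicates must be placed on different child pools of Pool_All.
    cloud = 200  # TB, CloudDrive_Pool
    phys = 100   # TB, Physical_Pool
    total = cloud + phys                           # 300 TB

    usable_for_duplication = 2 * min(cloud, phys)  # 200 TB
    unusable = total - usable_for_duplication      # 100 TB
    print(unusable)

    That would only predict 100 TB unusable, though, not the 175 TB I'm actually seeing, so either that assumption is wrong or something else is going on.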

  5. I'm using the latest beta (2.2.0.802) and I've discovered a bit of an issue when running pools of pools. My current structure is:

     

    Pool A: 2x CloudDrives

    Pool B: 21x Physical Disks

    Pool C: Pool A + Pool B

     

    Because I have pools of pools, there are 2 nested hidden PoolPart folders, each with a name that looks to be 45 characters long. So between those two folders, we're at 90 characters of the 255-character path limit before we even start with any other files/folders. Most of my files are OK, but I'm still running into many that hit this limit and cause all kinds of problems.

     

    This is less an issue report and more a question about whether a fix for this is a feature that will be released in the future. Do I have any options for reducing the length of the PoolPart folder names?
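
    In the meantime, I've been finding the affected files with a quick scan like this (Python sketch; the drive letter and margin are placeholders to adjust for your setup):

    import os

    ROOT = "P:\\"   # placeholder: the pool's drive letter
    LIMIT = 255     # the path length limit the PoolPart names eat into
    MARGIN = 15     # flag anything within 15 characters of the limit

    for dirpath, dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if len(full) >= LIMIT - MARGIN:
                print(len(full), full)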
