Covecube Inc.

thnz

Members
  • Content Count

    139
  • Joined

  • Last visited

  • Days Won

    7

Posts posted by thnz

  1. Plex does 'extensive media analysis' during the maintenance window (I think it's 1am-6am, or thereabouts, by default), which involves reading through entire media files. It will be very download-heavy if it's doing this on remote files. I'm not sure how often it runs per file - I assume it's a one-off per file, though it might get redone again.

     

     Regardless, it can be disabled by unchecking 'Perform extensive media analysis during maintenance' under 'Scheduled Tasks'. There's not much of a downside to having it disabled.

     

    Bit more info here, and what benefits it gives here.

  2. Did StableBit CloudDrive report the unsafe shutdown, or did Windows (eg, the event logs)?

     

     If Windows did, then the system may actually be crashing when shutting down. Running a utility like "BlueScreenView" may be a good idea, as it may pick up the crash dump from when the shutdown occurred.

     

    It was reported both in CloudDrive and in the system event logs, so I'm now assuming the system was actually crashing while shutting down, and CloudDrive was doing what it was supposed to by recovering. Unfortunately CCleaner has since removed any dump files, so I'm not able to investigate any further at the moment. 

     

     As tends to happen with these things, I've been unable to reproduce it since. I've excluded system memory dumps from CCleaner, so if/when it happens again, I should have more to work with - it could well just be a wobbly driver or something.

  3. Windows 10 x64, and CloudDrive beta 850.

     

     Looking over the system event logs, it did mention an unexpected shutdown when it was supposedly shut down safely, so there may well be something else going on preventing a clean shutdown. I'll have a closer look later on.

  4. I see the latest beta version (.818) contains the following change:

     

     

    * [D] Cloud drives were sometimes not shutting down properly, unnecessarily forcing unsafe recovery.

     

    Hopefully that fixes this issue, and I can now go back to a much larger cache size without the risk of it going into recovery so often after a restart.

  5. I can play them fine without buffering. They only take 6 seconds to load

     

    but the 20 MB minimal download was important for me, as 1 MB chunks would cause too slow a dl.

     

     Alternatively, you can manually increase IoManager_DefaultMaximumReadAggregation in the service config file - by default it's set to 1MB. This will combine consecutive 1MB downloads into larger requests (ie. while prefetching), while leaving smaller reads at 1MB. That way you get the benefit of fewer API calls/larger downloads, without suffering the extra overhead of increasing the minimum download size on smaller reads.

     

    Here's some clarification on what it does: https://stablebit.com/Admin/IssueAnalysis/27309
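The coalescing behaviour described above can be sketched roughly like this. This is a hypothetical illustration in Python, not CloudDrive's actual code - the function name and the exact merging rule are assumptions; the only point is to show how raising the aggregation cap turns many consecutive 1MB downloads into one larger request:

```python
# Hypothetical sketch of read-request aggregation (not CloudDrive's
# actual implementation). Consecutive (offset, length) reads are merged
# into a single request, capped at max_aggregation bytes - analogous to
# what IoManager_DefaultMaximumReadAggregation appears to control.

MB = 1024 * 1024

def aggregate_reads(reads, max_aggregation):
    """Merge contiguous (offset, length) reads, capping each merged
    request at max_aggregation bytes."""
    merged = []
    for offset, length in sorted(reads):
        if merged:
            last_off, last_len = merged[-1]
            # Contiguous with the previous request and still under the cap?
            if last_off + last_len == offset and last_len + length <= max_aggregation:
                merged[-1] = (last_off, last_len + length)
                continue
        merged.append((offset, length))
    return merged

# A 10MB prefetch issued as ten consecutive 1MB reads:
prefetch = [(i * MB, MB) for i in range(10)]

# Default 1MB cap: nothing can merge -> 10 separate API calls.
print(len(aggregate_reads(prefetch, 1 * MB)))   # 10

# Cap raised to 10MB: the whole prefetch becomes one request.
print(len(aggregate_reads(prefetch, 10 * MB)))  # 1
```

Smaller, isolated reads are unaffected: a lone 1MB read has no contiguous neighbour to merge with, so it still goes out as a single 1MB request either way.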

  6. I've found that manually increasing IoManager_DefaultMaximumReadAggregation via the config file and then restarting the service helps work around this. There's only minimal documentation in the changelog regarding it, so I'm not entirely sure what the full consequences of altering it are.

     

     By default it's set to 1048576. My observation is that by increasing it to a value >10MB, the pre-fetcher will then download up to the drive's chunk size (10MB) per pre-fetch download request, rather than splitting it into lots of smaller 1MB requests. I guess whether or not this is a good thing depends on the use case. As a 10MB chunk downloads in less than a second for me (after the delay/throttling response), splitting the 10MB request up into separate threads isn't really necessary, and I'd rather use fewer API calls.

  7. I'm happy with the minimum download size as is and don't think it should be changed. It's perfect for smaller reads.

     

     HOWEVER, for larger reads, ie. when the pre-fetcher kicks in, it doesn't increase that download size at all, making (by default) 10x the number of API calls needed. A 10MB pre-fetch (with default drive settings) could be one or two API calls rather than the 10 that currently happen. So it's more that the pre-fetcher doesn't increase the download size when contiguous parts of the same chunk are downloaded, rather than the minimum download size itself.

  8. The Minimum download size would change this, in most cases.

     

    Increasing the minimum download sizes improves it, but then the opposite happens and you download much more than is needed for smaller reads.

     

     Also, IIRC, if it's reading multiple contiguous blocks, it should be downloading the whole chunk, rather than just the single sections, I believe.

     

     

    In that case there must be a bug, as this isn't happening with the pre-fetcher - it always breaks it up into MINIMUM_DOWNLOAD_SIZE chunks, even when multiple contiguous blocks are read. 10MB of prefetching will result in 10x1MB downloads, rather than a single 10MB download (or an 8MB + 2MB etc). Fixing it could cut down on API requests by 90% with default chunk sizes.

  9. Any updates on this? Optimizing this could have a significant impact on speeds. If we're going to pre-fetch 50MB, it'll be much more efficient to do so using 5x10MB downloads rather than 50x1MB ones - it's a potential 90% saving in API requests on drives with default settings.

  10. I'm 99% sure it has to do with the 1MB prefetch block downloads, that's just absurd I/O being put on ACD for a 3GB file that needs to be prefetched. 

     

     Might want to follow this thread to see if/when it's fixed. But yeah, that 3GB download being done in 1MB chunks is crazy inefficient: 3000 I/O requests when 300 would do (assuming the default 10MB chunk size) - that's 90% fewer API calls.

     

    FWIW you only need to reattach a drive to change the minimum download size - you don't need to delete and recreate it.
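The request-count arithmetic in the posts above checks out; here's a quick worked example (the 3GB file size and 1MB/10MB request sizes are from the posts, the rounding is mine):

```python
# Worked example: how many API requests a 3GB sequential download needs
# at different per-request download sizes.
import math

GB = 1024 ** 3
MB = 1024 ** 2

def requests_needed(total_bytes, request_size):
    """Number of fixed-size download requests to cover total_bytes."""
    return math.ceil(total_bytes / request_size)

small = requests_needed(3 * GB, 1 * MB)    # 1MB requests  -> 3072 calls
large = requests_needed(3 * GB, 10 * MB)   # 10MB requests ->  308 calls

saving = 1 - large / small                 # roughly a 90% reduction
print(small, large, f"{saving:.0%}")
```

So downloading in chunk-sized (10MB) requests instead of 1MB ones cuts the API call count by about 90%, which matches the "3000 vs 300" figure quoted above.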

  11. Would it be possible to allow the use of your own Drive API client ids in CloudDrive?  If users were able to use their own ids then there would be no need to request a rate increase.

     

    Doing that might have unforeseen consequences. Google throttles by application for a reason, and bypassing that could potentially make their service worse for everyone. Amazon made their API invite only shortly after CloudDrive allowed custom profiles (not saying that the two were necessarily related), making it now much harder for other applications to integrate Amazon Drive support as a result.

  12. Just noticed that the machine running CloudDrive was unresponsive - the CloudDrive service was apparently using upwards of 8GB of memory and thrashing the page file. It's running .744 and was uploading at the time; I had previously copied several GB across to the cloud drive, but am unsure if that had finished before the memory explosion.

     

    Not sure if any of this will be relevant, but AV is Nod32, and the machine is also running Crashplan for backups. Only a single GoogleDrive is attached.

  13. Right now, Alex (the Developer) is implementing a fairly major change that should significantly cut down on the number of API calls and prevent the "Rate Limit Exceeded" issue.

     

    Is this the local file ID database, or is there something else in the pipeline too?

  14. Would it be possible for us to somehow use our own "client ID" and "client secret"? Would this help stop the rate limit exceeded problems?

     

    Amazon made their API invite only shortly after they did this with Amazon Drive.

  15. I had mostly been testing with ACD (with a custom dev limited security profile), and have only recently been trying Google Drive instead - speeds are far superior on the latter. Google traffic seems to go through Sydney, so it's also a lot closer than Amazon (I'm in NZ - Amazon data goes to the US). Granted it's more expensive ($10 vs $5/mo), but it's probably worth it for the speed and reliability - files on Amazon seem to have a very small chance of randomly disappearing altogether.

     

     NZ is making great progress in net speeds - so long as you're not rural, gigabit speeds are now available in a lot of places - you can get uncapped/unthrottled/unshaped 1000/500 for $130NZD ($95USD)/mo.
