Everything posted by thnz

  1. thnz

    Rate Limit Exceeded

    I say have the pre-fetcher consolidate consecutive reads into fewer download threads. That could dramatically lower API calls - instead of 10 calls to download 1MB each, a single call could do all 10MB. Also... why not both?
  2. I'm OK with standard reads being 1MB. The pre-fetcher should kick in when anything larger is done anyway - copying large files, playing videos, etc. If the pre-fetcher worked as intended it would potentially be a lot faster/more efficient - i.e. a single ~10MB read instead of 10x1MB reads. And yeah, it'll lower API calls on the provider too, so you'd be much less likely to be throttled.
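    Something like this is what I have in mind - just a rough sketch in Python of merging consecutive reads into one ranged request; none of these names are CloudDrive's actual internals:

    ```python
    # Hypothetical sketch: coalesce consecutive prefetch reads into one ranged
    # request instead of issuing one API call per 1MB block.
    def coalesce_reads(reads, max_span=10 * 1024 * 1024):
        """Merge (offset, length) reads that touch end-to-end, capped at max_span."""
        merged = []
        for offset, length in sorted(reads):
            if merged and offset == merged[-1][0] + merged[-1][1] \
                    and merged[-1][1] + length <= max_span:
                merged[-1] = (merged[-1][0], merged[-1][1] + length)
            else:
                merged.append((offset, length))
        return merged

    # Ten consecutive 1MB reads collapse into a single 10MB request.
    one_mb = 1024 * 1024
    print(coalesce_reads([(i * one_mb, one_mb) for i in range(10)]))
    # [(0, 10485760)] -> one API call instead of ten
    ```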
  3. thnz

    Checksum Mismatch

    It's been that way since at least May of last year - https://web.archive.org/web/20150512201810/https://developer.amazon.com/public/apis/experience/cloud-drive/content/nodes Amazon doesn't support partial writes anyway, so CloudDrive will be uploading the entire chunk regardless. Would be a lot more efficient to simply compare md5 hashes once the upload completes rather than downloading it for verification.
  4. thnz

    Checksum Mismatch

    Would verification be more efficient if it were to use server side hashing to ensure file consistency? It might be a good halfway point between just assuming a chunk uploaded successfully and having to redownload it in order to verify. For instance, it looks like Amazon Drive can return a node/file's md5 hash (assuming I'm reading the documentation correctly).
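    As a rough sketch of the halfway point I mean (the metadata endpoint and field names here are just my reading of the Amazon Drive docs, not anything CloudDrive actually does):

    ```python
    # Verify-by-hash instead of verify-by-redownload: compare the md5 of the
    # chunk we just uploaded against the md5 the provider reports for the node.
    import hashlib
    import requests

    def chunk_matches_remote(chunk_bytes, node_id, metadata_url, token):
        local_md5 = hashlib.md5(chunk_bytes).hexdigest()
        resp = requests.get(f"{metadata_url}/nodes/{node_id}",
                            headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        # 'contentProperties.md5' is how I read the linked documentation.
        remote_md5 = resp.json().get("contentProperties", {}).get("md5")
        return remote_md5 is not None and remote_md5.lower() == local_md5
    ```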
  5. So is there actually a whole new API? Or did they just slap a '2.0' on it and make it invite only?
  6. I captured those logs on a 2GB NTFS drive (Google Drive, though the issue isn't provider specific as it happens on Dropbox and Amazon drives too) created with default settings (10MB chunks, 4KB sector size, 20MB cache chunk, no minimum download size, NTFS, Windows default cluster size) to demonstrate the issue in the latest beta (.722) - the cluster size was set to 'Windows Default', so 4KB IIRC(?). The drive was empty apart from a single 800MB video file that I was using to demonstrate this issue - after copying the file across I cleared the cache and restarted the machine, then simply played the video with drive tracing enabled to trigger it. Pre-fetch settings were default. The 'technical details' window would show lots of individual 1MB pre-fetch reads that weren't consolidated into single download threads, even though they were happening simultaneously and on consecutive parts of the same chunk.
  7. Submitted logs yesterday via the Dropbox link of it not working as expected. OP also linked logs in post #8. Hopefully it's an easy thing to fix/tweak.
  8. No, it's still default. Increasing that would just make smaller reads more inefficient. If the pre-fetcher would consolidate consecutive reads into a single download thread, then we could have the best of both worlds. This is possibly more of an issue with a low download thread count - for instance, with a 2 thread limit only 2x1MB would be able to be downloaded at once, whereas optimally it would be 2x10MB. It looks like it's already supposed to do something similar, though it might either be bugged or need tweaking. From the changelog:

    .598
    * [D] The kernel prefetcher is now able to perform special aligned (long range) prefetch requests. These can be as large as 100 MB per request, and multiple requests are permitted. These special long range requests are only allowed if the entire prefetch range can perform prefetching. If the long range prefetch check fails, then a standard more granular prefetch takes place (using 512 KB blocks). The more granular prefetch is now optimized to combine multiple smaller requests into one or more larger requests, if possible, in order to reduce overhead.
      - Aligned long range requests are possible under these circumstances:
        - If legacy chunk verification is used, alignment is set to the chunk size.
        - If block based chunk verification is used, alignment is set to 1 MB.
        - If minimum read size is specified, then alignment is set to the minimum read size or to the above, whichever is greater (use this with caution as it can reduce the effectiveness of multi-threading while downloading, depending on your settings).
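    For what it's worth, this is how I read that changelog entry - a sketch only, with the alignment rules taken straight from the quote above and everything else made up:

    ```python
    # Long-range prefetch is only allowed when the whole range lines up on the
    # alignment boundary; otherwise the granular 512KB path takes over.
    KB, MB = 1024, 1024 * 1024

    def prefetch_alignment(chunk_size, legacy_verification, minimum_read_size=0):
        base = chunk_size if legacy_verification else 1 * MB
        return max(base, minimum_read_size)

    def can_long_range_prefetch(start, length, alignment):
        return start % alignment == 0 and (start + length) % alignment == 0

    # Block-based verification, no minimum read size -> 1MB alignment, so a
    # 10MB prefetch starting on a boundary qualifies for one big request.
    align = prefetch_alignment(chunk_size=10 * MB, legacy_verification=False)
    print(can_long_range_prefetch(0, 10 * MB, align))         # True
    print(can_long_range_prefetch(512 * KB, 10 * MB, align))  # False -> granular path
    ```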
  9. Have uploaded logs to that Dropbox link - wasn't able to reference this thread when I uploaded, but hopefully it ends up where it needs to go. I watched the pre-fetcher get maybe ~50MB of data in consecutive 1MB reads - ideally it should have been consolidated into five or so ~10MB reads. The drive was created with default settings (10MB chunk size).
  10. A logical optimization for the pre-fetcher would be to combine consecutive reads into a single download thread. For instance, in that screenshot: https://i.imgur.com/3nO6yHC.png the 8 highlighted requests should be combined into a single 8MB download, rather than 8x1MB downloads, as it's all consecutive reads of the same chunk. Perhaps the pre-fetcher should wait a second or so to see how far ahead it wants to read, and then fire off a single consolidated download request.
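    Roughly what I'm picturing, as a sketch only (the class, the 1-second window and the download_range callback are all made up for illustration):

    ```python
    # Buffer prefetch reads for a short window, then fire one consolidated
    # download spanning everything queued - e.g. 8x1MB becomes a single 8MB read.
    import threading

    class PrefetchCoalescer:
        def __init__(self, download_range, delay=1.0):
            self.download_range = download_range  # callable taking (offset, length)
            self.delay = delay
            self.pending = []          # queued (offset, length) reads
            self.timer = None
            self.lock = threading.Lock()

        def request(self, offset, length):
            with self.lock:
                self.pending.append((offset, length))
                if self.timer is None:  # start the wait on the first read only
                    self.timer = threading.Timer(self.delay, self._flush)
                    self.timer.start()

        def _flush(self):
            with self.lock:
                reads, self.pending, self.timer = self.pending, [], None
            # Issue one request spanning everything queued during the window.
            start = min(o for o, _ in reads)
            end = max(o + n for o, n in reads)
            self.download_range(start, end - start)
    ```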
  11. Will this have any effect on the way ACD is currently supported? If this applies to the current API, then would new users need an invite in order to use it? FWIW it looks like 'Amazon Drive' is no longer an available tab in the dev console.
  12. https://addons.mozilla.org/en-US/firefox/addon/alertbox/ Have it set to poll for changes every 8 hrs atm.
  13. Posted what was causing it before going to bed last night. When I first opened Firefox this morning I got a popup saying the changelog had been updated - checked it out and found it had been fixed 20 mins prior.
  14. From a bit more testing, I've found that this happens when the drive is created with a 64K cluster size (default). Pinning works as expected when creating a drive with a 4K cluster size - which I assume was the old default, as older drives have it set to 4K. I'm going to guess that this has been happening on all drives created with default settings since .618, as that's when the cluster size option was added. I haven't tried any other cluster sizes. Hopefully it'll be easy enough to reproduce now. Good luck!
  15. Done. I didn't get to reference this issue/thread when uploading, so hopefully it ends up in the right place.
  16. Whereas older drives created a few months ago will still have a few GBs of pinned data, new drives seem to only have 588KB pinned. This is regardless of drive size, files/folders/data on the drive, and provider. To reproduce, simply create a new drive with default settings (4K sector size, drive encryption, format NTFS + assign letter, 64KB cluster size). It happens on both Amazon and Dropbox. I've only just noticed - the oldest affected drive I have is from 13 June, so it's been happening since at least .621.
  17. I have several files on ACD that return 404 errors when downloads are attempted - {"statusCode":"404"}, to be precise! They appear as normal in the web interface, but downloads always fail. Is this the same issue? I've made contact with Amazon regarding this, and have had it escalated to the Drive team, so am hoping for a response soon™. On the other hand, if it was the 429 errors when downloading, then that has since been solved.
  18. Yeah, I saw the notes in the changelog. That's pretty bad on Amazon's part. Are you guys in touch at all with them regarding the disappearing data issues, or are you still prioritizing the next stable beta release before focusing on ACD again?
  19. Here's a thread on Amazon's dev forums I saw earlier in the year regarding storing a lot (~100TB) of data on Amazon Drive. FWIW I have ~3TB backed up to CrashPlan, with most of that also backed up to Amazon via CloudDrive.
  20. I guess it might look that way to begin with - and that's probably what had me confused at first. Before reaching the cache limit, locally stored data would be about the same as total used data on the drive. Once you pass the cache limit and it starts getting trimmed, or the cache gets cleared, it will no longer match up.
  21. I thought it was % free space when I first saw it too, though I think the graph on the right actually shows the % of the drive currently stored locally - it's usually (size of the cache + data to upload, i.e. the total of the graph on the left) divided by total space.
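    With some made-up numbers, the way I read it:

    ```python
    # Right-hand graph = share of the drive currently stored locally,
    # i.e. (local cache + data still to upload) / total drive size.
    cache_gb, to_upload_gb, drive_size_gb = 30, 3, 256
    print(f"{(cache_gb + to_upload_gb) / drive_size_gb:.1%} stored locally")  # 12.9%
    ```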
  22. thnz

    Cache clearing itself

    Would reproducing it (cache clearing on cache resize) with drive tracing enabled help at all, or is that not something that the drive tracing would see? I can't say if it's the same issue as when the cache was cleared all by itself, but I guess it could conceivably be related if something is going wrong with trimming.
  23. thnz

    Cache clearing itself

    So this morning, the cache was at 33/33GB. I then resized the cache to 30GB. Instead of trimming 3GB, it went and cleared everything, and is now re-pinning data. Not sure if this behavior is related to the cache clearing itself when unattended, but it certainly wasn't expected, and I don't think it's intended behavior. It's definitely not very efficient!
  24. thnz

    Cache clearing itself

    I had updated to the latest beta build - 1.0.0.599 - yesterday, so it was running that version when it happened overnight. I'm sure I've seen it happen recently in an earlier build too - IIRC, the other time it happened it had about 20/50GB cached, then I saw a while later that it was down to 0/50GB and was 'pinning data' again. On an unrelated note, I've noticed that only drives that are mounted, either in a folder or with a drive letter, are available for selection as a cache drive when attaching.