Covecube Inc.

Firerouge

Members
  • Content Count: 30
  • Days Won: 1

Firerouge last won the day on May 8 2017

Firerouge had the most liked content!

About Firerouge
  • Rank: Advanced Member
  1. With the new hierarchical chunk organization, shouldn't this now be technically possible?
  2. Whoa, that's way slower than I expected. You're seeing only about 5.5TB migrated per day! What system specs and resource consumption are you seeing? Does it seem bottlenecked by anything other than Google's concurrency limit?
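For scale, here's a back-of-the-envelope conversion of that 5.5TB/day figure into sustained throughput. This is my own arithmetic (decimal units assumed), not a number from the post:

```python
# Convert ~5.5 TB/day of migrated data into average throughput figures.
TB = 10**12                      # bytes per terabyte (decimal)
SECONDS_PER_DAY = 86_400

bytes_per_day = 5.5 * TB
mb_per_s = bytes_per_day / SECONDS_PER_DAY / 10**6   # megabytes per second
mbit_per_s = mb_per_s * 8                            # megabits per second

print(f"{mb_per_s:.1f} MB/s = {mbit_per_s:.0f} Mbit/s sustained")
```

So 5.5TB/day works out to roughly 64 MB/s, or about 500 Mbit/s sustained, which helps gauge whether the bottleneck is bandwidth or the API's concurrency cap.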
  3. It's certainly possible, but cloud hosting can be prohibitively expensive if you intend to get a system capable of hardware transcoding (a computational must for more than one or two high resolution streams) along with the bandwidth capacity. Furthermore, you'll probably want to look at providers who have locality to your client streaming locations. It's also important, since you mention using a (sketchy) seedbox host, that you don't attempt to download torrents directly into your cloud drive. You will almost certainly fragment the filesystem and nullify the capabilities of the p
  4. The new beta looks to have major improvements to the migration process; make sure you're on it before reporting any additional bugs or doing something crazy like deleting the local cache.
     .1307
     * Added detailed logging to the Google Drive migration process that is enabled by default.
     * Redesigned the Google Drive migration process to be quicker in most cases:
       - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
       - Is able to resume from where the old migration left off.
     * [Issue #28410] Output a descriptive Warning to the log when a
  5. I'm still holding off patiently on the conversion. It sounds like it works, but I'm waiting to get a better idea of the time it takes relative to drive data size. I've noticed that, without any changed settings, these past few days I've gotten a couple of yellow I/O error warnings about the user upload rate limit being exceeded (which otherwise haven't been a problem), and I've noticed gdrive-side upload throttling at a lower than normal concurrency, only 4 workers at 70mbit. I'm guessing some of the rate limit errors people may be seeing while converting are transient from gdrive being under
  6. I'm guessing this latest beta changelog is referencing the solution to this:
     .1305
     * Added a progress indicator when performing drive upgrades.
     * [Issue #28394] Implemented a migration process for Google Drive cloud drives to hierarchical chunk organization:
       - Large drives with > 490,000 chunks will be automatically migrated.
       - Can be disabled by setting GoogleDrive_UpgradeChunkOrganizationForLargeDrives to false.
       - Any drive can be migrated by setting GoogleDrive_ForceUpgradeChunkOrganization to true.
       - The number of concurrent requests to use when migrating can be s
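To make the interaction between those two settings concrete, here's a minimal sketch of the migration decision as the changelog describes it. The function and its defaults are my own illustration, not CloudDrive's actual code; only the setting names and the 490,000-chunk threshold come from the changelog:

```python
# Sketch of the .1305 migration decision: large drives auto-migrate unless
# opted out, and any drive migrates when the force setting is enabled.
def should_migrate(chunk_count: int,
                   upgrade_large_drives: bool = True,    # GoogleDrive_UpgradeChunkOrganizationForLargeDrives
                   force_upgrade: bool = False) -> bool:  # GoogleDrive_ForceUpgradeChunkOrganization
    if force_upgrade:
        return True
    return upgrade_large_drives and chunk_count > 490_000

print(should_migrate(500_000))                              # large drive: migrates
print(should_migrate(500_000, upgrade_large_drives=False))  # opted out: skipped
print(should_migrate(10_000, force_upgrade=True))           # small drive, forced: migrates
```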
  7. It's worth mentioning that in low disk space scenarios, the drive will also stop writing entirely. With about 3GB of space left on the cache hosting disk (with expandable cache set to minimal), it will entirely disable upload I/O. This is independent of upload size, so for example, with 3GB of space left on the cache drive, you'll still be unable to upload a 700MB file. Upload I/O is also significantly slowed in the range of only 4-6GB of space on the cache hosting drive. This is worth noting, as it can lead to scenarios where you're trying to move files off the cache hosting drive i
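The free-space thresholds described above can be sketched as a simple state function. This is just a model of the observed behavior, with the ~3GB and ~4-6GB figures taken from the post, not from CloudDrive's source:

```python
# Model of the observed low-disk upload behavior:
# below ~3GB free, uploads stop entirely; between ~3GB and ~6GB, they slow.
def upload_state(free_gb: float) -> str:
    if free_gb <= 3:
        return "blocked"    # upload I/O disabled regardless of file size
    if free_gb <= 6:
        return "throttled"  # upload I/O significantly slowed
    return "normal"

print(upload_state(2.5))   # blocked, even for a 700MB file
print(upload_state(5))     # throttled
print(upload_state(20))    # normal
```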
  8. I actually think I know what I was observing. It would seem that if the cache hosting drive nears (or perhaps hits) its full capacity, the entire cache appears to get wiped. This is probably intended behavior, so I've simply set the cache to a smaller size, which seems to more or less resolve the issue.
  9. Is there a way to set the cache cleanup/expiration time to be higher or infinite? Essentially, I have a large expandable cache set, but over time the cache shrinks as it automatically removes data, presumably if it's not accessed soon or often enough. I'd like the cache to remain at its maximum level until I see fit to clear it myself, or until there are no longer any files in the drive to cache. Is this possible? Perhaps with one of the other cache modes besides expandable?
  10. Yes, it's probably only a partial log of it, but I submitted the archive to your dropbox submission link (labelled with this account name).
  11. I caught it happening on 861. As you can see, a 2+ minute read operation on one chunk... I've attempted to trace this (I didn't get the start, but it should have recorded this read). Upon completion of the connection, it jumped back up to its normal one to two dozen parallel write operations (most in a variant of the SharedWait state). I'll hopefully be switching to a faster transit VPS shortly, in an effort to rule out network misconfiguration as the cause. I realize also that this is in part a limitation of the program utilizing the clouddrive, as it seems to wait until all (
  12. My, you're right, it is. Thank you. I wonder why the update checker never gave it to me, though.
  13. I too have noticed this is a common user oversight in the current design. If I can make a suggestion, I think the Windows 7 screen resolution slider (sadly now gone) is a decent case study of how this can be cleanly implemented, by listing only the extremes and common middle options. Obviously a slider is limited in fine granularity, so for users not inclined to max out sliders, the box should still be typeable. I suspect a majority of users would fall into one of these common drive sizes: 1, 10, 50, 100, 256, or 512GB, or 1, 10, 100, or 256TB. Probably most heavily dictated b
  14. I just now noticed that setting in the wiki as well; it isn't listed in the default config, so I'm going to experiment with some variations of that setting as a solution. As for recording it, I've only ever noticed it twice, and that was just luck of glancing at the technical log at the right time and noticing that it had dropped to a single read request, which, on a detailed look, showed the slow speed and the 1min+ connection timer. I'll try to create logs, but I might not have much luck pinning it down. Minor side point while I have your attention: a brand new windows V
  15. That's true, but so will a flexible cache, which queues up writes on top of the existing cache, and if the cache drive itself gets within 6GB of being full, it'll throttle. Whereas the fixed queue will shrink the existing cache until it's all a write queue before throttling. My cache is 15GB smaller than the 60GB of free space on the SSD it is on, so for a flexible cache I'd only get about 9GB of queued writes before throttling, whereas the fixed queue can dedicate all 45GB of the cache to writing (at the loss of all other cached torrent data) before throttling. Better still, since that initial preall
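The headroom arithmetic in that last post can be written out explicitly. The ~6GB throttle margin is the figure from the post itself; the rest is just subtraction:

```python
# Write-queue headroom for flexible vs. fixed cache on a 60GB-free SSD.
drive_free_gb   = 60   # free space on the SSD hosting the cache
cache_size_gb   = 45   # configured cache (15GB smaller than the free space)
throttle_margin = 6    # flexible cache throttles within ~6GB of a full drive

# Flexible cache: writes queue on top of the existing cache, so only the
# slack between the cache and the drive (minus the margin) is usable.
flexible_queue_gb = drive_free_gb - cache_size_gb - throttle_margin

# Fixed cache: the write queue can consume the whole cache allocation
# (evicting cached data) before throttling kicks in.
fixed_queue_gb = cache_size_gb

print(flexible_queue_gb, "GB queued writes (flexible) vs.",
      fixed_queue_gb, "GB (fixed)")
```

That 9GB-vs-45GB gap is the whole argument for the fixed queue in this setup: it trades cached torrent data for five times the write headroom.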