Firerouge

Members
  • Posts: 31
  • Joined
  • Days Won: 1

Firerouge last won the day on May 8, 2017

Firerouge had the most liked content!

Recent Profile Visitors

788 profile views

Firerouge's Achievements

Advanced Member (3/3)

Reputation: 3

  1. Can this be re-evaluated? With Workspace drives now being limited by Google, Team Drives are the only Google option that has no size-based quota. Large chunk sizes could help mitigate the 400,000-file limit; with 100MB chunks, 40TB Team Drives may be possible (see the worked numbers after this list).
  2. With the new hierarchical chunk organization, shouldn't this now be technically possible?
  3. Whoa, that's way slower than I expected. You're seeing only about 5.5TB migrated per day! What sort of system specs or resource consumption are you seeing? Does it seem bottlenecked by anything other than Google's concurrency limit?
  4. It's certainly possible, but cloud hosting can be prohibitively expensive if you intend to get a system capable of hardware transcoding (a computational must for more than one or two high-resolution streams) along with the bandwidth capacity. Furthermore, you'll probably want to look at providers who have locality to your client streaming locations. It's also important, since you mention using a (sketchy) seedbox host, that you don't attempt to download torrents directly into your cloud drive; you will almost certainly fragment the filesystem and nullify the capabilities of the prefetcher. But fundamentally the cloud drive migration is as simple as unmounting from one location and remounting in another.
  5. The new beta looks to have major improvements to the migration process; make sure you're on it before reporting any additional bugs or doing something crazy like deleting the local cache.
     .1307
     * Added detailed logging to the Google Drive migration process that is enabled by default.
     * Redesigned the Google Drive migration process to be quicker in most cases:
       - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
       - Is able to resume from where the old migration left off.
     * [Issue #28410] Output a descriptive Warning to the log when a storage provider's data organization upgrade fails.
  6. I'm still holding off patiently on the conversion. It sounds like it works, but I'm waiting to get a better idea of the time it takes relative to drive data size. Without any changed settings these past few days, I've gotten a couple of yellow I/O error warnings about the user upload rate limit being exceeded (which otherwise haven't been a problem), and I've noticed Google Drive-side upload throttling at lower than normal concurrency, only 4 workers at 70Mbit. I'm guessing some of the rate limit errors people may be seeing while converting are transient from Google Drive being under high load.
  7. I'm guessing this latest beta changelog is referencing the solution to this:
     .1305
     * Added a progress indicator when performing drive upgrades.
     * [Issue #28394] Implemented a migration process for Google Drive cloud drives to hierarchical chunk organization:
       - Large drives with > 490,000 chunks will be automatically migrated.
       - Can be disabled by setting GoogleDrive_UpgradeChunkOrganizationForLargeDrives to false.
       - Any drive can be migrated by setting GoogleDrive_ForceUpgradeChunkOrganization to true.
       - The number of concurrent requests to use when migrating can be set with GoogleDrive_ConcurrentRequestCount (defaults to 10).
       - Migration can be interrupted (e.g. system shutdown) and will resume from where it left off on the next mount.
       - Once a drive is migrated (or in progress), an older version of StableBit CloudDrive cannot be used to access it.
     * [Issue #28394] All new Google Drive cloud drives will use hierarchical chunk organization with a limit of no more than 100,000 children per folder.
     Some questions, seeing as the limit appears to be around 500,000: Is there an option to set the new hierarchical chunk organization folder limit to something higher than 100,000? Has anyone performed the migration yet, and what is the approximate time it takes to transfer a 500,000-chunk drive to the new format? Seeing as there are concurrency limit options, does the process also entail a large amount of upload or download bandwidth? After migrating, is there any performance difference compared to the prior non-hierarchical chunk organization?
     Edit: if the chunk limit is 500,000, and if chunks are 20MB, shouldn't this be occurring on all drives over 10TB in size (see the arithmetic sketch after this list)? Note, I haven't actually experienced this issue and I have a few large drives under my own API key, so it may be a very slow rollout or an A/B test.
  8. It's worth mentioning that in low disk space scenarios, the drive will also stop writing entirely. With about 3GB of space left on the cache-hosting disk (with expandable cache set to minimal), it will entirely disable upload I/O. This is independent of upload size, so, for example, with 3GB of space left on the cache drive you'll still be unable to upload a 700MB file. Upload I/O is also significantly slowed in the range of only 4-6GB of free space on the cache-hosting drive. This is worth noting, as it can lead to scenarios where you're trying to move files off the cache-hosting drive into the cloud drive, but are unable to make more room for the cache (see the watchdog sketch after this list).
  9. I actually think I know what I was observing. It would seem that if the cache-hosting drive nears (or perhaps hits) its full capacity, the entirety of the cache appears to get wiped. This is probably intended behavior, so I've simply set the cache to a smaller size, which seems to more or less resolve the issue.
  10. Is there a way to set the cache cleanup/expiration time to be higher or infinite? Essentially, I have a large expandable cache set, but over time the cache shrinks as it automatically removes data, presumably if it's not accessed soon or often enough. I'd like the cache to remain at its maximum level until I see fit to clear it myself, or until there are no longer any files in the drive to cache. Is this possible? Perhaps with one of the other cache modes besides expandable?
  11. Yes, it's probably only a partial log of it, but I submitted the archive to your dropbox submission link (labelled with this account name).
  12. I caught it happening on 861. As you can see, a 2+ minute read operation on one chunk... I've attempted to trace this (I didn't get the start, but it should have recorded this read). Upon completion of the connection, it jumped back up to its normal one to two dozen parallel write operations (most in a variant of the SharedWait state). I'll hopefully be switching to a faster transit VPS shortly, in an effort to rule out network misconfiguration as the cause. I realize also that this is in part a limitation of the program utilizing the CloudDrive, as it seems to wait until all (or most) of the burst of operations complete before starting the next wave, so even a relatively slow 20-second read can have blocking implications for additional writes. However, a fast fix for the worst offenders (multi-minute connections) would be quite beneficial.
  13. My, you're right, it is; thank you. I wonder why the update checker never offered it to me, though.
  14. I too have noticed this is a common user oversight in the current design. If I can make a suggestion, I think the Windows 7 screen resolution slider (sadly now gone) is a decent case study of how this can be cleanly implemented, by listing only the extremes and common middle options. Obviously a slider has limitations in fine granularity, so for users not inclined to max out sliders, the box should still be typeable. I suspect a majority of users would fall into one of these common drive sizes: 1, 10, 50, 100, 256, or 512GB, or 1, 10, 100, or 256TB. Probably most heavily dictated by the available storage options from each provider.
  15. I just now noticed that setting in the wiki as well; it isn't listed in the default config, so I'm going to experiment with some variations of that setting as a solution. As for recording it, I've only ever noticed it twice, and that was just the luck of glancing at the technical log at the right time and noticing that it had dropped to a single read request, which on a detailed look showed the slow speed and the 1min+ connection timer. I'll try to create logs, but I might not have much luck pinning it down. Minor side point while I have your attention: a brand new Windows VPS running almost exclusively 854+rtorrent+rclone rarely has unexpected reboots during peak disk I/O. The problem seems to be described in issue 27416, ostensibly fixed a month ago, but in a seemingly unreleased version 858. Can we expect a new RC soon? The issue tracker seems to imply you're already past version 859 internally.
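
A rough sketch of the chunk-size arithmetic behind post 1 above (the 400,000-file Team Drive limit and the chunk sizes are the figures quoted in the post, not verified against Google's current documentation):

    # Approximate ceiling on drive size = per-drive file limit x chunk size.
    # All figures come from the post above and may be out of date.
    TEAM_DRIVE_FILE_LIMIT = 400_000   # files per Team Drive, per the post

    def max_drive_size_tb(chunk_size_mb, file_limit=TEAM_DRIVE_FILE_LIMIT):
        return file_limit * chunk_size_mb / 1_000_000   # MB -> TB (decimal units)

    for chunk_mb in (10, 20, 100):
        print(f"{chunk_mb:>4} MB chunks -> ~{max_drive_size_tb(chunk_mb):.0f} TB ceiling")
    # 10 MB -> ~4 TB, 20 MB -> ~8 TB, 100 MB -> ~40 TB (the 40TB figure from the post)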
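Similarly, for the folder-limit question and the Edit in post 7, a minimal sketch of the arithmetic (assuming the 500,000 per-folder limit and 20MB chunks mentioned in the post; the depth of the new hierarchical layout isn't documented in the changelog, so the two-level figure is only an illustration):

    CHUNK_MB = 20                 # chunk size assumed in the post
    FLAT_FOLDER_LIMIT = 500_000   # per-folder file limit the migration works around
    HIER_CHILD_LIMIT = 100_000    # per-folder cap quoted in the .1305 changelog

    flat_ceiling_tb = FLAT_FOLDER_LIMIT * CHUNK_MB / 1_000_000
    print(f"Flat layout ceiling: ~{flat_ceiling_tb:.0f} TB")   # ~10 TB, matching the Edit

    # Illustrative only: even one extra folder level raises the ceiling enormously.
    two_level_chunks = HIER_CHILD_LIMIT * HIER_CHILD_LIMIT     # folders x chunks per folder
    print(f"Two-level ceiling: ~{two_level_chunks * CHUNK_MB / 1e9:.0f} PB")   # ~200 PB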
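And for the low-disk-space behaviour in post 8, a minimal watchdog sketch that warns before the cache drive reaches the point where uploads stall (the 3GB and 6GB thresholds are the ones reported in the post, not published figures, and the cache path is only an example):

    import shutil

    CACHE_DRIVE = "D:\\"   # example path of the cache-hosting drive
    HARD_STOP_GB = 3       # post reports upload I/O stopping entirely around here
    SLOWDOWN_GB = 6        # post reports significantly throttled uploads below roughly this

    free_gb = shutil.disk_usage(CACHE_DRIVE).free / 1_000_000_000
    if free_gb <= HARD_STOP_GB:
        print(f"{free_gb:.1f} GB free: uploads will likely stall; shrink the cache or free space")
    elif free_gb <= SLOWDOWN_GB:
        print(f"{free_gb:.1f} GB free: uploads may be significantly throttled")
    else:
        print(f"{free_gb:.1f} GB free: OK")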