Everything posted by Firerouge

  1. With the new hierarchical chunk organization, shouldn't this now be technically possible?
  2. Whoa, that's way slower than I expected. You're seeing only about 5.5TB migrated per day! What system specs and resource consumption are you seeing? Does it seem bottlenecked by anything other than Google's concurrency limit? (Rough throughput math in the sketch below.)
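For a rough sense of scale, here's that migration rate converted into sustained throughput (a minimal back-of-the-envelope sketch; the 5.5TB/day figure is from the post above, the rest is arithmetic):

```python
# Convert the reported migration rate into sustained throughput.
TB = 1000 ** 4                     # decimal terabyte
migrated_per_day = 5.5 * TB
seconds_per_day = 24 * 60 * 60

throughput = migrated_per_day / seconds_per_day
print(f"{throughput / 1e6:.0f} MB/s")        # ~64 MB/s
print(f"{throughput * 8 / 1e6:.0f} Mbit/s")  # ~509 Mbit/s
```

At roughly 500Mbit/s sustained, that's already a sizable fraction of a gigabit link, so Google's concurrency limit is a plausible bottleneck.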
  3. It's certainly possible, but cloud hosting can be prohibitively expensive if you intend to get a system capable of hardware transcoding (a computational must for more than one or two high resolution streams) along with the bandwidth capacity. Furthermore, you'll probably want to look at providers who have locality to your client streaming locations. It's also important, since you mention using a (sketchy) seedbox host, that you don't attempt to download torrents directly into your cloud drive. You will almost certainly fragment the filesystem and nullify the capabilities of the prefetcher…
  4. New beta looks to have major improvements to the migration process; make sure you're on it before reporting any additional bugs or doing something crazy like deleting the local cache.

     .1307
     * Added detailed logging to the Google Drive migration process that is enabled by default.
     * Redesigned the Google Drive migration process to be quicker in most cases:
       - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
       - Is able to resume from where the old migration left off.
     * [Issue #28410] Output a descriptive Warning to the log when a…
  5. I'm still holding off patiently on the conversion. It sounds like it works, but I'm waiting to get a better idea of the time it takes by drive data size. I've noticed that, without any changed settings, these past few days I've gotten a couple yellow I/O error warnings about user upload rate limit exceeded (which otherwise haven't been problems), and I've noticed gdrive-side upload throttling at lower than normal concurrency, only 4 workers at 70mbit. I'm guessing some of these rate limit errors people may be seeing in converting are transient from gdrive being under…
  6. I'm guessing this latest beta changelog is referencing the solution to this:

     .1305
     * Added a progress indicator when performing drive upgrades.
     * [Issue #28394] Implemented a migration process for Google Drive cloud drives to hierarchical chunk organization:
       - Large drives with > 490,000 chunks will be automatically migrated.
       - Can be disabled by setting GoogleDrive_UpgradeChunkOrganizationForLargeDrives to false.
       - Any drive can be migrated by setting GoogleDrive_ForceUpgradeChunkOrganization to true.
       - The number of concurrent requests to use when migrating can be s…
  7. It's worth mentioning that in low disk space scenarios, the drive will also stop writing entirely. With about 3GB of space left on the cache hosting disk (with expandable cache set to minimal), it will entirely disable upload I/O. This is independent of upload size, so for example, with 3GB of space left on the cache drive, you'll still be unable to upload a 700MB file. Upload I/O is also significantly slowed in the range of only 4-6GB of space on the cache hosting drive. This is worth noting, as it can lead to scenarios where you're trying to move files off the cache hosting drive i…
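A minimal sketch of a guard built around those observed thresholds (the ~3GB hard stop and ~4-6GB slow zone are observations from the post, not documented constants, and the drive letter is hypothetical):

```python
import shutil

GB = 1000 ** 3

# Thresholds observed above (not official constants): below ~3GB free the
# drive stops accepting uploads; between ~4-6GB free, upload I/O slows.
HARD_STOP_FREE = 3 * GB
SLOW_ZONE_FREE = 6 * GB

def cache_drive_headroom(path: str) -> str:
    """Classify how close the cache hosting drive is to stalling writes."""
    free = shutil.disk_usage(path).free
    if free <= HARD_STOP_FREE:
        return "uploads disabled: free space below ~3GB"
    if free <= SLOW_ZONE_FREE:
        return "uploads throttled: free space in the ~4-6GB slow zone"
    return "ok"

print(cache_drive_headroom("D:\\"))  # hypothetical cache drive
```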
  8. I actually think I know what I was observing. It would seem that if the cache hosting drive nears (or perhaps hits) its full capacity, the entirety of the cache appears to get wiped. This is probably intended behavior, so I've simply set the cache to a smaller size, which seems to more or less resolve the issue.
  9. Is there a way to set the cache cleanup/expiration time to be higher or infinite? Essentially, I have a large expandable cache set, but over time the cache shrinks as it automatically removes data, presumably if it's not accessed soon or often enough. I'd like the cache to remain at its maximum level until I see fit to clear it myself, or until there are no longer any files in the drive to cache. Is this possible? Perhaps with one of the other cache modes besides expandable?
  10. Yes, it's probably only a partial log of it, but I submitted the archive to your dropbox submission link (labelled with this account name).
  11. I caught it happening on 861. As you can see, a 2+ minute read operation on one chunk... I've attempted to trace this (I didn't get the start, but it should have recorded this read). Upon completion of the connection, it jumped back up to its normal one to two dozen parallel write operations (most in a variant of the SharedWait state). I'll hopefully be switching to a faster transit VPS shortly, in an effort to rule out network misconfiguration as the cause. I realize also that this is in part a limitation in the program utilizing the clouddrive, as it seems to wait until all (…
  12. My, you're right, it is, thank you. Wonder why the update checker never offered it to me, though.
  13. I too have noticed this is a common user oversight in the current design. If I can make a suggestion, I think the Windows 7 screen resolution slider (sadly now gone) is a decent case study of how this can be cleanly implemented, by listing only the extremes and common middle options. Obviously a slider has limited fine granularity, so for users not inclined to max out sliders, the box should still be typeable. I suspect a majority of users would fall into one of these common drive sizes: 1, 10, 50, 100, 256, or 512GB, or 1, 10, 100, or 256TB. Probably most heavily dictated b…
  14. I just now noticed that setting in the wiki as well; it isn't listed in the default config. I'm going to experiment with some variations of that setting as a solution. As for recording it, I've only ever noticed it twice, and that was just luck of glancing at the technical log at the right time and noticing that it had dropped to a single read request, which upon a detailed look showed the slow speed and the 1min+ connection timer. I'll try and create logs, but I might not have much luck pinning it down. Minor side point while I have your attention: a brand new Windows V…
  15. That's true, but so will a flexible cache, which queues up writes on top of the existing cache, and if the cache drive itself gets within 6GB of being full it'll throttle. Whereas the fixed queue will shrink the existing cache until it's all a write queue before throttling. My cache is 15GB smaller than the 60GB free SSD it is on, so with a flexible cache I'd only get about 9GB of queued writes before throttling, whereas the fixed queue can dedicate all 45GB of the cache to writing (at the loss of all other cached torrent data) before throttling. Better still, since that initial preallocation…
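Spelling out the headroom arithmetic in that comparison (a sketch using only the numbers from the post; the 6GB throttle floor is the behavior described above):

```python
# Write-queue headroom before throttling: flexible vs fixed cache.
ssd_free_gb = 60        # free space on the SSD hosting the cache
cache_gb = 45           # cache set 15GB smaller than the SSD's free space
throttle_floor_gb = 6   # throttling starts within ~6GB of a full drive

# Flexible: writes queue on top of the existing cache, so only the space
# between the cache and the throttle floor is available for writes.
flexible_headroom = ssd_free_gb - cache_gb - throttle_floor_gb  # 9 GB

# Fixed: existing cached data is evicted to make room, so the whole
# cache can be turned into write queue before throttling.
fixed_headroom = cache_gb                                       # 45 GB

print(f"flexible: {flexible_headroom}GB, fixed: {fixed_headroom}GB")
```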
  16. I should add that the fixed cache type is another setting directly benefiting torrenting. From the CoveCube blog: "Overall, the fixed cache is optimized for accessing recently written data over the most frequently accessed data." A new torrent is likely to have the majority of seeding requests, so fixed is the best cache if you're continually downloading new torrents. Plus, I prefer the predictable size of the drive cache when performing a large file preallocation.
  17. When a drive is first created, the last advanced setting, cluster size, dictates the maximum drive size. Any size over 10TB must be typed into the cloud drive size setting box. If you want the maximum size, simply type 256TB.
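For context on why cluster size dictates maximum drive size: NTFS addresses a volume as at most about 2^32 clusters, so the ceiling is roughly cluster size × 2^32. A quick sketch (the 2^32-cluster limit is standard NTFS behavior; that CloudDrive's limit maps to it exactly is my assumption):

```python
# NTFS volume ceiling ~= cluster size * 2^32 addressable clusters.
MAX_CLUSTERS = 2 ** 32

for cluster_kb in (4, 8, 16, 32, 64):
    max_bytes = cluster_kb * 1024 * MAX_CLUSTERS
    print(f"{cluster_kb:>2}KB clusters -> {max_bytes // 1024**4} TB max")
# 64KB clusters give the 256TB maximum mentioned above.
```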
  18. Almost every read operation finishes so quickly that it's nearly impossible to even see the connection speeds for them in the log. Occasionally, maybe one read per 100GB, I'll get an incredibly slow read operation, sometimes taking over a minute to download the 20MB chunk (the longest I've seen was 1:50), with speeds around 200-500KB/s. These slow reads tend to block other operations for the program I'm using, which is pretty bad. To try and circumvent this, I edited the IoManager_ReadAbort line in advanced settings, down from 1:55 to 0:30 seconds. However, this comm…
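A quick sanity check on those numbers (just arithmetic on the figures in the post):

```python
# Time to pull a 20MB chunk at the slow speeds observed above.
chunk_mb = 20
for kb_per_s in (200, 350, 500):
    seconds = chunk_mb * 1000 / kb_per_s
    print(f"{kb_per_s}KB/s -> {seconds:.0f}s")
# 200KB/s -> 100s, 350KB/s -> 57s, 500KB/s -> 40s
# At the slow end, a single chunk runs well past a 0:30 ReadAbort
# threshold, so those reads would be cut off by the shortened timeout.
```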
  19. And I agree, it wasn't a controlled test by any means; other tools were using the drive at the time I tried to defrag it, and I haven't given it a second attempt. I needed to create a new disk anyway, and preallocation eliminates the fragmentation problem. Similar point with minimum download: my initial drive configuration had a 1MB minimum, my new one uses 5MB, which should perform better (fewer API requests as well). Hopefully final builds better guide users on setting these, or ideally configure them more dynamically by need. Speaking of which, any other tips from the advanced con…
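On the "fewer API requests" point, a rough sketch of the effect of a larger minimum download size (assuming each minimum-download-sized read maps to one provider request, which is my reading of the behavior described in these posts):

```python
# Provider requests needed to pull one 20MB chunk, by minimum download size.
chunk_mb = 20
for min_download_mb in (1, 5, 10, 20):
    requests = -(-chunk_mb // min_download_mb)   # ceiling division
    print(f"{min_download_mb}MB minimum -> {requests} requests per chunk")
# 1MB -> 20 requests, 5MB -> 4: five times fewer API calls per chunk.
```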
  20. Don't disable it for hashing. Watching the technical log shows that (at least with rtorrent) files are hashed by requesting them 1MB at a time, and only requesting the next meg after the previous read finishes. Furthermore, each 1MB request shows a download speed, implying each meg of the CD chunk is being downloaded independently. Hashing rates skyrocket with the prefetch settings I've used versus no prefetcher. One thing I'm certain of is that the prefetcher currently only queries subsequent chunk numbers; this is obvious from the technical logs as well. It has some clever logic for existing cached blocks…
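A toy model of why that access pattern makes the prefetcher matter so much: with strictly serial 1MB reads, per-request latency dominates, while read-ahead lets later reads hit local cache. A minimal sketch (the latency and bandwidth figures are illustrative assumptions, not measurements):

```python
# Hash-check wall time for a 1GB file read serially in 1MB requests.
file_mb = 1024
latency_s = 0.25       # assumed per-request round trip to the provider
bandwidth_mb_s = 10    # assumed transfer rate once a request is flowing

# No prefetch: every 1MB read pays latency + transfer, strictly in series.
serial_s = file_mb * (latency_s + 1 / bandwidth_mb_s)

# Prefetch: the prefetcher streams ahead while the client hashes, so wall
# time approaches pure transfer time for the whole file.
prefetched_s = file_mb / bandwidth_mb_s

print(f"serial: {serial_s:.0f}s, prefetched: ~{prefetched_s:.0f}s")
# serial: 358s, prefetched: ~102s (with these assumed numbers)
```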
  21. My config setting is different for seeding (longer time window, more data fetched than one block if using a sequential drive); what I gave was my hashing config. Since we both have 1MB triggers, we both should cache after the client loads the first meg to give to the peer. You are correct that a longer wait time (while having more false positives) will allow for prefetching blocks to slower peer connections. But that impact seems minimal, particularly on scrambled drives; the minimum download size should result in caching slightly more than you need with each read, and if connection speed…
  22. I recently changed quite a few settings that have greatly improved performance; rtorrent downloads about twice as fast now. Overall, two settings are crucial:
      - Having the torrent client preallocate files (so that chunks are sequential). This solves many problems, specifically the prefetcher not fetching useful chunks.
      - Optimal prefetch settings. The breakdown of this is:
        - 1MB prefetch trigger = the size the torrent client attempts to hash at a time
        - 20MB read ahead = the provider block size the fs was set up with (might want this 1MB lower, as this actually flows into the next chunk, or possib…
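Condensing that tuning logic (a sketch of the post's reasoning, not official guidance; the "one chunk minus one trigger" read-ahead is the adjustment suggested above):

```python
# Derive prefetch settings from the workload, per the reasoning above.
client_read_mb = 1       # rtorrent hashes by requesting 1MB at a time
provider_chunk_mb = 20   # chunk size the filesystem was created with

prefetch_trigger_mb = client_read_mb  # fire on every hash-sized read
read_ahead_mb = provider_chunk_mb - client_read_mb  # 19MB keeps the
# prefetch inside one chunk instead of spilling into the next

print(f"trigger={prefetch_trigger_mb}MB, read ahead={read_ahead_mb}MB")
```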
  23. You've hit exactly upon my goal: long term seeding without long term storage costs. I'm trying to perform that feat from within the same instance that downloads, important as the drive can only be mounted on any one PC at a time. This instance is a high bandwidth, unmetered, but tightly SSD-capacity-capped VPS; both of these are important as you'll see later. The crux of the problem I'm trying to get addressed here is performing hash checks on torrents sent straight to the drive from a torrent client (an important distinction from files placed on a clouddrive which were first downloaded en…
  24. I'd say it's far from a great drive to torrent to, but in a pinch it works. To recap the state of the Windows torrent clients:
      - All hash impossibly slowly (never let a torrent client close with a partial download, ever).
      - rtorrent hashes a tiny bit quicker, but has download speeds under half of what should be sustainable (probably cygwin overhead).
      - qBittorrent will slowly accumulate more and more disk overloads before locking up at 100% overload after a few hours.
      - Vuze will download, pause while it flushes to disk, and write an unusually large amount of extra data; overall probably the s…
  25. I can't directly answer this, but from the sounds of it, you may be best served by third-party backup software that could potentially utilize the clouddrive as the backup destination.