
Directly Utilizing CloudDrive for Torrents


Firerouge

Question

First off, I want to say that I'm a trial user of build 854, and am deeply impressed by this software; however, there is one major issue standing in the way of my use case.

 

I have tried 4 different torrent clients, downloading and uploading directly from a CloudDrive. They all work reasonably well for a small number of parallel torrent downloads, with overall download speed decreasing by about 66%. That overhead is reasonable, as is the need to limit the maximum number of files being processed simultaneously.

 

 

However, when something does go wrong (inevitably, and frequently if too many torrents are downloading), the clients will want to hash the files to make sure nothing is corrupted.

 

 

This, however, does not work, or at least not well. The hashing rate is an order of magnitude slower than if the file were stored locally.

This flaw is so pronounced that it is quicker to simply redownload a torrent from scratch.

 

 

rtorrent (Linux-based, but running under Cygwin) performs best, with what seem to be optimizations that allow it to skip large parts of the hashing process.

Still, a ~12 GB file downloaded to 50% will take a day to fully hash check; furthermore, you can copy the same file locally and hash check it before another client checking directly from the CloudDrive reaches 5%.

This is not a solution, however, as I have torrents whose partial download state exceeds my total local storage capacity.

 

 

This shouldn't be the case. In practice, every written torrent block has to be downloaded and checked, but it's quite clear that the prefetcher is not managing to cache the file before the client needs it, or that another problem is in play.

 

I suspect, from limited insight, that the clients are not hashing the torrent blocks in sequential order (I'm fairly sure of this) and are instead skipping around throughout the file, which may be confusing the prefetcher and/or cache.

Furthermore, and this is just a guess, I believe that since torrent blocks usually don't align with the provider chunk size, CloudDrive may frequently download a chunk, check only one of the two or more blocks contained in it (depending on both block and chunk size), and then discard it from the cache because of the potentially considerable wait before the torrent client randomly comes back and checks the neighboring blocks stored in that chunk.
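
To make that misalignment concrete, here's a rough Python sketch of how torrent pieces map onto provider chunks. The piece and chunk sizes are illustrative numbers only, and it assumes the file's data is laid out contiguously from its start, which ignores filesystem metadata and fragmentation; it is not CloudDrive's actual layout logic.

# Illustrative only: contiguous layout assumed, numbers are examples.
PIECE_SIZE = 8 * 1024 * 1024      # example torrent piece size (8 MiB)
CHUNK_SIZE = 20 * 1024 * 1024     # provider chunk size discussed in this thread (20 MB)

def chunks_for_piece(piece_index):
    """Provider chunks a given torrent piece touches (contiguous layout assumed)."""
    start = piece_index * PIECE_SIZE
    end = start + PIECE_SIZE - 1
    return list(range(start // CHUNK_SIZE, end // CHUNK_SIZE + 1))

def pieces_per_chunk():
    """Upper bound on how many pieces share a single provider chunk."""
    return -(-CHUNK_SIZE // PIECE_SIZE)   # ceiling division

if __name__ == "__main__":
    for p in (0, 1, 2, 3):
        print("piece", p, "-> chunks", chunks_for_piece(p))
    print("up to", pieces_per_chunk(), "pieces share each chunk")

With these example sizes, piece 2 straddles chunks 0 and 1, and up to three pieces share any one chunk, so a chunk downloaded to check one piece is only fully "used" if the client comes back for its neighbors before the cache drops it.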

 

It's also possible that the root of the problem is that the clients expect a sparse filesystem, which (and I'm unclear on the details of this) lets them skip hash checking of blocks that are zeroed (not yet written). It's possible that CloudDrive doesn't handle this sparse storage and is actually writing all those zeros out to the cloud, and also requiring the torrent client to check them. I'm further inclined to blame the allocation of zero space because, when copying a half-downloaded file with Explorer, the transfer status doesn't count progress against the size on disk (the data actually downloaded), but rather against the completed file size (the size it should be if fully present).
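
If anyone wants to check this themselves, comparing a file's logical size with its size on disk tells you whether it is actually sparse. Here's a Windows-only Python sketch; the path is a placeholder, and GetCompressedFileSizeW is the kernel32 call that reports allocated size for sparse and compressed files.

# Compare logical size with size on disk: if they match for a half-downloaded
# torrent, the file is fully allocated (zero-filled); if size-on-disk is much
# smaller, the file is sparse. Path below is a placeholder.
import ctypes
import os

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong
kernel32.GetCompressedFileSizeW.argtypes = [ctypes.c_wchar_p, ctypes.POINTER(ctypes.c_ulong)]

def size_on_disk(path):
    """Bytes actually allocated on disk (valid for sparse and compressed files)."""
    high = ctypes.c_ulong(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    if low == 0xFFFFFFFF and ctypes.get_last_error() != 0:
        raise ctypes.WinError(ctypes.get_last_error())
    return (high.value << 32) + low

if __name__ == "__main__":
    path = r"X:\torrents\partial-download.mkv"   # placeholder path on the CloudDrive
    logical = os.path.getsize(path)
    allocated = size_on_disk(path)
    print("logical size :", logical)
    print("size on disk :", allocated)
    print("fully allocated" if allocated >= logical else "sparse / partially allocated")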

 

The problem could also have its roots in the fact that a torrent client doesn't download a file's blocks sequentially, and may take quite a while to complete any given download, which (and this is purely a guess) causes the blocks to get mixed in with other downloads and scattered across many different non-sequential chunks.

 

All things considered, the prefetcher manages cache hits in the range of 60-85%, with about 2 GB utilized at any given time, configured with a 10 MB trigger, 20 MB read-ahead, and 300-second expiration.


23 answers to this question


It's also worth elaborating on the performance of the clients, as they all (rtorrent, uTorrent, Deluge, qBittorrent) exhibit similar and fairly odd behavior.

 

For starters, even with Background I/O turned off, any CloudDrive read activity substantially impacts performance, up to the point of completely stalling the torrent clients if there are too many parallel reads.

 

When first started, a client will download quite quickly, as high as the maximum link speed (20 MiB/s for me), but will very quickly slow down to my sustained CloudDrive speeds (about 5-7 MiB/s).

 

 

During that initial burst, the upload queue can get rather long, many gigabytes in length, though the throttling does not seem to be in response to this.

I've had an upload queue over 7 GB long that continued to download at my normal sustained speeds.

 

Further throttling occurs randomly, most dramatically with speeds dropping to under 1 MiB/s for ~10-second bursts before returning to sustained speeds for the next minute or so. The upload queue is usually under 100 MB while this occurs. It happens more often when other applications are performing read operations (specifically an rclone copy from a StableBit GDrive CloudDrive to ACD, which has caused complete crashes with more than 4 uploaders), and the quantity of read operations also seems to affect how much the speed fluctuates, most commonly between 3-5 MiB/s.

 

I find that splitting downloads across 2 different clients gives me the greatest overall download speeds, as often, when one throttles down, the other will surge up, though frequently both will throttle down simultaneously.

 

This is what qBittorrent reports right when throttling occurs. The throttling does seem to get worse over time, to the point where the client stops downloading, continuously reporting a 100% read overload.

[screenshot: qBittorrent transfer status at the moment throttling occurs]

 

Overloading the disk write (and read) cache is annoying, and whatever underlying issue makes it lock up over time is real, but for me that's secondary to fixing hashing...


Yeah, torrents are about the worst-case scenario for CloudDrive. I wouldn't torrent directly to/from it if at all possible. You COULD try upping the read-ahead to something larger, like 40MB, and a much larger cache (100GB+); that might help.


Yeah, torrents are about the worst-case scenario for CloudDrive. I wouldn't torrent directly to/from it if at all possible. You COULD try upping the read-ahead to something larger, like 40MB, and a much larger cache (100GB+); that might help.

 

You're absolutely right, it most certainly is. With many simultaneous file handles and read/write operations, it may be the true ultimate test of CloudDrive.

 

With my current configuration, writing and reading directly to it is functional (as long as I avoid crashes, which force rehashing).

 

I'm unfortunately limited to an 80 GB SSD, with 40 GB for cache; the remainder is swap and upload queue. These sizes can't practically go any larger without risking the 5 GB free-space limit that throttles CloudDrive.

 

This configuration has handled over 1 TB of torrent data in the past 4 days, so in general I'm happy with the results.

 

 

As for those cache settings, I found that the 2-chunk read-ahead I've set (20 MB) seems more than sufficient, and that changes to the prefetcher (including turning it off) have little impact on file hashing performance. It fairly consistently begins the next 20 MB prefetch before, or immediately after, the completion of the prior one. I tried all the way up to 400 MB read-ahead, but it simply slowed everything down by wasting the slower download bandwidth on lots of unnecessary chunks.


To be honest, I think using CloudDrive for torrenting abuses the cloud storage providers with very high API call volumes.

 

As I understand it, Covecube pays for the API calls we make through CloudDrive, and we aren't using an API quota tied to our own accounts. rclone users complain all the time about 24-hour API bans. Or am I getting that wrong?


To be honest, I think using CloudDrive for torrenting abuses the cloud storage providers with very high API call volumes.

 

As I understand it, Covecube pays for the API calls we make through CloudDrive, and we aren't using an API quota tied to our own accounts. I think that because rclone users complain all the time about 24-hour API bans.

 

You're right, it's a borderline use case. That said, torrenting has many very useful, legal applications for distributing files, as it's often the fastest method, and I don't believe it should be specifically banned; its I/O is more random than it is abnormally intensive compared to alternative download methods (except in my case, due to the sheer quantity of usage).

 

With 13/10 threads, the CloudDrive client gets rate-limited only a portion of the time, and usually only when the connection has both inbound and outbound traffic flowing.

 

I find that rclone is very inefficient, and that is likely the cause of those API bans. When I use rclone to pull from a StableBit CloudDrive, it often crashes other programs that are using the drive, as it seems to starve all other I/O operations.

 

I'm all for using my own API key (and limits), especially if it would allow me greater performance. It has been floated here before, but I believe the Covecube developers are happy with the pooled API keys they are using at the moment and don't wish to release that capability (which might then be abused).


To be honest, I think using CloudDrive for torrenting abuses the cloud storage providers with very high API call volumes.

 

As I understand it, Covecube pays for the API calls we make through CloudDrive, and we aren't using an API quota tied to our own accounts. rclone users complain all the time about 24-hour API bans. Or am I getting that wrong?

 

Much of this is wrong or a bit misinformed.

 

Using a CloudDrive as a torrent drive does not result in any additional API calls. It will result in additional reads and writes to the cache drive, but CloudDrive will still upload and download the chunks with the same amount of API usage as any other use.

 

Beyond this, rClone results in API bans because it neither caches filesystem information locally, nor respects Google's throttling requests with incremental backoff. CloudDrive does both of these things, and will do so regardless of its use-case--torrents or otherwise.
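
For reference, "throttling requests with incremental backoff" just means retrying a rate-limited call after progressively longer waits, which is the pattern Google documents for its APIs. A generic Python sketch of the idea, not CloudDrive's or Google's actual code; RateLimitError and do_request are placeholders for whatever call is being throttled:

# Generic exponential backoff with jitter for a rate-limited API call.
import random
import time

class RateLimitError(Exception):
    """Raised by do_request() when the provider answers with a rate-limit response (placeholder)."""

def with_backoff(do_request, max_retries=8, base_delay=1.0, max_delay=64.0):
    """Retry a throttled call with exponentially growing waits plus jitter."""
    for attempt in range(max_retries):
        try:
            return do_request()
        except RateLimitError:
            delay = min(base_delay * (2 ** attempt), max_delay)   # 1s, 2s, 4s, ...
            time.sleep(delay + random.uniform(0, 1))              # jitter avoids synchronized retries
    raise RuntimeError("gave up after repeated rate-limit responses")

A client that hammers the provider without this kind of backoff is exactly what earns the 24-hour bans.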

 

In any case, CloudDrive DOES work for torrents. In particular, it makes a great drive to hold long-term seeds. The downside, as observed, is that hash checks and such will take a long time initially, but once that's completed you should notice few differences as long as your network speeds can accommodate the overhead for CloudDrive. 


In any case, CloudDrive DOES work for torrents. In particular, it makes a great drive to hold long-term seeds. The downside, as observed, is that hash checks and such will take a long time initially, but once that's completed you should notice few differences as long as your network speeds can accommodate the overhead for CloudDrive. 

 

I'd say it's far from a great drive to torrent to, but in a pinch it works. To recap the state of the Windows torrent clients:

 

All of them hash impossibly slowly (never let a torrent client close with a partial download, ever).

 

rtorrent hashes a tiny bit quicker, but has download speeds under half of what should be sustainable (probably Cygwin overhead).

 

qBittorrent will slowly accumulate more and more disk overloads before locking up at 100% overload after a few hours.

 

Vuze will download, pause while it flushes to disk, and write an unusually large amount of extra data; overall it's probably the slowest client, though it seems to share the same minor hashing optimization as rtorrent.

 

Transmission, uTorrent, and Deluge all behave fairly similarly to the above. I haven't done detailed performance testing yet, as they have a bad habit of crashing and then needing to re-hash even completed downloads.

 

 

 

None of them properly utilize the upload queue, which never exceeds a small number of queued operations.

 

I've become quite certain that the clients are causing severe fragmentation, which harms prefetching performance. For this reason, I believe the single most important setting to enable in any of these clients is preallocation of storage. This creates a considerable write queue, but it decreases out-of-sequence chunks and improves initial download speeds (the file is already cached). Use it with caution, as starting multiple torrents simultaneously can be too much for the cache to handle.
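
To be clear about what I mean by preallocation: the client writes the file out at its full size, in order, before downloading pieces into it, so the CloudDrive chunks backing it end up sequential. A rough Python illustration of what that amounts to; the clients do this internally when full preallocation is enabled, and the path and size below are placeholders.

# Full preallocation, illustrated: write the whole file, in order, at its final size.
import os

def preallocate(path, size, block=4 * 1024 * 1024):
    """Write the file out at its final size immediately, in sequential order."""
    zeros = b"\0" * block
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(block, remaining)
            f.write(zeros[:n])
            remaining -= n
    assert os.path.getsize(path) == size

if __name__ == "__main__":
    preallocate(r"X:\torrents\example.bin", 100 * 1024 * 1024)   # placeholder path and size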


I'd say it's far from a great drive to torrent to, but in a pinch it works. To recap the state of the Windows torrent clients:

 

 

Right. Like I said, it was a great drive to hold long-term seeds. You're talking about the inarguable issues that it has downloading torrent content. I'm talking about long-term storage for seeding for months or even years after a download. Many old torrents are rarely downloaded and poorly seeded. A CloudDrive, particularly paired with one of the unlimited cloud providers, can host that content essentially indefinitely. Incomplete downloads are not an issue, since seeds are already downloaded. Hash checks take time, but like any server-based storage solution, proper management can minimize if not eliminate the need for them once they're hashed once.

 

Ultimately, though, if you just want a drive to sit there and store content and seed it to your trackers CloudDrive works just fine. The simple solution is obviously to just download to a local drive, upload the completed content to your CloudDrive, hash it (once), and seed from there forever more. 


You're talking about the inarguable issues that it has downloading torrent content. I'm talking about long-term storage for seeding for months or even years after a download. Many old torrents are rarely downloaded and poorly seeded. A CloudDrive, particularly paired with one of the unlimited cloud providers, can host that content essentially indefinitely. 

 

You've hit exactly upon my goal: long-term seeding without long-term storage costs. I'm trying to perform that feat from within the same instance that downloads, which matters because the drive can only be mounted on one PC at a time. This instance is a VPS with high, unmetered bandwidth but a tightly capped SSD capacity; both of these are important, as you'll see later.

 

The crux of the problem I'm trying to get addressed here is that performing hash checks on torrents sent straight to the drive from a torrent client (an important distinction from files placed on a CloudDrive after first being downloaded entirely elsewhere) does not work.

 

Incomplete downloads are not an issue, since seeds are already downloaded. Hash checks take time, but like any server-based storage solution, proper management can minimize if not eliminate the need for them once they're hashed once.

Ultimately, though, if you just want a drive to sit there and store content and seed it to your trackers CloudDrive works just fine.

 

The fact that hashing straight from the CloudDrive takes about a day per 20 GB with 4 cores means no amount of infrastructure optimization or upgrades will make it scale, particularly since most clients also restrict the hashing process to one torrent at a time.

 

This actually has one important implication for your simple solution.

 

The simple solution is obviously to just download to a local drive, upload the completed content to your CloudDrive, hash it (once), and seed from there forever more. 

 

This is the ideal pipeline, since the quantity of torrents is limited only by the bandwidth available to empty the upload queue:
add_torrent -> (download -> upload_queue) -> (upload -> clouddrive) -> seed
The only time data is written to local disk is when it is placed in the CloudDrive upload queue. This allows for torrents larger than the entirety of local storage capacity.
 
 
 
While this is fine and dandy, hashing may still be required, whether from fastresume corruption or an unscheduled shutdown. This simply can't be done on the online copy, so a new process must be performed every time hashing is required:
torrent_needing_checking -> (download -> local) -> client_hash -> (upload -> clouddrive, or delete & symlink) -> seed

This requires the entirety of the torrent to be stored on local disk until it has completely finished checking. That means parallelizing the first half of the process is storage-limited, torrents larger than local capacity can't be fixed, and you are practically restricted to checking one or two files in parallel.
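
For illustration, the "delete & symlink" branch of that process amounts to something like the following Python sketch. The paths are placeholders, os.symlink on Windows needs admin rights or developer mode, and a real repair would verify per-piece hashes from the .torrent metadata rather than compute one whole-file digest.

# Sketch of the manual repair path: pull the file off the CloudDrive to fast
# local storage, hash it there, then put a symlink back so the client keeps
# seeding from the cloud copy without using local space.
import hashlib
import os
import shutil

def rehash_via_local(cloud_path, local_path):
    shutil.copyfile(cloud_path, local_path)               # (download -> local)
    digest = hashlib.sha1()
    with open(local_path, "rb") as f:                     # (client_hash)
        for block in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(block)
    os.remove(local_path)                                 # (delete & symlink)
    os.symlink(cloud_path, local_path)                     # client now seeds from the cloud copy
    return digest.hexdigest()

if __name__ == "__main__":
    cloud = r"X:\seeds\big-file.iso"      # placeholder CloudDrive path
    local = r"C:\scratch\big-file.iso"    # placeholder local scratch path
    print(rehash_via_local(cloud, local))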

 

This works all right, as I've said before (see the individual client problems in my earlier post), but if anything goes wrong during a download, or afterwards, the process must be performed entirely anew for every faulty torrent, which is tedious to do manually, harder to do programmatically, and incompatible with my requirement of working with files larger than local storage.

 

Thanks to StableBit's seamless presentation in Explorer, provided by the cache and upload queuing, files many times greater than local storage capacity should be simple to store and use directly from the cloud, but the hashing problem makes any direct torrenting (especially of large torrents) impractical for anything beyond very light home usage.

 

This style of usage (to my knowledge) can only be done with StableBit, or by fusing an rclone mount with a caching system, which is not yet well documented and is still under development for Windows.


My seed drive can hash FAR more than 20GB/day. Now I'm just wondering about your settings. What are they?

 

Mine are as follows:

 

10 MB chunk size, 10 MB minimum download, 25 GB expandable cache (on an SSD), full encryption, 10 download threads, 5 upload threads, upload throttled at 25 Mbps, 1 MB prefetch trigger, 10 MB prefetch forward, 180-second prefetch window.

 

The server is 1 Gbps downstream, 250 Mbps upstream. It also runs Plex (on a different CloudDrive for media storage, which also seeds).

 

My server can hash a full remuxed Blu-ray movie in about 30-45 minutes. That's around 25-35 GB.


Much of this is wrong or a bit misinformed.

 

Using a CloudDrive as a torrent drive does not result in any additional API calls. It will result in additional reads and writes to the cache drive, but CloudDrive will still upload and download the chunks with the same amount of API usage as any other use.

 

Beyond this, rClone results in API bans because it neither caches filesystem information locally, nor respects Google's throttling requests with incremental backoff. CloudDrive does both of these things, and will do so regardless of its use-case--torrents or otherwise.

 

In any case, CloudDrive DOES work for torrents. In particular, it makes a great drive to hold long-term seeds. The downside, as observed, is that hash checks and such will take a long time initially, but once that's completed you should notice few differences as long as your network speeds can accommodate the overhead for CloudDrive. 

Thanks for clearing things up.


My seed drive can hash FAR more than 20GB/day. Now I'm just wondering about your settings. What are they?

 

I recently changed quite a few settings, which have greatly improved performance; rtorrent downloads about twice as fast now.

 

Overall, two settings are crucial:

  1. Having the torrent client preallocate files (so that chunks are sequential). This solves many problems, specifically the prefetcher not fetching useful chunks.
  2. Optimal prefetch settings; the breakdown is:
  • 1 MB prefetch trigger = the size the torrent client attempts to hash at a time
  • 20 MB read-ahead = the provider chunk size the filesystem was set up with (you might want this 1 MB lower, as it actually flows into the next chunk, or possibly the exact piece size of the torrent you're hashing). This can (and should) be higher if you know a torrent's data is stored in sequential chunks on the CloudDrive, but if that is not the case, the additional prefetched data will not be useful.
  • 3-second trigger window = roughly the longest any read request should take. You want this low enough that apps trickle-accessing data don't have time to read the trigger amount and cause a useless prefetch, but high enough that the hash check has time to read the 1 MB. 1 second works for me as well. (A toy model of this interaction follows below.)
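
Here's the toy model I have in mind for how the trigger and the time window interact. This is a guess at the behavior for illustration only, in Python, not CloudDrive's actual prefetch code.

# Toy model: prefetch fires only when at least `trigger_bytes` of reads arrive
# within `window_seconds`. Not CloudDrive's real algorithm.
from collections import deque

class PrefetchModel:
    def __init__(self, trigger_bytes=1 * 1024 * 1024, window_seconds=3.0,
                 read_ahead=20 * 1024 * 1024):
        self.trigger_bytes = trigger_bytes
        self.window = window_seconds
        self.read_ahead = read_ahead
        self.recent = deque()          # (timestamp, bytes) of recent reads

    def on_read(self, timestamp, nbytes):
        """Return how many bytes the model would prefetch after this read."""
        self.recent.append((timestamp, nbytes))
        while self.recent and timestamp - self.recent[0][0] > self.window:
            self.recent.popleft()
        if sum(b for _, b in self.recent) >= self.trigger_bytes:
            self.recent.clear()
            return self.read_ahead
        return 0

if __name__ == "__main__":
    m = PrefetchModel()
    # A hash check reading 1 MB within a second trips the trigger...
    print(m.on_read(0.0, 512 * 1024), m.on_read(0.5, 512 * 1024))
    # ...while a peer trickling 64 KB every 2 seconds never does.
    print(m.on_read(10.0, 64 * 1024), m.on_read(12.0, 64 * 1024))

Under this model, a fast sequential reader (the hash check) always trips the prefetch, while slow trickle readers never do, which is exactly the trade-off between the hashing and seeding configs.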

 

The remainder of my settings are the established optimal Plex settings, the same as yours in all other respects except a 5 MB minimum download and different thread counts.

 

The solution really shouldn't depend on users reconfiguring the cache for whatever scenario they happen to be in, though. This should be an easy change to the cache: if you see a chunk being read 1 MB at a time, maybe you should just automatically cache the full chunks following that initial 1 MB. Even better if the prefetcher were file-boundary and chunk-placement aware, and could pull the file's next chunk even if it isn't the sequentially next one.


Your prefetcher settings are probably too conservative depending on what you're trying to accomplish.

 

Mine are set up this way because I want CloudDrive to start pulling down content to the local disks when someone starts actively downloading one of my seeds. As such, I want it to respond at a much lower rate than 1MB in 3 secs because many people download from my seeds much slower than that. I don't want CloudDrive to have to poll the content from the provider for every bittorrent chunk someone needs. I'd like to have it on the drive so that their download can be independent (relatively) from the amount of downstream bandwidth that my server has access to at any given moment. I also see no reason for you to limit yourself to a single CD chunk of prefetch unless your storage situation is so dire that you simply do not have any overhead to spare.


Your prefetcher settings are probably too conservative depending on what you're trying to accomplish.

 

Mine are set up this way because I want CloudDrive to start pulling down content to the local disks when someone starts actively downloading one of my seeds. As such, I want it to respond at a much lower rate than 1MB in 3 secs because many people download from my seeds much slower than that.

 

My configuration is different for seeding (a longer time window, and more data fetched than one chunk if using a sequential drive); what I gave was my hashing config.

Since we both have 1 MB triggers, we both should cache once the client loads the first megabyte to hand to the peer. You are correct that a longer time window (while producing more false positives) will allow prefetching for slower peer connections.

 

But that impact seems minimal; particularly on scrambled drives, the minimum download size should result in caching slightly more than you need with each read, and if connection speeds are really slow, CD probably isn't the bottleneck.

 

 

I also see no reason for you to limit yourself to a single CD chunk of prefetch unless your storage situation is so dire that you simply do not have any overhead to spare.

 

The limit to a single CD chunk is because, if the files are non-sequential, the next chunk will contain a totally different and useless file. More is better only if the data was preallocated, or was written to the CD after first being fully downloaded locally.

 

Moral of the story: always preallocate. There are still significant additional improvements that could be made to the cache and prefetcher to better support this type of usage (as mentioned in my earlier post).


My configuration is different for seeding (a longer time window, and more data fetched than one chunk if using a sequential drive); what I gave was my hashing config.

Since we both have 1 MB triggers, we both should cache once the client loads the first megabyte to hand to the peer. You are correct that a longer time window (while producing more false positives) will allow prefetching for slower peer connections.

 

But that impact seems minimal; particularly on scrambled drives, the minimum download size should result in caching slightly more than you need with each read, and if connection speeds are really slow, CD probably isn't the bottleneck.

 

 

 

The limit to a single CD chunk is because, if the files are non-sequential, the next chunk will contain a totally different and useless file. More is better only if the data was preallocated, or was written to the CD after first being fully downloaded locally.

 

Right. So, what's important is not the 1MB part, but the 1MB in relation to the time window you've set. YOUR setup will only prefetch if 1MB of data is requested in less than 3 seconds. That's a pretty big request, particularly for a torrent client--where many downloads are still measured in the KB/sec range. But you say you have different settings for seeding, so I guess that's fine. I honestly think I would just disable the prefetcher for hashing files. I'm not sure if it really adds anything there.

 

In any case, I think you're both dramatically overestimating the importance of file data being stored in sequential chunks, and underestimating the intelligence of the CloudDrive prefetch algorithms. I think you're making assumptions about the nature of the prefetcher that may not be true; though, until documentation is completed, we can probably only speculate.

 

For what it's worth, you can defragment a CloudDrive--if you just want to eliminate the problem altogether. 


Right. So, what's important is not the 1MB part, but the 1MB in relation to the time window you've set. YOUR setup will only prefetch if 1MB of data is requested in less than 3 seconds. That's a pretty big request, particularly for a torrent client--where many downloads are still measured in the KB/sec range. But you say you have different settings for seeding, so I guess that's fine. I honestly think I would just disable the prefetcher for hashing files. I'm not sure if it really adds anything there.

Don't disable it for hashing. Watching the technical log shows that rtorrent, at least, hashes files by requesting them 1 MB at a time, and only requests the next megabyte after the previous read finishes. Furthermore, each 1 MB request shows its own download speed, implying each megabyte of the CD chunk is being downloaded independently. Hashing rates skyrocket with the prefetch settings I've described versus no prefetcher.
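
For context, a torrent hash check is essentially this strictly sequential, synchronous read loop. A simplified Python sketch: a real client reads whatever piece length the metadata specifies rather than a fixed size, and the expected hashes come from the .torrent file (here they are just a placeholder argument).

# Simplified torrent-style hash check: read the file strictly in order, one
# piece at a time, issuing the next read only after the previous piece is hashed.
import hashlib

def check_pieces(path, piece_length, expected_hashes):
    """Return indices of pieces whose SHA-1 does not match the expected hash."""
    bad = []
    with open(path, "rb") as f:
        for index, expected in enumerate(expected_hashes):
            piece = f.read(piece_length)                 # synchronous, sequential read
            if hashlib.sha1(piece).digest() != expected:
                bad.append(index)
    return bad

Because each read waits for the previous one, the check can never go faster than one round trip per megabyte unless the prefetcher has the next data cached already.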

 

 

In any case, I think you're both dramatically overestimating the importance of file data being stored in sequential chunks, and underestimating the intelligence of the CloudDrive prefetch algorithms. I think you're making assumptions about the nature of the prefetcher that may not be true; though, until documentation is completed, we can probably only speculate.

 

For what it's worth, you can defragment a CloudDrive--if you just want to eliminate the problem altogether. 

One thing I'm certain of is that the prefetcher currently only queries subsequent chunk numbers. This is obvious from the technical logs as well. It has some clever logic for already-cached blocks, but it does not find the next chunk number for a file, simply the next chunk in the CD. In my experience, the prefetcher will never prefetch the correct file unless the file occupies sequentially numbered chunks.

 

Though likely only Alex could give us a definitive answer on how it works at the moment.

 

I actually tried the Windows disk defragmenter, but for me it caused the drive to disconnect on version 854 during the analyze step.


If your minimum download is set higher it should not be able to download only 1MB at a time. Mine is set to 10MB, for example. It simply cannot download less than a full chunk on that particular drive at a time. That's one of the reasons that I find your hashing prefetcher settings to be a bit redundant. If you're not prefetching any more than one chunk at a time, the minimum download could handle that setting all by itself.

 

No disk usage should ever dismount your drive. That indicates other problems. Specifically, the drive dismounts because of read/write errors to the provider. If it's happening during heavy usage it's probably related to timeouts from your system I/O. Adjust the relevant setting in the config file in your CloudDrive directory and see if that helps. See my guide here, near the bottom, for specifics: https://www.reddit.com/r/PleX/comments/61ppfi/stablebit_clouddrive_plex_and_you_a_guide/


If your minimum download is set higher it should not be able to download only 1MB at a time. Mine is set to 10MB, for example. It simply cannot download less than a full chunk on that particular drive at a time. That's one of the reasons that I find your hashing prefetcher settings to be a bit redundant. If you're not prefetching any more than one chunk at a time, the minimum download could handle that setting all by itself.

 

No disk usage should ever dismount your drive. That indicates other problems. Specifically, the drive dismounts because of read/write errors to the provider. If it's happening during heavy usage it's probably related to timeouts from your system I/O. Adjust the relevant setting in the config file in your CloudDrive directory and see if that helps. See my guide here, near the bottom, for specifics: https://www.reddit.com/r/PleX/comments/61ppfi/stablebit_clouddrive_plex_and_you_a_guide/

 

And I agree, it wasn't a controlled test by any means; other tools were using the drive at the time I tried to defrag it. I haven't given it a second attempt. I needed to create a new disk anyway, and preallocation eliminates the fragmentation problem.

 

Similar point with the minimum download: my initial drive configuration had a 1 MB minimum, and my new one uses 5 MB, which should hopefully perform better (fewer API requests as well).

 

Hopefully the final builds will better guide users in setting these, or ideally configure them more dynamically as needed.

 

Speaking of which, any other tips from the advanced config? LocalIo_ReleaseHandlesDelayMS in particular looks interesting.


That setting is specifically for drives hosted on the local disk and shares provider. It won't do anything in the overwhelming majority of use cases.

 

In general, the advanced settings target either very specific scenarios, or the quirks of particular providers. I don't really recommend messing with them unless there is a problem.

 

I DO think Alex should consider revisiting the default number of I/O failures before dismounting the drive, as I'm not sure I've ever had a system where that did not need to be adjusted out of the box. I've also never experienced any negative side-effects (timeouts and system freezes) by raising it to a reasonable number like 8 or so. So I'm not sure what's really going on there. 

 

Aside from that value, though, I don't really think there are any general tweaks to be made that shouldn't actually be fixed as a matter of development. That is, if you find an issue that is regularly fixed by tweaking something like the ioManager settings, for example, it's probably better to just send it in as a bug report and open a ticket so Alex can take a look at why the software isn't handling the issue automatically. 


I should add that the fixed cache type is another setting that directly benefits torrenting.

 

From the CoveCube blog "Overall, the fixed cache is optimized for accessing recently written data over the most frequently accessed data."

 

A new torrent is likely to have the majority of seeding requests, so fixed is the best cache if you're continually downloading new torrents. Plus I prefer the predictable size of the drive cache when performing a large file preallocation.


Yes and no. I'd say this sorta depends on the use case too.

 

Remember that a fixed cache will also throttle all write requests once the cache is full. So you'll only be able to write to the drive at the speed of the upstream connection. If you're using a torrent client to interact directly with the drive, that could slow everything down overall.

 

That being said, if we're talking about a drive that is predominately tuned for seeding I think you're right. 


Remember that a fixed cache will also throttle all write requests once the cache is full. So you'll only be able to write to the drive at the speed of the upstream connection. If you're using a torrent client to interact directly with the drive, that could slow everything down overall.

 

That's true, but so will a flexible cache, which queues writes up on top of the existing cache; and if the cache drive itself gets within 6 GB of being full, it'll throttle. Whereas the fixed cache will shrink the existing cache until it's all write queue before throttling.

My cache is 15 GB smaller than the 60 GB of free SSD space it sits on, so with a flexible cache I'd only get about 9 GB of queued writes before throttling, whereas the fixed cache can dedicate all 45 GB of the cache to writing (at the loss of all other cached torrent data) before throttling.
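
Putting numbers on that, as a quick Python calculation using the figures above (the 6 GB threshold is as described in this post, and the layout is simplified to just these three quantities):

# Write headroom before throttling, using the numbers from this post.
ssd_free    = 60   # GB free on the cache SSD
cache_size  = 45   # GB CloudDrive cache (15 GB smaller than the free space)
throttle_at = 6    # GB of SSD free space remaining when throttling kicks in

# Flexible cache: writes queue on top of the existing cache on the SSD.
flexible_headroom = ssd_free - cache_size - throttle_at   # about 9 GB of writes

# Fixed cache: the write queue can consume the whole cache allocation
# (evicting cached read data) before throttling.
fixed_headroom = cache_size                               # about 45 GB of writes

print(flexible_headroom, fixed_headroom)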

 

Better still, since that initial preallocation write queue has replaced a portion of the cache (whereas a flexible cache doesn't necessarily retain any of the recent write queue in the cache after uploading), downloads are usually immediately faster, as they'll modify more zeroed chunks straight from the local cache.
