Firerouge

Members
  • Posts

    31
  • Joined

  • Days Won

    1

Everything posted by Firerouge

  1. I can't directly answer this, but from the sounds of it you may be best served by third-party backup software that could use the CloudDrive as the backup destination.
  2. You'll probably have to say what provider you're using.
  3. You're right, it's a borderline use case. That said, torrenting has many legitimate uses for distributing files, as it's often the fastest method, and I don't believe bans should be made specifically against this application; its I/O is more random than abnormally intensive compared to other download methods (except in my case, simply because of the sheer volume I push through it). With 13/10 threads, the CloudDrive client rate-limits only a portion of the time, and usually only when the connection has traffic flowing in both directions. I find rclone very inefficient, and that is more likely the cause of those API bans. When I run rclone pulling from a StableBit CloudDrive, it often crashes other programs that are using the drive, as it seems to starve all other I/O operations. I'm all for using my own API key (and limits), especially if it would give me better performance. It has been floated here before, but I believe the Covecube developers are happy with the pooled API keys they're using at the moment and don't wish to expose that capability (which could then be abused).
  4. You're absolutely right, it most certainly is: with many simultaneous file handles and read/write operations, it may be the true ultimate test of CloudDrive. My current configuration, writing and reading directly to it, is functional (as long as I avoid crashes, which force rehashing). I'm unfortunately limited to an 80GB SSD, with 40GB for the cache; the remainder is swap and the upload queue. These sizes can't practically go any larger without risking the 5GB free-space limit that throttles CloudDrive (a rough sizing sketch follows after this list). This configuration has handled over 1TB of torrent data in the past 4 days, so in general I'm happy with the results. As for the cache settings, I found that the 2-chunk read-ahead I've set (20MB) seems more than sufficient, and that changes to the prefetcher (including turning it off) have little impact on file hashing performance. It fairly consistently begins the next 20MB prefetch before, or immediately after, the completion of the prior one. I tried all the way up to a 400MB read-ahead, but that just slowed everything down by wasting the slower download bandwidth on lots of unnecessary chunks.
  5. It's also worth elaborating on the performance of the clients, as they all (rtorrent, uTorrent, Deluge, qBittorrent) follow a similar and fairly odd behavior. For starters, even with Background I/O turned off, any form of CloudDrive read substantially impacts performance, up to a complete stall in the torrent clients if there are too many parallel reads. When first started, a client will download quite quickly, as high as the max link speed (20 MiB/s for me), but will very quickly slow down to my sustained CloudDrive speeds (about 5-7 MiB/s). During that initial burst the upload queue can get rather long, many gigabytes in length, though the throttling does not seem to be a response to this; I've had an upload queue over 7 GB long that continued to download at my normal sustained speeds. Further throttling occurs randomly, most dramatically with speeds dropping to under 1 MiB/s for ~10-second bursts before returning to sustained speeds for the next minute or so. The upload queue is usually under 100 MB while this occurs. It happens more when other applications are performing read operations (specifically an rclone copy from a StableBit Google Drive CloudDrive to ACD, which has caused complete crashes with more than 4 uploaders), and the quantity of read operations also seems to affect how much the speed fluctuates, most commonly just between 3-5 MiB/s. I find that splitting downloads across 2 different clients gives me the greatest overall download speed, as often when one throttles down the other surges up, though frequently both throttle down simultaneously. This is what qBittorrent reports right when throttling occurs. The throttling does seem to get worse with time, to the point where the client stops downloading, continuously reporting a 100% read overload. Overloading the disk write (and read) cache is annoying, but that, and whatever underlying issue makes it lock up over time, is for me secondary to fixing hashing...
  6. First off, I want to say that I'm a trial user of 854 and am deeply impressed by this software; however, there is one major issue standing in the way of my use case. I have tried 4 different torrent clients, downloading and uploading directly from a CloudDrive. They all work reasonably well for a small number of parallel torrent downloads, with overall download speed decreasing by about 66%: reasonable overhead, and reasonable to have to limit the maximum number of files being processed simultaneously. However, when something does go wrong (inevitably, and frequently if too many torrents are downloading) the clients will want to hash the files to make sure nothing is corrupted. This does not work, or at least not well. The hashing rate is an order of magnitude slower than if the file were stored locally. The flaw is so pronounced that it is quicker to simply redownload a torrent from scratch. rtorrent (Linux based, but running in Cygwin) performs best, with what seem to be optimizations that allow it to skip large parts of the hashing process. Still, a ~12GB file downloaded to 50% will take a day to fully hash check; furthermore, you can copy the same file locally and hash check it before the other client, checking directly from the CloudDrive, reaches 5%. That is not a solution, however, as I have torrents whose partial download state exceeds my total local storage capacity.

This shouldn't be the case; in practice, the entirety of the written torrent blocks has to be downloaded and each block checked. However, it's quite clear that the prefetcher is not managing to cache the file before the client needs it, or another problem is in play. I suspect, from limited insight, that the clients are not hashing the torrent blocks in sequential order (fairly sure of this) and are instead skipping around throughout the file, which may be confusing the prefetcher and/or cache. Furthermore, and this is just a guess, since torrent blocks usually don't align with the provider chunk size, CloudDrive may frequently download a chunk, have only one of the two or more blocks contained in that chunk checked (depending on block and chunk size), and then discard it from the cache because of the potentially considerable wait before the torrent client randomly comes back to check the neighboring blocks stored in that same chunk (a rough illustration of the misalignment follows after this list).

It's also possible that the root of the problem is that the clients expect a sparse filesystem, which (and I'm unclear on the details here) lets them know to skip hash checking of blocks that are zeroed (not yet written). It's possible that CloudDrive doesn't handle sparse storage and is actually writing all of those zeros out to the CloudDrive, also requiring the torrent client to check them. I'm further inclined to blame the allocation of zero space because, when copying a half-downloaded file with Explorer, the transfer status doesn't count progress against the size on disk (the actual data downloaded), but rather against the completed file size (the size it would be if fully present); a quick way to check this is sketched after this list. The problem could also have its roots in the fact that a torrent client doesn't download a file's blocks sequentially, and may take quite a while to complete any given download, which (and this is purely a guess) causes the blocks to get mixed in with other downloads and scattered across many different non-sequential chunks.
All things considered, the prefetcher manages cache hits in the range of 60-85%, with about 2GB utilized at any given time, using a 10MB trigger, 20MB read-ahead, and 300-second expiration.
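To put the cache drive numbers from post 4 in perspective, here is a rough sizing sketch in Python. The 10 MB provider chunk size is an assumption inferred from the "2-chunk read-ahead (20MB)" figure; the SSD, cache, and free-space values come straight from the post.

```python
# Back-of-the-envelope sizing for the cache SSD described in post 4.
# The 10 MB provider chunk size is assumed (inferred from the 2-chunk / 20 MB read-ahead);
# everything else is taken from the post.

GB = 1024 ** 3
MB = 1024 ** 2

ssd_total        = 80 * GB   # total SSD capacity
cache_size       = 40 * GB   # CloudDrive local cache
free_space_floor = 5 * GB    # CloudDrive starts throttling below this much free space

chunk_size = 10 * MB            # assumed provider chunk size
read_ahead = 2 * chunk_size     # prefetcher read-ahead from the post (20 MB)

# Space left for the swap file and the upload queue before throttling kicks in.
headroom = ssd_total - cache_size - free_space_floor
print(f"Headroom for swap + upload queue: {headroom / GB:.0f} GB")

# At the sustained 5-7 MiB/s download speeds quoted above, the 20 MB read-ahead
# is consumed and refilled roughly every few seconds.
sustained = 6 * MB
print(f"Approximate read-ahead refill interval: {read_ahead / sustained:.1f} s")
```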
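On the piece/chunk misalignment guess in post 6, here's a minimal sketch of the geometry, assuming a 4 MiB torrent piece size and 10 MB provider chunks (both illustrative values, not confirmed settings):

```python
# Illustration of why torrent pieces and provider chunks rarely line up.
# Both sizes below are assumptions chosen for the example.

MB = 1024 ** 2

piece_size = 4 * MB    # assumed torrent piece size
chunk_size = 10 * MB   # assumed provider chunk size

def chunks_for_piece(piece_index):
    """Return the provider chunk indices a given torrent piece touches."""
    start = piece_index * piece_size
    end = start + piece_size - 1
    return list(range(start // chunk_size, end // chunk_size + 1))

for i in range(6):
    print(f"piece {i}: chunks {chunks_for_piece(i)}")
```

The output shows several pieces sharing one chunk and some pieces straddling two, so a hash check that jumps around the file can end up re-downloading the same chunk repeatedly once it has been evicted from the cache.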
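As for the sparse-file theory, one way to test it is to compare a partially downloaded file's logical length with the bytes actually allocated on disk; if CloudDrive were storing it sparsely, the allocated size should be much smaller. This is only a rough heuristic (compression also shrinks the allocated size), and the path below is just a placeholder:

```python
import ctypes
import os
import sys

def allocated_size(path):
    """Bytes actually allocated on disk, as opposed to the file's logical length."""
    if sys.platform == "win32":
        kernel32 = ctypes.windll.kernel32
        kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong
        high = ctypes.c_ulong(0)
        low = kernel32.GetCompressedFileSizeW(ctypes.c_wchar_p(path), ctypes.byref(high))
        return (high.value << 32) + low
    # On POSIX filesystems st_blocks is counted in 512-byte units.
    return os.stat(path).st_blocks * 512

def looks_sparse(path):
    return allocated_size(path) < os.path.getsize(path)

# Example (placeholder path):
# print(looks_sparse(r"D:\CloudDrive\torrents\partial-download.mkv"))
```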