
danjames9222

Members
  • Posts: 17
  • Days Won: 1

Posts posted by danjames9222

  1. Please help me get to the bottom of why I'm seeing speeds that are slow, relatively speaking, given my dedicated server's 1Gb connection.

     

     [Attachment: post-5641-0-67534300-1495219103_thumb.png]

     

     Average speeds above; this was taken while something was uploading at 130Mb.

     

     With 20 threads I am seeing 100-140Mb on a machine with gigabit up / down. With 10 threads, I see 60Mb at most. That seems a long way off the potential 900Mb? (Rough per-thread maths at the end of this post.)

     

     2 x CloudDrives in a DrivePool with 2x file duplication (the OS and one CloudDrive's cache on one drive, the other CloudDrive's cache on a second drive; both are HDDs and both CloudDrives are encrypted).

     

     The network adapter is set to Public, yet for some reason the firewall won't let me edit the rules to allow public networks. Would this have an impact?

     

     I noticed there are firewall rules that aren't set to allow public connections. Do I need to enable any of them?

     

     [Attachment: post-5641-0-04128200-1495219352_thumb.png]

     

    Min download size: 20MB

    Chunk Cache Size: 100MB

     

     [Attachment: post-5641-0-00734200-1495219310_thumb.png]
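
     For what it's worth, here's the rough per-thread arithmetic behind the question; the figures are just my own readings from the screenshots above, not anything reported by CloudDrive:

     # Back-of-the-envelope per-thread throughput, in Mbps.
     # The observed figures are my own readings, not CloudDrive metrics.
     line_capacity = 900          # realistic ceiling of the 1Gb line
     observed_20_threads = 120    # midpoint of the 100-140Mb I see with 20 threads
     observed_10_threads = 60     # what I see with 10 threads

     per_thread_20 = observed_20_threads / 20   # ~6 Mbps per thread
     per_thread_10 = observed_10_threads / 10   # ~6 Mbps per thread

     # If each thread really tops out around 6 Mbps, filling the line would need:
     threads_needed = line_capacity / per_thread_20   # ~150 threads
     print(per_thread_20, per_thread_10, threads_needed)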

  2. Hi,

     

     My VPS keeps crashing overnight for some reason, taking the server completely offline; I assume it is blue-screening. I have reason to believe CloudDrive / DrivePool is causing it. I submitted my logs for both CloudDrive and DrivePool earlier via my email, danjames92@hotmail.com.

     

    CloudDrive 1.0.0.870 / Drivepool 2.2.0.651 BETA / Server 2012 R2 with all patches.

  3. Ok well forget that idea.

     

    I've now uploaded around 900GB of content to Drive 1.

     

    I've detached and reattached Drive 1 and 2 to the VPS.

     

     The DrivePool is now balancing, but it has only sent around 85GB in the last 8 hours.

     

     Is this normal with 700Mb up / 500Mb down? It is a VPS, but it has an SSD, although it doesn't seem particularly fast. The OS and both drive caches are on the same SSD (unavoidable). Are there any settings that can make it quicker? (Quick maths on the effective rate at the end of this post.)

     

    I would've thought a line with 700Mb up would provide a quicker upload?

     

    Attached is the config I am using at the moment. No prefetch. Anything else I can do?

     

    Min download size = 20MB

    Chunk cache size = 100MB

     [Attachment: post-5641-0-21850700-1494799743_thumb.png]
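
     For context, the back-of-the-envelope maths behind why 85GB in 8 hours feels slow; these are just my own figures from above:

     # 85GB moved in 8 hours versus a 700Mb upload line (my own rough figures).
     gb_moved = 85
     hours = 8

     mb_per_second = gb_moved * 1024 / (hours * 3600)   # ~3 MB/s
     mbit_per_second = mb_per_second * 8                # ~24 Mbps effective

     line_mbps = 700
     utilisation = mbit_per_second / line_mbps          # roughly 3-4% of the line
     print(round(mbit_per_second), round(utilisation * 100, 1))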

  4. Yup, exactly.

     

    Another idea.

     

    SE - Drive 1

    EDU - Drive 2

     

     DrivePool = Drive 1 + Drive 2, with x2 file duplication

     

    I am currently uploading to Drive 1. Drive 2 is not mounted anywhere.

     

     Once the upload to Drive 1 is done, to save downloading the data from Drive 1 just to upload it to Drive 2 on the VPS, can I sync the data from Drive 1 to Drive 2 using the sync feature? Won't that duplicate the drive exactly, so it won't have its own encryption key? What if there is already some data on Drive 2?

     I just noticed that setting in the wiki as well; it isn't listed in the default config. I'm going to experiment with some variations of that setting as a solution.

     

     

     As for recording it, I've only ever noticed it twice, and that was just the luck of glancing at the technical log at the right time and seeing that it had dropped to a single read request, which, on closer inspection, showed the slow speed and the 1min+ connection timer. I'll try to create logs, but I might not have much luck pinning it down.

     

     

     

     

     Minor side point while I have your attention: a brand-new Windows VPS running almost exclusively 854 + rtorrent + rclone occasionally has unexpected reboots during peak disk I/O. The problem seems to be the one described in issue 27416, ostensibly fixed a month ago, but in the seemingly unreleased version 858. Can we expect a new RC soon? The issue tracker seems to imply that internally you're already past version 859.

     

    861 is the latest. You can download betas below.

     

    http://dl.covecube.com/CloudDriveWindows/release/download

  6. Well, if you detach the drive, and it's part of a pool, it will show up as missing in StableBit DrivePool. 

     

    However, if you detach both drives, then it won't be an issue, as it will "unmount" the pool after the last disk is "removed" from the system. 

     

     

     

    And if you're mounting the CloudDrive disks on the VPS, then there shouldn't be any issues with this. 

     

    So unmounting from both, mounting one at home, uploading, then detaching and reattaching both drives to the VPS again is ok?

  7. You'd configure your download client to download the completed files to a different location.   And that's it. 

     

    (I actually do this with DrivePool, as I have a dedicated "temp" drive)

     

     I worked it out. It'll go to C:\$User\Downloads by default anyway, and then move to the CloudDrive afterwards. :) (Rough sketch of the sweep below.)
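
     In case it helps anyone else, a minimal sketch of the sweep I ended up with; the folder paths are placeholders for my own layout, not anything CloudDrive-specific:

     # Sweep completed downloads from a local temp folder into the pooled cloud drive.
     # Both paths are placeholders for my own setup.
     import shutil
     from pathlib import Path

     LOCAL_DONE = Path(r"C:\Users\dan\Downloads\complete")  # local, non-CloudDrive temp
     POOL_MEDIA = Path(r"P:\Media")                         # folder on the pool / cloud drive

     for item in LOCAL_DONE.iterdir():
         target = POOL_MEDIA / item.name
         if not target.exists():
             # shutil.move copies across volumes and removes the source afterwards
             shutil.move(str(item), str(target))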

  8. Hi,

     

    I have 2 Google Drive accounts with 2 x file duplication in a Drivepool.

     

    I have a load of content I want to upload from home to the cloud.

     

    To save uploading the large files twice, can I mount ONE of these drives at home, upload the content, then detach, reattach to the VPS and have the VPS do the replication without corrupting my drivepool?

     Just wanted to pop in here and say, regarding Sonarr, it works much better if you download to a local, non-CloudDrive temp folder, then have Sonarr move the completed / renamed files to a CloudDrive folder.

     

    How exactly do you do that?

  10. Well, one thing that has helped others is to set the "minimum download size" to a higher value. This can only be set when creating or attaching the drive, and is limited to the storage chunk size. 

     

     This setting increases the "units" we use, so we grab more data at once, but it means that the initial load of that data can take longer (e.g., it can increase latency).

     

     What I'm trying to work out is how to make the streaming more efficient.

     

    I have 75Mb down, 17 up.

     

     I want my 2 Google Drives replicated, in a DrivePool together with 2x duplication.

     

     Is there a way to send files to be uploaded to one part of the pool and have them mirrored to the other without uploading the data twice? I.e., once it's uploaded to the cloud on one drive, can it be mirrored using the cloud's resources rather than my bandwidth?

     

     At the moment I have my TV section pointed at just one drive rather than the pool itself, as I hear this is more efficient?

     

     I want to be able to stream files with a full 40-50Mb average bitrate with no issues, and I'm just trying to find the best way to do that (rough bandwidth maths at the end of this post).

     

     Also, is there a reason why, every time I view files on one drive or another (or the pool directly), it takes forever and generally freezes Windows Explorer? Disk activity seems pegged at 100%.
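
     For reference, the bandwidth maths I'm working from; the overhead allowance is just my own guess:

     # Headroom a 75Mb line leaves for a 40-50Mb stream (my own rough figures).
     line_down_mbps = 75
     bitrate_mbps = 50                     # worst case of the files I want to play

     # Allow something for API overhead, other traffic and cache refills;
     # the 20% figure is a guess on my part, not a measured number.
     usable_mbps = line_down_mbps * 0.8    # ~60 Mbps usable

     headroom_mbps = usable_mbps - bitrate_mbps   # ~10 Mbps spare, so any stall in
                                                  # the download threads is felt immediately
     print(usable_mbps, headroom_mbps)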

  11. There really shouldn't be a need to do this. 

     

     Prefetching happens when sequential reads occur, and it is a rolling prefetch, meaning that as you continue to read the data, it will continue to prefetch. So, as you read, it will grab that 150MB ahead.

     

     Setting this too high means that every time you read "x" amount of data (the "prefetch trigger"), it will download the "forward" amount. This means that you could very easily get into situations where it's downloading large chunks of data that NEVER get used. That's wasteful.

     

     I think I understand what you are saying (I've sketched my reading of the trigger / forward behaviour at the end of this post).

     

    I have 2 google drive accounts in a drivepool with file replication on. How would I best set this up to make my streaming situation better?

     

     Should I only set one account to have prefetch enabled, rather than both?

     

     The server is running Windows on an SSD with a 50GB fixed cache per Google Drive account, wired gigabit, etc.
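
     To check I've got it, here's a toy model of how I picture the trigger / forward interaction; it's only my reading of the explanation above, not CloudDrive's actual code:

     # Toy model of the prefetch trigger/forward relationship as I understand it.
     # NOT CloudDrive's implementation, just an illustration of the described behaviour.
     TRIGGER_MB = 20     # "prefetch trigger": sequential data read before prefetch fires
     FORWARD_MB = 150    # "prefetch forward": how much gets grabbed ahead each time

     def prefetched(read_sizes_mb):
         sequential = 0
         ahead = 0
         for read in read_sizes_mb:
             sequential += read
             if sequential >= TRIGGER_MB:
                 ahead += FORWARD_MB   # rolling: keeps topping up as reads continue
                 sequential = 0
         return ahead

     print(prefetched([5, 5, 5]))     # seek-heavy reads never hit the trigger: 0MB
     print(prefetched([20] * 10))     # a long sequential read keeps prefetching: 1500MB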

  12. Hi,

     

     My internet is a bit limited, with only 75Mb down and 17Mb up.

     

     When streaming via Plex, I can get shows to start (i.e. on the first playback, without it timing out while the cache builds) if I set it to 150MB, but what would be useful is if it could rise to 400MB automatically after a few minutes of playback (there's a rough sketch of the idea at the end of this post). Is this a feasible request in terms of programming? I'm not a developer and have no idea how feasible it is, but it would help me immensely.

     

    Many thanks,

     

    Dan.
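
     To make the request concrete, this is roughly the behaviour I'm imagining, written out as a sketch; it's purely illustrative of the idea and has nothing to do with how CloudDrive is actually implemented:

     # Sketch of the requested behaviour only: start with a small prefetch so playback
     # begins quickly, then grow it once the stream has been running for a while.
     import time

     START_PREFETCH_MB = 150    # enough to get playback started on my 75Mb line
     FULL_PREFETCH_MB = 400     # what I'd like once the stream has settled
     RAMP_AFTER_SECONDS = 180   # "a few minutes" of playback

     def desired_prefetch_mb(playback_started_at):
         elapsed = time.monotonic() - playback_started_at
         return FULL_PREFETCH_MB if elapsed >= RAMP_AFTER_SECONDS else START_PREFETCH_MB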

  13. Just so I don't mess anything up with my setup.

     

    2 virtual machines.

    1 downloading machine.

    1 server machine.

     

     2 G Suite accounts with x2 replication in 1 DrivePool.

     

    Downloading machine uses Sonarr to download files.

     

     I need to get files to the CloudDrive / pool on the other (server) virtual machine automatically (rough sketch of the hand-off at the end of this post).

     

     

     If I map a folder located inside the DrivePool (which sits on CloudDrive disks) from a different virtual machine, will it mess up the pool?
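
     To show the kind of hand-off I mean, a rough sketch; the share name and paths are made up for illustration:

     # The downloading VM pushes finished files into the pool, which the server VM
     # exposes as a network share. Share name and paths are invented for this example.
     import subprocess

     SOURCE = r"D:\downloads\complete"          # on the downloading VM
     DESTINATION = r"\\server-vm\Pool\Media"    # DrivePool folder shared from the server VM

     # robocopy /MOV removes each file from the source after it has been copied;
     # /E includes subfolders. robocopy exit codes below 8 still mean success.
     result = subprocess.run(["robocopy", SOURCE, DESTINATION, "/MOV", "/E"])
     if result.returncode >= 8:
         raise RuntimeError("robocopy reported a failure")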
