Posts posted by Bowsa

  1. On 4/29/2019 at 7:41 PM, Christopher (Drashna) said:

    Decrease the prefetcher values.  As is, it won't be triggered often, and it will grab too much data at once. Basically, it won't work right. 

    Set the trigger to 5MB (or even 1), and set the forward to 200MB. 

     

    The forward should be roughly 75% of Download threads * minimum download size.  This way, there is a buffer for other tasks to run. 

    I've adjusted these settings now; I had originally set these parameters based on another admin post I saw on the forum regarding prefetching (a quick check against the 75% rule is below the list).

    I have

    Download Threads: 10

    Upload Threads: 13

    Upload Threshold: 10 MB

    Minimum Download Size: 35 MB

    _____________________________________________

    Prefetcher

    Trigger: 20 MB

    Forward: 500 MB

    Time Window: 30 Seconds

    _____________________________________________

    Minimum recommended download speed: 979 Mbps
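
    A quick arithmetic check of the "forward ≈ 75% of download threads × minimum download size" rule quoted above, using my numbers (my own back-of-the-envelope, not an official calculator):

    ```python
    # Check the prefetch "forward" rule of thumb quoted above:
    # forward ≈ 75% of (download threads × minimum download size).
    download_threads = 10       # my Download Threads setting
    min_download_mb = 35        # my Minimum Download Size, in MB

    suggested_forward_mb = 0.75 * download_threads * min_download_mb
    print(f"Suggested forward: {suggested_forward_mb:.0f} MB")        # ~262 MB

    current_forward_mb = 500    # my current Forward setting
    print(f"My forward is {current_forward_mb / suggested_forward_mb:.1f}x the suggestion")  # ~1.9x
    ```

    So by that rule my 500 MB forward is nearly double what it should be, which presumably leaves less headroom for other tasks.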

     

  2. Adding on to this:

    My pool consists of 3x 8TB HDDs and 2x 500GB SSDs. I currently have 2 TB "free", but I can't define the pool itself as a cache drive, so I've had to dedicate another SSD to that role, and its 500GB of storage limits how quickly I can upload files.

    For example, if I'm uploading a bunch of files, the transfer has to wait for the SSD to clear out (upload) before it can write more files, so uploads are effectively throttled. I guess I'm going to have to clear at least a good 4-6 TB of data and assign a drive letter to one of the mechanical drives so I can set it as a cache drive for efficient uploads.

    These are my current StableBit CloudDrive settings for use with Google. I was wondering: what's the best way to optimize downloads for streaming 4K content (60-70 GB files, 50-60 Mbps bitrate)?

    Doing a speed test, my download and upload speeds are regularly 930-950 Mbps. What's the best way of optimizing both for the FASTEST download/upload in the shortest time?

    (Attached screenshot of my settings: Capture.PNG)
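
    Rough arithmetic on the streaming side (my own back-of-the-envelope; the 500 MB figure is just my current prefetch forward, not a recommendation):

    ```python
    # Convert the content bitrate to MB/s and see how much headroom a gigabit line gives.
    content_bitrate_mbps = 60      # 4K content bitrate, megabits per second
    line_speed_mbps = 930          # measured download speed, megabits per second

    content_mb_per_s = content_bitrate_mbps / 8                    # ≈ 7.5 MB/s
    print(f"Playback consumes about {content_mb_per_s:.1f} MB/s of data")

    forward_mb = 500               # my current prefetch forward
    print(f"A {forward_mb} MB forward covers ~{forward_mb / content_mb_per_s:.0f} s of playback")

    print(f"The line is ~{line_speed_mbps / content_bitrate_mbps:.0f}x faster than the content bitrate")
    ```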

    Scenario: a pool of 3x 8TB HDDs, plus 2x 500GB SSDs acting as the "cache" drives within the pool; free space 1.2+ TB.

    Obtaining and unpacking multiple 70+ GB files frequently runs into "space" problems with NZBGet, because it downloads to the smaller drive within the pool and tries to unpack there as well. That effectively becomes a roadblock that stops further downloads from completing correctly.

    Is there any way to define a "free space" requirement for certain programs, e.g. double the file size, to leave room for unpacking? (A rough sketch of the kind of check I mean is below.)

    This is ridiculous
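
    To illustrate the kind of check I mean (purely a sketch; the 2x factor and the path are hypothetical examples, not an existing DrivePool or NZBGet feature):

    ```python
    import shutil

    def safe_to_unpack(archive_size_bytes: int, target_dir: str, factor: float = 2.0) -> bool:
        """Return True if target_dir has at least `factor` times the archive size free.

        Sketch only: the 2x factor allows the archive and its unpacked copy
        to exist side by side during extraction.
        """
        free_bytes = shutil.disk_usage(target_dir).free
        return free_bytes >= archive_size_bytes * factor

    # Example: a 70 GB download headed for the pool's download directory (path is hypothetical).
    archive_size = 70 * 1024**3
    if not safe_to_unpack(archive_size, r"P:\Downloads"):
        print("Not enough headroom; pause the queue until space clears.")
    ```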

  5. On 4/3/2018 at 5:58 AM, darkly said:


    If DrivePool is distributing data between multiple gdrive accounts, does that also increase the potential read/write speed of the drive?
    My current setup is a 256TB CloudDrive partitioned into 8 32TB partitions. All 8 partitions are pooled into 1 DrivePool. That DrivePool is nested within a second DrivePool (so I can add additional 8-partition pools to that in the future). Basically: Pool(Pool(CloudDrive(part1,part2,part3,...,part8))). Hypothetically, if I did something like Pool(Pool(CloudDrive_account1(part1,part2,part3,...,part8)),Pool(CloudDrive_account2(part1,part2,part3,...,part8),Pool(CloudDrive_account3(part1,part2,part3,...,part8)), would I see a noticeable increase in performance, OTHER than the higher upload limit?

    For what purpose do you have one large drive with multiple partitions, instead of multiple drives pooled together?

    Is there any benefit?

  6. On 4/12/2019 at 4:44 PM, Christopher (Drashna) said:

    Well, StableBit DrivePool does have the hierarchical pooling. So you could create a pool of local disks, and a pool of clouddrive disks, and pool those together. 

    Hello Christopher, thank you for the response, but it doesn't answer my question. Due to data loss a few weeks back, I've started over and created a smaller CloudDrive in case I have to use CHKDSK again one day. In addition, I have upload verification enabled to protect against data loss.

    My issue is that the 10TB drive is about to fill up, and I'd rather not resize it, to avoid the situation I faced earlier: three DAYS of CHKDSK running to diagnose the disk (susceptible to interruption by power surges and the like). Ideally I'm going to create another 10TB CloudDrive, but my question is: how does pooling work for multiple CloudDrives?

    Would I create "Pool #1" by pooling my "full" 10TB drive with the new 10TB drive, and then point everything at the "pooled cloud drive"?

    In that situation, how does writing to this "pooled" cloud disk distribute the chunks? For example, if Drive #1 is 9TB/10TB full and Drive #2 is 0TB/10TB, will it split the chunks between both cloud drives, or will it target one of the two?

    Considering the above case, does it have an impact on IO/thread/API performance?

    Lastly, if my new "CloudDrive pool" is my new "main drive", does that mean CHKDSK would have to be run on the pool, or on the individual drives?

    This seems to be fixing it... but I too suffered during the last outage. This time I created a 10TB drive (so I can run CHKDSK more easily), and I also have UPLOAD VERIFICATION turned on. It cuts upload speeds in half... but it's worth not losing data.

    EDIT: By the way, what backup provider are you using? I'm using Backblaze... but it's really slow.

    Hey guys. Previously I had set my CloudDrive to 16TB and CHKDSK took days to complete. My current drive is the maximum "default" (not manually entered) value in the GUI, 10TB. I want to prevent a repeat of losing all my data, so I'm being extra careful this time. I have upload verification turned on, and have the drive set to 10TB in case I have to scan it one day.

    My question is: going forward, what practices would you recommend for expansion and redundancy? I also have a DrivePool license, and was wondering how I would go about optimizing my library.

    For example, once this drive fills up, should I "resize" the drive to a larger value, or simply create another virtual drive and pool them together under DrivePool?

    If pooling together Cloud Drives is possible, then how does the distribution of chunks work?

    Are some chunks written to VHDD1 and some to VHDD2, and if so, how is data integrity maintained when some chunks live on one drive and some on another?

    If that's not the case, are files assigned chunks and then a drive, and balanced within the CloudDrive DrivePool? For example, with File1.AVI (6GB) and File2.AVI (3GB): is File1 broken into chunks and assigned to VHDD1, and File2 broken into chunks and assigned to VHDD2, or does File1 end up with 50% of its chunks on VHDD1 and 50% on VHDD2?

    Does pooling CloudDrives together increase or decrease download/upload/IO performance?

    Do they all contribute to the same upload cap?

    After all these questions: is there a practical way of creating a virtual pool/RAID that serves as an instant "backup" of your main CloudDrive / CloudDrive pool?

  9. 23 hours ago, Christopher (Drashna) said:

    The write throttling is based on the free space remaining on the drive.  So.... increase that, and it should help. 

    That or switch to the fixed size/proportional cache type. 

    OK, I got a dedicated 500GB NVMe/SATA drive to serve as a cache for my local DrivePool and for my CloudDrive. That way I can set-and-forget file transfers without having to worry about throttling. My upload speeds are 400-500 Mbps with upload verification on. They're higher with it off, but after losing 5TB of data... better safe than sorry.

  10. On 4/2/2019 at 5:09 PM, Christopher (Drashna) said:

    Yup.

     

    Likely, you're getting throttled.  We don't know that until the upload completes.  And if we get a throttling response, the thread will wait a bit and then retry.  And that "wait a bit" will increase exponentially (IIRC) each time this happens.

    Most likely, that's what you're seeing.

    I'm pretty sure I haven't crossed the 750GB limit today, but I'm getting:

    "[IoManager:57] Error performing Write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host."

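    For anyone else hitting this: the retry behaviour Christopher describes above is a standard exponential backoff. A minimal sketch of that pattern (my illustration only, not CloudDrive's actual code):

    ```python
    import random
    import time

    def upload_with_backoff(upload_chunk, max_attempts: int = 8, base_delay: float = 1.0):
        """Retry an upload, roughly doubling the wait after each failed/throttled attempt.

        `upload_chunk` is any callable that raises on a throttling or connection error.
        Illustration of the backoff pattern described above, not CloudDrive's code.
        """
        for attempt in range(max_attempts):
            try:
                return upload_chunk()
            except IOError as err:
                delay = base_delay * (2 ** attempt) + random.uniform(0, 1)  # add jitter
                print(f"Attempt {attempt + 1} failed ({err}); retrying in {delay:.1f}s")
                time.sleep(delay)
        raise RuntimeError("Upload failed after repeated retries")
    ```
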
    Hey guys, I'm finally filling the VERY LAST drive slot in my computer and got an SSD for two purposes: more personal space for software, and a dedicated cache for the CloudDrive. My question is the following: when you're transferring files to the "Cloud Drive", they aren't immediately transferred to this magical drive; they go to your defined cache drive first and are THEN uploaded. The problem with this, for me, is that it limits how much data I can transfer at a time.

    I would like to set-and-forget some files, and to do that I need a chunk of space for the transfer process in the first place. So, for example, if I only have 150 GB available on my main drive, I can only transfer 150 GB of data at a time until it finishes uploading.

    Would detaching and re-attaching my drive, but choosing a separate drive as the cache, alleviate this issue? That way I could have a dedicated cache drive and be able to upload more than 150 GB of data at a time.

    There are occasions where I leave a substantial amount of data uploading; when I open the window with a few gigs left, the program is uploading at 0 MBps. It isn't until a few seconds later that it starts uploading again.

     

    Has anyone experienced something similar?

  13. I appreciate the response, and although I’ve been sparse on details, after the process completed no recovery of my files took place. 

    I've since deleted my drive, and will probably go the physical-drive route, unless someone here has had success backing up their CloudDrive data?

    After losing more than 200 painstakingly created files to the Google Drive fiasco... I can't in good conscience recommend this software anymore. I see no reports of data loss from the people who went the rclone route, but I do see multiple reports in this forum, mine included. I've lost more than a year's worth of data (in hours spent creating it), time I don't have to spare anymore, due to the way this application interacts with Google Drive. Newer folders showed up as "corrupted", while older files are gone in their entirety.

    I did a CHKDSK, which did nothing, and I'm just completely apathetic to the whole thing right now. I don't know if I want to invest all this time again to make myself whole once more, but I'd recommend staying away from this product if you value your data... and time.

    I haven't done anything to my drives, but I know there was a problem with Google last night. I saw the software throw a few errors, then dismount and remount... but in the end a few files were lost and now I have to go through the trouble of doing it all again.

    Any solution in sight?

  16. On 3/27/2018 at 4:04 PM, Digmarx said:

    No. Still get a maximum of around 30-35MB/sec copying to the cloud drive versus 160MB/sec copying to the cache disk itself.

    [edit] I should probably add that I'm getting similar performance transferring uncached files off the cloud drive as well. I have a 1000/500 fiber connection. The second time transferring the same file (ie when it's been cached) performance is much better, around 120MB/sec

    Yeah, this is the only annoyance for me... that, and not being able to transfer files to the CloudDrive unattended, because it dismounts.

    Hey guys, I had a CloudDrive on Google with multiple terabytes of data, and while it is still attached, it no longer displays my files.

    Here are the actions I took before this happened:

    1) Ordered SSD to use as SSD Cache Drive for DrivePool

    2) Installed the SSD, but it wasn't being detected --> restarted twice until I saw the drive in Device Manager, but with no letter/partition assigned

    3) On the 3rd restart, opening Disk Management (W10), a prompt came up for the unidentified drive asking whether I wanted MBR or GPT --> opted for GPT, but the drive did not appear in Disk Management; a 2nd restart produced the same thing

    4) Manually set the drive up through the DiskPart utility: first selected the drive, set it as MBR --> active, then format and assign | changed my mind, cleaned it, set it as GPT, and manually formatted a small space so it would appear in Disk Management

    5) Drive appeared with 100MB allocated and the rest unallocated ---> deleted all volumes, formatted, and assigned a drive letter

    6) Added the SSD to DrivePool and everything seemed fine and dandy, so I unassigned the drive letter

    7) Noticed that after restarting my computer 5-6 times, the CloudDrive wasn't being mounted anymore

    8) The drive displayed as mounted but wasn't in Explorer; re-authorized and restarted... nothing

    9) Detached the drive, uninstalled CloudDrive with Revo Uninstaller --> restarted the computer

    10) Installed again, attached the drive and entered the encryption key... the drive still didn't appear in Explorer

    11) Checked Disk Management and saw that the last disk, "7", was unallocated/unassigned... tried assigning a drive letter without formatting, but that didn't work with CloudDrive

    12) Had to format the "CloudDrive" drive so a drive letter would show up and it would mount in an Explorer window

    13) Uninstalled with Revo again --> restarted --> installed the latest beta

    14) Attached the drive: "mounting, looking for data on provider" ---> mounted, but no files accessible

    15) The drive mounts with my encryption key, and CloudDrive displays the data occupying the drive along with the free space, but the Explorer window says otherwise

    16) Ran CHKDSK through Properties --> Tools; no errors? The drive mounts, but no files show up

    The drive is being mounted and it's apparently looking for the data, but the virtual drive that's created shows neither the files nor the space they occupy. Creating another drive works fine, but I'm trying to access the files I painstakingly already uploaded... When I check the CloudPart folder, the numerically named cache files are allocated but small.

    I don't know what's going on... did I just lose all my data for no reason at all?

  18. 20 minutes ago, Christopher (Drashna) said:

    Are you on 1.0.2.976?
    If not, use that version:
    http://dl.covecube.com/CloudDriveWindows/release/download/StableBit.CloudDrive_1.0.2.976_x64_Release.exe

     

    If you are, and you're still having this issue, try decreasing the read and write threads to 5. 

     

    As for the SQLite error, that's ... odd, and may indicate that you're having issues with your system disk. 
    Specifically, we use SQLite for storing chunk IDs locally.  But that may not be the issue here. But a CHKDSK pass of the drive may be a good idea, just in case.  Especially if the cache is on the system drive. 

     

    If the issue persists, have you run the StableBit Troubleshooter yet?  
    If so, let me know, and what thread/ticket you did so under. 

    I need a ticket/thread number
