Everything posted by Bowsa

  1. I adjusted these settings based on another admin post I saw on the forum regarding prefetching. I have:

     Download Threads: 10
     Upload Threads: 13
     Upload Threshold: 10 MB
     Minimum Download Size: 35 MB

     Prefetcher Trigger: 20 MB
     Forward: 500 MB
     Time Window: 30 Seconds

     Minimum Recommended Download Speed: 979 Mbps
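
     For anyone tuning the same settings, here's the back-of-the-envelope arithmetic I use to sanity-check the Forward value against a stream's bitrate (my own numbers, nothing CloudDrive computes):

     # How long does 500 MB of prefetched data sustain playback at a
     # given video bitrate? Back-of-the-envelope only; my own numbers.
     PREFETCH_FORWARD_MB = 500   # the "Forward" setting above
     BITRATE_MBPS = 60           # high-bitrate 4K content, megabits/sec

     buffered_megabits = PREFETCH_FORWARD_MB * 8
     seconds_buffered = buffered_megabits / BITRATE_MBPS
     print(f"~{seconds_buffered:.0f} s of playback buffered")  # ~67 s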
  2. Adding on to this: my pool consists of 3x 8TB HDDs and 2x 500GB SSDs. I currently have 2 TB "free", but I can't define the pool itself as a cache drive, so I have to dedicate another SSD to that role, and its 500GB of storage limits the speed at which I can upload files. For example, if I'm uploading a bunch of files, it has to wait for the SSD to clear out (upload) before it can write more files, which effectively throttles them. I guess I'm going to have to clear at least a good 4-6 TB of data and assign a drive letter to one of the mechanical drives so I can set it as a cache drive for efficient uploads.
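
     To put a number on that throttling, here's a rough drain-time estimate (assuming the cache empties at roughly my measured ~950 Mbps upload; real throughput varies):

     # Rough time for a full 500 GB cache to drain over a ~950 Mbps
     # uplink. My own estimate, not a measured CloudDrive figure.
     CACHE_GB = 500
     UPLOAD_MBPS = 950

     drain_seconds = CACHE_GB * 8000 / UPLOAD_MBPS  # GB -> megabits
     print(f"~{drain_seconds / 60:.0f} min to clear the cache")  # ~70 min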
  3. These are my current StableBit CloudDrive settings for use with Google. I was wondering: what's the best way to optimize downloads for streaming 4K (60-70GB files) at 50-60 Mbps bitrate? Running a speed test, my download and upload speeds are regularly 930-950 Mbps. What's the best way of optimizing both for the FASTEST download/upload in the shortest time?
  4. Scenario: a pool of 3x 8TB HDDs, plus 2x 500GB SSDs acting as a "cache" drive within the pool; free space 1.2+ TB. Obtaining and unpacking multiple 70+ GB files often runs into "space" problems with NZBGet, because it downloads to the smaller drive within the pool and tries to unpack there too. That effectively acts as a roadblock that stops further downloads from running correctly. Is there any way to define "free space" for certain programs as double the file size, for unpacking reasons? A sketch of what I mean is below. This is ridiculous.
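
     Roughly, the rule I'm asking for would look like this guard before each unpack (a hypothetical sketch: the pool letter and the 2x factor are my own choices, and this is not an existing NZBGet or DrivePool feature):

     import shutil

     def safe_to_unpack(archive_bytes: int, target: str = "P:\\") -> bool:
         """True only if the target volume has at least double the
         archive's size free, leaving room for the unpacked copy."""
         return shutil.disk_usage(target).free >= 2 * archive_bytes

     # A 70 GB archive would need ~140 GB free before unpacking starts.
     print(safe_to_unpack(70 * 10**9))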
  5. I think it's trash. It's been running for two weeks and it hasn't backed up shit.
  6. For what purpose would you have one large drive with multiple partitions, instead of multiple drives pooled together? Is there any benefit?
  7. Hello Christopher, thank you for the response, but it doesn't answer my question. Due to data loss a few weeks back, I've started over and created a smaller CloudDrive in case I have to use CHKDSK again one day. In addition, I have upload verification enabled to protect against data loss.

     My issue is that the 10TB drive is about to fill up, and I want to avoid resizing it so I don't repeat the situation I faced earlier: 3 DAYS of CHKDSK running to diagnose the disk (susceptible to power-surge interruptions and such). Ideally I'm going to create another 10TB CloudDrive, but my question is: how does pooling work for multiple CloudDrives?

     Would I create "Pool #1", pool my "full" 10TB drive with the NEW 10TB drive, and point everything at the pooled cloud drive? In that situation, how does writing to this pooled disk work in terms of chunk distribution? For example, if Drive #1 is at 9TB/10TB full and Drive #2 is at 0TB/10TB, will it split the chunks between both cloud drives, or target one of the two? And does that have an impact on IO/thread/API performance?

     Lastly, if my new CloudDrive pool is my new "main drive", does that mean CHKDSK would have to be run on the pool, or on the individual drives?
  8. I have symmetrical gigabit (1000 Mbps) and have seen my downloads reach 850-960 Mbps. Are your threads set up correctly?
  9. This seems to be fixing it... but I too got hit by the last outage. This time I created a 10TB drive (so I can run CHKDSK more easily), and I also have UPLOAD VERIFICATION turned on. It cuts upload speeds in half... but it's worth not losing data. EDIT: By the way, what backup provider are you using? I'm using Backblaze... but it's really slow.
  10. Hey guys, previously I had set my CloudDrive to 16TB, and CHKDSK took days to complete. My current drive is the max "default" (not manually entered) value in the GUI, 10TB. I want to prevent a repeat of losing all my data, so I'm being extra careful this time: I have upload verification turned on, and the drive is set to 10TB in case I have to scan it one day.

     My question is, going forward, what practices would you recommend for expansion and redundancy? I have a license for DrivePool too, and I was wondering how to go about optimizing my library. For example, once this drive fills up, should I resize it to a larger value, or simply create another virtual drive and pool them together under DrivePool?

     If pooling CloudDrives together is possible, how does the distribution of chunks work? Are some chunks written to VHDD1 and some to VHDD2, and if so, how is data integrity maintained when chunks are split across drives? Or are files assigned chunks and then a drive, and balanced within the CloudDrive pool? For example, with File1.avi (6GB) and File2.avi (3GB): is File1 broken into chunks and assigned to VHDD1 and File2 broken into chunks and assigned to VHDD2, or does File1 end up with 50% of its chunks on VHDD1 and 50% on VHDD2?

     Does pooling CloudDrives together increase or decrease download/upload/IO performance? Do they all count against the same upload cap?

     After all these questions: is there a practical way of creating a virtual pool/RAID that serves as an instant "backup" of your main CloudDrive/CloudDrive pool?
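
     To make the File1/File2 scenario concrete, here's a toy model of file-level placement, where each file lands whole on whichever drive has the most free space. This is only my illustration of one possible answer, not a claim about how DrivePool actually distributes data:

     # Toy model: each file goes WHOLE to the drive with the most free
     # space. Illustrates my question; not necessarily DrivePool's logic.
     drives = {"VHDD1": 10_000, "VHDD2": 10_000}  # free space, GB

     def place(filename: str, size_gb: int) -> str:
         target = max(drives, key=drives.get)  # most free space wins
         drives[target] -= size_gb
         return target

     print(place("File1.avi", 6))  # one drive takes the whole file
     print(place("File2.avi", 3))  # the other drive now has more free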
  11. Is this overkill, then? I have these settings to ensure I upload my files as fast as possible (1000/1000) while also being able to stream 4K and high-bitrate content with no hiccups.
  12. Ok, I got a dedicated 500GB NVMe/SATA drive to serve as a cache for my local DrivePool and for my CloudDrive. That way I can set-and-forget file transfers without having to worry about throttling. My upload speeds are 400-500 Mbps with upload verification on. They're higher with it off, but after losing 5TB of data... better safe than sorry.
  13. Schrodinger's Upload?

     I'm pretty sure I haven't crossed the 750GB limit today, but I'm getting "[IoManager:57] Error performing Write I/O operation on provider. Retrying. Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host."
  14. Hey guys, I'm finally filling the VERY LAST drive bay in my computer, and I got an SSD for two purposes: more personal space for software, and a dedicated cache for the CloudDrive. My question is the following: when you're transferring files to the Cloud Drive, they're not immediately transferred to this magical drive; they go to your defined cache drive... THEN get uploaded. The problem with this for me is that it places limits on how much data I can transfer at a time. I would like to set-and-forget some files, and to do that I need a chunk of space for the transfer process in the first place. So, for example, if I only have 150 GB available on my main drive, I can only transfer 150 GB of data at a time UNTIL it finishes uploading. Would detaching and re-attaching my drive, but choosing a separate cache drive, alleviate this issue? That way I'd have a dedicated cache drive and be able to upload more than 150 GB of data at a time.
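
     Rough model of why the cache size caps a batch (my own simplification and numbers: local copies run at disk speed until the cache fills, after which everything is paced by upload speed):

     # Simplified two-phase model of a big transfer into CloudDrive.
     # All numbers are my own assumptions, not measured behavior.
     CACHE_GB = 150      # free cache space
     TOTAL_GB = 500      # size of the set-and-forget batch
     DISK_MBPS = 4000    # local SSD write speed, megabits/sec
     UPLOAD_MBPS = 950   # internet upload, megabits/sec

     fast = CACHE_GB * 8000 / DISK_MBPS                 # until cache is full
     slow = (TOTAL_GB - CACHE_GB) * 8000 / UPLOAD_MBPS  # upload-paced rest
     print(f"~{(fast + slow) / 3600:.1f} h for the whole batch")  # ~0.9 h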
  15. Schrodinger's Upload?

     There are occasions where I leave a substantial amount of data uploading; when I open the window, the program is uploading at 0 Mbps with a few gigs left. It isn't until a few seconds later that it starts uploading again. Anyone experience something similar?
  16. I appreciate the response, and although I've been sparse on details, after the process completed, no recovery of my files took place. I've since deleted my drive, and will probably go the physical-drive route, unless someone here has had success backing up their CloudDrive data?
  17. After losing more than 200 painstakingly created files to the Google Drive fiasco... I can't in good conscience recommend this software anymore. I see no reports of data loss from the people who went the rclone route, but I do see multiple reports in this forum, mine included. I've lost more than a year's worth of data (in hours), time I don't have to spare anymore, due to the way this application interacts with Google Drive. Newer folders showed up as "corrupted", while older files are gone in their entirety. I ran CHKDSK, which did nothing, and I'm just completely apathetic to the whole thing right now. I don't know if I want to invest all this time again to make myself whole once more, but I'd recommend staying away from this product if you value your data... and time.
  18. Data corrupted...?

    How do I safely unmount the drive to run chkdsk?
  19. Data corrupted...?

     I can't even DELETE my corrupt folders... ffs
  20. Data corrupted...?

     I haven't done anything to my drives, but I know there was a problem with Google last night. I saw the software throw a few errors and dismount and remount... but ultimately a few files were lost, and now I have to go through the trouble of doing it all again. Any solution in sight?
  21. Try the StableBit.CloudDrive_1.1.0.999 beta: http://dl.covecube.com/CloudDriveWindows/beta/download/
  22. Yeah, this is the only annoyance for me... that, and not being able to transfer files to the CloudDrive unattended, because it dismounts.
  23. Hey guys, I had a CloudDrive on Google with multiple terabytes of data, and while it's attached, it's no longer displaying my files. Here are the actions I took before this happened:

     1) Ordered an SSD to use as an SSD cache drive for DrivePool
     2) Installed the SSD, but it wasn't being detected --> restarted 2 times until I saw the drive available in Device Manager, but with no letter/partition assigned
     3) On the 3rd restart, opening Disk Management (W10), a prompt came up for the unidentified drive asking if I wanted MBR or GPT --> opted for GPT, but the drive did not appear in Disk Management; a 2nd restart produced the same thing
     4) Manually assigned the drive through the DiskPart utility: first selected the drive, set it as MBR --> Active, then format and assign | Changed my mind, cleaned it, set it as GPT, and manually formatted a small space so it would appear in Disk Management
     5) Drive appeared with 100MB allocated and the rest unallocated ---> deleted all volumes, formatted, and assigned a drive letter
     6) Annexed the SSD to DrivePool and everything seemed fine and dandy, so I unassigned the drive letter
     7) Noticed that after restarting my computer 5-6 times, the CloudDrive wasn't being mounted anymore
     8) Drive displayed as mounted but wasn't in Explorer. Re-authorized and restarted... nothing
     9) Detached the drive, uninstalled CloudDrive with Revo Uninstaller --> restarted the computer
     10) Installed again, attached the drive/encryption... the drive still didn't appear in Explorer
     11) Checked Disk Management and saw that the last disk, "7", was unallocated/unassigned... tried assigning a drive letter without formatting, but it didn't work with CloudDrive
     12) Had to format the "CloudDrive" drive so a drive letter would show up and mount in an Explorer window
     13) RevoUninstalled --> restarted --> installed the latest beta
     14) Attached the drive: "mounting, looking for data on provider" ---> mounted, but no files accessible
     15) My drive is mounting with my encryption key, and CloudDrive displays the data occupying the CloudDrive along with the free space, but the Explorer window says otherwise
     16) Ran CHKDSK through Properties --> Tools; no errors? Drive mounts, but no files show up

     The drive is being mounted, and it's apparently looking for the data, but it's not displaying the files or the drive's usage in the virtual drive that's created. Creating another drive works fine, but I'm trying to access the files I painstakingly already uploaded... When I check the CloudPart folder, the numerically named cache files are small for the space they're allocated. I don't know what's going on... did I just lose all my data for no reason at all?