Covecube Inc.

KiLLeRRaT

Members
  • Content Count: 23
  • Joined
  • Last visited
  • Days Won: 2
  1. KiLLeRRaT

    Auto retry

I still think it's a good idea to have auto-reconnect. The way this would work is to keep the same failure setting in the config, and let the drive disconnect as normal. We'd then want it to essentially click the retry button, maybe an hour or two after it disconnected by itself... I don't see how that would cause a system lockup any more than if I manually clicked the retry button. What's everyone's thoughts on this? Edit: Also, I think recommending that people raise CloudFsDisk_MaximumConsecutiveIoFailures WILL make their systems more flaky, because some people will put unrealistic values in there to try to get around the disconnect, when what they actually need is an auto-reconnect-in-x-hours setting.
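The behaviour proposed above can be sketched in a few lines. This is a hypothetical illustration only, not CloudDrive's actual API; `retry_fn` stands in for whatever internal action the Retry button triggers, and the delay value is the "hour or two" from the suggestion:

```python
# Hypothetical sketch of the proposed auto-reconnect: after the drive
# disconnects (same consecutive-failure threshold as today), wait a
# configurable delay and then fire the same action a user would get by
# clicking Retry. All names here are illustrative.

RETRY_DELAY_SECONDS = 2 * 60 * 60  # e.g. retry two hours after disconnect

def auto_retry(disconnected_at, now, retry_fn, delay=RETRY_DELAY_SECONDS):
    """Call retry_fn once the delay has elapsed; return True if retried."""
    if now - disconnected_at >= delay:
        retry_fn()
        return True
    return False
```

The point is that the retry fires on a timer instead of requiring a user at the console, while the existing failure threshold stays untouched.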
  2. Ouch, I wish it was simpler. I will test it using a small dummy disk, clone it, delete the old disk, and get the new disk in place... I guess I will leave this as the 'last resort' if something really bad happens... Right now I just need to know if it will be reliable. I guess testing this will give me the answers. Thanks
  3. Hi, just a quick question: can I make a copy of the drive contents in my Google Drive folder and attach it as a new drive? Do I need to create a new GUID for it somewhere, beyond renaming the Google Drive folder? The reason I'm asking is that rclone supports server-side copying, so I could make a duplicate copy of my entire drive once a month and mount it later in case of any corruption on my main drive, like what happened to a few of us at the start of the month. Thanks!
  4. Hey guys, Good to see that the releases are getting more and more solid. I just had a thought while looking at my transfers/prefetching: what if the prefetch forward value automatically scaled based on the hit rate of the prefetch? Perhaps you could provide a range, e.g. between 1 MB and 500 MB, based on hit rate. For streaming movies, a large prefetch works well (e.g. 150 MB, or even more). The hit rate will be good, thus growing the prefetch size more and more. If I then start looking through photos that are only 3 MB each, the hit rate will get pretty terrible, and the prefetch scales back down to, say, 1 MB. Just thought this could improve general bandwidth use, and ultimately performance.
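A minimal sketch of this scaling idea. The 1 MB / 500 MB range comes from the suggestion above; the growth factor and hit-rate cut-offs are made-up parameters, not anything CloudDrive actually implements:

```python
# Illustrative sketch of adaptive prefetch sizing: grow the window when
# the hit rate is high (sequential streaming), shrink it when low
# (small random reads), always clamped to a configured range.

MIN_PREFETCH = 1 * 1024 * 1024     # 1 MB
MAX_PREFETCH = 500 * 1024 * 1024   # 500 MB

def next_prefetch_size(current, hit_rate,
                       grow_above=0.8, shrink_below=0.3, factor=2):
    """Scale the prefetch window by hit rate, clamped to [MIN, MAX]."""
    if hit_rate >= grow_above:
        current *= factor          # streaming: hits are good, grow
    elif hit_rate <= shrink_below:
        current //= factor         # photo browsing: misses, shrink back
    return max(MIN_PREFETCH, min(MAX_PREFETCH, current))
```

With this shape, a movie stream ratchets the window up toward the cap, and a run of small photo reads walks it back down within a few adjustments.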
  5. Hey, Would it be useful to have a sticky on here that's posted when a new beta version comes out, along with what people run into and find in that version? Basically a copy of the changes.txt, but people can then post comments on it etc... e.g. the issue we had where drives become corrupt... That way it's quick to realize that you probably shouldn't install the update until it's resolved...? Just a thought!
  6. Hey Chris, Yeah I haven't lost anything important; I have quite a number of backups of my important stuff on multiple services and also on multiple physical drives in separate locations. I was just hopeful with this since I had just sorted out my TV show library, which took a while to get working the way I wanted, so I was hoping I wouldn't have to do it all over again heh. Thanks for the help and so on though, loving the product so far!
  7. Heh still stays RAW... I guess I better dust off the old local disks and plug in the array controller :\ See what I can recover here. I'm just thinking of a way to prevent this in the future without having local disks around... Any suggestions other than the rclone suggestion by Akame?
  8. I think my drive is too far gone. I tried it but it remained RAW. I was partway through a backup to another cloud drive; that one recovered fine, but I'm now missing a load of stuff that wasn't backed up. I wonder if it's a good idea to create a full backup of your drives and detach them, then once in a while connect it and sync it. Like a backup, although it would be on the same provider... but it'd mitigate these issues.
  9. Is there a way to have the application automatically discard the duplicate chunks that were made after a certain date? Or automatically keep/use the oldest file if it finds duplicates? That may fix my problem, then I can just restore my Google drive to say the 28th of Jan or something, and voila?
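Something like the "keep the oldest copy" rule could look like this. This is only an illustration of the selection logic, not an existing CloudDrive feature; the `(chunk_id, mtime, path)` shape of the input is an assumption for the sketch:

```python
# Hypothetical "keep the oldest" deduplication: given chunk copies that
# share a chunk ID, keep the one with the earliest modification time and
# return the paths of every newer duplicate for removal.

from collections import defaultdict

def duplicates_to_discard(chunks):
    """chunks: iterable of (chunk_id, mtime, path) tuples.
    Keep the oldest copy of each chunk_id; return paths to discard."""
    by_id = defaultdict(list)
    for chunk_id, mtime, path in chunks:
        by_id[chunk_id].append((mtime, path))
    discard = []
    for copies in by_id.values():
        copies.sort()                          # oldest copy first
        discard.extend(path for _, path in copies[1:])
    return discard
```

Pairing this with a provider-side restore to a known-good date (the "28th of Jan" idea above) is what would actually make the drive mountable again; the sketch only shows which duplicates would go.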
  10. Any update on this yet? I'm still stuck without my files
  11. That sounds pretty painful! Let's hope they can give us a quick easy fix for this via an update!
  12. When I try to enter some folders, it says they're corrupt. I'm also getting corruption errors when I try to robocopy stuff from my cloud drive across to my machine.
  13. Excellent, thanks, I will give this a shot and let you know how it goes!
  14. Hi Chris, I have just uploaded the drive trace. I'm having an issue where my download is BARELY hitting 3 Mbps using 8 download threads, on a 130 Mbit connection. Let me know if that helps.