KiLLeRRaT

Members
  • Posts: 25
  • Joined
  • Last visited
  • Days Won: 2

KiLLeRRaT last won the day on June 8, 2017

KiLLeRRaT had the most liked content!

Recent Profile Visitors

1085 profile views

KiLLeRRaT's Achievements

Member (2/3)

Reputation: 6

  1. I also upgraded to 1315 and it looks like everything is working the way it should. I haven't had the issues that @darkly reported, where he gets the insufficient bandwidth errors. My disk upgrade went through instantly, even though I was getting the API limit error before. I did revert back to the standard API keys (not using my own) before upgrading. I haven't tried 1316 yet. Cheers,
  2. Hi All, I've recently noticed that my drive has 40GB to be uploaded, and saw there are errors in the top left. Clicking that showed the error details. What's the deal here, and is this going to be an ongoing issue with growing drives? My drive is a 15TB drive with 6TB free (so only using 9TB). Attached is my drive's info screen. EDIT: CloudDrive version: 1.1.5.1249 on Windows Server 2016. Cheers,
  3. Auto retry

    I still think it's a good idea to have auto reconnect. The way this would work is still to have the same failure setting in the config, and to let the drive disconnect as normal. We then want it to essentially click the retry button by itself, maybe an hour or two after it was disconnected... I don't see how that would cause a system lockup any more than me manually clicking the retry button (a sketch follows after this list). What's everyone's thoughts on this? Edit: Also, I think recommending that people raise CloudFsDisk_MaximumConsecutiveIoFailures WILL make their systems more flaky, because there will be guys who put unrealistic values in there to try to get around the disconnect, when what they actually need is an 'auto reconnect in x hours' setting.
  4. Ouch, I wish it were simpler. I will test it using a small dummy disk: clone it, delete the old disk, and get the new disk in place... I guess I will leave this as the 'last resort' if something really bad happens... Right now I just need to know whether it will be reliable. I guess testing will give me the answers. Thanks
  5. Hi, just a quick question: can I make a copy of the drive contents in my Google Drive folder and attach it as a new drive? Do I need to create a new GUID for it somewhere, other than renaming the Google Drive folder? The reason I'm asking is that rclone supports server-side copying, so I could make a duplicate of my entire drive once a month and mount it later in case of corruption on my main drive, like what happened to a few of us at the start of the month (see the sketch after this list). Thanks!
  6. Hey guys, good to see that the releases are getting more and more solid. I just had a thought while watching my transfers/prefetching: what if the prefetch forward value automatically scaled based on the hit rate of the prefetch? Perhaps you could provide a range, e.g. between 1 MB and 500 MB, driven by hit rate. For streaming movies a large prefetch works well (e.g. 150 MB, or even more); the hit rate will be good, growing the prefetch size more and more. If I then start looking through photos that are only 3 MB each, the hit rate will get pretty terrible, scaling the prefetch back down to, say, 1 MB. Just a thought, but this could improve general bandwidth use, and ultimately performance (a sketch follows after this list).
  7. Hey, would it be useful to have a sticky on here that's posted when a new beta version comes out, along with what people run into and find in that version? Basically a copy of changes.txt, but where people can then post comments etc... e.g. the issue we had where drives become corrupt... That way it's quick to realize that you probably shouldn't move to that version until it's resolved...? Just a thought!
  8. Hey Chris, yeah, I haven't lost anything important; I have quite a number of backups of my important stuff on multiple services, and also on multiple physical drives in separate locations. I was just hopeful with this one, since I had just sorted out my TV show library, which took a while to get working the way I wanted, so I was hoping I wouldn't have to do it all over again, heh. Thanks for the help and so on, though, loving the product so far!
  9. Heh, it still stays RAW... I guess I'd better dust off the old local disks, plug in the array controller :\ and see what I can recover here. I'm just thinking of a way to prevent this in the future without keeping local disks around... Any suggestions, other than the rclone suggestion by Akame?
  10. I think my drive is too far gone. I tried it, but it remained RAW. I was part way through a backup to another cloud drive; that one recovered fine, but I'm now missing a load of stuff that wasn't backed up yet. I wonder if it's a good idea to create a full backup of your drive and detach it, then every once in a while connect and sync it. Like a backup, although it would be on the same provider... but it would mitigate these issues.
  11. Is there a way to have the application automatically discard the duplicate chunks that were made after a certain date? Or automatically keep/use the oldest file when it finds duplicates (see the sketch after this list)? That might fix my problem; then I could just restore my Google Drive to, say, the 28th of Jan, and voila?
  12. Any update on this yet? I'm still stuck without my files.
  13. That sounds pretty painful! Let's hope they can give us a quick easy fix for this via an update!
  14. When trying to enter some folders, it says they're corrupt. I'm also getting corruption errors when I try to robocopy stuff from my cloud drive across to my machine.
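
A minimal sketch of the auto-reconnect idea from post 3, in Python; the function names, the failure counter, and the delay constant are all hypothetical (CloudDrive exposes no such API), and it only illustrates the timing logic of "disconnect as normal, then press retry a couple of hours later":

    import time

    # Hypothetical names; the limit mirrors CloudFsDisk_MaximumConsecutiveIoFailures.
    MAX_CONSECUTIVE_IO_FAILURES = 5
    AUTO_RECONNECT_DELAY = 2 * 60 * 60   # the "auto reconnect in x hours" setting

    def run_drive(io_operation, reconnect):
        failures = 0
        while True:
            try:
                io_operation()
                failures = 0              # any success resets the counter
            except IOError:
                failures += 1
                if failures >= MAX_CONSECUTIVE_IO_FAILURES:
                    # Disconnect as normal rather than raising the limit...
                    print("disconnected after repeated I/O failures")
                    time.sleep(AUTO_RECONNECT_DELAY)
                    reconnect()           # ...then press "Retry" automatically
                    failures = 0

The point of the sketch is that the failure limit stays at a sane value; only the manual retry click is automated.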
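The monthly server-side duplicate from post 5 could be driven by a short script shelling out to rclone; the gdrive: remote and the folder names below are assumptions, and copies within a single Google Drive remote are performed server-side, so no local bandwidth is spent:

    import datetime
    import subprocess

    # Assumed names: substitute your own rclone remote and the folder
    # that holds the CloudDrive data.
    SOURCE = "gdrive:StableBit CloudDrive"
    BACKUP = "gdrive:CloudDrive-backups/" + datetime.date.today().strftime("%Y-%m")

    # rclone keeps same-remote copies server-side on Google Drive.
    subprocess.run(["rclone", "copy", SOURCE, BACKUP], check=True)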
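A sketch of the self-scaling prefetch from post 6; the 1 MB/500 MB range comes from the post, while the moving average and the grow/shrink thresholds are illustrative guesses, not CloudDrive's actual prefetcher:

    class AdaptivePrefetch:
        MIN_SIZE = 1 * 1024 * 1024        # 1 MB floor (small random reads)
        MAX_SIZE = 500 * 1024 * 1024      # 500 MB ceiling (streaming)

        def __init__(self):
            self.size = 10 * 1024 * 1024  # starting prefetch-forward size
            self.hit_rate = 0.5           # smoothed hit-rate estimate

        def record(self, hit):
            # Exponential moving average so recent accesses dominate.
            self.hit_rate = 0.9 * self.hit_rate + 0.1 * (1.0 if hit else 0.0)
            if self.hit_rate > 0.8:       # prefetched data is being used: grow
                self.size = min(self.size * 2, self.MAX_SIZE)
            elif self.hit_rate < 0.3:     # mostly wasted bandwidth: shrink
                self.size = max(self.size // 2, self.MIN_SIZE)

Streaming a movie keeps the hit rate high and pushes the window toward the ceiling; flipping through 3 MB photos drags it back toward the floor.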
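Post 11's keep-the-oldest behaviour is close to what rclone's dedupe command already does on Google Drive (one of the few providers that allows duplicate file names); a sketch, with the remote and folder names again assumed:

    import subprocess

    # "--dedupe-mode oldest" keeps the oldest copy of each duplicated
    # name and removes the newer ones. Point it at the folder holding
    # the drive's chunks (names below are assumptions).
    subprocess.run(
        ["rclone", "dedupe", "--dedupe-mode", "oldest",
         "gdrive:StableBit CloudDrive"],
        check=True,
    )

Combined with restoring the Google Drive folder to a known-good date, this approximates the "discard chunks made after a certain date" idea.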