Covecube Inc.
Everything posted by JulesTop

  1. I would not delete the cache. I have a 270TB drive with 55TB used, and on 1307 I am upgrading at about 10% per 24-hour period, which is faster than before. I would stay the course if I were you.
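For anyone watching their own upgrade crawl along, the remaining time can be estimated with a simple linear extrapolation. This is a rough sketch only; the real rate varies with chunk size, chunk count, and API throttling:

```python
# Rough ETA for the chunk-reorganization upgrade, assuming progress
# continues at a roughly constant rate.
def upgrade_eta_hours(percent_done: float, percent_per_hour: float) -> float:
    """Hours remaining, given current progress and observed rate."""
    remaining = 100.0 - percent_done
    return remaining / percent_per_hour

# Example: 10% done, progressing at ~10% per day (10/24 percent per hour).
hours_left = upgrade_eta_hours(10.0, 10.0 / 24.0)
print(f"{hours_left / 24:.1f} days remaining")  # -> 9.0 days remaining
```

At the ~10%-per-day rate mentioned above, a freshly started upgrade would take roughly ten days end to end.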
  2. I'm enjoying your diary entries as we are in the same boat... not laughing at you... but laughing/crying with you! I know that this must be done to protect our data long term... it's just a painful process.
  3. I don't believe that this is the case. CloudDrive would organize it all as one single cloud drive, and everything would be in the same directory. I do the exact same thing you do, but with 55TB volumes in one CloudDrive, and they were all in the same folder.
  4. I don't think that applies because you can't 're-authorize' the drive once the process begins... at least no way that I can tell.
  5. Oh dear... Did you submit a ticket to support? This will make sure @Alex can take a look and roll out a fix if it's on the software end. I don't think he's on the forums too often.
  6. I'm currently at 12.02%, and I started on June 24th at 6:45am... so it's been over 2 days. I generally have larger files... from 15GB to 60GB, but I'm not sure what effect that has.
  7. I was wondering if this would happen as more and more people begin the upgrade process... The issue is, I don't think you can re-authorize the drive once the upgrade process begins... at least I don't see a way to do this. Otherwise, I would switch back to my own key.
  8. I'm going through the process now. It started at 6:45am this morning and is currently at 2.62% complete... I think this will take some time. But I don't think it actually downloads/re-uploads the chunks, it just moves them. I have 55TB in google drive BTW. Also, as far as I can tell, the drive is unusable during the process, and there is no way of pausing the process to access the drive content. However, it will be able to resume from where it left off after a shutdown. Having said all that, I have to say thanks to @Alex and @Christopher (Drashna) for mobilizing quickly on this, before it becomes a really serious problem. At least right now, we can migrate on our schedule, and until then there is a workaround with the default API keys.
  9. You called it. I just tried uploading a small file to the content folder directly from the Google drive web interface and got an 'upload failure (38)', same as before. It looks like this restriction rollout is definitely happening. It will be fantastic if @Alex has a solution coming soon! But also good that the default keys are keeping us going for now.
  10. I was on the latest stable (I think the last 4 digits were 1249) when this occurred. After a couple of days, I updated to the latest beta to try and solve it. Also, I realized that at some point I removed my personal Google Drive API credentials from the config file, and this is about the time the issue resolved. I just put back my personal API credentials and re-authorized, and the issue came back. I yet again switched back to 'null' (the default StableBit API credentials) and the issue went away again... I wonder if there are higher API limits with the StableBit credentials. Maybe @Christopher (Drashna) can shed some light.
  11. So the issue has been "fixed". There must be a bug somewhere, but I'm not sure if it's on Google's side or StableBit's. I deleted the file that I figured it was stuck trying to upload. The file I deleted was about 20GB, and there was over 65GB in the queue to upload. As soon as I deleted the 20GB file from my computer (which was on the CloudDrive), my upload queue seemed to just resume. It didn't drop by 20GB... it just resumed. Anyway, everything seems to be totally back to normal.
  12. I can download from the gdrive without issue. I can also upload to the root of the gdrive without problem (manually). When I tried adding a file to the clouddrive content folder manually (via the gdrive web interface), it fails. It looks like there is an issue with the content folder... Maybe @Alex or @Christopher (Drashna) can help.
  13. BTW, I really appreciate the help. Here it is. I wonder if there is a way of counting the number of chunks in google drive... since that is really what adds up
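One rough way to count chunks without touching the Google Drive web interface: since CloudDrive stores data in fixed-size chunks, the count can be estimated from the space used. A sketch, assuming 20MB chunks (the size mentioned elsewhere in this thread) and fully packed chunks:

```python
import math

def estimated_chunk_count(used_bytes: int, chunk_bytes: int) -> int:
    """Approximate number of storage chunks for a given amount of stored
    data, assuming a fixed chunk size and fully packed chunks."""
    return math.ceil(used_bytes / chunk_bytes)

TB = 1000 ** 4
MB = 1000 ** 2

# 55 TB of data stored as 20 MB chunks:
print(estimated_chunk_count(55 * TB, 20 * MB))  # -> 2750000
```

That works out to roughly 2.75 million chunks for a 55TB drive, which gives a sense of why per-folder object counts can add up quickly.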
  14. In that case... there is definitely something wrong... I don't believe I should be anywhere near that, as StableBit is the only thing that manages my Google Drive. You can see in the attachment that my root directory only has one folder, and deeper than that, it's all managed by StableBit CloudDrive.
  15. My Windows recycle bin? I went there and it is empty. Just in case, I just changed the settings for my cloud drive so files are permanently deleted without being stored in the recycle bin. I also went to my actual Google Drive web interface; however, that bin is empty too. I'm still having the issue... it's weird, I deleted a bunch of files and still have the issue. I have it set up with 20MB chunks... so my drive would likely have to be humongous to reach the file limit.
  16. I doubt I hit my file and folder limit. The reason I say that is that I deleted a few folders and their files, and nothing changed. I still get this error and my uploads are hanging. I'm wondering if this is happening to anyone else...
  17. I just started getting the same thing today. Based on the coincidence of both of us having the issue on the same day, this may be a Google issue.
  18. To reauthorize, you shouldn't have to look at the providers list. Although, it's not in the list because you have not enabled 'experimental providers'. See below. To reauthorize, you need to click on 'manage drive' as shown below.
  19. Hi @Historybuff, The wiki is at the following link: http://wiki.covecube.com/StableBit_CloudDrive_Q7941147 I did this a while ago, so I don't remember if there are extra steps, but it's all free and works great. Once you change the key in the JSON file, you just need to re-authorize the cloud drive you want to use that key with.
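The key swap itself is just a JSON edit plus a re-authorization. A minimal sketch of that edit; the settings path and the key names below are assumptions from memory, not verified, so follow the wiki above for the authoritative file and field names:

```python
# Hypothetical sketch of swapping in personal Google Drive API keys.
# The key names ("GoogleDrive_ClientId"/"GoogleDrive_ClientSecret") and the
# example path are assumptions -- the wiki page is the authoritative source.
import json
from pathlib import Path

def set_gdrive_keys(settings_path: str, client_id: str, client_secret: str) -> None:
    """Rewrite the provider settings JSON with the given API credentials."""
    path = Path(settings_path)
    settings = json.loads(path.read_text())
    settings["GoogleDrive_ClientId"] = client_id
    settings["GoogleDrive_ClientSecret"] = client_secret
    path.write_text(json.dumps(settings, indent=2))

# e.g. set_gdrive_keys(r"C:\ProgramData\StableBit CloudDrive\ProviderSettings.json",
#                      "my-client-id.apps.googleusercontent.com", "my-secret")
```

As mentioned in post 10 above, setting the values back to 'null' reverts to the default StableBit credentials; either way, the drive must be re-authorized afterwards for the change to take effect.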
  20. Drive Duplication

    Those are all awesome ideas!
  21. I also just reverted to 1145. I'm sure Alex is working on it. Looks like a lot of features were added. I especially liked being able to duplicate pinned data only on existing pools... gives them a layer of protection while I re-upload the whole drive to a fully duplicated cloud drive. I was having issues with uploading and downloading in 1165. I never noticed it when accessing the files, but there were always critical errors in the dashboard... Also, my uploads were not stable for some reason, always jumping around in speed and never stable at 70Mbps... 1145 it is until the team addresses some of the issues. Thanks @Christopher (Drashna) for the tip on reverting to 1145!!
  22. Drive Duplication

    It would be awesome if, after turning ON drive duplication, it would notice that it's missing parts and automatically 'repair' them, so to speak... I'm sure it's more complicated than that... I'm currently copying everything over to a new drive that has duplication turned ON from the get-go... but unfortunately that takes twice as long, since duplicating a drive that has already been populated would only mean uploading half the data.
  23. Yeah, kinda... I essentially want ordered file placement regardless of file placement rules, with files then moved according to those rules during balancing. So, if I have 2 folders in my hybrid pool, and I want folder 1 to be in the cloud and folder 2 to be on the HDD (space permitting), I would like all files, no matter their preferred location, to be moved to the HDD first and then organized into their preferred locations during balancing. When I use the 'Ordered File Placement' plugin, it totally works unless I also apply file placement rules. Even with 'File placement rules respect real-time file placement limits set by the balancing plugins' enabled, the files seem to move directly to the cloud (which fills up the cache and then slows down the file move). So, right now, what I did is remove all of my file placement limits, which results in both folders being mixed across the cloud and HDDs; however, when I move files, I at least get full speed as it fills the HDDs first. So I guess what I'm looking for is almost the functionality of the SSD Optimizer, but without the SSDs being emptied during balancing.