Covecube Inc.

All Activity


  1. Today
  2. This seems to be a known issue with Win10 insider builds of 2004, since MSFT moved from the Manganese branch to the Iron branch. There is another post about it: https://community.covecube.com/index.php?/topic/5374-win10-build-broke-drivepool-how-to-fix/ It looks like we are stuck until MSFT fixes whatever is breaking software RAID! I am trialling DrivePool because I had the same issue with Drive Bender, which stopped working when I updated my Plex server. I had to roll back to an earlier release that worked fine (19635, still the Manganese branch); now that MSFT has moved to the Iron branch, things are broken again. Another post reports that even Storage Spaces is broken: https://mspoweruser.com/the-windows-10-may-2020-update-may-be-causing-major-hard-drive-management-issues/
  3. I use x2 duplication over the whole pool, so I expect I won't get a lot out of the SSD Optimizer, but I'll set it up anyway just to make the server marginally better; for example, when we finish a holiday and upload many gigs of photos and videos it may help. In that case I'll put everything in one pool and save myself the hassle of multiple pools. I have two backup strategies in addition to x2 duplication: the Amazon Photos desktop app is running, but it is a poor application and often takes a lot of poking to make it work, and I've also added a removable HDD bay to back up the most important data to. I'm considering the StableBit CloudDrive application, but I don't know how helpful it will be yet.
  4. Upgraded to this version and my drive instantly mounted. If I understand the logs correctly, the new version simply doesn't bother moving chunks if the drive is under the 490k limit? I'm assuming that upgrading partway through the prior version's file moves hasn't broken anything, but I'm not at all sure.
  5. The new beta looks to have major improvements to the migration process; make sure you're on it before reporting any additional bugs or doing something drastic like deleting the local cache. (A rough sketch of how a resumable, checkpointed migration can work appears after this list.) From the .1307 changelog:
     * Added detailed logging to the Google Drive migration process that is enabled by default.
     * Redesigned the Google Drive migration process to be quicker in most cases:
       - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
       - It is able to resume from where the old migration left off.
     * [Issue #28410] Output a descriptive warning to the log when a storage provider's data organization upgrade fails.
  6. New update: now at 67% (on an 8TB drive) and the web UI has been showing files being moved as well. So there is hope yet.
  7. Yesterday
  8. To answer your question: I deleted the main cache file, which for me is stored on my C: drive. Its name starts with CloudDrive and is followed by your drive UID. As SRCRIST said, it is very risky and my drive may be totally corrupt when it finishes; that was a risk I was okay with taking. You make your own decision. I did get a message back from Christopher on the Covecube team. They looked at the history of my issues and what's happened, plus the troubleshooter data, and his message was this: "We have found a bug in the migration code, that may be the problem here. We should have a fix out soon, that should be much faster (or even instantaneous) and handle failure better."
  9. Between this and Chase's earlier notes about deleting the cache, it sounds like we might be stumbling on some buggy cache code. Make sure you guys are submitting actual tickets and troubleshooter dumps as well, so Alex and Christopher can take a look at the related code and your logs.
  10. The cache also includes information that has been modified but not yet uploaded. Everyone should be *very* careful before simply deleting the local cache: any modifications that have not yet been uploaded to the cloud provider will be permanently lost, and you could potentially corrupt your drive as a result. I believe you are theoretically correct as long as everything in the cache has already been uploaded, but extreme caution should be used before following this example. Anyone who deletes their cache with data still in the "to upload" state will (a) definitely lose that data for good and (b) potentially corrupt their drive, depending on what that data is (file system data, for example). (A toy illustration of why a write-back cache behaves this way appears after this list.)
  11. Decided to try the beta as well, now that I have a holiday to babysit the process. Two drives started the upgrade ("Drive is upgrading"), two are stuck in "Drive initializing", and the five others seemed to work fine. Due to Windows updates, I restarted the server after checking this thread and after seeing some weird issues (the mounted drives started saying they couldn't find the file when I looked in the performance settings; I forget the exact phrasing), and found a bit of a grisly sight. The five drives that were "fine" can no longer be mounted, and my cache drive is completely filled to the brim (44k free). This could be due to some automation filling the earlier mounted drives, but I have never seen it filled below the 5 GB limit. Hopefully they cannot be mounted simply because there is no space left on the cache drive to write to, but as I don't have a functioning drive or any means to actually empty the cache drive, I'm stuck in a place where I am unsure whether everything is lost or I just have to be patient. The log says: [CloudDrives] Cannot mount cloud part xyz. It was force unmounted. I'll try for patience for now. So, be careful out there.
  12. I appreciate your response! I have been looking into other cloud drive software for my main purpose, but I am also impressed by this company and will possibly buy from them for other needs. Thank you!
  13. Thank you! You are right. Now I understand better.
  14. Nice find. What is that file called, and where can I find it? Right now 2 of 6 drives are queued to perform recovery and the rest are in "Drive cannot be mounted" mode. Edit: did you mean the cache files stored on the designated CloudDrive cache drive? If so, there are over 100 files. How did you know which one to delete?
  15. I may have fixed my drive that was stuck in "queued for recovery", but damn did it make me nervous, and I'm far from out of the woods. In my understanding of the system, the "cache" is just a local file holding your pinned data and a COPY of the last #GB of data that was received or transferred between your machine and the cloud drives. It also seems to hold the information the CloudDrive program uses to say: yep, this is a drive that we have mounted. So I shut down the Windows service for CloudDrive and deleted the cache file for my problem drive. After restarting my computer, when CloudDrive opened it only showed the two working drives as mounted, and the problem drive was available to mount. Sweet! Mount that sucker. It seems to be working: it now says the drive is upgrading and actually shows a percentage, albeit way slower than the last time I went through this stage, probably because this latest beta does less I/O. It did NOT give me the option to reauthorize, so I assume this drive is still using the StableBit API. At this rate it should complete by Monday-ish? (A hedged sketch of locating these cache files appears after this list.)
  16. Well, that's a two-year-old version, and there have been numerous changes and bug fixes since then. If you want to stay on the stable channel, version 2.2.3.1019 from October 2019 is the latest; otherwise, move to the beta and see whether your problem goes away.
  17. I'm still holding off patiently on the conversion. It sounds like it works, but I'm waiting to get a better idea of how long it takes relative to drive data size. I've noticed that, without any settings changes these past few days, I've gotten a couple of yellow I/O error warnings about the user upload rate limit being exceeded (which otherwise haven't been a problem), and I've seen Google Drive-side upload throttling at lower than normal concurrency, only 4 workers at 70 Mbit/s. I'm guessing some of the rate-limit errors people may be seeing during conversion are transient, from Google Drive being under high load. (A generic retry/backoff sketch for this kind of error appears after this list.)
  18. Uh, yes. When this process first started I was on the default API key, and at that time I had two drives going through the update. In the middle of that process I added my own API key, but since you aren't able to reauthorize during the update, it didn't actually switch to mine. When my first drive finished, I was able to reauthorize. My second drive, the one still having issues, has never been able to reauthorize, so my understanding is that it is still on the default API key. The third drive, the one I mounted that said it was updating without showing a percentage, was added after I set up my own API key and is therefore authorized against it.
  19. I have an enclosure with 4x8TB drives in it, all formatted to NTFS. I added a drive to a pool and it got stuck at 95%, so I had to abort. A 2TB drive then appeared in Disk Management that says it's RAW; it won't mount in Windows Explorer and shows an error if you try to access it. I tried this all over again to see if the behavior was consistent, and everything happened exactly the same, including another 2TB RAW drive appearing in Disk Management. At this point, I'm unable to add any drives to a pool, and I have two ghost drives (E: and F:) that are RAW and that I can't format or delete. Where to go from here? EDIT: I went ahead and reset DrivePool, then tried to add a drive to a pool again. This time it "finished", but it still created a RAW partition that I can't access, and no pool seems to have been created. And when I try to add that same disk to a pool again it says "can't add the same disk to a pool twice." Any idea what I could be doing wrong here?
  20. I had the wisdom, two years ago when one of the SSDs in my server corrupted and took almost a full 50TB of data in a CloudDrive with it, to mirror all my data across multiple cloud drives and multiple Google accounts, and I am very happy with my past self at this stage. I will start a staged rollout of this process on my drives and will keep you updated if I find any issues.
  21. Unintentional Guinea Pig Diaries. Day 8 - Entry 1. I spent the rest of yesterday licking my wounds and contemplating a future without my data. I could probably write a horror movie script from those thoughts, but it would be too dark for the people of this world. I must learn from my transgressions. In an act of self-punishment, and an effort to see the world from a different angle, I slept in the dog's bed last night. He never sleeps there anyway, but he would never have done the things I did; for that, I have decided he holds more wisdom than his human. This must have pleased the Data Gods, because this morning I awoke with back pain and two of my drives mounted and functioning. The original drive, which had completed the upgrade, had been working, and then went into "drive initializing", is now working again. The drive that I had tried to mount and that said it was upgrading with no percentage shown has mounted (FYI, a 15TB drive with 500GB on it). However, my largest drive is still sitting at "Drive queued to perform recovery". Maybe one more night in the dog's bed will complete the offering required by the Data Gods. End diary entry. (P.S. Just in case you wondered, that spoiled dog has "The Big One" Lovesac as a dog bed. In pretty ironic fashion, their website is down. #Offering)
  22. So I think it is a matter of use case and personal taste. IMHO, just use one pool, especially if you're going to replace the 5900rpm drives over time anyway. I assume you run things over a network, and as most people are still on 1Gbit networks (or slower), even the 5900rpm drives won't slow you down. I've never used the SSD Optimizer plugin, but yes, it helps with writing, not reading (except for the off chance that the file you read is still on the SSD), and even then it would need a data path faster than 1Gbit all the way. What you could do is test a little by writing to the disks directly, outside the pool, in a scenario that resembles your use case; if you see no difference, just use one pool, which makes management a lot easier. If anything, I would think more about duplication (do you use that?) and real backups. (A quick back-of-the-envelope throughput comparison appears after this list.)
  23. Last week
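Regarding item 5: the changelog says the redesigned Google Drive migration "is able to resume from where the old migration left off." CloudDrive's actual implementation is not public, so the following is only a minimal sketch of the general pattern (persisting a checkpoint after each chunk so a restart can skip work already done). The names checkpoint.json, migrate_chunk, and the chunk list are illustrative assumptions, not CloudDrive internals.

```python
# Illustrative only: a generic resumable migration loop, NOT CloudDrive's code.
# Assumption: chunks are processed in a stable order and a checkpoint file
# records the index of the next chunk that still needs to be migrated.
import json
import os

CHECKPOINT = "checkpoint.json"  # hypothetical checkpoint file


def load_checkpoint():
    """Return the index of the next chunk to migrate (0 if starting fresh)."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_chunk"]
    return 0


def save_checkpoint(next_chunk):
    """Persist progress atomically so a crash mid-write cannot corrupt it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"next_chunk": next_chunk}, f)
    os.replace(tmp, CHECKPOINT)


def migrate(chunks, migrate_chunk):
    """Migrate chunks, resuming from wherever a previous run stopped."""
    start = load_checkpoint()
    for i in range(start, len(chunks)):
        migrate_chunk(chunks[i])   # hypothetical per-chunk move/rename
        save_checkpoint(i + 1)     # record progress after each chunk
```

A pattern like this is also consistent with the "nearly instantaneous" case in the changelog: if nothing needs moving, the loop simply has no remaining work to do.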
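Regarding items 10 and 11: the reason deleting the local cache can lose data is the write-back behaviour described there, where recent writes live only in the local cache until the uploader has pushed them to the provider. This toy model is purely illustrative (not CloudDrive's actual design) and only shows why anything still in the "to upload" state disappears along with the cache.

```python
# Toy model of a write-back cache; purely illustrative, not CloudDrive internals.
class WriteBackCache:
    def __init__(self):
        self.blocks = {}        # block_id -> data held locally
        self.to_upload = set()  # blocks written locally but not yet in the cloud

    def write(self, block_id, data):
        # Writes land in the local cache first and are queued for upload.
        self.blocks[block_id] = data
        self.to_upload.add(block_id)

    def upload_one(self, cloud):
        # A background uploader drains the queue one block at a time.
        if self.to_upload:
            block_id = self.to_upload.pop()
            cloud[block_id] = self.blocks[block_id]


cache, cloud = WriteBackCache(), {}
cache.write("fs-metadata", b"critical filesystem structures")
# If the cache is deleted *before* upload_one() runs, the cloud copy never
# receives this block -- the data (here, filesystem metadata) is simply gone,
# which is how a deleted cache can translate into a corrupted drive.
```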
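Regarding items 14 and 15: per item 8, the cache file's name starts with CloudDrive followed by the drive's UID. I am not certain of the exact naming or location on every install, so treat the pattern and the example path below as assumptions. This sketch is read-only: it only lists candidate files and their sizes, and deletes nothing (per items 10 and 11, deleting the wrong file can destroy data that has not yet been uploaded).

```python
# Read-only helper: list files whose names look like CloudDrive cache files.
# Assumptions: the "CloudDrive*" name pattern and the example cache-drive path
# come from the forum posts above and may differ on your system.
from pathlib import Path


def list_cache_candidates(cache_dir):
    """Print CloudDrive-looking cache files with their sizes, largest first."""
    candidates = sorted(
        Path(cache_dir).glob("CloudDrive*"),
        key=lambda p: p.stat().st_size,
        reverse=True,
    )
    for path in candidates:
        print(f"{path.stat().st_size / 1e9:8.1f} GB  {path}")


# Example (hypothetical cache location on the designated cache drive):
# list_cache_candidates(r"D:\CloudDrive Cache")
```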
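Regarding item 17: "user rate limit exceeded" responses from the Google Drive API are generally transient and are normally handled by retrying with exponential backoff; whether and how CloudDrive does this internally is not something these posts confirm. Below is only a generic backoff sketch for any function that raises on a rate-limit response; RateLimitError and do_upload are placeholders, not real CloudDrive or Google client names.

```python
# Generic exponential backoff with jitter for transient rate-limit errors.
import random
import time


class RateLimitError(Exception):
    """Raised by a (hypothetical) client when the upload rate limit is hit."""


def with_backoff(call, max_attempts=6, base_delay=1.0):
    """Retry `call` on RateLimitError, waiting longer after each failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            # Sleep 1s, 2s, 4s, ... plus jitter so concurrent workers desync.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 1))


# Usage sketch: result = with_backoff(lambda: do_upload(chunk))
```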
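Regarding item 22: the point that a 1Gbit network, rather than the 5900rpm disks, is usually the bottleneck comes down to simple arithmetic. The sequential figures used below (~150 MB/s for a typical 5900rpm drive, ~500 MB/s for a SATA SSD) are rough assumptions; real speeds vary by model and by how full or fragmented the disk is.

```python
# Back-of-the-envelope comparison of network vs. disk throughput.
network_gbit = 1.0                       # typical home/SMB gigabit link
network_mb_s = network_gbit * 1000 / 8   # = 125 MB/s ceiling, ignoring overhead

hdd_5900rpm_mb_s = 150.0                 # assumed sequential speed, 5900rpm HDD
ssd_sata_mb_s = 500.0                    # assumed sequential speed, SATA SSD

print(f"1 Gbit network : {network_mb_s:.0f} MB/s")
print(f"5900rpm HDD    : {hdd_5900rpm_mb_s:.0f} MB/s sequential (assumed)")
print(f"SATA SSD       : {ssd_sata_mb_s:.0f} MB/s sequential (assumed)")
# Both drive types exceed what a 1 Gbit link can carry, so over such a network
# the SSD Optimizer mainly absorbs bursty writes rather than raising throughput.
```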
