Covecube Inc. forums — posts by vashp2029 (Members; content count: 7)
  1. As others have described, I recently tried to move my CloudDrive to another PC by detaching it and attaching it on the other machine. The drive was working fine with no data loss before detaching, and I didn't force the detach (I let it detach properly). Once attached to the new PC, it shows valid usage statistics, both in the CloudDrive UI and in Explorer (x TB of y TB used). However, when I actually open the drive in Explorer, it says there are no files.
I've tried taking the disk offline, unpinning, and everything else suggested in the other threads, but nothing changed for me. Running R-Undelete, however, I can see all the files that should be there but that Explorer can't see. I also stumbled onto this by accident: when I try to set the drive as read-only in CloudDrive, it reports that it can't because the drive is part of a DrivePool (which it was). So CloudDrive can evidently see at least the pool folder on the drive. Given that, I don't know why Explorer can't see any of it.
I had the data replicated on two separate GDrive accounts, and both drives are behaving the same way, so I've effectively lost all the data unless I can fix this. The files are there, but I can't access them. I've also run chkdsk /f /r, and I tried resizing one of the two drives down to 50 TB (the original size was 64 TB) in case Windows was having trouble because the NTFS volume was too large. Nothing changed, except that Explorer correctly reported that a larger percentage of my (now smaller) drive was in use.
Edit: I should note that most of the file recovery programs I've tried can see all of the files that should be there, but I can't use them to recover because I have ~25 TB of data and no local drive large enough to recover to. Is there a program that can recover in place?
I imagine the problem isn't even with the files themselves, but with some kind of index or partition table that should tell Windows where to look; I just don't know how to recover that. Also, the recovery programs see all the proper files within seconds, so they're not performing a full bit-by-bit scan and guessing at the files from raw data.
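One quick sanity check along those lines: if the volume's boot sector is still a valid NTFS boot sector, the damage is likely in the directory metadata rather than the raw data, which would fit the recovery tools seeing everything instantly. A minimal sketch (the drive letter in the comment is an example, not your actual setup):

```python
# Minimal sketch: check whether a 512-byte buffer looks like an intact NTFS
# boot sector. A healthy NTFS volume carries the OEM ID "NTFS    " at byte
# offset 3 and the 0x55AA boot signature at offset 510.

def looks_like_ntfs(boot_sector: bytes) -> bool:
    """Return True if the buffer resembles an NTFS boot sector."""
    if len(boot_sector) < 512:
        return False
    return (boot_sector[3:11] == b"NTFS    "
            and boot_sector[510:512] == b"\x55\xaa")

# On Windows you would read the first sector of the CloudDrive volume
# (administrator rights required; the drive letter is illustrative):
#   with open(r"\\.\E:", "rb") as vol:
#       print(looks_like_ntfs(vol.read(512)))
```

If the signature checks out, the next suspects are the MFT and the volume's index records, which is chkdsk territory rather than anything a re-attach will fix.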
  2. Yep, tried all of that, and the drive still doesn't show any folders or files in Explorer. Windows says some space is being used, but once I open the drive, it says it's empty.
  3. I'm one of the many people who lost a ton of data on GDrive recently, but luckily I had it all duplicated on another GDrive account. Is there an easy way to duplicate an entire CloudDrive to another account? If I were to share all the raw CloudDrive files from one Google Drive account to another, then write a Google Apps Script to copy the entire folder so the new account gets ownership of an exact copy, would I be able to mount the new copy via CloudDrive? I'm trying to figure out whether the duplication can happen entirely on Google's infrastructure so I don't have to download 40 TB of data and re-upload all of it.
  4. Chris, I'm trying to go through this now, and no matter how many times I clear the local cache, it doesn't seem to want to get rid of the last 43.8 MB. It also shows 632 KB waiting to upload but doesn't upload anything.
  5. Is it possible/likely that we will have API access to DrivePool/CloudDrive in the near future? My setup has only a single SSD, and I've attached multiple CloudDrives to the system. The CloudDrives are put into a DrivePool with duplication enabled. Since all the caches sit on a single physical SSD, tasks that read from or write to it slow to a halt when duplication begins. That generally isn't an issue for me, but I'm also mirroring everything in the DrivePool directly to a Google Drive via Drive File Stream for unencrypted access. I run this sync once a week, and it can take a while, so I'd like to tap into DrivePool from a script and tell it to hold off on duplication and balancing until the script finishes mirroring. I know this sounds complex and like a niche use case, so it's probably not high on the list of priorities to add API access, if at all, but I wanted to ask. Thanks!
  6. I currently have two pools set up with three CloudDrives, and I'm wondering whether I need to use balancing at all, and if so, which balancers. The first pool (I'll call it the "slave" pool) is made up of two CloudDrives on separate GDrive accounts (the "slave" accounts). The second pool (the "master" pool) is made up of a single CloudDrive on a different account plus the "slave" pool. The "master" pool is set to 2x duplication, while the "slave" pool has no duplication. So basically, my "master" CloudDrive gets all the files, and each "slave" CloudDrive gets half of the duplicates. I hope that makes sense... Anyway, I'm wondering if I need balancers enabled at all, since I have no real hardware to worry about. If some balancers are recommended, which ones, and on which pool? For example, should I enable balancers on both the "slave" pool and the "master" pool, or just the "master"?
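For anyone following along, the placement this nested-pool layout implies can be sketched as a toy model: 2x duplication at the master level puts one copy on each master-pool member, and the slave pool (no duplication) forwards its copy to exactly one slave drive. This is an illustration of the described layout, not DrivePool's actual placement code; the round-robin picker is a stand-in for whatever DrivePool actually chooses.

```python
import itertools

# Toy model of the nested-pool layout described above: the "master" pool
# contains the master CloudDrive plus the "slave" pool. 2x duplication at
# the master level puts one copy on each member; the slave pool (no
# duplication) lands its copy on a single slave drive.

def place_file(name: str, slave_picker) -> dict[str, list[str]]:
    placement = {"master_drive": [], "slave1": [], "slave2": []}
    placement["master_drive"].append(name)      # first duplicate: master drive
    placement[next(slave_picker)].append(name)  # second duplicate: one slave drive
    return placement

# Stand-in for DrivePool's placement decision inside the slave pool:
picker = itertools.cycle(["slave1", "slave2"])
result = place_file("movie.mkv", picker)
```

Every file ends up with exactly two copies on two different accounts, which is why there is little for a balancer to optimize here beyond evening out free space between the two slave drives.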
  7. I googled and searched the forums for something similar but haven't found it, so I hope this is the proper place to ask: would it be possible for DrivePool and CloudDrive to integrate even further in the near future, so that a pool composed of CloudDrives could split a single file's chunks across different CloudDrives? The main purpose would be to achieve higher than 75 Mbps uploads by uploading to multiple GDrive accounts simultaneously, with the added benefit of further privacy because no single account would contain a complete file (although this is a bit of a reach, since a fully encrypted drive doesn't leave much reason to further secure anything). As an example, I'm rebuilding my Plex library and have access to 3 GDrive Suite accounts I can split the data between. Normally, as a new file is uploaded, the entire file is pushed to one account (assuming no duplication). Since I have to limit uploads to 75 Mbps, this creates a bottleneck on my VPS, which means I have to pause further downloads and other space-intensive applications until CloudDrive makes room by finishing some amount of the upload. If CloudDrive could split a single file between two accounts (half of the chunks on one drive, half on another), it would double the rate at which we could upload.