
Reindexing Google Internal Server Errors


srcrist

Question

So my service crashed last night. I opened a ticket and sent you the logs to take a look at, so we can set that aside. But while it was reindexing, it got one of the Internal Server Error responses from Google Drive. Just one. Then it started reindexing the entire drive all over again; it was at chunk 4,300,000 or so when the error hit. Does it really have to do that? This wouldn't have been a big deal when this drive was small... but this process takes about 8 hours now every time it has to reindex the drive, and the error landed at around the halfway mark. Four hours of lost time is frustrating. Does it HAVE to start over? Can it not just retry at the point where it got the error? Am I missing something?

 

Just wanted to see what the thought was on this. 
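What's being asked for here is essentially checkpointed retry: keep the progress made so far and retry the failing chunk with backoff, rather than restarting the whole index. A minimal sketch of that idea, for illustration only (chunk_count, read_chunk, and index_chunk are hypothetical stand-ins, not CloudDrive's actual internals):

```python
import time

# Minimal sketch of resumable indexing: on a transient server error, retry
# the failing chunk with exponential backoff instead of restarting at chunk 0.
# All names here are hypothetical stand-ins, not CloudDrive's actual code.

class TransientServerError(Exception):
    """Stand-in for a 5xx response like Google Drive's Internal Server Error."""

def index_drive(chunk_count, read_chunk, index_chunk, max_retries=5):
    for chunk in range(chunk_count):
        for attempt in range(max_retries):
            try:
                index_chunk(chunk, read_chunk(chunk))
                break  # this chunk is done; progress is preserved
            except TransientServerError:
                time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
        else:
            # Only give up after repeated failures on the same chunk -- and
            # even then, a real implementation could checkpoint and resume here.
            raise RuntimeError(f"chunk {chunk} failed {max_retries} times")
```

With something like this, a single 500 response costs seconds of backoff instead of four hours of lost indexing.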


Recommended Posts


Other than this reindexing issue (assuming it'll get fixed eventually) and gdrive's upload caps (got plenty of time on my hands), any serious downsides to what I'm doing then? Basically, if there's no good reason to switch TO that method, are there any good reasons to get OFF OF this method?

 

Well, for peace of mind you could go with 55TB drives so that chkdsk is sure to work. Beyond that, though, I don't think there is a huge benefit. I don't use ReFS only because I had a ReFS drive corrupt to RAW from a Windows patch and lost several TB of data. But if ReFS is working for you, then chkdsk is unnecessary--ReFS is far more inherently resilient than NTFS.

One thing that is somewhat of an issue with the multiple-drive solution is that, with the way CloudDrive presently handles caching, multiple drives sharing the same cache drive can slow things down depending on the situation. It also multiplies the amount of cache *space* required, since you need an appropriately sized cache for each drive. I wish, at the least, that the download cache could be shared, so that it would be essentially the same as one cache for a single larger drive. Just some things to think about.

It does work, though. No complaints about performance. I'm using one 92TB drive (the remnant of the old 256TB drive) and two 55TB drives, all combined into one DrivePool drive. It'll be a long while before I need more space, but I'll probably just add another 55TB drive if I do.

 

If you haven't already, could you open a ticket about the issue at https://stablebit.com/Contact ?

 

And if you have, PM me the ticket number, so I can bump it.

 

 

 

Yeah, the new limits have been frustrating, to say the least. 

 

Yeah, Christopher. I'll open a ticket and PM you tomorrow. I downgraded back to 914, though, since it takes hours every time the system reboots and I don't want the server down that long. I haven't tested it on 914 yet; I'll do so before I open the ticket.

 

EDIT: BETA 914 also re-enumerates after every reboot; BETA 906 does not. For me, 906 doesn't re-enumerate even after a hard reboot, so if you are having this problem you may want to downgrade for now--particularly if you have a large drive that takes hours to enumerate.




 

I think the biggest concern I have is that I'm running Windows Server 2012 (R2 is not compatible with my hardware; last time I had it, it worked for a while until an update made it boot-loop and the OS was unrecoverable), which, if I've understood what I've read correctly, uses an older version of ReFS that was updated in either R2 or 2016. I'm not too filesystem-savvy, so I'm not sure of the technical details, but I'm guessing that if it was updated, there's a good reason for it... It wouldn't be hard to switch to pooled 55TB drives, but I'm concerned about how that might affect performance, with so many drives reading and writing from the same cache.

 

Here's an option for me, though. What if I created another G Suite account and made another drive of the same size off of its gdrive? Could I pool the two together and have it mirror everything between the two drives? Am I correct in assuming that I'd have to move everything again, since the files are currently on the first drive itself and not in the pool? That would at least save me if one of the drives did go bad like in your case, right?



@Darkly.

 

You can actually use a single large CloudDrive disk.  The caveat is that you'd have to manually partition it.

 

When creating a drive, under the "Advanced settings", there is an option to format the drive and assign a drive letter.  Uncheck this option.

http://stablebit.com/Support/CloudDrive/Manual?Section=Creating%20a%20Drive

 

Once the drive is created, it will show up as an uninitialized drive in Disk Management.  Initialize the drive, and create multiple partitions, each one smaller than 60TB.

 

This will create multiple "drives", but will use the single disk.  You can then pool these together.

 

This will minimize local disk usage, as you still only have the single drive mounted.

 

 

 

 

This works because the issue with the larger size is per volume (aka per partition).  So, by creating multiple partitions/volumes on the disk, you bypass the issue and only need the single CloudDrive disk.
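If you'd rather script the Disk Management steps above than click through them, here is a rough sketch driving diskpart from Python. The disk number and volume sizes are assumptions for illustration; verify the disk number in Disk Management first, since diskpart will happily wipe the wrong disk.

```python
import subprocess
import tempfile

# Rough sketch: initialize the new CloudDrive disk, then carve it into
# sub-60TB NTFS volumes. DISK_NUMBER, VOLUME_COUNT, and the 55TB size are
# assumptions -- check your own Disk Management numbering before running.

DISK_NUMBER = 2          # the uninitialized CloudDrive disk (assumed)
VOLUME_COUNT = 4         # how many sub-60TB volumes to carve out
SIZE_MB = 55 * 1024**2   # 55TB expressed in MB, as diskpart expects

lines = [f"select disk {DISK_NUMBER}", "convert gpt"]
for _ in range(VOLUME_COUNT):
    lines += [
        f"create partition primary size={SIZE_MB}",
        "format fs=ntfs quick",
        "assign",  # let Windows pick the next free drive letter
    ]

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(lines))
    script = f.name

# diskpart /s runs a script non-interactively (requires an elevated prompt).
subprocess.run(["diskpart", "/s", script], check=True)
```

Each resulting volume can then be added to the DrivePool pool, exactly as with separate physical disks.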



So reading and writing to this pooled disk would only trigger one set of cache read/writes in the CloudDrive service? Interesting. What about in conjunction with the last thing I said? Could I, for example, have 2 gdrive accounts, set up a 500TB clouddrive in each, configure each clouddrive with 10x50TB partitions, pool each set of 10 partitions together into 2x500TB clouddrives so that any data written to either one is split between all 10 partitions, then pool the two 500TB clouddrives into a 1PB clouddrive that copies all data onto both of the two 500TB sides on two different gdrives?
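For clarity, the layout being proposed is a two-level DrivePool hierarchy. A quick sketch of the capacity math, using only the sizes from the question and assuming "copies all data onto both sides" means DrivePool's x2 file duplication:

```python
# Capacity math for the proposed two-level pool, using the sizes above.

partitions_per_drive = 10
partition_tb = 50

drive_tb = partitions_per_drive * partition_tb   # 500TB pool per CloudDrive
top_pool_tb = 2 * drive_tb                       # 1PB raw across both accounts

duplication_factor = 2                           # every file kept on both sides
usable_tb = top_pool_tb / duplication_factor     # 500TB of unique data
print(drive_tb, top_pool_tb, usable_tb)          # 500 1000 500.0
```

In other words, the 1PB top-level pool would hold about 500TB of unique data once everything is mirrored across the two accounts.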

 

EDIT:

It just occurred to me: does creating multiple partitions inside one clouddrive address the original issue of this thread, the time it takes to reindex a large drive because the indexing process starts over every time it's interrupted? Is each partition indexed separately?



Ok, just consolidating some stuff here.

What I'd like to know is if there's a way to have a large cloud drive that is expandable in the future, but also avoid having multiple caches, and avoid having ridiculously long reindexing times. From what I can tell, that leaves two possible solutions, depending on how the software currently works, and what changes could be implemented in the future:

  1. Create a cloud drive with multiple partitions all added to the same pool, as suggested previously. Expand the disk as more space is needed, adding more partitions, and in turn adding those partitions to the pool.
    • This solution only works if a cloud drive with multiple partitions indexes each partition separately, so that progress is actually preserved when an index fails and restarts (per my edit in the previous post)
    • OR if what's been discussed above in this thread is implemented and indexing doesn't start over from the beginning with every failure
  2. Create multiple cloud drives and add those to a pool. Create more cloud drives and add those to the pool as needed.
    • This solution only works if some way is implemented for multiple cloud drives to share allocated space for a cache
      • An example problem if this is not the case: I have three pooled cloud drives of 50GB each, with a 10GB file on each drive, and I want my cache to be no larger than 15GB total. If I set each drive's cache at 5GB, I have 15GB of total cache, but none of the three 10GB files can ever be fully cached locally (see the sketch after this list).
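To put numbers on that constraint, a trivial sketch using the sizes from the example above (per-drive caches cannot pool their free space, so each drive is limited to its own slice):

```python
# Worked version of the example above: with per-drive caches, each drive
# only ever sees its own slice of the total cache budget.

TOTAL_CACHE_GB = 15
DRIVES = 3
FILE_GB = 10

per_drive_cache = TOTAL_CACHE_GB / DRIVES   # 5GB per drive
print(per_drive_cache >= FILE_GB)           # False: no 10GB file ever fits

# With a hypothetical shared cache, any one 10GB file would fit,
# since 10GB < 15GB of pooled cache space.
print(FILE_GB <= TOTAL_CACHE_GB)            # True
```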

So, with the current functionality of the software, does either of these solutions work? Are there any updates in progress that would make either of them work?



I am getting this same issue on 929.  Super frustrating... I love this software very much, but it is crashing once or twice a week, and then I have to reboot, and then it is an hour or so before everything downloads completely.

 

Where do I grab 930 to see if it fixes everything?  I checked for an update in the software, but it didn't say an update was available.


Crap -- and now it seems to be stuck in an infinite loop, "Synchronizing Cloud Drives..."

 Help!



I figured out how to grab 930, but the problem still persists.  Even on a restart, it is stuck in an infinite loop "Synchronizing cloud drives..." and the UI bounces back and forth between "This drive is attempting recovery" and "This drive is connecting".

 

Submitted a ticket on this too.

 

 

I would just go ahead and do the ticket process to troubleshoot. Whatever problem that is, it isn't the same thing I was having. Mine was just enumerating the files every time the system rebooted. 

