Covecube Inc.
tomm1ed

CloudDrive not releasing Reserved clusters after upload

Question

Running Windows 2012 R2 Essentials, fully patched.

CloudDrive 1.0.0.687 with a 1TB OneDrive (Office 365).

Cache set to 1GB Expandable, on a drive with a little over 50GB free.

 

When I start uploading to the drive, it starts filling up the cache drive, which is to be expected. However, I also expected CloudDrive to release this extra reserved space after the upload had finished. Unfortunately, it does not:

 

C:\Windows\system32>fsutil fsinfo ntfsinfo g:
NTFS Volume Serial Number :       0x0810ed7310ed67e0
NTFS Version   :                  3.1
LFS Version    :                  2.0
Number Sectors :                  0x0000000057544fff
Total Clusters :                  0x000000000aea89ff
Free Clusters  :                  0x0000000000f16f40
Total Reserved :                  0x0000000000974b50
Bytes Per Sector  :               512
Bytes Per Physical Sector :       512
Bytes Per Cluster :               4096
Bytes Per FileRecord Segment    : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x0000000000400000
Mft Start Lcn  :                  0x00000000000c0000
Mft2 Start Lcn :                  0x0000000000000002
Mft Zone Start :                  0x000000000a8ba8a0
Mft Zone End   :                  0x000000000a8c70c0
Resource Manager Identifier :     9FB4D91C-1370-11E6-80C1-70106F3E90B9
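For reference, those hexadecimal cluster counts can be decoded into sizes. A quick sketch (the cluster counts are copied from the output above, and the cluster size matches the reported "Bytes Per Cluster" of 4096):

```python
# Decode the hex cluster counts reported by "fsutil fsinfo ntfsinfo" above.
# Each count is in clusters; multiply by Bytes Per Cluster to get bytes.

BYTES_PER_CLUSTER = 4096  # "Bytes Per Cluster" from the output above

def clusters_to_gib(hex_clusters: str) -> float:
    """Convert a hexadecimal cluster count to GiB."""
    return int(hex_clusters, 16) * BYTES_PER_CLUSTER / 2**30

free_gib = clusters_to_gib("0x0000000000f16f40")      # Free Clusters
reserved_gib = clusters_to_gib("0x0000000000974b50")  # Total Reserved

print(f"Free:     {free_gib:.1f} GiB")     # roughly 60 GiB
print(f"Reserved: {reserved_gib:.1f} GiB") # roughly 38 GiB held as "reserved"
```

So nearly 38 GiB is tied up as reserved clusters here. Roughly speaking, the free space Explorer reports is the free cluster count minus the reserved count, which is why the drive can look nearly full even when the files on it are small.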

 

As soon as I detach the CloudDrive, the reserved space is released, and I can re-attach the drive and everything is still there. What am I doing wrong? (Or have I run into a bug in CloudDrive, perhaps?)

 

Same issue with 1.0.0.702. Any insight here? I can understand that the clusters stay reserved until everything has been uploaded and verified, but they are never freed unless I detach and reattach. Do I need to have a dedicated drive the same size as the CloudDrive?

This isn't an issue with StableBit CloudDrive per se.

We're not doing anything "out of the ordinary", really, but NTFS is taking up a massive amount of reserved space. We're not exactly sure why, as this essentially shouldn't be happening. It is something that Alex (the developer) does want to look into, but it's not a high priority.

The reason it's not is that, after a while or after rebooting, the reserved space should drop back down to reasonable levels. So the issue does "fix itself".


Left it overnight and, lo and behold, the reserved space did indeed clear up :D Thanks for the response, Christopher.


Glad to hear it.  

 

And yes, it is absolutely annoying and frustrating, because we're not doing anything "weird" here. We *are* using sparse files, but in a completely supported way.

So basically, it's a weird glitch/behavior with NTFS and sparse files. We're aware of it, and we're sure Microsoft is too.
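For readers unfamiliar with sparse files: a sparse file's logical size can be far larger than what is actually allocated on disk. This little sketch illustrates the concept in POSIX terms (it is not CloudDrive's code; on NTFS the equivalent is done through the Windows sparse-file APIs, and the file must be marked sparse explicitly):

```python
import os
import tempfile

# Create a file with a 100 MiB "hole": seek far past the start and write a
# few bytes. On most POSIX filesystems the hole consumes no disk space.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(100 * 2**20)  # seek 100 MiB past the beginning...
    f.write(b"end")      # ...and write 3 bytes, leaving a hole behind
    path = f.name

st = os.stat(path)
logical = st.st_size            # size reported by "properties" on the file
allocated = st.st_blocks * 512  # space the volume actually allocated

print(f"logical size:   {logical} bytes")
print(f"allocated size: {allocated} bytes")
os.remove(path)
```

The discrepancy the thread is about is the reverse situation: NTFS keeping clusters *reserved* for a sparse cache file long after they are needed, so the allocated-plus-reserved space balloons even though the file's real contents stay small.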


Is this why the new SSD drive I bought specifically for a cache keeps filling up and dismounting my drives? Right now the drive only has 400MB on it if I select all folders (hidden folders included) -> right click -> properties. But if I open "My Computer", it shows the drive as having only 4GB free (it is a 120GB SSD), and every morning when I wake up my cloud drives are dismounted (after a nightly backup). The nightly backup is not backing up more than 120GB, or anywhere near it.

 

If this is the cause, is there no way I can fix it? This did not happen when the cache was on my (slow) RAID array, but there was about 2TB free on that.


I'm not sure about the dismounting here, but if the actual disk space used doesn't match, the "fsutil fsinfo ntfsinfo x:" command (where "x:" is the drive being used by the cache) will report the "reserved" space.

And yeah, this seems to be very common. Letting it sit or rebooting the system does seem to help, though.
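If you want to keep an eye on the reserved space over time, the relevant fields can be scraped out of that command's output. A hedged sketch (the `sample` text here is copied from the first post; in practice you would capture the live output, e.g. with `subprocess.run(["fsutil", "fsinfo", "ntfsinfo", "x:"], ...)` on Windows):

```python
import re

# Sample ntfsinfo output, copied from the first post in this thread.
sample = """\
Free Clusters  :                  0x0000000000f16f40
Total Reserved :                  0x0000000000974b50
Bytes Per Cluster :               4096
"""

def ntfs_field(text: str, name: str) -> int:
    """Extract a numeric field (hex or decimal) from ntfsinfo output."""
    m = re.search(rf"{re.escape(name)}\s*:\s*(0x[0-9a-fA-F]+|\d+)", text)
    if m is None:
        raise KeyError(name)
    return int(m.group(1), 0)  # base 0 handles both "0x..." and decimal

cluster_size = ntfs_field(sample, "Bytes Per Cluster")
reserved_gib = ntfs_field(sample, "Total Reserved") * cluster_size / 2**30
print(f"Reserved: {reserved_gib:.1f} GiB")
```

Logging this value periodically would show whether the reserved space really does drain on its own, or only ever climbs until the cache drive fills.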

 

 


 

If we are talking about the same issue (sorry if not): my new SSD cache drive slowly fills up, with little activity, until it is full (though it isn't full at all if you check the folders on the drive, as mentioned), and then CloudDrive forcefully dismounts my 2 drives due to the cache drive being out of space.

 

I did not see this when the cache was on the RAID array, but again, that had several TBs free. I'm seeing this every few days with the cache moved to a 120GB SSD, however.

 

It makes CloudDrive terribly unpredictable and unstable.


Microsoft Windows [Version 6.1.7601]

Copyright © 2009 Microsoft Corporation.  All rights reserved.
 
C:\Users\Administrator>fsutil fsinfo ntfsinfo z:
NTFS Volume Serial Number :       0x70dc1bd6dc1b9586
Version :                         3.1
Number Sectors :                  0x000000000df937ff
Total Clusters :                  0x0000000001bf26ff
Free Clusters  :                  0x0000000001becaca
Total Reserved :                  0x0000000002002160
Bytes Per Sector  :               512
Bytes Per Physical Sector :       512
Bytes Per Cluster :               4096
Bytes Per FileRecord Segment    : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length :           0x0000000000200000
Mft Start Lcn  :                  0x00000000000c0000
Mft2 Start Lcn :                  0x0000000000000002
Mft Zone Start :                  0x00000000000c0200
Mft Zone End   :                  0x00000000000cca00
RM Identifier :                   A2098C65-70BC-11E6-9CFE-005056C00008
 
 
Looks like my cache drive was completely filled with reserved data, and my drives were forcefully dismounted last night. It has been sitting like this for hours, and the reserved amount has not gone down.
 


Is there no workaround for this? Waiting seems to do nothing; the cache drive fills up before any clusters are released, at least, and I cannot reboot my server every 24 hours. Was buying a 120GB SSD to use as a cloud drive cache a complete waste of money?


At this point, not really; just a reboot or waiting, as the issue is with NTFS.

However, Alex does plan on looking into the issue to see if there is something we can do to help prevent this from happening.

But for now, "was it a waste"? I wouldn't say it was a waste, as SSDs are always nice to have.

However, if the reserved space is causing issues (and it sounds like it is), then yeah, it may not be usable for StableBit CloudDrive right now.


That is very, very unfortunate. Is there an issue open for Alex's investigation?


Agreed. Though, again, the emphasis is that this is an issue with how NTFS is handling things, and it may be something outside of our control. Though the dismounting that you mentioned shouldn't happen...

 

 

I believe this was flagged already, but just in case, I've reflagged it. 

https://stablebit.com/Admin/IssueAnalysis/27177


But the drive is being forcefully dismounted because the cache drive is "full" and new data cannot be downloaded from Google. After several of these errors, CloudDrive forcefully dismounts the drive. Isn't this expected behavior?


Yup. And that's why I posted the issue (it was a major focus in the posting).

Specifically, I have this in the ticket:

 

However, it seems to be filling up the disk, and then causing CloudDrive to dismount due to a lack of free space.


Not sure if this is relevant to this particular thread, but I was having a similar issue with my cache drive filling up. I had the Local cache size set to 1.00 GB and left the Cache type setting at its default, Expandable (recommended). I wanted my local cache to stay at or around 1.00 GB because the only other drive in my server is the main OS drive, a 160GB SSD, and it's the only drive that was available to select as a cache drive in the UI when I created the CloudDrive volume. I only use the CloudDrive to store backups, so a large local cache for storing frequently accessed files doesn't apply to me and isn't needed.

When I started my backup, it blew through that 1 GB very quickly and continued to grow well beyond the 1 GB setting. My chunks are set to 100 MB, and since my upload bandwidth couldn't keep up with how fast the backup was creating chunks, naturally they piled up. But I wanted CloudDrive to throttle the backup, not the other way around, where disappearing drive space and ultimately low drive space cause CloudDrive to throttle itself.

Something that was never suggested in this thread, or in the others I came upon with a similar issue: my fix was to set the Cache type to Fixed. Now my local cache hovers around 1 - 1.2 GB. Once the chunks are uploaded, the cache is flushed and the Local cache size stays capped at where I set it.

 

/cheers


Any updates on this? I'm sitting on 80GB of reserved clusters that are about to knock all of my drives offline (due to the SSD filling up). There has been no activity on the cloud drives since mounting, other than pinning data (which seems to run over and over an awful lot), and the reserved clusters have not decreased once this week. I've been checking them every time I am on the server, and they always increase, even after days of no activity on the cloud drive.

 

I understand this is an NTFS issue, as per our previous conversation, but issue #27177 was created to investigate whether the issue could be avoided, I think.

 

PS: I've tried every cache combination available, I think. I'm currently using a 1GB fixed cache on each of 2 drives, yet there are still 80GB of sparse files clogging my drive :( :(

 

I invested in CloudDrive and, after promising (but SLOWWW) results, invested in a dedicated 120GB SSD cache specifically for CloudDrive. Speeds increased DRAMATICALLY, and then I ran into this :(

 

Edit: currently running .725 btw


Not sure if this is relevant to this particular thread but I was having a similar issue with my "cache" drive filling up. [...] my fix was to set the Cache type to Fixed. [...]

 

You want to set a "fixed" size cache, then.

"Expandable" means that it will expand past the limit until the underlying disk is mostly full. That's not what you want, from the sounds of it.

"Fixed" will hard-limit the size and throttle the speed of the CloudDrive disk when it gets full. Though I'd recommend 10GB for the cache if you're using a fixed size (otherwise, it will likely throttle immediately and always). The fixed size will NEVER grow past the specified size.

 

Any updates on this? Sitting on 80GB of reserved clusters that are about to knock all of my drives offline (due to the SSD filling up). [...] Edit: currently running .725 btw

 

No, sorry.

Right now, Alex has been going through StableBit DrivePool and StableBit Scanner tickets, as we have a large backlog of issues for those products, and he is dealing with a big issue with StableBit CloudDrive (specifically, the Google Drive provider).

I'll mention this to him again, but it may be a while still.

Specifically, there may not be anything we can do about the reserved size, as this really is an issue with sparse files and NTFS. Basically, it's a file system issue. Any investigation will basically be into whether we can send commands to the file system to flush the data and return it to the normal size.

But I'll push this as much as I can.
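For the curious, one candidate for that kind of "command to the file system" is the FSCTL_SET_ZERO_DATA control code, which asks NTFS to deallocate a byte range of a sparse file and return those clusters to the volume. To be clear, this is a hypothetical illustration of the sort of call that could be investigated, not anything CloudDrive is confirmed to do. A Windows-only ctypes sketch:

```python
import sys

# FSCTL_SET_ZERO_DATA = CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 50,
#                                METHOD_BUFFERED, FILE_WRITE_DATA)
FSCTL_SET_ZERO_DATA = 0x000980C8

def zero_range(path: str, start: int, end: int) -> None:
    """Ask NTFS to deallocate [start, end) of a sparse file (Windows only)."""
    if sys.platform != "win32":
        raise OSError("FSCTL_SET_ZERO_DATA is an NTFS/Windows control code")

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.windll.kernel32
    GENERIC_WRITE, OPEN_EXISTING = 0x40000000, 3

    handle = kernel32.CreateFileW(path, GENERIC_WRITE, 0, None,
                                  OPEN_EXISTING, 0, None)

    class FileZeroDataInformation(ctypes.Structure):
        # FILE_ZERO_DATA_INFORMATION from winioctl.h
        _fields_ = [("FileOffset", ctypes.c_longlong),
                    ("BeyondFinalZero", ctypes.c_longlong)]

    info = FileZeroDataInformation(start, end)
    returned = wintypes.DWORD(0)
    ok = kernel32.DeviceIoControl(handle, FSCTL_SET_ZERO_DATA,
                                  ctypes.byref(info), ctypes.sizeof(info),
                                  None, 0, ctypes.byref(returned), None)
    kernel32.CloseHandle(handle)
    if not ok:
        raise ctypes.WinError()
```

Whether issuing this against the cache file would actually shrink the "Total Reserved" counter is exactly the kind of question issue #27177 would need to answer.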
