
Files deleting themselves..


darkly

Question

I probably won't be able to reproduce this anytime soon, but some directories have entirely disappeared from my drive. I think they were files in the upload queue, but I can't be sure. I had 1TB+ in the upload queue this morning before I got API banned, with around 600GB left. It's been sitting like that for a while now until the upload can resume, but I was accessing a few directories here and there and found that some folders were completely missing. I know every folder was present when I first completed the file transfer (not the upload), because I checked the properties of the entire directory against the properties of the original source and they had the same size and number of folders/files.

I'm not going to have this much data in the queue any time soon, so it's going to be rather difficult to replicate.

 

 

EDIT

I've been rescanning my library in Plex every day, and I keep seeing random files disappear: 4 episodes from one season of a show, the first episode of a season of another show, 3 entire seasons of another show, etc. I confirmed the total number of files copied immediately after copying, so somewhere between the time the files sit in the upload queue and when they finally get uploaded, files are getting deleted.

 

EDIT 2
I sent in a file system log. I know the instructions say "It is important that you perform this step and perform it quickly after reproducing the issue" but as I don't know what triggers the issue or when it happens, my only choice was to leave it running for a while. Hopefully it captures something that catches your eye. I've had a handful of files disappear again today as well.

 

EDIT 3

Just adding that I know for a fact it's not just files in the upload queue anymore. Some of the files that disappeared were from my first batch, which I not only copied over but also let upload completely before verifying that I had the same number of files as the original.


Recommended Posts


CloudDrive simply doesn't interact with the drive at the filesystem level, though. It isn't even, generally, aware of which files are being accessed on the drive. Is there some issue with the drive integrity? Does chkdsk report damage or anything? CloudDrive just creates a drive image. Files going missing without any sort of problem with the integrity of that image points to a problem outside of CloudDrive; it just doesn't work that way. If CloudDrive were losing data, it wouldn't be in the form of files. It would be random sectors and clusters of the drive.


Not sure what else would be deleting files then. There's little to no software installed on this computer other than Plex, some management tools, and CloudDrive. All I'm sure of is that I copied files over, and after they were transferred (and after they were fully uploaded as well) I checked to make sure there were the same number of files as in the original directory. Then a few days went by, and I started noticing the missing files.

This drive is the rather sizable ReFS volume I was referring to in other posts, so no chkdsk. That's one of the reasons this is such a big deal for me.


srcrist is correct, StableBit CloudDrive doesn't actually do anything with files.

 

It creates the block-based drive and manages the raw disk data (including encryption). It doesn't read, write, or modify files on the disk. With one exception: pinning will read the file system for NTFS info and directory entries.

 

Otherwise, something else would have had to be accessing the data and deleting it. 

 

The security auditing thing that srcrist mentioned would be a really good idea.
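
For reference, file-delete auditing can be switched on roughly like this from an elevated PowerShell prompt. This is just a sketch: X: stands in for the CloudDrive volume, and the audit rule is scoped to "Everyone" for simplicity.

# Turn on the "File System" audit subcategory so delete attempts land in the Security event log
auditpol /set /subcategory:"File System" /success:enable

# Add an audit rule (SACL) for delete operations on the CloudDrive volume (X: is an assumption)
$acl  = Get-Acl -Path 'X:\' -Audit
$rule = New-Object System.Security.AccessControl.FileSystemAuditRule('Everyone', 'Delete, DeleteSubdirectoriesAndFiles', 'ContainerInherit, ObjectInherit', 'None', 'Success')
$acl.AddAuditRule($rule)
Set-Acl -Path 'X:\' -AclObject $acl

# Later, pull the recorded delete events (4660 = object deleted, 4663 = delete access requested)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4660, 4663 } | Select-Object -First 50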


Couldn't this issue be caused by errors in writing the raw disk data to gdrive (or issues retrieving it and putting it back together properly)? Almost every day, random files or sometimes entire directories of files just disappear. I have nothing installed on this system that could do anything like this. Whenever larger batches of data go missing, I usually find out soon after I have some issue with the program, like drives refusing to write to the cloud for extended periods, drives erroring out and dismounting, drives failing to detach, drives reindexing, etc. With this last batch I lost over 400 files. It's not just the files that are gone, either; the entire directory that contained them is missing as well. This happened immediately after remounting the drive following a restart of the machine due to CloudDrive being unresponsive (but no, these weren't newly added files sitting in an upload queue; these files have been on gdrive since the first import over a week ago).

 

EDIT: The number of files that actually disappeared is upwards of 700.

 

EDIT 2: Interestingly, out of my two drives, only this one (ReFS) has lost data. The other drive (NTFS, partitioned into 8 parts and pooled using DrivePool) has not lost any files (to my knowledge), though the ReFS drive does hold thousands more files than the NTFS drive.


You could also try running Procmon for a while to capture which processes are deleting anything.  The only thing you need to capture is filesystem events, and make sure to check "Drop filtered events" under the Filter menu.  Then, after running for some time, stop capturing (or continue, it's up to you) and search for "Delete: True".  The first entry you find should be the result of a SetDispositionInformationFile operation.  Right-click the cell in the Detail column and select "Include 'Delete: True'".  This will filter to every deletion event.  Search the Path column for a file you didn't expect to be deleted; the Process Name column will show which process set the file for deletion.

 

If you have no idea how to use Process Monitor, there are plenty of quick tutorials on the web.  Good luck.
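
If you'd rather not babysit the GUI, recent Process Monitor builds can also be driven from the command line, something like the sketch below. Double-check the switches against procmon /? on your version; the log path is just an example.

# Start capturing in the background, writing events to a native PML backing file
# (run from the folder where Procmon.exe lives, or put it on your PATH)
.\Procmon.exe /AcceptEula /Quiet /Minimized /BackingFile C:\Logs\clouddrive-deletes.pml

# When you're done, stop the capture and flush the backing file, then open the .pml in the GUI and filter on "Delete: True"
.\Procmon.exe /Terminate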


No, because all of the data is cached first.  It's kept locally, and then it's uploaded.

 

Also, it's uploaded in chunks, not as individual files.  So, if GDrive were deleting stuff, you'd see some nasty behavior (file system corruption, the disk showing up as RAW, etc.).

 

It really does sound like something is deleting the files.  And chiamarc is right, procmon is probably the best solution here, as it will sit and watch what is going on.  And then you may be able to see the process that is causing this. 

 

 

Otherwise, if you think something is triggering this, then enable logging and try to reproduce it. 

http://wiki.covecube.com/StableBit_CloudDrive_Drive_Tracing


The issue is that, as far as I can tell, the files are missing AT THE MOMENT the drive is remounted. I don't recall ever seeing a file go missing while the drive was mounted. If something were going on during the mounting process (yes, I know, not possible, but IF; say it was a physical drive: I put files on there, disconnected the drive, put it into another computer, deleted 5 files, and mounted it back on the first computer), there would never be any delete events to catch in any auditing or logging software. I'll try to see if I can catch anything in Procmon, but given the software I have on this machine and how it's set up, I just don't see anything else deleting these.

I'll update if I find something (or if I don't)


Just wanted to chime in here that I experienced something similar earlier this year and made a post about it. Whenever I added files, random files/folders would disappear from the drive. I can also confirm that it wasn't just files in the upload queue disappearing, and it definitely wasn't some automated software deleting my files, since I didn't have anything like that running. I still don't know exactly what caused it, but I've since deleted the entire drive and made a new one, since I figured something was wrong with the drive and didn't want to keep hunting for randomly missing files. So far I haven't come across the problem again.


Just posting my exact testing process here. Since most (well, all that I clearly remember at this point) of the times I found files deleted were after remounting the drive, I'm going to do this until I can pinpoint the issue:

 

  1. Get the file/folder count and total size of the drive from its properties (of course waiting until this fully updates each time, because someone's bound to comment about this).
  2. Screenshot this alongside the current date and time.
  3. Detach the drive.
  4. Attach the drive.
  5. Get the file/folder count and total size of the drive again (again, of course, waiting for it to read the whole drive).
  6. Compare with the screenshot.
  7. Check Procmon for any events on that drive's path (currently filtered to only show deletes).
  8. Repeat.

If anyone sees any holes in this, let me know. (A rough script for the count/compare steps is sketched below.)
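
To take the eyeballing out of steps 1, 5, and 6, here's a rough PowerShell sketch that snapshots the counts to a CSV on each run. X: and the CSV path are just placeholders.

# Snapshot the file/folder count and total size of the drive (X: is an assumed drive letter)
$root  = 'X:\'
$items = Get-ChildItem -Path $root -Recurse -Force -ErrorAction SilentlyContinue
$files = $items | Where-Object { -not $_.PSIsContainer }
[pscustomobject]@{
    Timestamp = Get-Date
    Folders   = ($items | Where-Object { $_.PSIsContainer }).Count
    Files     = $files.Count
    SizeBytes = ($files | Measure-Object -Property Length -Sum).Sum
} | Export-Csv -Path "$env:USERPROFILE\clouddrive-counts.csv" -Append -NoTypeInformation

Run it before detaching and again after reattaching, then compare the last two rows.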


I just found something interesting. The cloud drive that is losing data has only 1 directory at the root. If I look at the properties of that directory, it says its size is 3.71 TB. If, on the other hand, I look at the properties of the disk itself, it reads as having 5.15 TB of used space. Is that normal?



Yes, that can be normal.  VSS snapshots, for instance, will take up a lot of disk space while appearing to be invisible.

 

Also, "orphaned data" could do that, as well (orphaned, meaning it's no longer "used" by the file system but never marked as "unused")

 

Also, ... well, anything that applies to "other" data in DrivePool could account for this as well.
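
If you want to rule VSS in or out, an elevated prompt should show what's being held on the volume with something like this (X: standing in for the drive in question):

# List any shadow copies that exist for the volume, and the space reserved for shadow storage
vssadmin list shadows /for=X:
vssadmin list shadowstorage /for=X: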



Well, I don't have shadow copies turned on, I haven't deleted any data myself that would cause anything to no longer be used by the filesystem, and this drive hasn't been touched by DrivePool. The only thing that would account for this is whatever mystery thing is deleting my data.


 

Regardless of whether you have them enabled or not, the drive WILL use them.

 

Things like CHKDSK actually use VSS to create shadow copies and then check those; it's how it scans online.

 

 

 

And if you have StableBit Scanner installed, it runs a CHKDSK pass once a month (that's the file system scan), so it may be that it created the shadow copy and never removed it.

 

Shadow copies (VSS) are also used by backup software *and* the "Previous Versions" feature.

 

 

 

 

As for the deletes: moving counts as deleting, as it copies the data and then deletes the old copy.

And if the data was in the pool, this would happen as well. 

 

But the advice about using procmon or the like is spot on. 

 

Also, I'm not sure it will work here, but .... try this:

 

"fsutil usn readjournal x: > usn.txt"

where "x:" is the clouddrive disk in question,
 
Either check the file yourself, or submit it to us:
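
If you do go through the dump yourself, filtering it for delete reasons is probably the quickest way in. A rough sketch (the exact reason text in the output can vary by Windows version):

# Pull out journal records whose reason flags mention a delete, with a few lines of leading context
Select-String -Path .\usn.txt -Pattern 'delete' -Context 4,0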

darkly

Are you using Sonarr with PLEX?

If you are, go into Sonarr and assign a Recycling Bin in Settings (with Advanced Settings shown), at the bottom.

I had that problem also. Was confused and thought the same thing. Sometimes Sonarr was bad for that, replacing a good file with a bad one.

If not: I have used CloudDrive with DrivePool, missed a setting, and had my files uploaded to the cloud.  When I disconnected, I panicked when Sonarr and Plex found holes showing the files as unavailable.  It was because DrivePool was set wrong.

You have to ensure in the Balancing area that the CloudDrive will only store duplicated files. You must (MUST) uncheck the Unduplicated block for the cloud drive if you want Sonarr and Plex to behave...


Ah yeah, definitely use the recycle bin feature. I've found that it's saved my bacon a few times.

 

As for bad versions of the files: on the "Indexers" tab in the settings, enable the "advanced settings" there as well, and there are terms you can add that it "must not contain". This is useful for filtering out spam content.


Not using Sonarr. Also, the drive is ReFS, so chkdsk is not a factor. I've also never moved anything out of this drive, only added files to it.

 

You're using ReFS; did you enable file integrity for the entire disk manually?

 

If so, then try the following from PowerShell:

Get-Item -Path 'E:\*' | Get-FileIntegrity

Use the drive letter for the disk in question, rather than E:.  
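
(If you want to check more than the root, a recursive variant along these lines should work; a sketch that assumes the same Storage-module cmdlets, with E: again standing in for the drive in question:)

# Walk the whole volume and report any files whose ReFS integrity streams are not enabled
Get-ChildItem -Path 'E:\' -Recurse -File -ErrorAction SilentlyContinue | Get-FileIntegrity | Where-Object { -not $_.Enabled }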

 

 

Otherwise, try running: 

Repair-Volume -DriveLetter E

Again, where E is the drive letter of the drive in question. 

 

 

Otherwise, grabbing the USN journal info may be helpful

 

Err, it should actually be: 

fsutil usn deletejournal x: > usn.txt

As this should list when and what files were deleted.



The first command just gave me the contents of the root of the drive (just one folder) and showed that integrity was enabled for it. It didn't list anything else.

 

I didn't try the second yet, as I'm moving some stuff out of that drive and don't want to mess with the data right now.

 

The third expects either a /D or /N flag before the pathname.


The only other answer I know of is setting up the file accounts so files can only be moved or deleted by the owner, and only accessing the drive with an account that is read-only.

 

Trust me, it's a pain in the butt but it works to ensure nothing gets moved or deleted unless it is by the owner.  I had a user on my system trying to delete things just because they were mad at me.  Learned my lesson and kicked them out of my house and off my system.



Sorry, the first command should have been:

Get-Item -Path 'E:\*' | Set-FileIntegrity -Enable $True

As for the fsutil, sorry, yes it does.

 

You'd want the /d flag, I believe. 

 

 

 

And yeah, restricting moves and deletes to the owner like that is a good method to make sure that nothing gets deleted.  But yeah, it is a PITA to set up.
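
For what it's worth, a deny entry on the delete rights is one way to do that from the command line. A rough sketch, where "MediaUser" is a made-up account name and X: the assumed CloudDrive letter:

# Deny delete and delete-child rights for a non-owner account, inherited by all files and subfolders
icacls X:\ /deny "MediaUser:(OI)(CI)(DE,DC)"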


The thing is, like I've said before, I've never seen files disappear WHILE the drive is mounted. Files and entire directories just never appear again after the drive mounts, following a dismount or a forced restart of my server due to bugs/crashes.


In attempting to copy my files out of this ReFS drive that keeps losing data, I've now encountered the first concrete piece of evidence that something is wrong with the drive. One of the files refuses to copy over, giving this error in the Windows copy dialog: "An unexpected error is keeping you from copying this file. If you continue to receive this error, you can use the error code to search for help with this problem. Error 0x80070143: A data integrity checksum error occurred. Data in the file stream is corrupt."
