
darkly

Members
  • Posts: 86
  • Joined
  • Last visited
  • Days Won: 2
Everything posted by darkly

  1. Just wondering, since I run into this a lot when I need to reboot for troubleshooting (such as now, to install that update): I've noticed that if there is an upload queue when CloudDrive shuts down (an unsafe shutdown), it will try to resume uploading when it starts up again. Is there a way to safely shut down CloudDrive while there is an upload queue? By default it waits until all items are done uploading when you try to detach a drive, and it's not the most convenient thing to have to wait for a 500GB upload when I need to restart my server.
  2. What I don't get is that some of the disappearing data has been on this disk for AGES, so it's definitely been written to the cloud. In fact, almost all of it is data that had been sitting on there, fully uploaded, since the beginning.
  3. Reproducing the issue is hard. It seems to happen when I have to force CloudDrive to turn off by restarting the computer (due to it being unresponsive), or when CloudDrive closes on its own for other reasons and I remount. I have not seen the issue occur just from dismounting and remounting the drive. That itself creates another issue: the only way I know to reproduce the issue is to actually close out of CloudDrive entirely, which ends the logging. I mentioned this somewhere else, but I *am* running Windows Server 2012 (not R2). I remember reading somewhere that this used an earlier implementation of ReFS, before it was updated for Windows Server 2012 R2. Could that have anything to do with this? As I've said, my other two drives have (to my knowledge) never lost data, and they're both NTFS. I'll try to get some time to do some of the other things you suggested as well.
  4. I'm on the 929 beta on gdrive. As for triggers: basically, this whole post I've been talking about files disappearing when I remount after CloudDrive glitches/bugs force me to restart the server to make the service and UI operational again. I'm now trying to move the remaining files off the drive, and I've run into ~10 such files so far that throw that error while copying.
  5. In attempting to copy my files out of this ReFS drive that keeps losing data, I've now encountered the first concrete piece of evidence that something is wrong on the drive. One of the files refuses to copy over, with this error in the Windows copy dialog: "An unexpected error is keeping you from copying this file. If you continue to receive this error, you can use the error code to search for help with this problem. Error 0x80070143: A data integrity checksum error occurred. Data in the file stream is corrupt."
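     In case it helps with tracking these down, here's a rough PowerShell sketch that force-reads every file on the volume and logs the ones that throw read errors, so the corrupt files can be found up front instead of one at a time mid-copy. The X:\ drive letter and the log path are placeholders, not anything CloudDrive-specific, and it assumes a failed integrity check surfaces as a read exception the same way it does in the copy dialog:

         # Walk the volume and try to read every file end to end; collect the ones that fail.
         # 'X:\' and the log path are placeholders.
         $bad = foreach ($f in Get-ChildItem -Path 'X:\' -Recurse -File -ErrorAction SilentlyContinue) {
             $fs = $null
             try {
                 $fs  = [System.IO.File]::OpenRead($f.FullName)
                 $buf = New-Object byte[] 1048576
                 while ($fs.Read($buf, 0, $buf.Length) -gt 0) { }   # full read; a checksum error should throw here
             } catch {
                 $f.FullName                                        # emit the path of the unreadable file
             } finally {
                 if ($fs) { $fs.Dispose() }
             }
         }
         $bad | Out-File 'C:\temp\unreadable-files.txt'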
  6. The thing is, like I've said before, I've never seen files disappear WHILE the drive is mounted. Files and entire directories simply never appear again after the drive mounts following a dismount, or after I'm forced to restart my server due to bugs/crashes.
  7. The first command just gave me the contents of the root of the drive (just one folder) and showed that integrity was enabled for it; it didn't list anything else. I haven't tried the second yet, as I'm moving some stuff out of that drive and don't want to mess with the data right now. The third expects either a /D or /N flag before the pathname.
  8. Not using Sonarr. Also, the drive is ReFS, so chkdsk is not a factor. I've also never moved anything out of this drive, only added files to it.
  9. Well, I don't have shadow copies on, I haven't deleted any data myself that would cause anything to be no longer used by the filesystem, and this drive hasn't been touched by drivepool. The only thing that would account for this is whatever mystery thing is deleting my data.
  10. I just found something interesting. The cloud drive that is losing data has only 1 directory at the root. If I look at the properties of that directory, it says its size is 3.71 TB. If, on the other hand, I look at the properties of the disk itself, it reads as having 5.15 TB of used space. Is that normal?
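     For what it's worth, this is the kind of thing a quick PowerShell check can quantify: add up the file sizes under the root (including hidden/system files, which the Properties dialog can miss) and compare that to what the volume reports as used. X is a placeholder for the cloud drive's letter:

         # Compare the sum of file sizes on the volume to the volume's reported used space.
         # 'X' is a placeholder drive letter.
         $files   = Get-ChildItem -Path 'X:\' -Recurse -File -Force -ErrorAction SilentlyContinue
         $logical = ($files | Measure-Object -Property Length -Sum).Sum
         $vol     = Get-Volume -DriveLetter X
         $used    = $vol.Size - $vol.SizeRemaining
         "{0:N2} TB in files vs {1:N2} TB used on the volume" -f ($logical / 1TB), ($used / 1TB)

     Some gap is expected from filesystem metadata and allocation overhead; whether a gap of roughly 1.4 TB is normal is the real question.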
  11. Just posting my exact testing process here. Since most (well, all that I clearly remember at this point) of the times I found files deleted were after remounting the drive, I'm going to do this until I can pinpoint the issue:
      1. Get the file/folder count and total size of the drive from Properties (of course waiting until this fully updates each time, because someone's bound to comment about this).
      2. Screenshot this alongside the current date and time.
      3. Detach the drive.
      4. Attach the drive.
      5. Get the file/folder count and total size of the drive again (again, of course waiting for it to read the whole drive).
      6. Compare with the screenshot.
      7. Check Procmon for any events on that drive's path (currently filtered to only show deletes).
      8. Repeat.
      If anyone sees any holes in this, let me know.
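     For anyone following along, here's a rough script version of the snapshot step, so the before/after numbers come from a CSV instead of screenshots. The drive letter and output path are placeholders:

         # Record file/folder counts and total size as a timestamped CSV row.
         # Run once before detaching and once after reattaching, then diff the rows.
         # 'X:\' and the CSV path are placeholders.
         $root  = 'X:\'
         $items = Get-ChildItem -Path $root -Recurse -Force -ErrorAction SilentlyContinue
         $files = $items | Where-Object { -not $_.PSIsContainer }
         $snapshot = [pscustomobject]@{
             Taken      = Get-Date
             Files      = $files.Count
             Folders    = ($items | Where-Object { $_.PSIsContainer }).Count
             TotalBytes = ($files | Measure-Object -Property Length -Sum).Sum
         }
         $snapshot | Export-Csv -Path 'C:\temp\clouddrive-snapshots.csv' -Append -NoTypeInformation
         $snapshot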
  12. The issue is that, as far as I can tell, the files are missing AT THE MOMENT the drive is remounted. I don't recall ever seeing a file go missing while the drive was mounted. If something were going on during the mounting process (yes, I know, not possible, but IF; say it were a physical drive: I put files on it, disconnected the drive, put it into another computer, deleted 5 files, and mounted it back on the first computer), there would never be any delete events to catch in any auditing or logging software. I'll try to see if I can catch anything in Procmon, but given the software I have on this machine and how it's set up, I just don't see there being anything else deleting these. I'll update if I find something (or if I don't).
  13. Couldn't this issue be caused by errors in writing the raw disk data to gdrive (or issues retrieving it and putting it back together properly)? Almost every day, random files or sometimes entire directories of files just disappear. I have nothing installed on this system that could do anything like this. Whenever larger batches of data go missing, I usually find out pretty soon after I have some issue with the program, like drives refusing to write to the cloud for extended periods, drives erroring out and dismounting, drives failing to detach, drives reindexing, etc. With this last batch I lost over 400 files. It's not just the files that are gone either; the entire directory that contained them is missing as well. This was immediately after remounting the drive following a restart of the machine due to CloudDrive being unresponsive (but no, these weren't newly added files sitting in an upload queue; these files have been on gdrive since the first import over a week ago).

      EDIT: The number of files that actually disappeared is upwards of 700.

      EDIT 2: Interestingly, out of my two drives, only this one (ReFS) has lost data. The other drive (NTFS, partitioned into 8 parts and pooled using DrivePool) has not lost any files (to my knowledge), though the ReFS drive has thousands more files than the NTFS drive does.
  14. I'm running the 929 beta on Gdrive. The issue seems to have resolved itself. It does appear to be a bandwidth issue, but it's odd. I usually don't have problems with my read/write activity like this; e.g., I frequently stack many copy instances, copy while streaming, torrent while copying, etc. But CloudDrive seems to refuse to upload anything to gdrive if any other process is using bandwidth, rather than sharing it. If I'm copying a file, CloudDrive stops uploading. If I'm downloading a file, CloudDrive stops uploading. If one cloud drive is doing something, all other cloud drives stop uploading. I can't confirm that this holds 100% of the time (that whenever there is other I/O activity, CloudDrive stops), but every time I recall checking the converse (CloudDrive has stopped, and there is other I/O activity), it's been true. Is this normal?
  15. Data is just not uploading. When I check back every now and then, I see a stack of "thread was being aborted" errors. Also, when I try to save I/O performance settings, it says "cloud drive not found".
  16. Not sure what else would be deleting files then. There's little to no software installed on this computer other than plex, some management tools, and clouddrive. All I'm sure of is that I copied files over, and after they were transferred (and after they were uploaded fully as well) I checked to make sure there were the same number of files as in the original directory. Then a few days went by, and I started noticing the missing files. This drive is the rather sizable ReFS volume I was referring to in other posts, therefore no chkdsk. One of the reasons why this is such a big deal for me.
  17. The only other thing that has been accessing this particular drive since it was created is Plex when it scans for new media. And I've never had plex delete files without me explicitly telling it to before.
  18. Ok, just consolidating some stuff here. What I'd like to know is if there's a way to have a large cloud drive that is expandable in the future, while also avoiding multiple caches and ridiculously long reindexing times. From what I can tell, that leaves two possible solutions, depending on how the software currently works and what changes could be implemented in the future:
      1. Create a cloud drive with multiple partitions, all added to the same pool as suggested previously. Expand the disk as more space is needed, adding more partitions, and in turn adding those partitions to the pool. This solution only works if a cloud drive with multiple partitions indexes each partition separately, so progress can actually be made if indexing fails and restarts (my edit in the previous post), OR if what's been discussed above in this thread is implemented and indexing doesn't start over from the beginning with every failure.
      2. Create multiple cloud drives and add those to a pool. Create more cloud drives and add those to the pool as needed. This solution only works if some way is implemented for multiple cloud drives to share allocated space for a cache. An example problem if this is not the case: I have 3 cloud drives of 50GB each in a pool, and in each drive a 10GB file. I want my cache to be no larger than 15GB. If I set each drive's cache to 5GB, I have 15GB of total cache, but none of the three 10GB files can ever be fully cached locally.
      So, with the current functionality of the software, does either of these solutions work? Are there any updates in progress that would make either of these work?
  19. I probably won't be able to reproduce this anytime soon, but some directories entirely disappeared from my drive. I think they were files in the upload queue, but I can't be sure. I had 1TB+ in the upload queue this morning before I got API-banned, with around 600GB left. It's been sitting like that for a while now waiting for the upload to resume, but I was just accessing a few directories here and there and found that some folders were completely missing. I know every folder was present when I first completed the file transfer (not the upload) because I checked the properties of the entire directory against the properties of the original source, and they had the same size and number of folders/files. I'm not going to have this much in queue any time soon, so it's going to be rather difficult to replicate.

      EDIT: I've been rescanning my library in Plex every day and I keep seeing random files disappear. 4 episodes from one season of a show, the first episode of a season of another show, 3 entire seasons of another show, etc. I confirmed the total number of files copied immediately after copying, so somewhere between the time it sits in the upload queue and when it finally gets uploaded, files are getting deleted.

      EDIT 2: I sent in a file system log. I know the instructions say "It is important that you perform this step and perform it quickly after reproducing the issue", but as I don't know what triggers the issue or when it happens, my only choice was to leave it running for a while. Hopefully it captures something that catches your eye. I've had a handful of files disappear again today as well.

      EDIT 3: Just adding that I know for a fact it's not just files in the upload queue anymore. Some of the ones that disappeared were from my first batch, which I not only copied over but also let upload completely before verifying that I had the same number of files as the original.
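     Since the original source still exists, here's a rough PowerShell comparison that lists exactly which files are present in the source but missing from the cloud drive copy, instead of eyeballing counts. 'D:\source' and 'X:\copy' are placeholder paths, and it assumes both trees contain at least one file:

         # List files that exist in the source tree but not in the cloud drive copy,
         # by comparing relative paths. Paths below are placeholders.
         $src = 'D:\source'
         $dst = 'X:\copy'
         $srcRel = @(Get-ChildItem -Path $src -Recurse -File | ForEach-Object { $_.FullName.Substring($src.Length) })
         $dstRel = @(Get-ChildItem -Path $dst -Recurse -File | ForEach-Object { $_.FullName.Substring($dst.Length) })
         Compare-Object -ReferenceObject $srcRel -DifferenceObject $dstRel |
             Where-Object { $_.SideIndicator -eq '<=' } |   # '<=' means present in the source only
             ForEach-Object { $_.InputObject }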
  20. Can't use beta

      Windows Server 2012. I've uninstalled it since, but I was using 2.2.0.852. I use RDP to access my server remotely. Should I install the beta again before running that troubleshooter?
  21. Can't use beta

      Just installed the latest beta. The GUI is glitched: it looks like there is already a pool, but none of the buttons under Manage Pool do anything, and the left-facing and bottom-facing arrows in the bar under the trial banner do nothing. In Disks, it says two things: "In order to create a new pool, connect a new disk to the system" and "Adding a disk to a pool doesn't erase or alter the existing files on that disk". I can't get out of this screen. I can't create a new pool or do anything else but look at this screen and click buttons that do nothing.
  22. I could see the ability to clone a drive cloud-side and assign it a new ID being very useful. Maybe a possible feature in the future?
  23. So is going through the local machine the only way to copy the data from one gdrive cloud drive to another (on the same gdrive account) while keeping both drives mountable simultaneously? There's no way to just copy some stuff on gdrive's end?
  24. Thanks again for the information. I can live with all the errors and such. Honestly, the upload pausing issue bothers me the most, just because I don't have a lot of room for cache and I'm trying to get everything online as quickly as possible. It's weird: when it does pause for a while, it keeps intermittently jumping to 16.8 mbps up and then stopping again. Always exactly 16.8 mbps.

      EDIT: Just confirming what you said earlier: it's definitely not the setting. For example, there's 81GB sitting in my "To upload" cache right now and it's jumping between going off at a few hundred mbps and pausing entirely (except for those 16.8 mbps spikes).

      EDIT 2: You mentioned some advanced settings. I couldn't find any info about this online, but did some digging in the filesystem and found "C:\Program Files\StableBit\CloudDrive\CloudDrive.UI.exe.config". I'm guessing that's what you're talking about, as the tag you mentioned is in there. Any other settings worth looking at in there?
  25. I believe what you say, but I guess I'm just confused by what I experienced then. My first day using this software, I downloaded the stable release, copied over about half a terabyte, and left it running. At some point I looked at it: there was an error in the Windows copy dialog, the drive had been detached, and there was an error screen up on the CloudDrive UI. I don't remember exactly what it said, but it was something to do with gdrive, and from what I recall it sounded like some issue with the API or uploading. It must've been like that all night, and by that point I was able to reattach the drive and continue uploading by telling the prompt in the copy window to resume. A few other things:
      • If the upload requests are what's being denied, why do I sometimes see the throttling symbol (the turtle thing) when it's downloading?
      • How much needs to be in the cache for it to start uploading? Sometimes I see it upload with only a hundred megabytes or so, but sometimes I have dozens of gigabytes sitting in there and it sits at 0 bps for half an hour or so.
      • I mentioned this in the other thread, but what's this "thread was being aborted" error that occurs over and over intermittently, yet I can resume uploading almost immediately afterwards (so it must not be an API/upload-related ban)? I never saw it once on the main release and only started seeing it once I switched to the beta.
      Sorry, I'm pretty new to all this. I'm trying to figure out how everything works and find a pattern to it, but some things seem really erratic and it's confusing me big time... Thanks for taking the time to answer all my questions.