
modplan

Members
  • Posts: 101
  • Joined
  • Last visited
  • Days Won: 3

Everything posted by modplan

  1. They're probably in the Trash. Search still searches the Trash. If so, you can just empty the Trash or wait 30 days for them to be deleted.
  2. Just to note, I AM using encryption as well. Just not pushing/pulling quite as many chunks as you are, I guess.
  3. Me too; it maxes out my TWC upload and download with Google Drive.
  4. Don't I remember multiple other people here in these forums talking about getting 900-1000mbps with Google Fiber and Cloud Drive?
  5. Some 5-year-old 8-core AMD. I've never seen CloudDrive use more than 1-2% while actively uploading/downloading. 20 mbps upload, 300 mbps download; maxing out either doesn't matter.
  6. Right. I'm not saying I disagree with you; it's just that, based on how the current architecture has been described, I'm not sure it is possible. A solution that might help, if all of your files are written sequentially, is to prioritize block ranges equally. For example, if you are trying to read chunks 1012-1085 and chunks 8753-9125 at the same time, those would be considered separate "files" and prioritized equally (a rough sketch of this idea appears after this list). It seems like a logic headache from a code perspective though, and if your drive has random writes, or updated chunks of a file outside of the file's main "chunk range", this algorithm would fall apart quickly.
  7. The problem with the prefetch logic knowing about files is that I do not think the driver has any concept of files, only the underlying blocks, so I don't see how that could be done.
  8. Things like this are why I never let the 'To Upload' queue get big, unfortunately. Hopefully Chris can help you figure something out.
  9. My settings are: Prefetch Trigger 5 MB, Prefetch Forward 100 MB, Prefetch Timeout 300. Just FYI, you might want to play around with those settings.
  10. Any way to pause writing (uploading) and then resume it later via script, preferably batch? I would like to pause uploads programmatically while other scheduled tasks are running that need the upload bandwidth, and then resume them later.
  11. Google is limiting you for making too many API calls per second, not CloudDrive, and there is no way for CloudDrive to get around this. You could raise your chunk size so that you are making fewer API calls per second. Google does not seem to care how much bandwidth you are pushing/pulling, just how many API calls you make. With larger chunks you will obviously make fewer API calls to upload/download the same amount of data (see the back-of-the-envelope sketch after this list).
  12. Seeing the same thing. Turned it off for now until it is fixed.
  13. Well, I'm in the last 2 hours of uploading my dataset, and all has been good so far. And bam, Alex's change saved over 900 chunks from being deleted! I saw this occurring when it was at the ~500 mark and flipped on drive tracing; I'm uploading those logs now. It's still trying to delete chunks, up to 1,300 in the time it took me to write this post, but my chunks are safe, nothing has been deleted from the cloud! Edit: it looks like it is continuously trying to delete these chunks, retrying and failing over and over and over. Will it ever break out of this (possibly) infinite loop and move on to uploading the final 20GB? Edit2: I paused uploading and then resumed it, and as in the past, this caused CloudDrive to stop trying to delete chunks, and I'm back to uploading now. Edit3: I'm in the process of verifying every single file via md5 hash with ExactFile; it'll take a while, but I'll post results when I have them.
  14. While I would love this, since CloudDrive seems to be largely built around 1MB+ chunks, I really do not think dedupe would be very effective if this is the level at which it would have to be done. But maybe CloudDrive's architecture allows for some sort of sub-chunk dedupe? Most dedupe on enterprise arrays is done at the 4k-8k block size level; once you get too far past that, it becomes less and less likely that blocks will match exactly, and dedupe loses its effectiveness (see the dedupe sketch after this list).
  15. I think it will slow the copy to practically nothing, but I'm not sure if that will cause the backup app or Windows to time out the copy or anything. I've never come close to filling the drive my cache is on. Here is a technical deep dive on the cache architecture: http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/
  16. When you are writing data to the drive, the cache will expand as needed. If you write more than 50GB to the drive faster than you can upload, the cache will continually expand as much as possible until the drive the cache is on is almost full. As data is uploaded, it will be deleted from the cache.
  17. I am planning on using ExactFile to do the comparison. It will create an md5 hash for every file in a directory, which can then be run against and compared to the files in a different directory (a rough Python equivalent is sketched after this list). I see this in the latest changelog: .536 * Fixed crash on service start. * Never allow NULL chunks to be uploaded to providers that are encrypting their data and that do not support partial writes. I'll upgrade to .536, create a new drive, and get to testing again!
  18. To address each of those in order:
      1) Checksum verification is indeed on, and the chunks pass. I see in "Technical Details", when I try to open one of the corrupt files, that it is indeed trying to download the deleted chunks, but the progress sits at 0% and then is discarded (rather quickly) from the details window. I think we discussed this a page or two back, and you indicated Alex said this is normal for a deleted chunk. It appears to me that CloudDrive EXPECTS these chunks to not exist; it knows it deleted them.
      2) I really think, based on what we have discussed and the only ways chunks can get deleted, that the service or driver thinks these chunks are being zeroed when they really aren't. I'm not sure what could cause this. EDIT: I very much doubt it, but wanted to make sure: if a file is in use/inaccessible, could that cause some type of collision and cause this?
      3) Turning off upload threads would certainly help me keep a better eye on this to collect logs; however, this is a home server that I only connect to via RDP. I could monitor it better while I am home in the evenings, but it would likely take weeks of me turning upload threads on and off to get to the point where this is repro'd and I caught it. Also, I believe in my 2nd log upload I caught it in the act (by luck); was there not much useful gained from that? Has additional logging been put in place since then to increase supportability?
      4) Me too, and I'm at your disposal to help! I really want to get this archive data safely into the cloud ASAP.
  19. Yes, all chunks had upload verification turned on 100% of the time (but the chunks were deleted much later, after they were successfully uploaded).
  20. Nope, nothing. But the scary part is, unless you log in to your Google Drive account in your browser and scan the entire history of the "StableBit CloudDrive" folder looking for deletes that should not have taken place (like you see in my screenshot above), I have no idea how you would know this is going on; it is completely silent. All the files appear to still be there on your drive, but some random chunks are missing, so some of them are silently corrupt. Edit: And you can see in my screenshot that the chunks were deleted by "StableBit C..." not some other application.
  21. Not doing any kind of application work. Just copying files over from one drive to CloudDrive, and randomly through the copy process (usually several hundred GBs in) CloudDrive just starts deleting chunks from the cloud on its own. Any files that had data stored in any of the deleted chunks become instantly corrupt, but still show up in Explorer as normal.
  22. Not deleting anything, and the FS still thinks the files are there; their backing chunks in the cloud are just gone.
  23. Unfortunately it happened again. And it was overnight (about 20 hours ago), so I am assuming the disk traces have wrapped? Also, it is worth noting that both times I was copying FROM the Google Drive Sync Folder TO CloudDrive. I doubt this matters, but it's another data point. Again, the range deleted was a range that was copied over and uploaded a day or so before. I'm going to run ExactFile on what I've copied so far to compare the MD5s and see exactly what has been corrupted.
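
Sketch for post 6 (equal prioritization of chunk ranges): this is not CloudDrive's actual prefetch code, just a minimal illustration of the idea under the assumption that outstanding chunk reads can be grouped into contiguous ranges and serviced round-robin, so two concurrent sequential readers get equal treatment. All names are hypothetical.

    from itertools import groupby

    def group_into_ranges(chunks):
        # Group sorted chunk indices into contiguous (start, end) ranges.
        chunks = sorted(set(chunks))
        ranges = []
        for _, run in groupby(enumerate(chunks), key=lambda p: p[1] - p[0]):
            run = [chunk for _, chunk in run]
            ranges.append((run[0], run[-1]))
        return ranges

    def round_robin_schedule(outstanding):
        # Treat each contiguous range as a separate "file" and interleave
        # one chunk from each range at a time, so no range starves another.
        queues = [list(range(start, end + 1))
                  for start, end in group_into_ranges(outstanding)]
        order = []
        while any(queues):
            for q in queues:
                if q:
                    order.append(q.pop(0))
        return order

    # Two concurrent sequential reads, as in the 1012-1085 / 8753-9125
    # example (shortened here to keep the output small).
    print(round_robin_schedule(list(range(1012, 1016)) + list(range(8753, 8757))))
    # [1012, 8753, 1013, 8754, 1014, 8755, 1015, 8756]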
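
Sketch for post 11 (chunk size vs. API calls): a back-of-the-envelope calculation, not CloudDrive internals, assuming roughly one API call per chunk transferred. It shows why larger chunks mean fewer calls per second for the same throughput.

    def api_calls_per_second(throughput_mbps, chunk_size_mb):
        # Rough number of requests per second needed to sustain a given
        # throughput, assuming one API call per chunk.
        throughput_mb_per_s = throughput_mbps / 8  # megabits -> megabytes
        return throughput_mb_per_s / chunk_size_mb

    for chunk_mb in (1, 10, 20, 100):
        calls = api_calls_per_second(300, chunk_mb)  # e.g. a 300 mbps link
        print(f"{chunk_mb:>3} MB chunks -> ~{calls:.1f} API calls/sec")
    # 1 MB chunks need ~37.5 calls/sec; 100 MB chunks need ~0.4 calls/sec.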
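
Sketch for post 14 (why block size matters for dedupe): a minimal, generic fixed-size-block dedupe illustration, not anything CloudDrive does. It hashes the data at two granularities; the same data that dedupes well at 4 KiB shows no exact matches at 1 MiB. The sample data is fabricated purely for the demonstration.

    import hashlib
    import os

    def dedupe_ratio(data: bytes, block_size: int) -> float:
        # Split data into fixed-size blocks, hash each block, and report
        # unique blocks / total blocks (lower means more dedupe savings).
        hashes = set()
        total = 0
        for offset in range(0, len(data), block_size):
            hashes.add(hashlib.sha256(data[offset:offset + block_size]).hexdigest())
            total += 1
        return len(hashes) / total if total else 1.0

    # Illustrative data: four 1 MiB "files" that share most of their
    # 4 KiB blocks but each contain a little unique data.
    common = b"A" * 4096
    data = b"".join(common * 255 + os.urandom(4096) for _ in range(4))

    print(dedupe_ratio(data, 4 * 1024))      # ~0.005 -> dedupes very well at 4 KiB
    print(dedupe_ratio(data, 1024 * 1024))   # 1.0    -> no exact matches at 1 MiB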
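
Sketch for post 17 (verifying files by MD5): the tool actually used was ExactFile; this is just a rough Python equivalent of the same check, hashing every file under one directory and comparing against another. The paths are placeholders, not the poster's actual paths.

    import hashlib
    import os

    def md5_tree(root: str) -> dict:
        # Return {relative path: md5 hex digest} for every file under root.
        digests = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, root)
                h = hashlib.md5()
                with open(full, "rb") as f:
                    for block in iter(lambda: f.read(1024 * 1024), b""):
                        h.update(block)
                digests[rel] = h.hexdigest()
        return digests

    # Hypothetical paths: local source vs. the mounted CloudDrive copy.
    source = md5_tree(r"D:\Archive")
    cloud = md5_tree(r"G:\Archive")
    for rel, digest in source.items():
        if cloud.get(rel) != digest:
            print("MISMATCH or MISSING:", rel)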