Covecube Inc.

modplan
Members · Content Count: 101 · Days Won: 3

Everything posted by modplan

  1. Thanks Chris, started the tracing. Not sure how quick I'll be able to catch it, uploading 24/7, but I'll keep an eye on it. No issues with the first 164GB so far.
  2. FYI I'm impatient, so I am retrying with a new disk on .533 with the same dataset. Even if I do not see the problem again, I wouldn't be 100% confident the problem is solved, but I'm willing to spend a few days uploading to find out. My question is: should I turn on drive tracing for the entire process? My C: drive is a small SSD; will the logs get massive and use up a lot of space?
  3. Thanks for the response Chris. Not sure I understand; likely my question wasn't clear, haha. Do we think in the latest builds this issue is: Not Resolved? Possibly Resolved? Definitely Not Resolved? If Alex hasn't been able to reproduce it... maybe we have no clue and none of the above? EDIT: Whoops, never mind. I think you are saying that Alex was on the road to reproducing this issue via a test, hit some bugs that could have caused it, fixed some of those bugs, but had to restart the test and has not been able to repro it since? So we still aren't quite sure where we are regarding it? Anything…
  4. Thanks Chris. So I assume then that while my logs uncovered some other bugs for Alex, the fixes in and up to .518 probably haven't solved this particular issue yet?
  5. Has Alex had any luck root-causing this yet? Anything I can do or any tests I can run? CloudDrive is currently sitting idle for me until this is resolved, since I don't want to dedicate days of uploading again only to have the drive become worthless. Willing to spend some time doing whatever I can to sort this out if Alex could use anything from me.
  6. (In the "Public encryption key" thread) Expanding on that, do we have a data recovery procedure at all yet? Even if we know the key, that wouldn't help me stitch all those chunks back together and recover my files. I really like what Arq Backup does: they have an open-source program on GitHub that does nothing but download and decrypt chunks, allowing you to download your files. Handy if the developer suddenly disappears, the app stops working years down the road if/when it is no longer supported, or any number of other scenarios. The data format is even fully documented. https://sreitshamer.github.io/arq_restore/ Ho… (a hypothetical recovery sketch follows this list)
  7. Chris, saw Alex's update on the issue analysis. Unable to update there. Please let him know there might have been a power outage several days to a week before the deletes started happening. I know we had a power outage during a storm last week. What I can't remember is whether or not I had created this drive and started uploading yet when the outage happened. If the power outage did happen after the drive was created, it would have been after all data was copied to the drive and the drive was marked RO, since I remember monitoring the copy operation and the outage was either before that a…
  8. Just for further info: the drive is now completely uploaded ("To Upload" = 0B). I was hoping that at the end of the upload cycle CD might re-upload the chunks it previously deleted, but that does not appear to be the case. To recap:
     - ~775GB was copied to the cloud drive and uploading started
     - This specific cloud drive was then marked as read only with diskpart and has stayed that way the entire time since
     - Some previously-uploaded chunks were deleted by CloudDrive on 2 separate occasions during the upload cycle
     - Each time I noticed deletions happening, upload was paused for a while; on res…
  9. Nope, nothing else really uses the account; Google Photos on my phone would be about all. And as we can see in the screenshot, the app that initiated the delete was definitely CloudDrive. Glad Alex is taking a look!
  10. Hey Chris, yes I just wanted to be as thorough as possible in my response to aid in getting to the bottom of this. Yes, the readonly flag was set on the CloudDrive drive itself, via diskpart. I agree, from my limited understanding, that this shouldn't cause an issue, but I just wanted to provide all the info I thought of. Hopefully the logs tell the full story.
  11. We just went on another chunk deletion spree, starting about 20 minutes ago according to the GDrive GUI. I enabled Drive Tracing for about 5 minutes and let it continue deleting. I then paused uploads and am collecting/uploading logs. Very very interested to see what is shown in the logs.
  12. Possibly relevant: AFTER all data had been copied to this drive and was sitting in the To Upload "cache", the drive was marked as read only with diskpart (att vol set readonly). I do this semi-often with physical drives that are meant for archive purposes ONLY, in order to prevent any kind of corruption, accidental deletion, etc. They are only marked as RW when data needs to be written, then back to RO they go (see the diskpart sketch after this list). This is the first time I have done this on a CloudDrive drive. I do not think setting this volume read-only attribute should have any interaction with CloudDrive or cause any issues, but I wanted to point it out…
  13. Thanks Chris. Just to point some things out one by one: The drive was not destroyed; we were just uploading along. As for only zeroes, well, I guess that could be a possibility, but only if CloudDrive was actively overwriting these previously uploaded chunks with all 0's, and I don't see why it would do so. No MIME errors were generated in the GUI (I'm very familiar with this error; I believe I was the first to report it) and the chunks were never re-uploaded, they're gone. The chunks had data, they had been uploaded over 24 hours prior, then I saw tons of writes at 0% and then Google Drive…
  14. Google Drive, .486, Win2k8R2. I've paused the upload threads and it has stopped, but about 2500 chunks were just "permanently deleted". I copied about 775GB onto a brand new 10TB drive. Over 440GB of that was successfully uploaded, and it was chugging along like normal. Nothing changed. Watching the technical details, I noticed the upload threads were suddenly popping up at 0%, never progressing, then new ones would pop up at 0%, and this continued over and over. No errors were thrown by the GUI, nothing abnormal in the logs. I logged in to the Google Drive web GUI and I see tons of thi…
  15. Hey Chris, I've seen this issue as well, normally with lower chunks (often chunk #55 or #64 in my case). It appears to me that CloudDrive logs the updates to that chunk and uploads each single update individually, instead of coalescing all of the updates together and uploading a single chunk. Note that I see this after setting "upload threads" to 0, and then setting them back to the normal value after my copy/change/write is done, so the chunk is definitely not being continuously updated during the upload. I would assume all updates to the chunk would then be coalesced into a single upload, i… (a coalescing sketch follows this list)
  16. I just got a checksum mismatch on one of my disks. chkdsk shows no errors. What is my course of action here? A few files I checked seem fine; is there some silent corruption going on somewhere? I got this error when changing the label of the drive... however, I think I may have seen it before on this drive a while back. Upload verification has been turned on and off multiple times on this drive while we have gone through all the betas, and while I have been tweaking my upload/download threads to maximize performance and see if I could tolerate upload verification being turned on (it is currently…
  17. Thanks thnz, looks like you are right. Which leads to a broader conversation: is this the right way to do this for drives with large caches? Could a "chunk tracking database" that marks locally cached chunks as perfectly uploaded be used to prevent the wholesale re-upload of the cache, so that only the chunks not marked in the database as previously successfully uploaded get re-uploaded? (A sketch of the idea follows this list.) If someone with somewhat limited bandwidth sets a 500GB cache on a large drive and suffers a power outage, but 498GB of that cache was previously perfectly uploaded, this wholesale re-upload would take weeks.
  18. Sorry, info I know I should have provided: 2k8 R2, .470 (but I have seen this several times while using CloudDrive, basically any time there is an unexpected reboot; the version does not matter). The drive is 10TB. The cache is currently 100GB; I've been playing with different sizes. No files have been restored. In fact this seems to have nothing to do with running a backup; in my experience, any local cache data is moved to "To Upload" after "Drive Recovery" following an unexpected reboot. This is just the only drive I currently have a cache on. I have seen this on other drives just with standard fil…
  19. I see the "ID" column progressing in his video; I assumed this was the Upload ID, but the ID in the "Chunk" column stays the same.
  20. I could certainly be wrong, but I think this is more of a consumer-focused product. Does Atmos have an S3-compatible API like NetApp StorageGRID does? I could see CloudDrive maybe implementing a generic S3 API driver, like some other cloud-facing apps have done.
  21. Sorry if this has been covered; a quick search did not find what I was looking for. I have a CloudDrive that I send Windows Server backups to nightly. The full backup size is about 75GB, but the nightly incremental change is only 6-7GB, easily uploadable in my backup window. I have set the cache on this drive to 100GB to ensure the majority of the "full backup" data is stored in cache, so that when Windows Server Backup is comparing blocks to determine the incremental changes, CloudDrive does not have to (slowly) download ~75GB of data for comparison every single night. This…
  22. I don't even reboot. If you stop the service before running the update installer, it does not prompt you to reboot and starts the service back up for you.
  23. Yes, it sounds like the OP wants to share with his parents, who live farther away. However, this can easily be done by sharing the drive like you say and then using a VPN service like Hamachi.
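
Sketch for item 6, the Arq-style recovery idea: a minimal, entirely hypothetical stand-alone restore script that walks a folder of already-downloaded chunks, decrypts each one, and stitches them into a raw disk image. The chunk naming scheme ("<index>.chunk"), the AES-256-CBC mode with a prepended IV, and the DRIVE_KEY_HEX environment variable are all assumptions made for illustration; CloudDrive's real chunk format is not publicly documented, which is exactly the gap the post is about.

```python
# Hypothetical restore sketch in the spirit of arq_restore: decrypt a folder of
# downloaded chunks and stitch them back into a raw disk image. The chunk
# naming, key handling, and cipher mode are assumptions, not CloudDrive's format.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

CHUNK_DIR = "downloaded_chunks"       # chunks already pulled down from the provider
OUTPUT_IMAGE = "recovered_drive.img"  # raw image to mount or inspect afterwards
KEY = bytes.fromhex(os.environ["DRIVE_KEY_HEX"])  # the user's saved encryption key

def decrypt_chunk(data: bytes, key: bytes) -> bytes:
    """Assume each chunk is AES-256-CBC with a 16-byte IV prepended (an assumption).
    Padding handling is omitted for brevity."""
    iv, body = data[:16], data[16:]
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    return decryptor.update(body) + decryptor.finalize()

def restore() -> None:
    # Assume chunk files are named "<index>.chunk", so sorting by index
    # reproduces the original ordering of the virtual disk.
    names = sorted(os.listdir(CHUNK_DIR), key=lambda n: int(n.split(".")[0]))
    with open(OUTPUT_IMAGE, "wb") as out:
        for name in names:
            with open(os.path.join(CHUNK_DIR, name), "rb") as f:
                out.write(decrypt_chunk(f.read(), KEY))

if __name__ == "__main__":
    restore()
```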
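Sketch for item 12, the diskpart read-only workflow: a small Python wrapper (with hypothetical helper names) that feeds a script to `diskpart /s`, so an archive volume can be flipped to RW only while data is being written and put straight back to RO afterwards. The drive letter is an assumption, and diskpart requires an elevated prompt.

```python
# A minimal sketch of toggling a volume's read-only attribute via diskpart,
# scripted so the archive volume spends most of its life marked read-only.
import subprocess
import tempfile

def run_diskpart(commands: list[str]) -> None:
    """Write the commands to a temp script and run 'diskpart /s <scriptfile>'."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script:
        script.write("\n".join(commands) + "\n")
        path = script.name
    subprocess.run(["diskpart", "/s", path], check=True)

def set_readonly(volume_letter: str, readonly: bool) -> None:
    action = "set" if readonly else "clear"
    run_diskpart([
        f"select volume {volume_letter}",
        f"attributes volume {action} readonly",
    ])

# Usage: flip to RW only while writing, then straight back to RO.
# set_readonly("E", False)   # allow the copy/backup to write
# ... write data ...
# set_readonly("E", True)    # archive volume goes back to read-only
```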
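Sketch for item 15, the coalescing behaviour being argued for: queue writes per chunk while uploads are paused and upload each dirty chunk once on resume, rather than once per individual update. This only illustrates the idea; it is not how CloudDrive is actually implemented, and the chunk size and class names are invented for the example.

```python
# Coalesce many small writes against the same chunk into a single upload.
from collections import defaultdict

CHUNK_SIZE = 1 * 1024 * 1024  # 1 MB chunks, an arbitrary size for the example

class CoalescingUploader:
    def __init__(self, upload_fn):
        self.upload_fn = upload_fn                       # callable(chunk_id, bytes)
        self.dirty = defaultdict(lambda: bytearray(CHUNK_SIZE))
        self.paused = False

    def write(self, chunk_id: int, offset: int, data: bytes) -> None:
        # Apply the write to the in-memory copy of the chunk; repeated writes
        # to the same chunk keep mutating the same buffer instead of queuing
        # one upload per write.
        buf = self.dirty[chunk_id]
        buf[offset:offset + len(data)] = data
        if not self.paused:
            self.flush()

    def flush(self) -> None:
        # One upload per dirty chunk, however many writes touched it.
        for chunk_id, buf in sorted(self.dirty.items()):
            self.upload_fn(chunk_id, bytes(buf))
        self.dirty.clear()

# Usage: pause, make 100 small writes to chunk 55, resume -> a single upload.
uploader = CoalescingUploader(lambda cid, data: print(f"uploading chunk {cid} once"))
uploader.paused = True
for i in range(100):
    uploader.write(55, i * 10, b"x" * 10)
uploader.paused = False
uploader.flush()
```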
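Sketch for item 17, the "chunk tracking database" suggestion: record a hash for every chunk verified as uploaded, and after drive recovery re-upload only the cached chunks whose hash is missing or different. The SQLite schema and hashing scheme are assumptions made for the example, not anything CloudDrive does today.

```python
# Track verified uploads so a recovery pass skips chunks already uploaded intact.
import hashlib
import sqlite3

db = sqlite3.connect("chunk_tracking.db")
db.execute("CREATE TABLE IF NOT EXISTS uploaded (chunk_id INTEGER PRIMARY KEY, sha256 TEXT)")

def mark_uploaded(chunk_id: int, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    db.execute("INSERT OR REPLACE INTO uploaded VALUES (?, ?)", (chunk_id, digest))
    db.commit()

def needs_reupload(chunk_id: int, data: bytes) -> bool:
    row = db.execute("SELECT sha256 FROM uploaded WHERE chunk_id = ?", (chunk_id,)).fetchone()
    return row is None or row[0] != hashlib.sha256(data).hexdigest()

def recover(cached_chunks: dict[int, bytes], upload_fn) -> None:
    # After drive recovery, walk the local cache and skip everything the
    # database already knows was uploaded intact; only the remainder goes up.
    for chunk_id, data in sorted(cached_chunks.items()):
        if needs_reupload(chunk_id, data):
            upload_fn(chunk_id, data)
            mark_uploaded(chunk_id, data)
```

With this kind of bookkeeping, the 500GB-cache scenario in item 17 would re-upload only the handful of chunks written since the last verification rather than the whole cache.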