modplan

Members · 101 posts · 3 days won
Everything posted by modplan

  1. Yes, I think I posted in this thread right after the .444 build came out: I blew away the existing disk where I was seeing this issue and created a new encrypted disk, and the next day I saw the issue again. Deleting the chunk fixes the issue immediately.
  2. Any update on deleting these chunks automatically when this happens? I hit this on 3 chunks today; I deleted them manually and all went back to normal. The updated chunks are on the local disk, so having StableBit delete the chunk on Google Drive and upload the updated chunk when this error is hit shouldn't be an issue. Note: it appears a permanent delete is necessary (delete it from the Trash too; see the deletion sketch after this list). It's just an annoyance because my upload ground to a halt for 12 hours before I logged into my server, saw I was hitting this error in an infinite loop, and fixed it manually. Edit: I'm using .463; encryption, the new version that appended the null byte, etc. all seem to have reduced how often this occurs, but they definitely do not solve the problem completely.
  3. I've seen this with large chunks, I think when existing data had changed, not with new data. Here was my theory: chunk 10 has some data that has changed. To verify that, we do a partial read of chunk 10 at offset X. The data has indeed changed, so CloudDrive re-uploads chunk 10. Wait a sec, chunk 10 has more data that has changed! We do a partial read of chunk 10 at offset X+1, see the data is different, and upload chunk 10 AGAIN. Then we do a partial read at X+2, upload chunk 10 AGAIN, and so on, until every 1MB block in chunk 10 that has changed has been verified and chunk 10 has been uploaded over and over. If this is indeed what happens, it would be wildly more efficient to calculate the new chunk 10 in full before uploading any data and upload it a single time (see the coalescing sketch after this list). Windows Server Backup caused the above to happen over and over to the point that in a whole 24 hours "to upload" would only drop by about 1-2GB even though I had really uploaded over 200GB! For that drive I had to go back to 1MB chunks, which avoids the issue. For my streaming media cloud drives the data never really changes, just new data is added, so I can still use 50MB chunks and do not see the problem.
  4. To be completely fair, the product is still in BETA and is free (you only pay if you pre-order for the discount). They also technically support Amazon Cloud Drive, are using the official Amazon API, and are working with Amazon very diligently to make it work; Amazon just needs to fix their crummy service for it to work properly. If I were you, I'd be complaining to Amazon for not providing what was advertised rather than complaining here. Just my .02.
  5. Deleting absolutely resolved the issue. I was stuck at 1 thread, uploading that same chunk over and over and getting the MIME error every time. I found the chunk in Google Drive, deleted it, and deleted it from the Trash. Within a second or two the chunk uploaded successfully, and I immediately went back to uploading on 10 threads at full speed.
  6. I'm the one that filed the ticket. I blew away my drive, upgraded to .444, and created a new, encrypted drive, and I'm still seeing this issue. I've only seen it happen with Windows Server Backup. Deleting the chunk manually fixed it; the chunk was indeed tagged in Google Drive as "Binary File".
  7. I have no problem with this happening, since I understand it is likely a hardware limitation; I don't expect CloudDrive to perform miracles. Maybe you meant to quote steffenmand and tell him to collect logs?
  8. I had this issue and filed a ticket. They are looking at it. Are you using encryption? I was not originally; I destroyed that drive, made a new one with encryption, and I have not seen this issue since (although it has only been a day or two instead of over a week). My guess was that Google was categorizing the chunk as, say, a video, maybe because that chunk happened to contain the header of an actual video file I was uploading (see the content-sniffing sketch after this list). Then, when a block in that chunk changed and CloudDrive tried to re-upload the chunk, Google Drive denied the re-upload since it saw the new chunk was no longer categorized as a video (no clue why Google would deny this). I theorized that with encryption Google would never be able to categorize the chunk as a video, and the problem might go away. This is all a theory, but it has held up so far in the short time I've tried it.
  9. I see this, but only when the source of the files and the cache drive are the same physical drive. The transfer pushes the queue length on that drive way up, IOPS suffer, and I assume the drive is just too slow for CloudDrive to read and upload at the same time. As soon as the transfer is finished and the queue length drops back to normal levels, uploads resume at normal speed. There is probably nothing CloudDrive can do to prevent this in this particular case.
  10. Thanks for the help. It appears the problem is that the sector size is 1K on the logical drive I am trying to use as cache (a hardware RAID array), and that prevents CloudDrive from writing 512b sectors when I try to create a cloud drive with that sector size. I think I'm going to shelve this until we can figure out whether Alex can emulate 512b sectors (see the read-modify-write sketch after this list). I believe even Win2k12 forces 512b for backup, so this issue will affect anyone trying to use Windows Server Backup with a cloud drive who does not want to (or, like me, cannot) create a drive with 512b sectors. Edit: Yes, using .431.
  11. Google Drive. Creating a 512b cloud drive fails during formatting, saying it cannot write to my cache drive (which has 2.5TB free), as noted above. Edit: It is worth noting that the USB drive I normally back up to also has 4K sectors (as most new drives do), but its firmware successfully emulates 512b support, so backups to it succeed.
  12. Win 2k8 R2. There is over 2.5TB of free space on the cache drive. I won't be able to do a bare metal backup or a bare metal restore? What prevents me from installing CloudDrive on another machine, cloning a backup to a new HDD, and using that to do a bare metal restore if I had a catastrophic failure? Regardless, the restore point is moot when backup does not even work.
  13. The internet says this is because the disk uses 4K sectors and does not properly emulate 512b sectors. I tried creating a 512b cloud drive just for backups, but I couldn't even get it formatted; CloudDrive complained that it could not write to my cache drive during the format. Any ideas?
  14. I'm trying to do a bare metal backup of just my C drive (a 120GB SSD) using Windows Server Backup. I already have this scheduled to run daily to an attached USB drive, but I would like something off-site. I go to "Backup Once" in WSB, select bare metal, pick my 10TB cloud drive (which holds about 800GB of other data) as the destination, and click Start. It runs for about 30 seconds before giving me an error. The message I captured is from Server Manager; it is slightly different from the error WSB actually shows, but mostly the same, and WSB won't let me copy its error message. Has anyone gotten this to work? It happens every time I try. .431 beta.
  15. Plex seems to be working great with Google Drive, FYI. I just archive old videos (full seasons of shows I have watched, etc.) to the cloud. I was using ExpanDrive and it was pretty horrible with scanning; if I accidentally scanned the cloud library, it would run for HOURS before crashing ExpanDrive. This is not a problem with CloudDrive because all the metadata is pinned locally, so scanning is just as fast as with local content. Sure, there is about a 20-second delay when starting to play a file, but after that it plays smoothly, and like I said, this is all old media I may or may not ever watch again that I just want to keep in my library for myself and others, "just in case".
  16. Amazon error: sounds like Amazon throttling you after sustained heavy upload to me...
  17. From my experience (on .419 at the time), there is a ~100s timeout after which the upload is aborted. That is what I saw with 100MB chunks, at least. It would be nice to be able to define this timeout, or have it scale with chunk size (see the timeout arithmetic after this list), but I then went to 50MB chunks and all is good.
  18. I saw this with 100MB chunks on .419, but 50MB chunks work fine for me. I only have 20mbps upload, though.
  19. Weird. Watching the IO threads, if I create a 2.5MB file and "To Upload" says about 2.5MB, the IO threads report uploading an entire 50MB chunk.
  20. I've currently settled on 50MB chunks as the best performing for me at the moment, but I have a question. If I create a 1MB file on my drive, what gets uploaded? A 50MB object that is 49MB of zeroes, or do we cache until we see 50MB of new data and then upload?
  21. Do we have any best-practice settings for Google Drive yet? I destroyed my old drive and started a new one with 10MB chunks. I'm seeing much better performance now and fewer errors, but I don't want to upload a ton only to learn that 100MB chunks would perform much better. I'm mainly concerned with sequential reads for video file archiving and with maximum upload speed. Happy Thanksgiving, all!
  22. I go from 20mbps upload to 0mbps for a while, sometimes hours. When this occurs, CloudDrive asks me to reauthorize, but reauthorizing does not get it started again. During this time, other apps (Plex) that are uploading to Google Drive continue uploading fine. Displaying the IO threads at this time shows them all at 0kbps, but they are still being created and closed. It's very weird, like I am tripping some sort of Google rate limit (see the backoff sketch after this list). Is there a way to figure that out from the service logs? If not, I assume I need to enable disk tracing and get you those logs? I'm starting with an 850GB set of files I'm trying to upload to test the viability of this product, but so far I haven't gotten very much uploaded due to the long windows of upload hanging.
  23. I'm seeing lots of "Security Error" entries in a CloudDrive log I found. Is this normal? Is there a better way to pull logs and post them here?
  24. WOOHOO! I've been testing for a few hours. I'm seeing pretty slow speeds and some timeouts as well. Is there anything that can be done to increase the speed? I was really hoping CloudDrive could max out my upload with multiple threads. Or is Google doing some throttling?
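
The sketches below are not part of the original posts; they illustrate the ideas in them. First, the deletion sketch: the manual workaround from posts 2 and 5 (find the stuck chunk and permanently delete it), written against the Google Drive v3 API. The chunk name and the authenticated service object are assumptions; in practice you would use the name shown in CloudDrive's error.

    # Manual fix from posts 2 and 5: permanently delete the stuck chunk so
    # CloudDrive can re-upload it. Assumes `service` is an authenticated
    # googleapiclient object (pip install google-api-python-client) and
    # chunk_name is the hypothetical name shown in CloudDrive's error.
    def purge_chunk(service, chunk_name):
        resp = service.files().list(
            q="name = '%s' and trashed = false" % chunk_name,
            fields="files(id, name, mimeType)").execute()
        for f in resp.get("files", []):
            # In the Drive v3 API, files().delete skips the Trash entirely,
            # which satisfies the "delete from Trash too" note in post 2.
            service.files().delete(fileId=f["id"]).execute()
            print("Permanently deleted %s (%s)" % (f["name"], f["mimeType"]))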
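
Next, the coalescing sketch: a pure-Python picture of the inefficiency theorized in post 3 alongside the fix (merge every dirty 1MB block first, then upload the 50MB chunk once). All names here are hypothetical.

    # Post 3's theory: one full chunk upload per changed 1MB block, versus
    # coalescing all dirty blocks and uploading the chunk a single time.
    BLOCK = 1 * 1024 * 1024      # 1MB verification block (post 3)
    CHUNK = 50 * 1024 * 1024     # 50MB storage chunk

    def naive_update(chunk, dirty_offsets, upload):
        # A chunk with 50 dirty blocks gets uploaded 50 times:
        # 50 x 50MB = 2.5GB on the wire to publish 50MB of changes.
        for off in dirty_offsets:
            chunk.apply_block(off)   # merge one changed 1MB block
            upload(chunk)            # full 50MB re-upload, every time

    def coalesced_update(chunk, dirty_offsets, upload):
        for off in dirty_offsets:
            chunk.apply_block(off)   # merge all changed blocks first
        upload(chunk)                # then one 50MB upload

The 200GB-uploaded-for-1-2GB-of-progress figure in post 3 is exactly the kind of amplification the naive loop produces.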
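
The content-sniffing sketch for post 8's theory: a plaintext chunk that happens to begin with a well-known file signature can be classified as media, while an encrypted chunk begins with effectively random bytes, so nothing matches. The magic numbers below are real; whether Google Drive classifies chunks this way is the post's speculation, not a confirmed fact.

    # Illustration of post 8's theory: sniffing a chunk's leading bytes.
    import os

    SIGNATURES = {
        b"\x1a\x45\xdf\xa3": "video/webm (Matroska header)",
        b"\xff\xd8\xff": "image/jpeg",
    }

    def sniff(first_bytes):
        for magic, mime in SIGNATURES.items():
            if first_bytes.startswith(magic):
                return mime
        return "application/octet-stream"

    mkv_chunk = b"\x1a\x45\xdf\xa3" + b"\x00" * 12   # chunk holding an MKV header
    enc_chunk = os.urandom(16)                       # encrypted: random-looking
    print(sniff(mkv_chunk))   # video/webm (Matroska header)
    print(sniff(enc_chunk))   # almost certainly application/octet-stream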
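
The read-modify-write sketch for the sector-size thread in posts 10 through 13: what "emulating 512b sectors" means in practice. A 512-byte logical write on a device with larger physical sectors becomes a read-modify-write of the containing physical sector. This is a conceptual sketch with a hypothetical device object, not CloudDrive's implementation.

    # Conceptual 512e emulation: a 512b logical write on a 1K-sector device
    # (matching the cache array in post 10) via read-modify-write.
    PHYS = 1024   # physical sector size of the cache device
    LOG = 512     # logical sector size Windows Server Backup insists on

    def emulated_write(device, logical_sector, data):
        assert len(data) == LOG
        phys = (logical_sector * LOG) // PHYS      # containing physical sector
        offset = (logical_sector * LOG) % PHYS     # 0 or 512 within that sector
        buf = bytearray(device.read_sector(phys))  # read the whole 1K sector
        buf[offset:offset + LOG] = data            # splice in the 512b write
        device.write_sector(phys, bytes(buf))      # write the sector back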
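
The timeout arithmetic for posts 17 and 18: the numbers are consistent with a fixed per-chunk timeout. Back-of-the-envelope math, assuming the ~100s abort threshold from post 17 and the 20mbps uplink from post 18.

    # How long does one chunk take at 20 Mbit/s, and does it beat ~100s?
    TIMEOUT_S = 100      # observed abort threshold (post 17)
    UPLINK_MBPS = 20     # upload bandwidth (post 18)

    for chunk_mb in (10, 50, 100):
        seconds = chunk_mb * 8 / UPLINK_MBPS   # MB -> Mbit, divide by rate
        verdict = "ok" if seconds < TIMEOUT_S else "times out"
        print("%dMB chunk: %.0fs (%s)" % (chunk_mb, seconds, verdict))
    # 10MB: 4s, 50MB: 20s, 100MB: 40s. Even 100MB fits at full speed, so the
    # 100MB timeouts imply per-thread throughput sometimes drops below about
    # 8 Mbit/s, which is why smaller chunks ride out the slow spells.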
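
Finally, the backoff sketch for the rate-limit suspicion in post 22: Google's documented remedy for Drive rate-limit responses (403 rateLimitExceeded / 429) is exponential backoff with jitter. This is a generic sketch of that pattern with a hypothetical upload_chunk callable, not CloudDrive's actual retry logic.

    # Generic exponential backoff with jitter for rate-limited uploads.
    import random
    import time

    class RateLimitError(Exception):
        pass

    def upload_with_backoff(upload_chunk, max_retries=8):
        for attempt in range(max_retries):
            try:
                return upload_chunk()   # hypothetical upload callable
            except RateLimitError:
                # Sleep 2^attempt seconds plus up to 1s of random jitter.
                time.sleep(2 ** attempt + random.random())
        raise RuntimeError("still rate limited after %d tries" % max_retries)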