Posts posted by modplan

  1. Unless you've encrypted it, it is "mostly" raw drive data. 

     

    As for a recovery tool/procedure, I believe that this has been brought up before, but I'll flag this for Alex, as yes, I think we definitely do need something like that. 

     

    As for the MIME error specifically, was this drive created before the 1.0.0.444 version?

    If so, then the fix isn't actually applied to that specific drive (it's technically an architectural change, and we don't want to "brute force change" it for existing drives, as that can cause even more issues).

     

    If you are sure this drive was created with the 1.0.0.444 build or later, then please let me know, and enable logging if/when it happens again:

    http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection

     

     

    Was the drive created before the 1.0.0.444 build? 

     

    And if/when this happens again, could you enable logging:

    http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection

    Also, I've created a ticket for this already and marked it as critical:

    https://stablebit.com/Admin/IssueAnalysis/25923

     

    Yes, I think I posted in this thread right after the .444 build came out that I blew away the existing disk I was seeing this issue on and created a new encrypted disk; the next day I saw the issue again. Deleting the chunk fixes the issue immediately.

  2. Any update on deleting these chunks when this happens? I hit this on 3 chunks today. I deleted them manually and everything went back to normal. The updated chunks are still on the local disk, so having StableBit delete the chunk on Google Drive and re-upload the updated chunk when this error is hit shouldn't be an issue. Note: it appears a permanent delete is necessary (delete it from the Trash too); a rough sketch of doing that through the Drive API is at the end of this post.

     

    It's mainly an annoyance: my upload ground to a halt for 12 hours before I logged into my server, saw I was hitting this error in an infinite loop, and fixed it manually.

     

    Edit: I'm using .463 with encryption. The new version that appended the null byte, etc., seems to have reduced how often this occurs, but it definitely does not solve the problem completely.
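
    For anyone scripting the workaround above: here is a minimal sketch of permanently deleting a stuck chunk through the Google Drive API. This is only an illustration, not something StableBit provides; it assumes the google-api-python-client library, an already-authorized Drive v3 service object, and that you can identify the offending chunk's file name from the CloudDrive error or logs.

        # Illustration only: permanently delete a file by name, bypassing the Trash.
        # `service` is an authorized googleapiclient Drive v3 service object (assumption).
        def purge_chunk(service, chunk_name):
            resp = service.files().list(
                q="name = '%s'" % chunk_name,
                fields="files(id, name)").execute()
            for f in resp.get("files", []):
                # files().delete() removes the file permanently, so no Trash cleanup is needed.
                service.files().delete(fileId=f["id"]).execute()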

  3. I've seen this with large chunks, I think when existing data had changed, not with new data. 

     

    Here was my theory: chunk 10 has some data that has changed. To verify that, we do a partial read on chunk 10 at offset X. The data has indeed changed! So CloudDrive re-uploads chunk 10. Wait a sec, chunk 10 has more data that has changed! We do a partial read of chunk 10 at offset X+1, see the data is different, and upload chunk 10 AGAIN. Then we do a partial read of chunk 10 at X+2, upload chunk 10 AGAIN, and so on until every 1MB block in chunk 10 that has changed has been updated and chunk 10 has been uploaded over and over.

     

    If this is indeed the case, it would be wildly more efficient to calculate the new chunk 10 before uploading any data and upload it a single time (some rough numbers are at the end of this post).

     

    Windows Server Backup caused the above to happen over and over, to the point that in a whole 24 hours "To upload" would only drop by about 1-2 GB even though I had actually uploaded over 200 GB! For that drive I had to go back to 1 MB chunks, which solves the above issue. For my streaming media cloud drives the data never really changes, only new data is added, so I am able to still use 50 MB chunks and do not see the above issue.
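
    To put rough numbers on the theory above (illustrative only; the sizes are assumptions, and this says nothing about how CloudDrive is actually implemented):

        # Toy arithmetic for the re-upload theory above.
        chunk_size_mb = 50
        dirty_blocks  = 5    # changed 1 MB blocks within the chunk

        per_block_reupload = dirty_blocks * chunk_size_mb   # re-upload the whole chunk once per dirty block
        coalesced_upload   = 1 * chunk_size_mb              # apply all changes first, upload once

        print(per_block_reupload)   # 250 MB pushed for 5 MB of actual changes
        print(coalesced_upload)     # 50 MB pushed for the same 5 MB of changes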

  4. To be completely fair: the product is still in beta and is free (you only pay if you pre-order for the discount). They also technically support Amazon Cloud Drive, are using the official Amazon API, and are working with Amazon very diligently to make it work; Amazon just needs to fix their crummy service for it to work properly. If I were you, I'd be complaining to Amazon for not providing what was advertised rather than complaining here. Just my .02.

  5. Just to make sure: once you've deleted the chunk manually, it fixed the issue?

     

    If it comes back, then definitely let us know.

     

    And even if Google is reporting it as a binary file, they are likely reading it to auto-organize it, or at least to identify it for their backend (most providers use some sort of deduplication, or at least I'm guessing so, as this can save PBs of space in their data centers, saving on space, drives, cooling, etc.).

     

    Deleting absolutely resolved the issue. I was stuck on 1 thread, uploading that same chunk over and over and getting the MIME error every time. I found the chunk in Google Drive, deleted it, and deleted it from the Trash. Within a second or two the chunk uploaded successfully and I immediately went back to uploading at full speed on 10 threads.

  6. If that's the case, then yeah, there may not be much we can do to fix the issue. There are some optimizations we could do though, so it's worth looking into at least.

     

    If you could, please do this:

    http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection

     

    I have no problem with this happening since I understand it is likely a hardware limitation; I don't expect CloudDrive to perform miracles. :) Maybe you meant to quote steffenmand and ask him to collect logs?

  7. I had this issue and filed a ticket. They are looking at it. 

     

    Are you using encryption? I was not originally; I destroyed that drive, made a new one with encryption, and I have not seen this issue since (however, it has only been a day or two instead of over a week).

     

    My guess was that Google was categorizing the chunk as, say, a video, maybe because that chunk happened to contain the header of an actual video file I was uploading. Then, when a block in that chunk changed and CloudDrive tried to re-upload that chunk, Google Drive denied the re-upload since it saw the new chunk was no longer categorized as a video (no clue why Google would deny this). I theorized that if I used encryption, Google would never be able to categorize the chunk as a video and the problem might go away.

     

    This is all just a theory, but it has worked so far in the short time I've tried it.

  8. I see this, but only when the source of the files and the cache drive are the same physical drive. Basically, this causes the queue length on that drive to shoot way up, causing IOPS to suffer, and I assume the drive is just too slow for CloudDrive to read and upload at the same time. As soon as the transfer is finished and the queue length drops back to normal levels, uploads resume at normal speed.

     

    Probably nothing Cloud Drive can do to prevent this in this particular case.

  9. You're using 1.0.0.431 of StableBit CloudDrive, correct?

     

    Unfortunately, I'm not able to reproduce the drive creation issue here, so it may be something about your system. 

    Have you retried after a reboot?  And do you have any antivirus, backup or disk tools on the system? And could you do this: http://wiki.covecube.com/StableBit_DrivePool_Q2159701

     

    And worst case, enable logging and try to recreate the drive:

    http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection

     

     

     

     

     

    The issue specifically is that the drive emulates the sector size and "lies" to the OS. This is done for compatibility, as not everything supports the 4k sector size yet (obviously).

     

    However, StableBit CloudDrive doesn't emulate the sector size; it reports the specific size.

     

     

    Just in case, I've flagged the issue for Alex, to see if we can add the option to emulate the sector size for older systems (for ease).

     

    Thanks for the help. It appears that the problem is that the sector size is 1K on the logical drive I am trying to use as cache (a hardware RAID array), and that is preventing CloudDrive from writing 512b sectors when I try to create a cloud drive with that sector size (a quick way to check the reported sector sizes is sketched at the end of this post). I think I'm going to shelve this until we can figure out if Alex can emulate 512b sectors.

     

    I believe that even Windows Server 2012 forces 512b for backup, so this issue will affect anyone trying to use Windows Server Backup with a cloud drive who does not want to (or cannot, like me) create a drive with 512b sectors.

     

    Edit: Yes using .431
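
    For reference, a quick way to check what logical and physical sector sizes a volume reports (a rough sketch that shells out to the built-in fsutil tool; on older systems such as 2008 R2 the physical value may not be reported):

        # Illustration: print a volume's reported sector sizes via "fsutil fsinfo ntfsinfo".
        # Run from an elevated prompt; the drive letter is an example.
        import subprocess

        def sector_sizes(volume="C:"):
            out = subprocess.run(["fsutil", "fsinfo", "ntfsinfo", volume],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                # Lines of interest look like "Bytes Per Sector : 512" and
                # "Bytes Per Physical Sector : 4096" (the latter on newer systems only).
                if "Bytes Per" in line and "Sector" in line:
                    print(line.strip())

        sector_sizes("C:")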

  10. Yeah, this is definitely related to sector size.

     

    https://social.technet.microsoft.com/Forums/windowsserver/en-US/5d9e2f23-ee70-4d41-8bfc-c9c4068ee4e2/backup-fails-with-error-code-2155348010?forum=windowsbackup

     

    (note the reply from the MSFT Employee)

     

     

    You'd want to recreate the drive with the smaller sector size.

     

    However, what provider are you using here?

     

     

     

    As for the workaround, you may be able to do that, but the result may not be bootable. Still, it may work.

     

    Google Drive. Creating a 512b cloud drive fails during formatting, saying it cannot write to my cache drive (which has 2.5TB free), as noted above.

     

    Edit: It is worth noting that the USB drive I normally back up to also has 4K sectors (as most new drives do); however, its firmware successfully emulates 512b support, so backups succeed.

  11. What OS are you using specifically?

     

     

    And as for the cache, check what drive it's being created on, and make sure there is actually room.

     

     

    Also, keep in mind, you won't be able to do a bare metal restore with the CloudDrive (without some serious workarounds).

     

    Win 2k8 R2

     

    There is over 2.5TB of free space on the cache drive. 

     

    Won't be able to do a bare metal backup, or a bare metal restore? What prevents me from installing CloudDrive on another machine, cloning the backup to a new HDD, and using that to do a bare metal restore, assuming I had a catastrophic failure? Regardless, the restore point is moot when the backup does not even work.

  12. The internet says that this is because the disk uses 4K sectors and does not properly emulate 512b sectors. I tried creating a 512b cloud drive just for backups, but I couldn't even get that formatted; CloudDrive complained it could not write to my cache drive while I was trying to do the format.

     

    Any ideas?

  13. Trying to do a bare metal backup of just my C drive (120GB SSD) using Windows Server Backup. I already have this scheduled to run daily to an attached USB drive, but I would like something off-site.

     

    I go to "Backup Once" in WSB, select bare metal, pick my 10TB cloud drive as the destination (it has about 800GB of other data on it), and click start. It runs for about 30 seconds before giving me this error (from Server Manager):

     

    The backup operation that started at '2015-12-14T22:48:31.145840100Z' has failed with following error code '2155348010' (One of the backup files could not be created.). Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.

     

     

    This is slightly different from the error WSB actually shows, but it is mostly the same, and WSB won't let me copy the error message.

     

    Has anyone gotten this to work? It happens every time I try. 

     

    .431 beta

  14. I'm having to move my media back to NetDrive at the moment; the speeds offered are just not workable for a smooth stream.

    (I'm not a fan of NetDrive; browsing is much slower, it's less configurable, etc.) But my Plex playback is the most important thing for me, and that doesn't seem to be working as well as it should right now.

     

    I really hope this is fixed soon.

     

    Plex seems to be working great with Google Drive, FYI. I just archive old videos (full seasons of shows I have watched, etc.) to the cloud. I was using ExpanDrive and it was pretty horrible with scanning; if I accidentally scanned the cloud library, it would go for HOURS before crashing ExpanDrive. This is not a problem with CloudDrive because all the metadata is pinned locally, so scanning is just as fast as for local content. Sure, there is about a 20 second delay when starting to play a file, but after that it plays smoothly, and like I said, this is all old media I may or may not ever watch again that I just want to keep in my library for myself and others, "just in case."

  15. After a reboot I haven't experienced the complete timeout again.

     

    However, I still see the UI show 0 bits/s up and down while transferring large files to my virtual drive, and the Troubleshooting I/O Threads view shows nothing active at all. These only start after the full transfer finishes.

     

    After around 7.5 GB, the transfer speed goes from 140 MB/s down to somewhere between 0 bytes/s and 5-7 MB/s (it seems to go up from 0 bytes/s to 5-7 MB/s and then back down to 0 bytes/s in short intervals).

     

    Is it due to full encryption that the speed is this limited while transferring to my local drive? The disk with the cache has plenty of space, so it's not a space issue.

     

    200 GB takes around 9 hours to transfer to my virtual drive at this speed, before the uploading even starts (unless it is uploading but just not showing it in the UI).

     

    Version is .422

    Experimental Amazon Cloud Drive

    200 GB Cache

    50 MB chunks

    Full drive encryption

    987 GB of space left on the drive.

    Sounds to me like Amazon throttling you after sustained heavy upload...

  16. What's your upload speed? And how reliable is your connection?

     

    From my experience (on .419 at the time), there is around a ~100 s timeout at which the upload is aborted. That is what I saw with 100 MB blocks, at least. It would be nice to be able to define this (or have it scale with chunk size), but I then went to 50 MB blocks and all is good (some rough arithmetic on chunk size vs. timeout is sketched below).
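
    Some rough arithmetic on why chunk size matters here (the ~100 s figure is just the timeout I observed, not a documented limit, and the speeds are examples):

        # Can a chunk finish uploading before an assumed ~100 s timeout?
        TIMEOUT_S = 100                          # observed, not an official value

        def upload_seconds(chunk_mb, upload_mbps):
            return chunk_mb * 8 / upload_mbps    # MB -> Mbit, divided by Mbit/s

        for chunk_mb in (10, 50, 100):
            secs = upload_seconds(chunk_mb, upload_mbps=20)   # e.g. a 20 Mbps uplink
            print(chunk_mb, "MB chunk:", round(secs), "s")
        # 10 MB -> 4 s, 50 MB -> 20 s, 100 MB -> 40 s at the full line rate; actual
        # per-thread throughput is often far lower, which is what pushes the larger
        # chunks past the ~100 s mark.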

  17. Saw the new beta (.421) and thought I would play with it.

    I made an ACD drive with 10 MB chunks and literally couldn't upload a single block; it just always throws the error "http status conflict".

    The 1 MB chunks work fine. I've been using them for days with zero errors.

     

    Thoughts?

     

    *EDIT*

    Anything larger than 1 MB would throw these errors, but smaller blocks I was able to get into the cloud. Is it a potential timeout issue, where with these bigger blocks I am not beating the allowed time or something?

    If that is the case, why are timeouts happening on data that is actually still in transit?

     

    I saw this with 100 MB blocks on .419, but 50 MB blocks work fine for me. I do have 20 Mbps upload, though.

  18. I've currently settled on 50 MB chunk sizes as the best performing for me at the moment, but I have a question.

     

    If I create a 1 MB file on my drive, what gets uploaded? A 50 MB object that is 49 MB of zeroes, or do we cache until we see 50 MB of new data and then upload?

  19. Do we have any best-practice settings for Google Drive yet? I destroyed my old drive and started a new one with 10 MB chunks. I'm seeing much better performance now and fewer errors, but I don't want to upload a ton only to learn that using 100 MB chunks would perform much better. :)

     

    Mainly concerned with sequential reads for video file archiving and maximum upload speed.

     

    Happy Thanksgiving all!

  20. Very nice!

     

    As for the authorization issues, what version are you on exactly?

     

     

     

    This is for the Google Drive provider? 

    If so, if you're only seeing this in the service logs (which I suspect is the case), and not seeing it in the main UI, then it may be "harmless". Any errors when uploading and downloading are automatically retried. Only when they happen frequently or otherwise affect the provider are they an issue (most likely).

     

    As for grabbing the logs:

    http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection

     

    I go from 20 Mbps upload to 0 Mbps for a while, sometimes hours. When this occurs, CloudDrive asks me to reauthorize, but reauthorizing does not get it started again. During this time, other apps (Plex) that are uploading to Google Drive continue uploading fine. Displaying the I/O Threads at this time shows them all at 0 kbps, but they are being created and closed. It's very weird, like I am tripping some sort of Google rate limit (a generic retry/backoff sketch for that case is at the end of this post). Is there a way to figure that out from the service logs? If not, I assume I need to enable disk tracing and get you those logs?

     

    I'm starting with an 850GB set of files I'm trying to upload to test the viability of this product, but so far I haven't gotten very much uploaded due to the long windows of upload hanging.
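
    For what it's worth, rate limiting from Google Drive generally surfaces as HTTP 403/429 responses, and the usual client-side answer is exponential backoff with jitter. A generic sketch of that pattern (illustrative only; do_upload and RateLimited are hypothetical stand-ins, not CloudDrive or Google API calls):

        # Generic retry-with-exponential-backoff sketch for rate-limited uploads.
        import random
        import time

        class RateLimited(Exception):
            """Hypothetical error raised when the server answers 403/429."""

        def upload_with_backoff(do_upload, max_tries=6):
            for attempt in range(max_tries):
                try:
                    return do_upload()
                except RateLimited:
                    # Sleep 1, 2, 4, 8... seconds plus jitter, then retry.
                    time.sleep(2 ** attempt + random.random())
            raise RuntimeError("still rate limited after %d tries" % max_tries)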

  21. WOOHOO

     

    I've been testing for a few hours, seeing pretty slow speeds and some timeouts as well. Is there anything that can be done to increase the speed? I was really hoping CloudDrive could max out my upload with the multiple threads. Or is Google doing some throttling?

  22. The plan is to release the next beta build of StableBit CloudDrive soon. There are a huge number of fixes and optimizations in it (including to the authorization system).  

     

    Once that's out, then we plan to add more of the providers.

     

    Patiently (anxiously) waiting  :D
