
Uploading same chunk multiple times


thnz

Question

I copied a large amount of data to the cloud drive last night, and it's still uploading today. I just had a look at the I/O Threads window and it appears to be uploading the same chunk over and over again, before moving on to the next one.

 

Is this expected behavior under any circumstance? I've caught it happening several times today. Nothing has been written to the drive since last night.

 

Here's what I mean:

http://webmshare.com/play/Gy33A


15 answers to this question

Recommended Posts


It was on .463, and that behavior carried on for just under a day. However, after installing .470 this evening and restarting, it went back to uploading as normal. I'm unsure if it was the update or the restart that fixed it, but it's been working as expected ever since.


From the video posted, I don't see an issue. The ID is the chunk's ID, and it looks like it's progressing normally.

 

However, if you were getting "MIME type errors", that was fixed in the newest beta builds (we download the errant chunk and reupload it).

 

If it persists in the 1.0.0.470 build, then please do let us know.


Ah, you're right. I completely overlooked that.

 

 

It's hard to tell from the video specifically, but some features, such as upload verification, will redownload the file and check it. That doesn't look like what is going on here, though.

 

It looks like the upload was failing outright and then being re-attempted. There are a number of reasons this could happen, such as the upload request erroring out or the provider blocking it.
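For illustration only, retry-on-failure logic along those lines might look like the following minimal sketch. `provider.upload()` is a hypothetical stand-in for whatever the real provider client does, not StableBit's actual code:

```python
import time

def upload_with_retry(provider, chunk_id: int, data: bytes,
                      max_attempts: int = 5, base_delay: float = 2.0) -> None:
    """Re-attempt a chunk upload that fails outright or is rejected by the provider."""
    for attempt in range(1, max_attempts + 1):
        try:
            provider.upload(chunk_id, data)   # hypothetical provider call
            return                            # success: stop retrying
        except Exception:
            if attempt == max_attempts:
                raise                         # give up after the last attempt
            time.sleep(base_delay * attempt)  # simple linear backoff before re-attempting
```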

 

However, if it's completing, then it should be fine. 

 

 

If it does continue and causes issues, then collecting logs as described here would help us identify the specific issue:

http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection


FWIW, nothing out of the ordinary was showing in the log file, and nothing that coincided with it occurring. Just the occasional 'throttling' message, but that appears normally. I did enable disk tracing while it was happening, though AFAIK there were no actual reads or writes during the minute or so I had it enabled - it was just uploading previous writes. I'll zip it up and submit it anyway.

 

From memory, this is what happened:

1. Copied several GBs of files onto the cloud disk via a Windows share.
2. Cancelled the copy prior to completion, and deleted the files that had been copied (via the Windows share).
3. Restarted the computer immediately after the deletion (a graceful restart from the Start menu). The computer took an abnormally long time to restart, but there was no crashing or anything.
4. Redid the copy.

I think this is when it started happening, as over the next 18 hours or so 'to upload' had only gone down by a few GB.

 

I don't know if any of this is relevant or not. I haven't tried reproducing it.

 

It lasted until I updated to .470 and restarted yesterday evening (2016-02-24 09:13:02Z in the logfile), after which uploading resumed at the normal rate.


Please do upload the logs. They may give a better indication of what was going on (some of the logging is low-level and not shown in the text logs).

 

 

As for the long restart time, that can be normal in some cases. We have to flush file system data properly before shutting down, and this is much more complicated than for a normal disk, so we halt the shutdown process until that has completed. Otherwise, it could corrupt the file system.


It's doing it again right now, shortly after a restart. The new UI shows a bit more detail. I'll leave drive tracing on for five minutes or so and submit logs again.

 

 

What provider is this for, specifically? 

 

And what version of StableBit CloudDrive are you using (you said 1.0.0.470), and do you know what version you created the drive with? 


Amazon Cloud Drive. Currently .479. I'm unsure what version the drive was made with - the timestamp on the folder is Feb 17 2016. At a guess .460, as I think that was when I started trying it out again.

 

It seemed to come right again around the time I enabled disk tracing, and as far as I can tell it has been uploading normally since.


Okay, if it was Amazon Cloud Drive, it may be that the upload verification was the issue. 

 

Specifically, it uploads the file, redownloads it, and checks the hash for the file. If this fails, it will reupload it. 

 

Alternatively, if the upload outright failed, it would retry.
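As a rough sketch of that upload-then-verify loop (with a hypothetical `provider` client and a SHA-256 hash as illustrative assumptions, not CloudDrive's actual internals):

```python
import hashlib

def upload_and_verify(provider, chunk_id: int, data: bytes, max_attempts: int = 3) -> None:
    """Upload a chunk, redownload it, and reupload if the hashes don't match."""
    expected = hashlib.sha256(data).hexdigest()
    for _ in range(max_attempts):
        provider.upload(chunk_id, data)        # hypothetical provider call
        stored = provider.download(chunk_id)   # read back what the provider actually stored
        if hashlib.sha256(stored).hexdigest() == expected:
            return                             # verification passed
        # hash mismatch: the stored copy is bad, so loop and reupload it
    raise RuntimeError("chunk %d failed verification after %d attempts" % (chunk_id, max_attempts))
```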

 

The 1.0.0.479 version shows much more detail about what is going on. In fact, you can pause the detail window and mouse over the file, and it should report exactly what is happening.

 

 

Flagged: https://stablebit.com/Admin/IssueAnalysis/25935


The attachment to post #9 shows what it looks like in the more detailed I/O window in .479. In that case chunk 320 was repeatedly uploading.

 

 

Well, Alex posted a reply. There isn't a "fix" yet, but it's on his mind. 

 

That said, it may be that the specific chunk in question was being repeatedly updated, causing it to be uploaded again and again. 



Hey Chris,

 

I've seen this issue as well, normally with lower chunks (often chunk #55 or #64 in my case). It appears to me that Cloud Drive logs the updates to that chunk and uploads each individual update separately, instead of coalescing all of the updates together and uploading the chunk once.

Note that I see this after setting "upload threads" to 0 and then setting them back to the normal value after my copy/change/write is done, so the chunk is definitely not being continuously updated during the upload. I would assume all updates to the chunk would then be coalesced into a single upload, instead of the chunk being uploaded over and over for each previous update.

Just wanted to throw in my .02: I see this when updating the archive attribute on files with backup software. That attribute likely exists for each file in the MFT, which is stored on these blocks that keep getting uploaded (64, 55, whatever).

 

Maybe that helps shed some light? My upload is fast, so it is easy to ignore for me, but it does waste time.
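To illustrate the coalescing behaviour being suggested here (a toy model only, with a hypothetical `provider.upload()`; this is not how CloudDrive is actually implemented), a later write to a chunk would replace the queued copy rather than queue another upload of the same chunk:

```python
from collections import OrderedDict

class CoalescingUploadQueue:
    """Toy model: each dirty chunk is uploaded once, with its latest contents."""

    def __init__(self):
        self._dirty = OrderedDict()   # chunk_id -> latest chunk contents

    def write(self, chunk_id: int, data: bytes) -> None:
        # Coalesce: a later write to the same chunk replaces the queued copy
        # instead of queueing a second upload of that chunk.
        self._dirty[chunk_id] = data
        self._dirty.move_to_end(chunk_id)

    def drain(self, provider) -> None:
        # When upload threads are re-enabled, upload each dirty chunk exactly once.
        while self._dirty:
            chunk_id, data = self._dirty.popitem(last=False)
            provider.upload(chunk_id, data)   # hypothetical provider call
```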

 

 

EDIT: A good way to repro this: take a cloud drive with many files and change some NTFS attributes on some or all of the files with upload threads set to 0. Turn the upload threads back on and I bet you will see a single chunk, or several chunks, upload over and over again.
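As a concrete (hypothetical) way to flip attributes in bulk for that repro - the function name and path are placeholders, and this is just one way to toggle the archive bit on Windows:

```python
import ctypes
from ctypes import wintypes
import os

FILE_ATTRIBUTE_ARCHIVE = 0x20
INVALID_FILE_ATTRIBUTES = 0xFFFFFFFF

kernel32 = ctypes.windll.kernel32
kernel32.GetFileAttributesW.restype = wintypes.DWORD

def toggle_archive_bit(root: str) -> None:
    """Flip the archive attribute on every file under `root` (Windows only)."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            attrs = kernel32.GetFileAttributesW(path)
            if attrs == INVALID_FILE_ATTRIBUTES:
                continue                       # skip files we can't query
            kernel32.SetFileAttributesW(path, attrs ^ FILE_ATTRIBUTE_ARCHIVE)

# With upload threads set to 0, run something like:
# toggle_archive_bit(r"X:\folder-on-the-cloud-drive")   # hypothetical path
# then re-enable upload threads and watch the I/O window.
```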

 

EDIT2: I work in SAN development for an enterprise SAN provider. I have no insight into how Alex's Windows driver works, but feel free to PM me for more info. I have a faint hunch as to what is going on here from a SCSI perspective, and I'm more than willing to help root cause this and get to the bottom of it (along with any other issues I can help with). But I doubt you guys need my help if you can repro it  :D



Well, thanks for the info! :)

 

And yeah, it sounds like this is what we think the issue was, or at least part of it. What you've said pretty much confirms what Alex suspected, so that's good, and it gives us a good place to start. :)

 

 

And I've let Alex know about your offer to help, just in case. 
