
Checksum Mismatch


Kuno

Question

Last night while continuing to upload my stuff to ACD, I apparently had just a few errors.

 

 

Error: Checksum mismatch.  Data read from the provider has been corrupted.  

 

I have data verification turned on, for obvious reasons. I also have a 100/50 internet connection. It does appear as if Amazon is not playing nice, as I am also dealing with a lot of aborted threads from them.

 

I really enjoy your product and I can't wait until I learn how to use drive pool with all the drives I have.  

 

Oh, and I only have it connected to one computer, nothing else. But I also have NetDrive installed, with ACD mapped to another drive. Could that be causing an issue, even though StableBit CloudDrive creates its own directory on ACD?

[Attached screenshot: post-2665-0-85199600-1475252610_thumb.png]

Edited by Kuno

8 answers to this question


Checked my logs and it seems Amazon are being dicks ATM.

 

1:15:53.4: Warning: 0 : [WholeChunkIoImplementation] Error on read when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
1:15:53.5: Warning: 0 : [ioManager:43] Error performing I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
1:16:12.8: Warning: 0 : [ChecksumBlocksChunkIoImplementation:43] Checksum mismatch for chunk 2299, ChunkOffset=0x02000800, ReadChecksum=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, ComputedChecksum=0x3a92ab3c91b0297e233a5b48d43e39c820f5409a8b9e4d86ecd8ddf6e825fd1342101fff88f77333926f946a74a2b63296df0cf9ed21b74b07f20f13e90dbe23.
1:16:13.9: Warning: 0 : [WholeChunkIoImplementation] Error on read when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
1:16:13.9: Warning: 0 : [ioManager:43] Error performing I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
1:16:33.1: Warning: 0 : [ChecksumBlocksChunkIoImplementation:43] Checksum mismatch for chunk 2299, ChunkOffset=0x02000800, ReadChecksum=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, ComputedChecksum=0x3a92ab3c91b0297e233a5b48d43e39c820f5409a8b9e4d86ecd8ddf6e825fd1342101fff88f77333926f946a74a2b63296df0cf9ed21b74b07f20f13e90dbe23.
1:16:34.3: Warning: 0 : [WholeChunkIoImplementation] Error on read when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
1:16:34.3: Warning: 0 : [ioManager:43] Error performing I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
1:16:43.7: Warning: 0 : Unable to read MFT. File record invalid.
1:16:43.7: Warning: 0 : Unable to read MFTMirr. File record invalid.
1:16:43.7: Warning: 0 : [PinDiskMetadata] Error pinning NTFS data. Unable to read MFTMirr. File record invalid.
1:19:23.0: Warning: 0 : [ioManager:43] Thread abort performing I/O operation on provider.
1:19:23.0: Warning: 0 : [ioManager:43] Error writing range: Offset=241,070,112,768. Length=102,367,232. Thread was being aborted..
1:19:54.3: Warning: 0 : [ChecksumBlocksChunkIoImplementation:43] Checksum mismatch for chunk 2299, ChunkOffset=0x02000800, ReadChecksum=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, ComputedChecksum=0x3a92ab3c91b0297e233a5b48d43e39c820f5409a8b9e4d86ecd8ddf6e825fd1342101fff88f77333926f946a74a2b63296df0cf9ed21b74b07f20f13e90dbe23.
1:19:55.3: Warning: 0 : [WholeChunkIoImplementation] Error on read when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
1:19:55.3: Warning: 0 : [ioManager:43] Error performing I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
1:20:14.6: Warning: 0 : [ChecksumBlocksChunkIoImplementation:43] Checksum mismatch for chunk 2299, ChunkOffset=0x02000800, ReadChecksum=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, ComputedChecksum=0x3a92ab3c91b0297e233a5b48d43e39c820f5409a8b9e4d86ecd8ddf6e825fd1342101fff88f77333926f946a74a2b63296df0cf9ed21b74b07f20f13e90dbe23.
1:20:15.7: Warning: 0 : [WholeChunkIoImplementation] Error on read when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
1:20:15.8: Warning: 0 : [ioManager:43] Error performing I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
1:20:40.5: Warning: 0 : [ChecksumBlocksChunkIoImplementation:43] Checksum mismatch for chunk 2299, ChunkOffset=0x02000800, ReadChecksum=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, ComputedChecksum=0x3a92ab3c91b0297e233a5b48d43e39c820f5409a8b9e4d86ecd8ddf6e825fd1342101fff88f77333926f946a74a2b63296df0cf9ed21b74b07f20f13e90dbe23.
1:20:41.7: Warning: 0 : [WholeChunkIoImplementation] Error on read when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
1:20:41.7: Warning: 0 : [ioManager:43] Error performing I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
1:24:53.6: Warning: 0 : [ioManager:43] Thread abort performing I/O operation on provider.
1:24:53.6: Warning: 0 : [ioManager:43] Error writing range: Offset=241,070,112,768. Length=102,367,232. Thread was being aborted..
1:30:24.3: Warning: 0 : [ioManager:43] Thread abort performing I/O operation on provider.
1:30:24.3: Warning: 0 : [ioManager:43] Error writing range: Offset=241,070,112,768. Length=102,367,232. Thread was being aborted..
1:35:55.0: Warning: 0 : [ioManager:43] Thread abort performing I/O operation on provider.
1:35:55.0: Warning: 0 : [ioManager:43] Error writing range: Offset=241,070,112,768. Length=102,367,232. Thread was being aborted..
1:36:35.2: Information: 0 : Comm server stopped on: tcp://127.0.0.1:26525/Comm1
1:36:35.2: Warning: 0 : [RemoteControlKeepAliveListener] Error receiving. A blocking operation was interrupted by a call to WSACancelBlockingCall
1:37:50.2: Warning: 0 : [ChecksumBlocksChunkIoImplementation:43] Checksum mismatch for chunk 2299, ChunkOffset=0x02000800, ReadChecksum=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, ComputedChecksum=0x3a92ab3c91b0297e233a5b48d43e39c820f5409a8b9e4d86ecd8ddf6e825fd1342101fff88f77333926f946a74a2b63296df0cf9ed21b74b07f20f13e90dbe23.
1:37:51.3: Warning: 0 : [WholeChunkIoImplementation] Error on read when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
1:37:51.3: Warning: 0 : [ioManager:43] Error performing I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
 
Thus far my uploads have been aborted by Amazon 108 times.


 

 

I don't really have good news here. :(

 

"Checksum mismatch" is exactly what it sounds like. It means that the data on the PROVIDER has been corrupted/damaged/altered. The chunk is no longer valid, and any data in it may be scrambled. This is essentially the cloud equivalent of a bad block on your hard drive.
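For illustration: the ComputedChecksum in the log above is 128 hex digits, i.e. a 64-byte digest (the same length as a SHA-512 hash). CloudDrive's actual algorithm and code aren't shown here, so the hash choice and function name below are assumptions, but the verification step amounts to this sketch:

```python
import hashlib

def verify_chunk(chunk_data: bytes, stored_checksum: bytes) -> bool:
    """Recompute the chunk's digest and compare it against the checksum
    stored for that chunk. A mismatch means the provider returned data
    that differs from what was originally uploaded."""
    return hashlib.sha512(chunk_data).digest() == stored_checksum

data = b"example chunk payload"
good = hashlib.sha512(data).digest()
assert verify_chunk(data, good)           # intact chunk verifies
assert not verify_chunk(data, bytes(64))  # an all-zero checksum, as in
                                          # the log above, can never match
```

Note that the ReadChecksum in the log is all zeros, which can never match real data, so every read of that chunk fails verification.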

 

You *can* ignore the mismatch, but doing so may compromise your data.

 

 

 

If this drive was present during the outage a few months ago, it may have been damaged then; a lot of people experienced issues during that time, and you may just now be seeing the effects.


Thank you so much for your responses. 

 

So, dumb question: does that mean that because I have data verification on, I don't have to worry about this? Data verification will re-upload the chunks that did not match the checksum and keep trying until they do match, correct?

Or am I being over-optimistic?

 

Awesome product I am so glad I bought it.


Yes and no. 

 

Upload verification checks the chunk at the time that it's uploaded, to make sure that it uploaded correctly and matches the checksum at that point in time. 

 

This is more bandwidth-intensive, but it means that the data is always checked as it's put into the cloud.

 

However, things can still happen and corrupt the data. The data verification/checksumming is meant to identify that, so that you don't get silently corrupted data.

It just verifies the data, nothing more. It is possible that we could add ECC (e.g., parity) to it, but that would be much more resource intensive and would generally require larger chunks.
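As a rough illustration of the upload-verification round trip described above; the `provider` object, method names, and retry count are all hypothetical, not CloudDrive's actual code:

```python
import hashlib

class MemoryProvider:
    """Stand-in for a cloud provider; just stores chunks in a dict."""
    def __init__(self):
        self.chunks = {}
    def write(self, chunk_id, data):
        self.chunks[chunk_id] = data
    def read(self, chunk_id):
        return self.chunks[chunk_id]

def upload_with_verification(provider, chunk_id, data, max_retries=3):
    """Upload a chunk, read it back, and compare digests.
    Retries the whole upload if the round trip doesn't verify."""
    expected = hashlib.sha512(data).digest()
    for _ in range(max_retries):
        provider.write(chunk_id, data)
        echoed = provider.read(chunk_id)
        if hashlib.sha512(echoed).digest() == expected:
            return True   # verified: what the provider holds matches
    return False          # gave up: provider kept returning bad data

assert upload_with_verification(MemoryProvider(), 2299, b"chunk payload")
```

The read-back step is why upload verification costs roughly double the bandwidth: every chunk is downloaded once after being uploaded.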

 

To be blunt, consumer providers are going to be more prone to this corruption than things like Amazon Web Services, Microsoft Azure, and other enterprise solutions.  

 

 

 

If you want, Alex talks about how the "chunking" system works, especially in regard to larger chunks, here:

http://community.covecube.com/index.php?/topic/1622-large-chunks-and-the-io-manager/

 

It may be worth reading, as it may give you a good idea of how the product works and is designed. 


Would verification be more efficient if it used server-side hashing to ensure file consistency? It might be a good halfway point between just assuming a chunk uploaded successfully and having to re-download it in order to verify. For instance, it looks like Amazon Drive can return a node's/file's MD5 hash (assuming I'm reading the documentation correctly).


I think that's new, actually.

 

But the problem is partial reads and writes. The API doesn't support a checksum for the parts, just for the whole file, and that doesn't really work.

 

It's been that way since at least May of last year: https://web.archive.org/web/20150512201810/https://developer.amazon.com/public/apis/experience/cloud-drive/content/nodes

 

Amazon doesn't support partial writes anyway, so CloudDrive will be uploading the entire chunk regardless. It would be a lot more efficient to simply compare MD5 hashes once the upload completes rather than downloading the chunk again for verification.
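The idea of hash-based verification is simple: hash the bytes locally, then compare against the digest the provider reports for the stored object instead of re-downloading it. A minimal sketch (the claim that Amazon Drive returns an MD5 comes from the poster's reading of the docs; the function below is purely illustrative):

```python
import hashlib

def md5_matches(uploaded_bytes: bytes, provider_md5_hex: str) -> bool:
    """Compare the MD5 of the bytes we uploaded against the hex digest
    the provider reports for the stored object. No re-download needed."""
    return hashlib.md5(uploaded_bytes).hexdigest() == provider_md5_hex.lower()

data = b"whole chunk, uploaded as one object"
reported = hashlib.md5(data).hexdigest()  # simulate the provider's metadata
assert md5_matches(data, reported)
```

This only works when the whole object is written in one piece, which is exactly the situation described above, since partial writes aren't supported anyway.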


Well, I had to stop using ACD as it got stuck on checksum mismatch errors. I had 100 MB left to write, and the stupid ACD kept giving me CRC errors. I had over 500 attempts to write my final blocks of data.

 

I guess I'll just wait patiently until Amazon gets their head out of their asses and officially supports this awesome product.

