
I/O Synchronization: Cloud and Local Disk - CloudDrive Amazon S3


mbCcuStl3bt

Question

Hello,

 

System: 

Windows 10 

Intel i7-4770

 

CloudDrive version: 1.0.0.417 BETA 64-bit

 

Background/Issue

I created a local CloudDrive pointing to Amazon S3 and chose full encryption. The volume/bucket was created successfully on S3, as were the formatting and drive-letter assignment for my local drive. I could see the METADATA GUID, and a few chunk files were created in the S3 bucket during the process.

 

Next, I uploaded one file and noticed that additional "chunk" files were created in the bucket. After the StableBit GUI status indicated that all file transfers were complete, I deleted the file from my local CloudDrive. After a while, I refreshed the S3 bucket and saw that all the chunks were still there, including the new ones created when I transferred the file.

 

Here are my newb questions:

1. Am I correct in stating that when I delete a file from my local StableBit CloudDrive pointing to Amazon S3, it should remove the applicable "chunks" from Amazon S3?

2. How can I be sure that when I delete a large file from my local CloudDrive, the applicable S3 bucket's storage size decreases accordingly?

3. Am I off base to think that the two file systems should stay in sync? During my tests, and after waiting quite some time, the S3 bucket never decreased in size even though I deleted the large file(s). That means Amazon boosts its bottom line at my expense.

 

Thanks, and I searched for this but could not find any discussion on point.


4 answers to this question



Ah, okay, I think your post here was a bit clearer.

 

As for the chunks not changing once you deleted the file... that's more to do with how NTFS works. In most cases, when you delete a file, the actual contents don't get altered until the sectors are reused or zeroed out; only the entry in the file system's allocation table (the MFT, on NTFS) is removed. That's how data recovery tools are able to rebuild files: by reading the raw contents of the disk and piecing them back together.
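
To make that concrete, here's a tiny Python sketch (purely illustrative, not CloudDrive's actual implementation) of a chunk-backed drive. Deleting a file only drops the allocation entry; every chunk that backed the file is still sitting in the "bucket" afterwards:

```python
# Minimal sketch (not CloudDrive's actual code) of a chunk-backed drive.
# Deleting a file only removes the allocation entry, so the chunks that
# held its data keep their old bytes until they are reused or zeroed.

CHUNK_SIZE = 4  # unrealistically small, just for illustration

chunks = {}        # chunk index -> bytes (stands in for the S3 bucket)
alloc_table = {}   # filename -> chunk indices (stands in for the MFT)

def write_file(name: str, data: bytes) -> None:
    """Split data into chunks and record which chunks the file uses."""
    indices = []
    for offset in range(0, len(data), CHUNK_SIZE):
        idx = len(chunks)
        chunks[idx] = data[offset:offset + CHUNK_SIZE]
        indices.append(idx)
    alloc_table[name] = indices

def delete_file(name: str) -> None:
    """Like an NTFS delete: drop the allocation entry, touch no chunks."""
    del alloc_table[name]

write_file("movie.mkv", b"0123456789abcdef")
print(len(chunks), "chunks in the bucket")   # 4
delete_file("movie.mkv")
print(len(chunks), "chunks in the bucket")   # still 4 -- nothing shrank
```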

 

I suppose we could be aggressive about zeroing out unused space, but the issue with that is that it may significantly increase the bandwidth used just to do so. I've flagged this for Alex (the developer) for consideration.

However, using utilities that support zeroing out free space (e.g., tools intended for thin-provisioned disks, or the like) may be a good stop-gap measure for the time being; a rough sketch of the idea follows.
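
One generic way to do this by hand is to fill the drive's free space with a zero-filled file and then delete it. This is a general-purpose technique, not a CloudDrive feature; the X: drive letter below is a placeholder for the CloudDrive volume. Be aware that on a CloudDrive, every chunk this touches has to be re-uploaded, which is exactly the bandwidth cost mentioned above.

```python
# Fill free space with zeros, then release it.
# Caution: this temporarily fills the target disk.
import os

ZERO_BLOCK = b"\x00" * (1024 * 1024)  # write zeros 1 MiB at a time
path = r"X:\zero_fill.tmp"            # X: is a placeholder drive letter

try:
    # buffering=0 so each write hits the disk and "disk full" surfaces promptly
    with open(path, "wb", buffering=0) as f:
        while True:
            f.write(ZERO_BLOCK)       # keep writing until the disk is full
except OSError:
    pass                              # ENOSPC ("disk full") ends the loop
finally:
    if os.path.exists(path):
        os.remove(path)               # delete the file to free the space again
```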



Drashna,

 

Thanks for the reply.  You can ignore the updated message I just sent via support.  

 

I guess I do have concerns about the cloud storage becoming too large. In the end, if I pay for XXGB total and CloudDrive overwrites the old sectors, then it should not be a factor. Providers like S3 may still be a concern, though, since I believe you get charged each month based on the average storage size in a bucket.

 

Thanks again for following up and making a great product.



Well, I'll still respond to it. :)

 

And yes, Amazon S3 and similar enterprise providers charge for the amount stored as well as the amount downloaded. So after a while, this could become a problem (assuming I'm not missing something).

 

I've actually detailed this here:

http://community.covecube.com/index.php?/topic/1268-cloud-providers/&do=findComment&comment=8445

And S3 charges you roughly $93 per TB per month. So yes, this can add up.
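
For a rough sense of scale, here's the arithmetic with that quoted rate (actual S3 pricing varies by region and storage class, so treat the $93/TB figure as illustrative only):

```python
# Back-of-the-envelope cost of chunks that are never freed, using the
# ~$93/TB/month figure quoted above (illustrative; real S3 pricing varies
# by region and storage class).
RATE_PER_TB_MONTH = 93.0

def monthly_cost(stored_tb: float) -> float:
    return stored_tb * RATE_PER_TB_MONTH

# Hypothetical example: 500 GB of deleted files still occupying chunks.
orphaned_tb = 0.5
print(f"${monthly_cost(orphaned_tb):.2f}/month")      # $46.50/month
print(f"${monthly_cost(orphaned_tb) * 12:.2f}/year")  # $558.00/year
```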
