Covecube Inc.

Tell

Members
  • Content Count

    17
  • Joined

  • Last visited

  1. I’d like to suggest that Covecube revisits Jottacloud as a possible provider for StableBit CloudDrive. Jottacloud is a Norwegian cloud sync/cloud backup provider with an unlimited storage plan for individuals and a very strong focus on privacy and data protection. The provider has been evaluated before, but I’m not sure what came of it (other than that support wasn’t added). Jottacloud has no officially documented API, but they have stated officially that they don’t oppose direct API access by end users or applications, as long as the user is aware that it isn’t supported. There is also a fairly successful implementation in jottalib, which I believe also has FUSE support.
  2. I’d like to chip in that this is no longer the case. JottaCloud now limits their storage and bandwidth usage to “normal individual usage”. Specifically:
  3. Thank you for your input! Unfortunately, the ACD software is not related to the issue being discussed in this thread. This thread details a problem that appears when an amount of data greater than the size of the drive the cache resides on is written (copied) to the CloudDrive drive.
  4. After some testing, I can confirm that the .777 build does NOT resolve this issue. I understand that the issue is caused by how NTFS handles sparse files, but why is it considered so "rare"? Is it only a small subset of users that experience this bug? Doesn't everybody see this happen when they copy more data to the CloudDrive drive than the size of the cache? For me, this makes CloudDrive incredibly difficult to use – I'm trying to upload about 8 TB of data, and given a 480 GB SSD as a cache drive, you can do the math on how many reboots I have to do, how many times I have to re-start the copy, and how often I need to check the entire CloudDrive for data consistency (as CloudDrive crashes when the cache drive is full). The reserved space is released when the CloudDrive service is restarted, which indicates that the deallocated blocks are freed when CloudDrive closes its file handles. To me, it seems like a piece of cake to just have CloudDrive release and re-attach its cache file handles on a regular basis – for example, after every 25% of cache size written, and/or when the cache drive nears full or writes begin to get throttled.
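The "do the math" above can be spelled out. A quick sketch using the figures quoted in the post (8 TB to upload, a 480 GB cache drive, and the assumption that each cache-drive's worth of writes forces one reboot):

```python
import math

upload_total = 8 * 10**12    # 8 TB of data to upload (decimal units)
cache_drive = 480 * 10**9    # 480 GB SSD hosting the cache

# Roughly every time a cache-drive's worth of data has been written,
# the drive fills up and a reboot is needed before the copy can continue.
restarts = math.ceil(upload_total / cache_drive)
print(restarts)  # → 17
```

So even in the best case, that's around seventeen reboot-and-resume cycles for a single upload.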
  5. Actually, there isn't. Their unlimited plan is limited at 10 TB. See https://www.jottacloud.com/terms-and-conditions/ section 15, second paragraph.
  6. I'm currently testing to see if the .777 build has resolved this issue.
  7. Turns out running…

     net stop CloudDriveService
     net start CloudDriveService

     …does actually resolve the issue and release the reserved space on the drive (which, in effect, is what a reboot accomplished), allowing me to continue adding data. It is clear to me that the issue is resolved when CloudDrive releases/flushes these sparse files. The issue does not reappear until after I have added data again. While the problem might be NTFS-related, I would claim that it would be relatively simple to mitigate by having CloudDrive release these file handles from time to time, so that the file system can catch up on how much space is actually occupied by the sparse files. It makes sense to me that Windows, to improve performance, might not recompute free space for sparse files until after the file handles are released – after all, free disk space is a metric that mostly needs to be accurate enough to prevent overfill, not the other way around. TL;DR: The problem is solved by restarting CloudDriveService, which flushes something to disk. CloudDrive should do this on its own.
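The suggested mitigation – cycling the handle every so many bytes written – can be illustrated with a small sketch. This is not CloudDrive's actual code; the function name and the 4 MiB threshold are invented for the example, and the point is only the pattern of closing and reopening a long-lived write handle so the file system can settle its space accounting:

```python
import os
import tempfile

REOPEN_EVERY = 4 * 1024 * 1024  # hypothetical threshold: cycle the handle after 4 MiB written


def write_with_periodic_reopen(path, chunks):
    """Append chunks to `path`, releasing and re-acquiring the file handle
    whenever REOPEN_EVERY bytes have gone out since the last reopen."""
    written_since_reopen = 0
    f = open(path, "ab")
    try:
        for chunk in chunks:
            f.write(chunk)
            written_since_reopen += len(chunk)
            if written_since_reopen >= REOPEN_EVERY:
                f.close()             # closing lets the FS reconcile allocation
                f = open(path, "ab")  # ...then continue appending where we left off
                written_since_reopen = 0
    finally:
        f.close()


# Usage: write 10 MiB in 1 MiB chunks; the handle gets cycled twice along the way.
path = os.path.join(tempfile.mkdtemp(), "cache.bin")
write_with_periodic_reopen(path, (b"\0" * (1024 * 1024) for _ in range(10)))
print(os.path.getsize(path))  # → 10485760
```

Whether the periodic close is enough on NTFS specifically is exactly what the service restart experiment suggests, since restarting the service closes every cache handle at once.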
  8. Drashna and friends, I'm still experiencing the issue, which prevents me from uploading more than the size of my cache drive (480 GB) before I have to reboot the computer. With the drive hosting nothing but the CloudDrive cache, fsutil gives:

     C:\Windows\system32>fsutil fsinfo ntfsinfo s:
     NTFS Volume Serial Number :       0x________________
     NTFS Version   :                  3.1
     LFS Version    :                  2.0
     Number Sectors :                  0x0000000037e027ff
     Total Clusters :                  0x0000000006fc04ff
     Free Clusters  :                  0x0000000006fb70c3
     Total Reserved :                  0x0000000006e6d810
     Bytes Per Sector  :               512
     Bytes Per Physical Sector :       512
     Bytes Per Cluster :               4096
     Bytes Per FileRecord Segment    : 1024
     Clusters Per FileRecord Segment : 0
     Mft Valid Data Length :           0x0000000000240000
     Mft Start Lcn  :                  0x00000000000c0000
     Mft2 Start Lcn :                  0x0000000000000002
     Mft Zone Start :                  0x00000000000c00c0
     Mft Zone End   :                  0x00000000000cc8c0

     Since a reboot solves the issue, could it be that CloudDrive needs to release the write handles on the (sparse) files so that the drive manager will let go of the reservation? Did https://stablebit.com/Admin/IssueAnalysis/27122 uncover anything? Best, Tell
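Decoding the cluster counts in that fsutil output (4096-byte clusters, values copied verbatim) shows just how severe the reservation is:

```python
# Values taken directly from the fsutil output in the post above.
bytes_per_cluster = 4096
total_clusters = 0x0000000006fc04ff
free_clusters = 0x0000000006fb70c3
total_reserved = 0x0000000006e6d810

GB = 10**9
print(round(total_clusters * bytes_per_cluster / GB))   # → 480  (whole drive)
print(round(free_clusters * bytes_per_cluster / GB))    # → 480  (nominally "free")
print(round(total_reserved * bytes_per_cluster / GB))   # → 474  (reserved!)

# What NTFS will actually hand out is free minus reserved:
print(round((free_clusters - total_reserved) * bytes_per_cluster / GB))  # → 6
```

In other words, the volume reports itself as almost entirely free, yet the reservation leaves only about 6 GB usable – which would explain writes stalling long before the drive looks full.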
  9. I ran WinDirStat (as local admin) and the output is as attached, complete with drive properties. (I swapped the 120GB SSD for a 480GB SSD just to give it more wiggle room). I also upped the local cache to 10 GB to see what happens, but no change. Best, Tell
  10. "Used space" on the physical drive is returned to near-zero when I "Detach" the drive from CloudDrive. This also clears the red warning CloudDrive throws. I can then "Attach" the drive again and resume uploading, until the physical cache drive again is full.
  11. Drashna, thank you for your feedback. Yes, I suspect this is somehow related to the handling of sparse files, as I wrote. VSS is totally disabled on the server in question (the service is not running). On Windows Server, Shadow Copies appear as part of the "Shared Folders" MMC snap-in (under All Tasks). I've attached a screenshot confirming that the service is disabled in general and on this drive in particular. I also intuitively suspected VSS of doing this (since "Used space" on the disk ≠ "Size on disk" of the folder). However, as far as I can reason, if VSS were the culprit, detaching the CloudDrive should not immediately free all the space again. So VSS seems to be ruled out. Any other ideas?
  12. Hi all, I've run into a situation where the local cache drive is getting filled up despite having a fixed local cache. Configuration is:

      • StableBit CloudDrive v 1.0.0.634 BETA
      • 4 TB drive created on Amazon Cloud Drive with 30 MB chunk size, custom security profile
      • Local cache set to 6 GB FIXED on a 120 GB SSD (the SSD is exclusive to CloudDrive – there's absolutely nothing else on this drive)
      • Lots of data to upload

      When the local cache fills up (6 GB), CloudDrive starts throttling write requests, as it should (hooray for this feature, by the way). However, when the total amount of data uploaded nears the size of the cache drive, CloudDrive slows down until it completely stops accepting writes and throws a red warning message saying that the local cache drive is full. This is the CloudPart folder after a session of having uploaded approx. 30 GB of data. This is the local cache disk at the same time as the screenshot above. Remember, there is absolutely nothing on this drive other than the CloudPart folder. Selecting "Performance --> Clear local cache" does nothing. Detaching and re-attaching the drive clears and empties the local drive, reducing the "Used space" to almost nothing, and I can again start filling the cloud drive with data until the cache drive runs full again. As is obvious, a discrepancy exists between the amount of data reported as "Used space" on the SSD and the "Size on disk" of the CloudPart folder. My guess is that this is some sort of bug related to the handling of NTFS sparse files. Any ideas?
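The kind of discrepancy described above is characteristic of sparse files: a file's apparent size and the space the file system has actually allocated for it can diverge. A minimal POSIX illustration in Python (NTFS's mechanics differ, but the concept carries over):

```python
import os
import tempfile

# Create a file with a large apparent size but no data written – on file
# systems that support sparse files, the hole consumes (almost) no space.
path = os.path.join(tempfile.mkdtemp(), "sparse.bin")
with open(path, "wb") as f:
    f.truncate(100 * 1024 * 1024)  # 100 MiB apparent size

st = os.stat(path)
apparent = st.st_size          # what the file claims to be
allocated = st.st_blocks * 512  # what the file system actually backs with blocks
print(apparent)                 # → 104857600
print(allocated < apparent)     # typically True: the hole has no blocks behind it
```

The bug report is essentially the mirror image: space that should have been *deallocated* from the sparse cache file apparently stays reserved until the handles are dropped.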
  13. I created a test with a 200 GB disk on Amazon Cloud Drive, with a local cache size of 1 GB and the other options set at their defaults, except for not formatting and not assigning a drive letter. I then formatted it as a block device with VeraCrypt (one full pass of ciphertext) and filled the disk with cleartext data (a second, almost-complete pass of ciphertext). StableBit CloudDrive began uploading, and the Amazon Cloud Drive web interface confirms that I have so far uploaded about 90 GB of chunks. This gave the same behaviour in the CloudDrive UI as seen previously – take a look at http://imgur.com/Zqhlo7H The screenshot is taken after about 24 hours of uploading since the last time I unmounted the VeraCrypt volume. I will destroy the drive, enable logging and recreate the problem.
  14. I'll test that. I created a new 500 GB disk without formatting or assigning a drive letter, then whole-disk-encrypted it with VeraCrypt again. I did not fill the encrypted volume with data, which means that the entire CloudDrive has had ciphertext (of cleartext zeroes) written to it exactly once. It began uploading immediately, and the StableBit CloudDrive UI reports that over 91 GB has been uploaded since my previous post. To isolate the problem further, I will destroy this drive and then re-create and fill it with data (so that it is written to twice). In the interest of time I think I will do it with a 250 GB disk. I will do that after conducting the 250 GB filled-cleartext test.
  15. The VeraCrypt volume should still resemble an NTFS volume in layout, so if CloudDrive pins the area (beginning?) of the block device that is normally used for the MFT and other metadata, it should give the sought-after performance improvement. The VeraCrypt documentation states that encryption offsets the cleartext drive blocks by 131072 bytes in the ciphertext. Some space is also trimmed from the end of the volume. Anyway. Correct. The Amazon Cloud Drive web interface reports that I have uploaded 340 GB now (my Amazon Cloud Drive is empty save for this CloudDrive disk), up from the original 80 GB I reported in the first post. Clearly, CloudDrive has spent a lot of time and bytes uploading something, but apparently it's not reported in the UI or acknowledged by reducing the "size on disk" of the CloudPart.xxx folder. The StableBit CloudDrive UI still shows no data in the cloud (the right pie chart is all-local) and the left pie chart shows "To upload: 1 TB". I have 80 Mbit/s from my ISP; the upload speed to Amazon in the StableBit CloudDrive UI varies between 2 and 15 Mbit/s. At this point I will destroy this disk and re-create it to test if there is some improvement on a second attempt. Maybe I will test with a smaller disk to see if there is any change.
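Taking the 131072-byte (128 KiB) header offset mentioned above at face value, mapping a cleartext position to its place inside the ciphertext container is a single addition. This is only a sketch of that relationship – the function name is invented, and it deliberately ignores the backup-header area trimmed from the end of the volume:

```python
VERACRYPT_HEADER_BYTES = 131072  # 128 KiB offset, per the VeraCrypt docs cited above


def ciphertext_offset(plaintext_offset: int) -> int:
    """Map an offset inside the decrypted volume to the corresponding
    offset inside the ciphertext container (end-of-volume trim ignored)."""
    return plaintext_offset + VERACRYPT_HEADER_BYTES


# The NTFS boot sector at cleartext offset 0 lives 128 KiB into the container:
print(ciphertext_offset(0))  # → 131072
```

So any region CloudDrive would pin for NTFS metadata sits at a fixed, predictable shift inside the encrypted disk, which is why pinning by block range could still work.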