Covecube Inc.

chiamarc

Members
  • Content Count

    29
  • Joined

  • Last visited

  • Days Won

    3

Everything posted by chiamarc

  1. I've been experiencing very long boot times over the past year and I'm trying to rule out everything I can. I've tried disabling everything and using Safe Boot, to no avail. It isn't any startup program, because a clean boot behaves the same way, and I've also removed all external devices. Is it possible for me to temporarily disable the DrivePool drivers to help diagnose this? Is there something else I can do to narrow it down? (A rough sketch of what I have in mind is below.) Thanks.
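     For reference, this is roughly what I have in mind; the service and driver names below are my guesses (and I realize the pool will be inaccessible while its driver is off), so please correct me before I actually disable anything:

         # Run elevated. List Covecube-related user services and kernel drivers first.
         Get-Service | Where-Object { $_.DisplayName -match 'StableBit|Covecube' } |
             Format-Table Name, DisplayName, StartType

         Get-CimInstance Win32_SystemDriver |
             Where-Object { $_.DisplayName -match 'Cove|StableBit' } |
             Format-Table Name, DisplayName, StartMode, State

         # Example only: 'covefs' is my guess at the DrivePool filesystem driver name;
         # substitute whatever the queries above actually report.
         # sc.exe config covefs start= disabled    # disable for one diagnostic reboot
         # sc.exe config covefs start= system      # restore afterwards (note the original start type first)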
  2. One thing you *could* do is use a traffic shaper with a scheduler (like NetLimiter or NetBalancer). I have a need for this too and I've done it in the past; these programs can limit the bandwidth of the entire system or of individual applications/services. Turn CloudDrive's own throttling off, then limit it during the daytime with the traffic shaper. Chris, which processes would need to be limited: CloudDrive.Service.exe and CloudDrive.Service.Native.exe? Are there any caveats? (A sketch of a built-in Windows alternative is below.) -Marc
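     As a built-in alternative to a third-party shaper, a Windows QoS policy can throttle outbound traffic by executable path. A minimal sketch, assuming the service binary names above and a placeholder rate (run elevated; I haven't verified how well this plays with a service process):

         # Cap outbound traffic for the CloudDrive service at ~16 Mbit/s (~2 MB/s).
         # The executable name is an assumption; repeat for CloudDrive.Service.Native.exe if needed.
         New-NetQosPolicy -Name 'CloudDrive daytime cap' `
             -AppPathNameMatchCondition 'CloudDrive.Service.exe' `
             -ThrottleRateActionBitsPerSecond 16000000

         # Lift the cap again in the evening (or schedule both with Task Scheduler):
         Remove-NetQosPolicy -Name 'CloudDrive daytime cap' -Confirm:$false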
  3. Out of curiosity, exactly what is your use case? Why do you need to measure uncached read/write speeds on a virtual drive? I know it's an inconvenient hack, but you could aggregate perfmon throughput stats for the individual disks if you really need to. Otherwise, you might get some mileage out of a short PowerShell script (something like the sketch below).
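     For the perfmon route, a quick sketch along these lines would average per-disk throughput; the sample counts are arbitrary, and the counter names assume an English locale:

         # Sample read/write throughput for every physical disk once a second for 30 seconds,
         # then report the average in MB/s per counter instance.
         $counters = '\PhysicalDisk(*)\Disk Read Bytes/sec',
                     '\PhysicalDisk(*)\Disk Write Bytes/sec'

         Get-Counter -Counter $counters -SampleInterval 1 -MaxSamples 30 |
             Select-Object -ExpandProperty CounterSamples |
             Group-Object Path |
             ForEach-Object {
                 [pscustomobject]@{
                     Counter = $_.Name
                     AvgMBps = [math]::Round(($_.Group | Measure-Object CookedValue -Average).Average / 1MB, 2)
                 }
             } | Format-Table -AutoSize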
  4. @anderssonoscar, this has been mentioned elsewhere but I think it's important enough to repeat: technically, the pool that you have "backed up" to the cloud is *replicated* to the cloud. There is a distinct difference between backup and replication that is lost on some people. If you make a change to something in your pool (including deletion), that change is propagated to the cloud drive in a (potentially very) small window of time; the window depends, of course, on the number of files and the amount of data being replicated. My point is that this is not a backup, or if it is, it's a very weak one! A backup implies that you can always restore files that were lost, deleted, or otherwise changed, intentionally or unintentionally. StableBit replication is designed to keep a specified number of copies of something in your locations of choice, so that at least one copy is likely to survive if one or more of those locations is corrupted or destroyed. It cannot reliably serve as a backup and shouldn't be treated as one. Personally, I do this: I replicate important stuff locally with DP, but I use a backup program to back up to my CD (roughly as sketched below). Unimportant stuff (i.e., easily reproduced or re-obtained) may simply be stored on my CD without replication, or replicated locally.
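     To make the distinction concrete, this is roughly what I mean by "back up to my CD" versus replication; the paths are placeholders and X: stands in for the cloud drive letter:

         # Versioned backup: each run lands in its own dated folder on the cloud drive,
         # so yesterday's copy survives even if I delete or corrupt the original today.
         $dest = 'X:\Backups\{0}' -f (Get-Date -Format 'yyyy-MM-dd')
         robocopy 'D:\Important' $dest /E /R:2 /W:5

         # Replication-style mirror, for contrast: /MIR makes the destination match the
         # source exactly, so a deletion at the source is faithfully "replicated" too.
         # robocopy 'D:\Important' 'X:\Mirror\Important' /MIR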
  5. OK, this may be the final straw. My cache was completely consistent for 2-3 days, with nothing to upload. After what I thought was a normal shutdown, and with no "normal" writes to the cloud drive (only reading of metadata by a backup program which, e.g., does not modify the archive bit), the CD service decided that something had caused a provider resynchronization, and I find myself once again uploading close to 50 GB. As I've stated before, I have limited total monthly bandwidth, so this is a serious issue for me. In my previous post I asked how I can mitigate the amount of data that needs to be resynchronized/uploaded after a crash or whatever else triggers this. If it can't be improved, then I have no choice but to seek a solution other than CloudDrive. I won't badmouth the product, because I think you and Alex are doing an amazing job in general. I just find it hard to believe that there aren't more people with my problem (I've definitely seen some on the forums with the resync-on-crash problem) who would at least support the notion that resync should be redesigned. I still don't understand the mapping between sets of NTFS blocks and CD "blocks" that makes some form of local journaling/tracking impractical. I mean, please, I'm a computer scientist; give us an example of what you're talking about in your 12/10/17 post.
  6. I would think this is transactional. Doesn't the cloud service API confirm that something has been committed? Additionally, why wouldn't it be possible to keep a journal (in the cloud) for "blocks" written to the cloud? If there's a crash, just verify the journal (something like the sketch below). Given what you know about Windows, can you give me a series of steps (like the aforementioned disabling of thumbnails) that will minimize my cache re-upload after a crash? For whatever reason I've experienced many crashes in the last 10 months, and frankly, with my limited bandwidth, I just can't afford to keep doing this. I really like CloudDrive and I don't think mine is an odd use case (backing up to it).
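     Just to illustrate the kind of journaling I have in mind (purely my own sketch, not a claim about how CloudDrive works internally; Upload-Chunk is a made-up placeholder for the real provider write):

         # Hypothetical chunk-upload journal: record "pending" before the upload,
         # flip to "committed" only after the provider confirms the write.
         $journal = 'C:\CloudCache\chunk-journal.csv'

         function Write-ChunkWithJournal([string]$ChunkId, [string]$ChunkPath) {
             Add-Content $journal "$ChunkId,pending,$(Get-Date -Format o)"
             Upload-Chunk -Id $ChunkId -Path $ChunkPath   # placeholder for the real upload call
             Add-Content $journal "$ChunkId,committed,$(Get-Date -Format o)"
         }

         # After a crash, only chunks whose last journal entry is still "pending"
         # would need to be re-verified or re-uploaded, not the whole cache.
         $pending = Import-Csv $journal -Header Id,State,Time |
             Group-Object Id |
             Where-Object { ($_.Group | Select-Object -Last 1).State -eq 'pending' } |
             Select-Object -ExpandProperty Name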
  7. After a recent BSOD, something like this just happened again. The "to upload" was down to around 82G and now it's back up to 175G+. There has got to be a way to prevent this from happening...
  8. Last night I took it upon myself to do a binary search for the first occurrence of this problem between betas 930 and 949. I can confirm that, at least on my setup, the problem seems to start with 948 and continues to 950. Viktor, can you check this? Thanks.
  9. Guess what? I found the key in my clipboard manager! I was absolutely sure that I had the right key because of the date. I detached my cloud drive, then reattached it, and it asked for the key. Here's where things get sticky, and please tell me if you've seen this before: I entered the key and it did... nothing. It went back to showing me the "this drive is encrypted" screen. I looked at the Feedback log and one message says "Complete - Unlock drive Backups", but the other message still insists the drive needs to be unlocked. Was this a silent failure? I tried several more times with both keys that I had, with the same result. Oh well, I figured I just had the wrong key, and I stupidly (but reluctantly) destroyed the drive, thinking I had no way in hell of getting it back. Here's where I'm kicking myself: I've set up another drive on the same provider, recorded the key, attached it, and written a small amount of data to it. Just to assure myself that I did something absolutely, ridiculously stupid by destroying the old drive, I detached the new drive and then tried to attach it again with the key I just generated. And... I'm getting the same damn result as I had with the previous drive. I am not able to unlock it no matter what I do. This is CD 1.0.2.950 Beta.
  10. But that doesn't make sense to me. How can you check what needs to be uploaded if there can always be a discrepancy between NTFS blocks and those that comprise the chunks uploaded to the provider? If *nothing* (or only a little) changed on the local pool, how can there be more than 4 times the data that was still waiting to be uploaded previously? If DrivePool is writing files to the cloud drive, which get turned into NTFS blocks in the local cache, then aggregated into 100MB chunks(?) and uploaded to the provider, how can the total size of those chunks be much more than the size of the files written plus metadata? Even if NTFS is writing to different blocks, unless we're dealing with sparse files, wouldn't CD just chunk the blocks that belong to the files waiting to be written? What's in the other 425G worth of chunks? Oh, I'm starting to get the picture: it's data from previously written files or whatever, because you can't change a chunk on the provider piecemeal. So if I originally upload a chunk C that contains the blocks of 1,000 files and then change one of those blocks, CD has to upload another 100MB chunk C' to replace C? Fragmentation is badly exacerbated here because the chunks are so large, and the chunks are large for, among other reasons, transmission efficiency (quick arithmetic below). This must really stick in Alex's craw!
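      Back-of-the-envelope, assuming the 100MB chunk size above and a typical 4 KB NTFS cluster:

         # Write amplification when a provider can't patch a chunk in place:
         # changing one 4 KB cluster forces a re-upload of the whole 100 MB chunk.
         $chunkSize   = 100MB     # assumed CloudDrive chunk size
         $clusterSize = 4KB       # typical NTFS cluster size
         $amplification = $chunkSize / $clusterSize
         "Changing $($clusterSize/1KB) KB re-uploads $($chunkSize/1MB) MB: {0:N0}x amplification" -f $amplification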
  11. Hi Folks (especially Chris), I'm especially frustrated right now because of a dumb mistake on my part and a likely misunderstanding of the intricacies of how, when, and why DP balances and duplicates to a cloud drive. My setup is a local pool balanced across 5 hard drives, with several folders duplicated x2, comprising ~4.1TB. The local DrivePool is part of a master pool that also contains a cloud drive. The cloud drive is only allowed to contain duplicated data, and it currently stores about 1TB of duplicates from the local pool. I only have ~5Mbps upload bandwidth, and I just spent the month of October duplicating to this cloud drive.

      Yesterday I wanted to remove my local pool from the master pool because I was experiencing slow access to photos for some reason, and I was also going to explore a different strategy of just backing up to the cloud drive instead (which allows for versioning). Well, I accidentally removed my cloud drive from the pool. At the time, CD still had about 125G to upload, which I assume was in the write cache, because DP was showing optimal balance and duplication. When the drive was removed, of course, those writes were no longer necessary and were dropped from CloudDrive's queue.

      OK, I didn't panic, but I wanted to make sure that the time I just spent burning my last courtesy month of over-1TB bandwidth was not wasted. So I added the cloud drive back into the master pool, expecting DP to do a scan and reissue the write requests needed to duplicate the as-yet-unduplicated 125G. But lo and behold, after balancing/duplication completed in DP, I looked at the CD queue and saw 536G left "to upload"! All I can say at this point is WTF? There was very little intervening time between when I removed the cloud drive and re-added it, and almost nothing changed in the duplicated directories. Can someone please explain, or at least theorize?

      I own DrivePool, but I've been testing CloudDrive for a while now for this very reason: I needed to assess its performance and functionality, and so far it's been a very mixed bag, partly because it's relatively inscrutable. Thanks, Marc
  12. "Created time" is also displayed in the Details... window accessible from the CD Technical Details window. Right now I'm SOL because my PDF is showing a creation time of 27 days earlier
  13. I'd like to ask Alex to give this a bit of priority, because I have 2TB+ in the cloud right now and I've blown my two courtesy months with Comcast. I'm covered by duplication locally, but I'm a bit exposed on the cloud backup if something bad happens and that key turns out to be incorrect.
  14. The main reason I asked for this is that I have a single SSD in my system that I use for my OS, but I've got a couple hundred gigs of free space that I ultimately intend to use for CloudDrive (if I have the right encryption key). I'd still like it to be part of a pool, however, or to use the SSD balancer with it to good effect.
  15. But it would seem that if it's already stored locally for unlock, you're implicitly trusting the user's machine, at least. So, a question: if I detach the drive, then reattach and test the key and it's incorrect, I won't be able to unlock the drive, and all the bandwidth I just spent uploading will have been for naught?
  16. Hi Folks, I've been uploading to a CloudDrive for a couple of months now and I want to make sure that the encryption key I printed out is the right one. I also chose to store the key locally for convenience so I could auto-unlock on boot. But here's the thing: I had set up a cloud drive previously and then had to destroy it (for irrelevant reasons), and now I can't recall whether the PDF I saved was for that previous drive or the current one. Since I have the key saved locally, is it possible to compare it to the one in the PDF? Where is the key located? If not, can I just print out the key somehow from the locally saved copy? Otherwise, how can I check that I've got the right key in case of a disaster? Thanks, Marc
  17. I really don't understand why the "to upload" state becomes indeterminate for the entire write cache. Shouldn't it only have to re-upload chunks that it didn't record as being completed? Why is a chunk not treated akin to a block in a journaling filesystem? Of course I understand that if chunks are 100MB in size, it could still take some time to write them, but no way should the entire cache be invalidated by a crash. This is especially important for me right now because my system has been locking up fairly frequently, which forces a hard reset (plus the occasional BSOD). A 200G cache on a 10TB drive (100/5 Mbps down/up) always takes 45+ minutes to recover.
  18. Using CD 1.0.2.936 Beta, I'm unable to assign a drive letter from within the GUI. Clicking on Manage Drive -> Drive Letter... -> Assign (as shown here) results in a null assignment error. Please advise. Also, where can I check whether a similar bug has already been reported?
  19. Say I have several disks in my pool and I want to reserve extra space for "other" data on one or more individual disks. That is to say, I don't want Disk X to use more than a certain percentage or byte threshold to store pool data. Is there a way to do this short of splitting the drive into multiple partitions?
  20. You could also try running Procmon for a while to capture which processes are deleting things. You only need to capture filesystem events, and make sure to check "Drop filtered events" under the Filter menu. After running for some time, stop capturing (or keep going, it's up to you) and search for "Delete: True". The first entry you find should be the result of a SetDispositionInformationFile operation. Right-click on that cell in the Detail column and select "Include 'Delete: True'"; this filters the view down to deletion events only. Then search the Path column for a file you didn't expect to be deleted, and the Process Name column will show which process set the file for deletion. If you have no idea how to use Process Monitor, there are plenty of quick tutorials on the web. A scriptable variant is sketched below. Good luck.
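     If you'd rather not babysit the GUI, Procmon can also be driven from the command line; the paths, duration, and saved filter file here are placeholders:

         # Capture Process Monitor events to a backing file for 10 minutes, then stop.
         # Assumes procmon.exe lives in C:\Tools and that you've exported a
         # filesystem-only filter from the GUI as delete-filter.pmc beforehand.
         & 'C:\Tools\procmon.exe' /AcceptEula /Quiet /Minimized /LoadConfig 'C:\Tools\delete-filter.pmc' /BackingFile 'C:\Temp\deletes.pml'
         Start-Sleep -Seconds 600
         & 'C:\Tools\procmon.exe' /Terminate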
  21. True Bandwidth

    Hi Guys, Thanks for an absolutely wonderful product. I was just wondering: Comcast limits my data usage to 1 TiB per month (at $10 per 50GiB beyond that). Since many cloud providers do not allow incremental chunk updates (including Box, which is what I'm using), CD has to download chunks, change them, and re-upload them, depending on the write workload. While the "To Upload" size measurement is accurate, the tooltip that estimates how long it will take to drain the upload queue is probably off by quite a bit, especially if one is changing files frequently. Further, the total bandwidth used over a given period of time is not really reflected anywhere. There are tools that let me measure all CD traffic out-of-band, but it would be nice to know how much data was actually read/written in order to empty the upload queue, or for that matter, to perform any set of operations. This would help with my bandwidth management: knowing whether I need to limit upload/download speed in CD at a given point in the month, or ideally, having it happen automatically when I reach a certain threshold (rough arithmetic of what I do by hand is below). I guess this is a request for enhancement, but I'm not sure how many other people have a similar need. Thanks, Marc
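    For reference, the rough estimate I currently do by hand looks like this; all the numbers are just my own situation and the overhead factor is a pure guess:

        # Rough drain-time estimate for the upload queue, ignoring the extra
        # download/re-upload traffic from chunks that have to be rewritten.
        $toUploadGB = 50      # what the CloudDrive UI says is left "To Upload"
        $uploadMbps = 5       # my Comcast upload cap
        $overhead   = 1.5     # guess: 50% extra for chunk rewrites and protocol overhead

        $hours = ($toUploadGB * 8 * 1000) / ($uploadMbps * 3600) * $overhead
        "Estimated {0:N1} hours to drain the queue" -f $hours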
  22. Brilliant! Tested and satisfied! This gets me *much* closer to my goal and I can now drop files into E: and be sure that they are duplicated locally and into the cloud! This will suffice for the time being as I can always keep important stuff on E: and not so important stuff on D:. Thanks again for being so quick to respond (and change code)!
  23. Thanks again, Chris, for making things much clearer! This is my last message for a while, because I don't want to tax you good folks at Covecube any longer. With a set of simple tests I have now confirmed at least part of what I suspected, and it's unfortunate, because it means I can't really use the DP/CD combo to do exactly what I want (and I don't think it's an unreasonable use case). I moved an x2 folder, mytest, from D:\ to D:\PoolPart.xxyy and remeasured E:. Initially it remained on HDD1 and HDD2, and I could see it being duplicated to B:. But then something started rebalancing (probably the D: pool) and I watched as mytest was quickly removed from HDD1. At this point, mytest resides only on HDD2 and on B:\. This clearly breaks my requirement of having at least 2 copies on local drives. I should mention that all my balancing rules are defaults. What's more, dropping files onto the E: pool does not duplicate them on local hard drives either. I can confirm all of this both from the File Placement tab in the balancing panel for D: and by checking where these files actually exist on the HDDs. Unless you have any further suggestions, I'll just have to "back up" to my cloud drive using a traditional program.
  24. OK, thanks again Chris for your patience (I'm starting to sound like what we call in Yiddish a "pechech"). There still seems to be an impedance mismatch here. Please correct my understanding as I walk through what I put together from the very beginning of this thread and what I'm currently seeing (I know I wrote this in a previous post, but I'd like you to refer specifically to what's laid out here):
      • I have existing files in a DrivePool, D:, comprising only local HDDs. This pool has 2x duplication on several folders, but let's assume for simplicity that the entire pool is duplicated, so there are 2 copies of every file somewhere on the local HDDs.
      • I've created a CloudDrive, B:.
      • I've created a new DrivePool, E:.
      • I've added D: and B: to the "master" pool E: and set 2x duplication on the entire pool.
      • At this point, looking at E: in the GUI shows me that 4.57TB is Other and 7.78TB is unusable for duplication. Specifics aside, I assume the 4.57TB is the data in D:.
      • If I do nothing else from this point on, the 4.57TB located inside D: will never be duplicated to B:, because E: doesn't see that data as something that needs to be duplicated. I assume this is correct based on your comment above about "seeding" the master pool.
      • If I instead move a folder out of D: and into the hidden PoolPart folder for the master pool and re-measure, then that folder will exist locally on D: (on some HDD that's part of that pool) and also be duplicated on B:. However, it will no longer be duplicated on multiple local HDDs, because the D: pool doesn't know about it (i.e., it doesn't show up in a listing of the D: drive). The same applies to any future data that I write to E:. In fact, if I look at the folder tree in the Folder Duplication panel for D:, DrivePool (E:\) shows up as disabled.
      • In addition, any future data that I write to D: will not be duplicated on B:.
      So I find that there is no situation in which files stored in any pool are guaranteed to have 2+ copies locally and 1+ copies in the cloud. Please tell me which statements above are incorrect.