Posts posted by chiamarc

  1. Hey all,

    Long time DP and CD user here. I have a mix of pools for various purposes but the majority of storage is audio/video/photos.

    A few months ago, I bought a nice NUC to house my Plex Server. That server has about 1TiB of internal storage used for the OS, Plex, and other related apps. However, it currently streams media by accessing CIFS mounts on my home desktop (the original location of the Plex Server and media), which is far from ideal of course.

    Today I purchased two 12 TiB external USB3 drives that I plan to use as primary pool storage, plus maybe another 6 TiB or so for various endeavors.

    I intend to transfer my DP license to the NUC and install a second licensed copy of CD there as well. I will detach the media cloud drives and reattach them to the NUC before rebuilding my drive pools.

    My main question is this: what is the best way to go about transferring everything? I have backups of almost everything in the cloud and a fast GigE Internet connection so I'm not super worried about risk. The two methods I thought of were -

    Option 1:

    • Copy all media files to one of the 12TiB drives.
    • Move that drive to the NUC.
    • Create a drive pool on the NUC using the big drive just transferred.
    • Move the files into the hidden PoolPart directory (see the sketch below the two options).
    • Add the other big drive (and possibly one or two CDs) to the pool.
    • Enable duplication and just let it do its balancing magic.

    Option 2:

    • Add one 12 TiB drive to the existing pool on my desktop.
    • Evacuate all other pool drives to the big one I just added.
    • ??? and here's where I get stuck. Can I "recreate" or "re-attach" the pool that has this one disk on the NUC? Is this option unnecessarily complicated?
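
    For what it's worth, here is a minimal sketch of the "seed the pool" step in Option 1. The drive letter and PoolPart folder name are placeholders I made up; DrivePool creates a hidden "PoolPart.<guid>" folder when the pool is created, so check the drive root for the real name before running anything like this.

    ```python
    # Hypothetical sketch: move already-copied media folders into the hidden
    # PoolPart directory so DrivePool picks them up. Paths are placeholders;
    # the real folder is named PoolPart.<guid> at the root of the pooled drive.
    import shutil
    from pathlib import Path

    MEDIA_ROOT = Path(r"E:\Media")              # where the files landed after the copy
    POOL_PART = Path(r"E:\PoolPart.xxxxxxxx")   # replace with the actual PoolPart.<guid>

    def seed_pool(src: Path, dst: Path) -> None:
        """Move top-level folders into the PoolPart directory.

        A same-volume move is just a rename, so no file data is rewritten.
        """
        if not dst.exists():
            raise SystemExit(f"{dst} not found -- create the pool in DrivePool first")
        for entry in src.iterdir():
            target = dst / entry.name
            if target.exists():
                print(f"Skipping {entry.name}: already present in the pool part")
                continue
            shutil.move(str(entry), str(target))
            print(f"Moved {entry.name} -> {target}")

    if __name__ == "__main__":
        seed_pool(MEDIA_ROOT, POOL_PART)
    ```

    After the move, a re-measure in DrivePool should pick the files up, and enabling duplication once the second big drive is added would then copy everything across.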

    Thank you.

     

  2. I still use a 320GiB hard drive that turns 12 years old this week; I stole it from an old Dell laptop and placed it in an external USB3 housing! Technically, it's only been part of my pool for about 4 years. I don't know why I keep it around...nostalgia. ;)

  3. Do you have a price point (aside from "as inexpensive as possible") and a sizing estimate?

    You might look at something like the following for direct-attached or network-attached storage:

    https://www.amazon.com/stores/TerraMaster/page/5E802F2F-5AC0-4C37-B11D-61028DB9AB95?ref_=ast_bln

    Then you can find some relatively inexpensive 6 or 8 TiB internal drives, or you can "shuck" a couple of external hard drives (i.e., remove the drive from its case), like this one:

    https://www.bestbuy.com/site/wd-easystore-12tb-external-usb-3-0-hard-drive-black/6425301.p?skuId=6425301

    Hope this helps.

  4. Hi Folks,

    Part of this question was previously asked and answered (here) but I recently noticed something interesting: CloudDrive drives do get indexed by Everything (Voidtools). Why is this the case for CD but not DP? Are the drivers not similar?

    This is not a burning question of course, because the workaround is just to index the drive as a folder rather than as an NTFS filesystem. Still, it would be nice to get a technical explanation.

    -Marc

     

     

  5. I've been experiencing very long boot times over the past year and I'm trying to rule out everything I can. I've tried disabling everything and using Safe Boot, to no avail. It isn't any startup programs, because a clean boot behaves the same, and I've also removed all external devices.

    Is it possible for me to temporarily disable the drivepool drivers to help diagnose this? Is there something else I can do to diagnose?

    Thanks.

  6. One thing you *could* do is try to use a traffic shaper with a scheduler (like NetLimiter or NetBalancer). I have a need for this also and I've done it in the past. These programs can limit the bandwidth of the entire system or sets of individual applications/services. Turn your CloudDrive throttling off then limit it during the daytime with the traffic shaper.

    Chris, which processes would need to be limited: CloudDrive.Service.exe and CloudDrive.Service.Native.exe? Are there any caveats?

    -Marc

     

  7. @anderssonoscar, this has been mentioned elsewhere but I think it's important enough to repeat:  technically, the pool that you have "backed up" to the cloud is *replicated* to the cloud.

    There is a distinct difference between backup and replication that is lost on some people.  If you make a change to something in your pool (including deletion), that change is propagated to the cloud drive in a (potentially very) small window of time.  This window is of course dependent on the number of files and amount of data being replicated.  My point is that it is not a backup, or if it is, it's a very weak one!  A backup implies that you can always restore files that were lost, deleted, or otherwise changed, intentionally or unintentionally.  StableBit replication is designed to always keep a specified number of copies of something in your locations of choice, so that at least one copy of that something is likely to survive in the event of corruption or destruction in one or more of the locations.  It cannot reliably be used as a backup and should not be.

    Personally, I do this:  I replicate important stuff locally with DP, but I use a backup program to back up to my CD.  Unimportant stuff (i.e., easily reproduced or re-obtained) may simply be stored on my CD without replication, or replicated locally.
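
    To make the distinction concrete, here is a toy contrast (purely illustrative, nothing to do with how StableBit implements anything): a replica mirrors the current state, deletions included, while a backup keeps versions you can restore.

    ```python
    # Toy illustration of replication vs. backup -- not StableBit internals.
    # Replication mirrors the source, so deletions and corruption propagate.
    # Backup keeps timestamped snapshots, so earlier states can be restored.
    import shutil
    import time
    from pathlib import Path

    def replicate(source: Path, replica: Path) -> None:
        """Mirror source into replica: changes AND deletions propagate."""
        if replica.exists():
            shutil.rmtree(replica)   # whatever was deleted at the source is gone here too
        shutil.copytree(source, replica)

    def backup(source: Path, vault: Path) -> Path:
        """Snapshot source into a new timestamped folder: old versions survive."""
        snapshot = vault / time.strftime("%Y%m%d-%H%M%S")
        shutil.copytree(source, snapshot)
        return snapshot              # restoring = copying an old snapshot back out
    ```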

  8. OK, this may be the final straw.  My cache was completely consistent for 2-3 days, with nothing to upload.  After what I thought was a normal shutdown and no "normal" writes to the cloud drive (only reading of metadata by a backup program which, e.g., does not modify the archive bit), the CD service determined that something had caused a provider resynchronization.  I find myself once again uploading close to 50 GB.  As I've stated before, I have limited total monthly bandwidth, so this is a serious issue for me.

    In my previous post I asked how I can mitigate the amount of data that needs to be resynchronized/uploaded after a crash or what have you.  If this cannot be improved then I have no choice but to seek a solution other than CloudDrive.  I will not bad-mouth the product, because I think you and Alex are doing an amazing job in general.  I just find it hard to believe that there aren't more people with my problem (I've definitely seen some on the forums with the resync-on-crash problem) who would at least support the notion that re-sync should be redesigned.

    I still don't understand the mapping between sets of NTFS blocks and CD "blocks" that makes some form of local journaling/tracking impractical.  I mean please, I'm a computer scientist; give us an example of what you're talking about in your 12/10/17 post.

  9. Quote

    Well, for the crash, we can't really trust that the local information about what has been uploaded or not.  So we re-upload everything in the cache.

    I would think this is transactional.  Doesn't the cloud service API confirm that something has been committed?  Additionally, why wouldn't it be possible to have a journal (in the cloud) for "blocks" written to the cloud?  If there's a crash, just verify the journal.

    Given what you know about Windows, can you give me a series of steps (like the aforementioned disabling of thumbnails) that will minimize my cache re-upload after a crash?  For whatever reason I've experienced many crashes in the last 10 months and frankly, with my limited bandwidth, I just can't afford to keep doing this.  I really like CloudDrive and I don't think I'm an odd use case (backing up to it).
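
    To be clear about what I'm suggesting, here is a rough sketch of the journal idea. Everything in it (the chunk IDs, the journal format, where it lives) is invented for illustration; it is not how CloudDrive actually works.

    ```python
    # Illustrative sketch of a "committed chunks" journal -- not CloudDrive's design.
    # Only chunks the journal never confirmed (or whose checksum changed) would
    # need to be re-uploaded after a crash, instead of the whole cache.
    import json
    from pathlib import Path

    JOURNAL = Path("upload_journal.json")   # could live next to the cache, or in the cloud

    def load_journal() -> dict:
        return json.loads(JOURNAL.read_text()) if JOURNAL.exists() else {}

    def record_commit(journal: dict, chunk_id: str, checksum: str) -> None:
        """Record a chunk only after the provider has acknowledged the upload."""
        journal[chunk_id] = checksum
        JOURNAL.write_text(json.dumps(journal))

    def chunks_to_reupload(dirty_chunks: dict, journal: dict) -> list:
        """After a crash, re-upload only unconfirmed or changed chunks."""
        return [cid for cid, csum in dirty_chunks.items()
                if journal.get(cid) != csum]
    ```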

  10. Last night I took it upon myself to do a binary search for the first occurrence of this problem between betas 930 and 949.  I can confirm that, at least on my setup, the problem seems to start with 948 and continues to 950.  Viktor, can you check this?  Thanks.

  11. Guess what?  I found the key in my clipboard manager!  I was absolutely sure that I had the right key because of the date.  I detached my cloud drive then reattached it and it asked for the key.

    Here's where things get sticky, and please tell me if you've seen this before.  I entered the key and it did....nothing.  It went back to showing me the "this drive is encrypted" screen.  I looked at the Feedback log and one message says "Complete - Unlock drive Backups" but the other message still insists the drive needs to be unlocked.  Was this a silent failure?  I tried several more times with both keys that I had...same result.  Oh well, I figured I just had the wrong key.  I stupidly (but reluctantly) destroyed the drive thinking I had no way in hell of getting it back.

    Here's where I'm kicking myself:  I've set up another drive on the same provider, recorded the key, attached it and written a small amount of data to it.  Just to assure myself that I did something absolutely ridiculously stupid by destroying the drive, I detached the new drive then tried to attach it again with the key I just generated.  And... I'm getting the same damn result as I had with the previous drive.  I am not able to unlock it no matter what I do.  This is CD 1.0.2.950 Beta.

  12. But that doesn't make sense to me.  How can you check what needs to be uploaded if there can always be a discrepancy between NTFS blocks and those that comprise the chunks uploaded to the provider?  If *nothing* (or only a little) changed on the local pool, how can there be more than 4 times the data that was still waiting to be uploaded previously?  If DrivePool is writing files to the cloud drive, which get turned into NTFS blocks in the local cache, then aggregated into 100MB chunks(?) and uploaded to the provider, how can the total size of those chunks be much more than the size of the files written plus metadata?  Even if NTFS is writing to different blocks, unless we're dealing with sparse files, wouldn't CD just chunk the blocks that belong to the files waiting to be written?  What's in the other 425G worth of chunks?

    Oh, I'm starting to get the picture:  it's data from previously written files or whatever, because you can't change the chunk on the provider piecemeal.  So if I upload a chunk C originally that contains the blocks of 1,000 files and then change one of those blocks, CD has to upload another 100MB chunk C' to replace C?  The problem of fragmentation is highly exacerbated here because the chunks are so large, and the chunks are large for, among other reasons, transmission efficiency.  This must really stick in Alex's craw!
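
    Working the numbers on that picture (the chunk size and file layout below are my guesses, not figures from StableBit), the amplification is easy to see:

    ```python
    # Back-of-the-envelope write amplification when a touched chunk must be
    # re-uploaded whole. Chunk size and file layout are assumptions only.
    import math

    CHUNK_MB = 100   # assumed provider chunk size

    def reupload_cost(files_changed: int, change_mb_per_file: float,
                      files_per_chunk: int) -> tuple:
        """Return (MB actually changed, MB re-uploaded)."""
        changed_mb = files_changed * change_mb_per_file
        touched_chunks = math.ceil(files_changed / files_per_chunk)
        return changed_mb, touched_chunks * CHUNK_MB

    # Best case: the 1,000 changed blocks all sit in one chunk.
    print(reupload_cost(1000, 0.01, 1000))   # (10.0, 100)    -> 10 MB changed, 100 MB re-sent
    # Worst case: every changed block sits in a different chunk.
    print(reupload_cost(1000, 0.01, 1))      # (10.0, 100000) -> 10 MB changed, ~98 GB re-sent
    ```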

  13. Hi Folks (especially Chris),

     

    I'm especially frustrated right now because of a dumb mistake on my part and a high likelihood of a misunderstanding of the intricacies of how, when, and why DP is balancing and duplicating to a cloud drive.  My setup is a local pool balanced across 5 hard drives with several folders duplicated x2, comprising ~4.1TB.  The local drivepool is part of a master pool that also contains a cloud drive.  The cloud drive is only allowed to contain duplicated data and currently it is storing about 1TB of duplicates from the local pool.  I only have ~5Mbps upload bandwidth and I just spent the month of October duplicating to this cloud drive.

    Yesterday I wanted to remove my local pool from the master pool because I was experiencing slow access to photos for some reason, and I was also going to explore a different strategy of just backing up to the cloud drive instead (which allows for versioning).  Well, I accidentally removed my cloud drive from the pool.  At the time, CD still had about 125G to upload, so I assume that was in the write cache because DP was showing optimal balance and duplication.  When the drive was removed, of course, those writes were no longer necessary and were removed from CloudDrive's queue.

    OK, I didn't panic, but I wanted to make sure that the time I just spent using my last courtesy month of bandwidth over 1TB was not wasted.  So I added the cloud drive back into the master pool, expecting DP to do a scan and reissue the necessary write requests to duplicate the as-yet-unduplicated 125G.  But lo and behold, after balancing/duplication was complete in DP, I looked at the CD queue and saw 536G left "to upload"!  All I can say at this point is WTF?  There was very little intervening time between when I removed the cloud drive and re-added it, and almost nothing changed in the duplicated directories.

     

    Can someone please explain or at least theorize?  I own DrivePool but I've been testing CloudDrive for a while now for this very reason.  I needed to assess its performance and functionality and so far it's been a very mixed bag, partly because it's relatively inscrutable.

     

    Thanks,

    Marc

     

  14. I'd like to ask if Alex can add a little bit of priority on this because I have 2TB+ in the cloud right now and I've blown my two courtesy months with Comcast.  I'm covered with duplication locally but I'm a bit exposed on the cloud backup if something bad happens and that key is incorrect.

  15. The main reason I asked for this is that I have a single SSD in my system that I use for my OS, but I've got a couple hundred gigs of free space that I ultimately intend to use for CloudDrive (if I have the right encryption key ;) ).  I'd still like it to be part of a pool, however, or use the SSD balancer with it to good effect.

  16. But it would seem that if it's already stored locally for unlock, you're implicitly trusting the user's machine at least.

     

    So question:  if I detach the drive then reattach and test the key and it's incorrect, I won't be able to unlock the drive and all the bandwidth I just spent uploading will have been for naught?

  17. Hi Folks,

     

    So, I've been uploading to a CloudDrive for a couple of months now and I wanted to make sure that I have the right encryption key printed out.  I also chose to store the key locally for convenience so I could auto-unlock on boot.  But here's the thing:  I had set up a cloud drive previously, then had to destroy it (for irrelevant reasons).  Now I can't recall if the PDF I saved was for that previous drive or the current drive.  Since I have it saved locally, is it possible to compare the key to the one in the PDF?  Where is the key located?  If not, can I just print out the key somehow using the locally saved copy?  Otherwise, how can I check that I've got the right key in case of a disaster?

     

    Thanks,

    Marc

     

  18. I really don't understand why the "to upload" state becomes indeterminate for the entire write cache.  Shouldn't it only have to re-upload chunks that it didn't record as being completed?  Why is a chunk not treated akin to a block in a journaling filesystem?  Of course I understand that if chunks are 100MB in size, it could still take some time to write them, but no way should the entire cache be invalidated upon a crash.

     

    This is especially important for me right now because my system has been locking up not infrequently, requiring a hard reset (plus the occasional BSOD).  A 200G cache on a 10TB drive (100/5 Mbps down/up) always takes 45+ minutes to recover.

  19. Using CD 1.0.2.936 Beta, I'm unable to assign a drive letter from within the GUI.

     

    Clicking on Manage Drive -> Drive Letter... -> Assign (as shown here):

     

    [screenshot: the Drive Letter "Assign" dialog]

     

    Results in a null assignment error:

     

    [screenshot: the null assignment error message]

     

    Please advise.

     

    Also, where can I check if a similar bug has already been reported?
