chiamarc

Members · 34 posts · 3 days won · 849 profile views · 8 reputation · Advanced Member (3/3)

chiamarc last won the day on October 8 2017 and had the most liked content!

  1. This also depends on how much RAM you have and how efficiently Plex uses that memory and system cache.
  2. Hey all, long time DrivePool and CloudDrive user here. I have a mix of pools for various purposes, but the majority of the storage is audio/video/photos. A few months ago I bought a nice NUC to house my Plex server. That server has about 1 TiB of internal storage used for the OS, Plex, and other related apps; it currently streams media by accessing CIFS mounts on my home desktop (the original location of the Plex server and media), which is far from ideal of course. Today I purchased two 12 TiB external USB 3 drives that I plan to use as primary pool storage, plus maybe another 6 TiB or so for various endeavors. I intend to transfer my DrivePool license to the NUC and install a second licensed copy of CloudDrive there as well. I will detach the media cloud drives and reattach them to the NUC before rebuilding my drive pools.
     My main question is this: what is the best way to go about transferring everything? I have backups of almost everything in the cloud and a fast GigE Internet connection, so I'm not super worried about risk. The two methods I thought of were:
     1. Copy all media files to one of the 12 TiB drives. Move that drive to the NUC. Create a drive pool on the NUC using the big drive just transferred. Move the files into the hidden PoolPart directory (a robocopy sketch for this step follows this list). Add the other big drive (and possibly one or two CDs) to the pool. Enable duplication and let it do its balancing magic.
     2. Add one 12 TiB drive to the existing pool on my desktop. Evacuate all other pool drives to the big one I just added. ??? and here's where I get stuck. Can I "recreate" or "re-attach" the pool that has this one disk on the NUC? Is this option unnecessarily complicated?
     Thank you.
  3. I still use a 12yo (this week) external 320GiB hard drive that I stole from an old Dell laptop and placed in an external USB3 housing! Technically, it's only been part of my pool for about 4 years. I don't know why I keep it around...nostalgia.
  4. Do you have a price point (aside from "as inexpensive as possible") and a sizing estimate? You might look at something like the following for direct-attached or network-attached storage: https://www.amazon.com/stores/TerraMaster/page/5E802F2F-5AC0-4C37-B11D-61028DB9AB95?ref_=ast_bln Then you can find some relatively inexpensive 6 or 8 TiB internal drives, or you can "shuck" a couple of external hard drives (remove the drive from its case) like this one: https://www.bestbuy.com/site/wd-easystore-12tb-external-usb-3-0-hard-drive-black/6425301.p?skuId=6425301 Hope this helps.
  5. Hi Folks, Part of this question was previously asked and answered (here) but I recently noticed something interesting: CloudDrive drives do get indexed by Everything (Voidtools). Why is this the case for CD but not DP? Are the drivers not similar? This is not a burning question of course because the solution is just to index as a folder rather than NTFS filesystem. Still, it would be nice to get a technical explanation. -Marc
  6. I've been experiencing very long boot times over the past year and I'm trying to rule out everything I can. I've tried disabling everything and using Safe boot, to no avail. It isn't a startup program, because a clean boot behaves the same, and I've also removed all external devices. Is it possible for me to temporarily disable the DrivePool drivers to help diagnose this? Is there something else I can do to diagnose the problem? (A boot-timing sketch follows this list.) Thanks.
  7. One thing you *could* do is use a traffic shaper with a scheduler (like NetLimiter or NetBalancer). I have a need for this too and I've done it in the past. These programs can limit the bandwidth of the entire system or of individual applications/services. Turn CloudDrive's throttling off, then limit it during the daytime with the traffic shaper (a QoS-policy sketch follows this list). Chris, which processes would need to be limited: CloudDrive.Service.exe and CloudDrive.Service.Native.exe? Are there any caveats? -Marc
  8. Out of curiosity, exactly what is your use case? Why do you need to measure uncached read/write speeds on a virtual drive? I know it's an inconvenient hack, but you could aggregate perfmon throughput stats for the individual disks if you really need to. Otherwise, you might get some mileage out of a nice PowerShell script (a Get-Counter sketch follows this list).
  9. @anderssonoscar, this has been mentioned elsewhere but I think it's important enough to repeat: technically, the pool that you have "backed up" to the cloud is *replicated* to the cloud. There is a distinct difference between backup and replication that is lost on some people. If you make a change to something in your pool (including deletion), that change is propagated to the cloud drive in a (potentially very) small window of time. This window is of course dependent on the number of files and amount of data being replicated. My point is that it is not a backup, or if it is, it's a very weak one! A backup implies that you can always restore files that were lost, deleted, or otherwise changed, intentionally or unintentionally. StableBit replication is designed to keep a specified number of copies of something in your locations of choice, so that at least one copy is likely to survive corruption or destruction in one or more of those locations. It cannot reliably be used as a backup and should not be. Personally, I do this: I replicate important stuff locally with DP, but I use a backup program to back up to my CD. Unimportant stuff (i.e., easily reproduced or re-obtained) may simply be stored on my CD without replication, or replicated locally.
  10. OK, this may be the last and final straw. My cache was completely consistent for 2-3 days, with nothing to upload. After what I thought was a normal shutdown and no "normal" writes to the cloud drive (only reading of metadata by a backup program which, e.g., does not modify the archive bit), the CD service determined that something had caused a provider resynchronization. I find myself once again uploading close to 50 GB. As I've stated before, I have limited total monthly bandwidth, so this is a serious issue for me. In my previous post I asked how I can mitigate the amount of data that needs to be resynchronized/uploaded after a crash or what have you. If this cannot be improved then I have no choice but to seek a solution other than CloudDrive. I will not bad-mouth the product because I think you and Alex are doing an amazing job in general. I just find it hard to believe that there aren't more people with my problem (I've definitely seen some on the forums with the resync-on-crash problem) who would at least support the notion that resync should be redesigned. I still don't understand the mapping between sets of NTFS blocks and CD "blocks" that makes some form of local journaling/tracking impractical. I mean please, I'm a computer scientist; give us an example of what you're talking about in your 12/10/17 post.
  11. I would think this is transactional. Doesn't the cloud service API confirm that something has been committed? Additionally, why wouldn't it be possible to keep a journal (in the cloud) of the "blocks" written to the cloud? If there's a crash, just verify the journal (an illustrative sketch of this idea follows this list). Given what you know about Windows, can you give me a series of steps (like the aforementioned disabling of thumbnails) that will minimize my cache re-upload after a crash? For whatever reason I've experienced many crashes in the last 10 months and frankly, with my limited bandwidth, I just can't afford to keep doing this. I really like CloudDrive and I don't think I'm an odd use case (backing up to it).
  12. After a recent BSOD, something like this just happened again. The "to upload" was down to around 82G and now it's back up to 175G+. There has got to be a way to prevent this from happening...
  13. Last night I took it upon myself to do a binary search for the first occurrence of this problem between betas 930 and 949. I can confirm that, at least on my setup, the problem seems to start with 948 and continues to 950. Viktor, can you check this? Thanks.
  14. Guess what? I found the key in my clipboard manager! I was absolutely sure that I had the right key because of the date. I detached my cloud drive, then reattached it, and it asked for the key. Here's where things get sticky, and please tell me if you've seen this before. I entered the key and it did... nothing. It went back to showing me the "this drive is encrypted" screen. I looked at the Feedback log and one message says "Complete - Unlock drive Backups" but the other message still insists the drive needs to be unlocked. Was this a silent failure? I tried several more times with both keys that I had... same result. Oh well, I figured I just had the wrong key. I stupidly (but reluctantly) destroyed the drive, thinking I had no way in hell of getting it back. Here's where I'm kicking myself: I've set up another drive on the same provider, recorded the key, attached it, and written a small amount of data to it. Just to assure myself that I did something absolutely, ridiculously stupid by destroying the drive, I detached the new drive and then tried to attach it again with the key I just generated. And... I'm getting the same damn result as I had with the previous drive. I am not able to unlock it no matter what I do. This is CD 1.0.2.950 Beta.
  15. But that doesn't make sense to me. How can you check what needs to be uploaded if there can always be a discrepancy between NTFS blocks and those that comprise the chunks uploaded to the provider? If *nothing* (or only a little) changed on the local pool, how can there be more than 4 times the data that was still waiting to be uploaded previously? If DrivePool is writing files to the cloud drive, which get turned into NTFS blocks in the local cache, then aggregated into 100 MB chunks(?) and uploaded to the provider, how can the total size of those chunks be much more than the size of the files written plus metadata? Even if NTFS is writing to different blocks, unless we're dealing with sparse files, wouldn't CD just chunk the blocks that belong to the files waiting to be written? What's in the other 425G worth of chunks? Oh, I'm starting to get the picture: it's data from previously written files or whatever, because you can't change a chunk on the provider piecemeal. So if I originally upload a chunk C that contains the blocks of 1,000 files and then change one of those blocks, CD has to upload another 100 MB chunk C' to replace C? The problem of fragmentation is greatly exacerbated here because the chunks are so large, and the chunks are large for, among other reasons, transmission efficiency (a quick amplification calculation follows this list). This must really stick in Alex's craw!
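A minimal PowerShell sketch of the pool-seeding step from post 2 above. The drive letter and folder names are assumptions for illustration; DrivePool creates a hidden PoolPart.* folder (with a GUID-style suffix) on each pool member, and the actual name should be checked before moving anything.

```powershell
# Sketch: seed a DrivePool pool by moving already-copied media into the hidden
# PoolPart folder on the new pool member. E:\ and E:\Media are assumptions.

# Locate the hidden PoolPart.* folder DrivePool created on the member drive.
$poolPart = Get-ChildItem -Path 'E:\' -Directory -Force |
    Where-Object Name -like 'PoolPart.*' |
    Select-Object -First 1

# Move the media into the pool structure. /E includes subfolders, /MOVE removes
# the source after a successful copy, /COPY:DAT preserves data/attributes/timestamps.
robocopy 'E:\Media' (Join-Path $poolPart.FullName 'Media') /E /MOVE /COPY:DAT /R:1 /W:1
```

Since the source and destination are on the same volume, Move-Item would also work and is nearly instantaneous (it is effectively a rename); either way, a re-measure from the DrivePool UI afterwards lets the seeded files be recognized before duplication and balancing run.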
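For the slow-boot diagnosis in post 6, rather than disabling drivers right away, one low-risk first step is to read the boot-duration events Windows already records in the Diagnostics-Performance log. A sketch, run from an elevated PowerShell session; the field names are taken from Event ID 100:

```powershell
# Sketch: summarize the last ten recorded boot durations to see whether the
# slowness is in the main boot path or in post-boot background activity.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'
    Id      = 100
} -MaxEvents 10 | ForEach-Object {
    $xml  = [xml]$_.ToXml()
    $data = @{}
    foreach ($d in $xml.Event.EventData.Data) { $data[$d.Name] = $d.'#text' }
    [pscustomobject]@{
        Time       = $_.TimeCreated
        BootTimeMs = $data['BootTime']          # total boot duration
        MainPathMs = $data['MainPathBootTime']  # time to reach the desktop
        PostBootMs = $data['BootPostBootTime']  # background settling afterwards
    }
} | Format-Table -AutoSize
```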
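As a possible alternative to a third-party shaper for post 7, Windows can rate-limit traffic per executable with built-in QoS policies. A sketch, assuming (as that post does) that CloudDrive.Service.exe and CloudDrive.Service.Native.exe are the processes carrying the upload traffic, and picking an arbitrary daytime cap of roughly 10 Mbit/s:

```powershell
# Sketch: per-application outbound throttles using Windows QoS policies.
# Run elevated. The process names and the ~10 Mbit/s figure are assumptions.
New-NetQosPolicy -Name 'CloudDrive daytime cap' `
    -AppPathNameMatchCondition 'CloudDrive.Service.exe' `
    -ThrottleRateActionBitsPerSecond 10MB    # ~10.5 Mbit/s (MB suffix is 1,048,576)

New-NetQosPolicy -Name 'CloudDrive native daytime cap' `
    -AppPathNameMatchCondition 'CloudDrive.Service.Native.exe' `
    -ThrottleRateActionBitsPerSecond 10MB

# In the evening, remove the caps (or schedule both steps with Task Scheduler):
# Remove-NetQosPolicy -Name 'CloudDrive daytime cap' -Confirm:$false
# Remove-NetQosPolicy -Name 'CloudDrive native daytime cap' -Confirm:$false
```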
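The sort of PowerShell script hinted at in post 8 can be as small as a Get-Counter loop over the PhysicalDisk throughput counters (the same counters perfmon displays). A sketch; swap in LogicalDisk counters if the pool or cloud drive letter is what actually needs watching:

```powershell
# Sketch: sample per-disk read/write throughput once per second for 30 seconds,
# then report the average MB/s for each counter instance.
$paths = '\PhysicalDisk(*)\Disk Read Bytes/sec',
         '\PhysicalDisk(*)\Disk Write Bytes/sec'

Get-Counter -Counter $paths -SampleInterval 1 -MaxSamples 30 |
    Select-Object -ExpandProperty CounterSamples |
    Group-Object Path |
    ForEach-Object {
        [pscustomobject]@{
            Counter = $_.Name
            AvgMBps = [math]::Round(($_.Group.CookedValue | Measure-Object -Average).Average / 1MB, 2)
        }
    } | Sort-Object Counter | Format-Table -AutoSize
```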
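Purely to illustrate the journal idea in post 11 (this is not how CloudDrive actually works; the cache layout, journal file, and chunk naming below are invented for the example): keep a record of chunks the provider has confirmed, and after a crash re-upload only the chunks that are missing from, or differ from, that record.

```powershell
# Illustration only: verify a hypothetical committed-chunk journal after a crash
# and list the chunks that would actually need re-uploading.
$journalPath = 'D:\CloudCache\journal.csv'   # hypothetical: ChunkId,Sha256 per confirmed chunk
$chunkDir    = 'D:\CloudCache\chunks'        # hypothetical: one .bin file per cached chunk

$committed = @{}
foreach ($row in Import-Csv $journalPath) { $committed[$row.ChunkId] = $row.Sha256 }

Get-ChildItem -Path $chunkDir -Filter '*.bin' | ForEach-Object {
    $id   = $_.BaseName
    $hash = (Get-FileHash -Path $_.FullName -Algorithm SHA256).Hash
    # Only chunks the journal has never confirmed, or whose contents changed,
    # would go back into the upload queue.
    if (-not $committed.ContainsKey($id) -or $committed[$id] -ne $hash) {
        Write-Output "Re-upload needed: $id"
    }
}
```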
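To put rough numbers on the write amplification described in post 15, here is a back-of-the-envelope calculation. The 100 MB chunk and 4 KB cluster sizes are assumptions taken from that post and from typical NTFS defaults; CloudDrive's real chunk size depends on the drive's settings.

```powershell
# Back-of-the-envelope write amplification when whole provider chunks must be
# re-uploaded for small local changes. Sizes are assumptions, not CloudDrive internals.
$chunkSize   = 100MB   # assumed provider chunk size
$clusterSize = 4KB     # typical NTFS cluster size

# Worst case: one modified cluster forces its entire chunk back into the queue.
$worstCase = $chunkSize / $clusterSize   # 25,600x

# Example: 1,000 small updates scattered across 1,000 different chunks
# queue ~100 GB of upload for ~4 MB of changed data.
$changedData  = 1000 * $clusterSize
$uploadQueued = 1000 * $chunkSize
"{0:N0}x worst-case amplification; {1:N1} MB changed -> {2:N1} GB queued" -f `
    $worstCase, ($changedData / 1MB), ($uploadQueued / 1GB)
```

That mismatch between how little changed and how much must be re-sent is exactly the effect the post describes.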