Covecube Inc.

chiamarc

Members
  • Content Count: 34
  • Joined
  • Last visited
  • Days Won: 3

chiamarc last won the day on October 8, 2017

chiamarc had the most liked content!

About chiamarc

  • Rank: Advanced Member
  1. This also depends on how much RAM you have and how efficiently Plex uses that memory and system cache.
  2. Hey all, long-time DP and CD user here. I have a mix of pools for various purposes, but the majority of the storage is audio/video/photos. A few months ago I bought a nice NUC to house my Plex server. That server has about 1 TiB of internal storage used for the OS, Plex, and other related apps. However, it streams media by accessing CIFS mounts on my home desktop (the original location of the Plex server and media), which is far from ideal. Today I purchased two 12 TiB external USB3 drives that I plan to use as primary pool storage, plus maybe another 6 TiB or so for vario…
  3. I still use a 12-year-old (as of this week) external 320 GiB hard drive that I stole from an old Dell laptop and placed in an external USB3 housing! Technically, it's only been part of my pool for about 4 years. I don't know why I keep it around... nostalgia.
  4. Do you have a price point (aside from "as inexpensive as possible") and a sizing estimate? You might look at something like the following for direct-attached or network-attached storage: https://www.amazon.com/stores/TerraMaster/page/5E802F2F-5AC0-4C37-B11D-61028DB9AB95?ref_=ast_bln Then you can find some relatively inexpensive 6 or 8 TiB internal drives, or you can "shuck" a couple of external hard drives (remove the drive from its case) like this one: https://www.bestbuy.com/site/wd-easystore-12tb-external-usb-3-0-hard-drive-black/6425301.p?skuId=6425301 Hope this helps.
  5. Hi folks, part of this question was previously asked and answered (here), but I recently noticed something interesting: CloudDrive drives do get indexed by Everything (Voidtools). Why is this the case for CD but not DP? Are the drivers not similar? This is not a burning question, of course, because the workaround is just to index the pool as a folder rather than as an NTFS filesystem. Still, it would be nice to get a technical explanation (a quick fsutil comparison sketch follows this list). -Marc
  6. I've been experiencing very long boot times for the past year and I'm trying to rule out everything I can. I've tried disabling everything and using Safe boot, to no avail. It isn't startup programs, because a clean boot behaves the same, and I've also removed all external devices. Is it possible for me to temporarily disable the DrivePool drivers to help diagnose this? Is there something else I can do to diagnose (a boot-time measurement sketch follows this list)? Thanks.
  7. One thing you *could* do is try a traffic shaper with a scheduler (like NetLimiter or NetBalancer). I have a need for this also and I've done it in the past. These programs can limit the bandwidth of the entire system or of sets of individual applications/services. Turn your CloudDrive throttling off, then limit it during the daytime with the traffic shaper (a built-in QoS alternative is sketched after this list). Chris, which processes would need to be limited: CloudDrive.Service.exe and CloudDrive.Service.Native.exe? Are there any caveats? -Marc
  8. Out of curiosity, exactly what is your use case? Why do you need to measure uncached read/write speeds on a virtual drive? I know it's an inconvenient hack, but you could aggregate perfmon throughput stats for the individual disks if you really need to. Otherwise, you might get some mileage out of a nice PowerShell script (a counter-sampling sketch follows this list).
  9. @anderssonoscar, this has been mentioned elsewhere but I think it's important enough to repeat: technically, the pool that you have "backed up" to the cloud is *replicated* to the cloud. There is a distinct difference between backup and replication that is lost on some people. If you make a change to something in your pool (including deletion), that change is propagated to the cloud drive in a (potentially very) small window of time. This window is, of course, dependent on the number of files and the amount of data being replicated. My point is that it is not a backup, or if it is, it's a v…
  10. OK, this may be the final straw. My cache was completely consistent for 2-3 days, with nothing to upload. After what I thought was a normal shutdown, and with no "normal" writes to the cloud drive (only reading of metadata by a backup program which, e.g., does not modify the archive bit), the CD service decided that something had triggered a provider resynchronization. I find myself once again uploading close to 50 GB. As I've stated before, I have limited total monthly bandwidth, so this is a serious issue for me. In my previous post I asked how I can mitigate the amount of…
  11. I would think this is transactional. Doesn't the cloud service API confirm that something has been committed? Additionally, why wouldn't it be possible to keep a journal (in the cloud) of the "blocks" written to the cloud? If there's a crash, just verify the journal (a conceptual sketch of that idea follows this list). Given what you know about Windows, can you give me a series of steps (like the aforementioned disabling of thumbnails) that will minimize my cache re-upload after a crash? For whatever reason I've experienced many crashes in the last 10 months, and frankly, with my limited bandwidth, I just can't afford to keep doing this. I…
  12. After a recent BSOD, something like this just happened again. The "to upload" figure was down to around 82 GB and now it's back up to 175+ GB. There has got to be a way to prevent this from happening...
  13. Last night I took it upon myself to do a binary search for the first occurrence of this problem between betas 930 and 949. I can confirm that, at least on my setup, the problem seems to start with 948 and continues to 950. Viktor, can you check this? Thanks.
  14. Guess what? I found the key in my clipboard manager! I was absolutely sure that I had the right key because of the date. I detached my cloud drive, then reattached it, and it asked for the key. Here's where things get sticky, and please tell me if you've seen this before: I entered the key and it did... nothing. It went back to showing me the "this drive is encrypted" screen. I looked at the Feedback log; one message says "Complete - Unlock drive Backups" but another message still insists the drive needs to be unlocked. Was this a silent failure? I tried several more times with both…
  15. But that doesn't make sense to me. How can you check what needs to be uploaded if there can always be a discrepancy between the NTFS blocks and those that make up the chunks uploaded to the provider? If *nothing* (or only a little) changed on the local pool, how can there be more than 4 times the data that was still waiting to be uploaded previously? If DrivePool is writing files to the cloud drive, which get turned into NTFS blocks in the local cache, then aggregated into 100 MB chunks(?) and uploaded to the provider, how can the total size of those chunks be much more than the size of the fil…
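Regarding post 5: Everything's NTFS index reads a volume's MFT and USN change journal directly, so one guess (not an official explanation from Covecube) is that the two virtual volumes simply expose different NTFS structures to that kind of tooling. A quick way to compare them from an elevated PowerShell prompt, assuming D: is the CloudDrive volume and P: is the DrivePool volume:

    # Compare how each virtual volume presents itself to NTFS-level tooling.
    fsutil fsinfo ntfsinfo D:      # CloudDrive volume
    fsutil fsinfo ntfsinfo P:      # DrivePool volume
    fsutil usn queryjournal D:     # does the volume expose a USN change journal?
    fsutil usn queryjournal P: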
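Regarding post 6: before pulling drivers in and out, it helps to have a number to compare experiments against. Windows already records boot duration in the Diagnostics-Performance event log (event ID 100); the sketch below just reads it back. Disabling the StableBit drivers themselves is better done with guidance from Covecube support, so no driver names are assumed here.

    # List the ten most recent measured boot durations (run from an elevated PowerShell prompt).
    Get-WinEvent -FilterHashtable @{
            LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'
            Id      = 100
        } -MaxEvents 10 |
        ForEach-Object {
            $xml  = [xml]$_.ToXml()
            $boot = ($xml.Event.EventData.Data | Where-Object Name -eq 'BootTime').'#text'
            [pscustomobject]@{ When = $_.TimeCreated; BootSeconds = [int]$boot / 1000 }
        }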
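Regarding post 7: besides NetLimiter/NetBalancer, Windows has built-in per-executable QoS policies that can cap outbound traffic, which is the direction uploads use. A rough sketch, assuming the service process names mentioned in the post and an illustrative 8 Mbit/s cap; run elevated and schedule the two halves for day and night:

    # Daytime: cap outbound traffic from the CloudDrive service processes (assumed names).
    'CloudDrive.Service.exe', 'CloudDrive.Service.Native.exe' | ForEach-Object {
        New-NetQosPolicy -Name "Throttle_$_" `
                         -AppPathNameMatchCondition $_ `
                         -ThrottleRateActionBitsPerSecond 8MB    # 8,388,608 bits/s, roughly 1 MB/s
    }

    # Night-time: lift the cap again.
    'CloudDrive.Service.exe', 'CloudDrive.Service.Native.exe' | ForEach-Object {
        Remove-NetQosPolicy -Name "Throttle_$_" -Confirm:$false
    }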
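Regarding post 8: the "nice PowerShell script" can be as small as sampling the stock PhysicalDisk counters and summing read and write throughput per disk; the interval and sample count below are just illustrative.

    # Sample per-disk read+write throughput once per second for ten seconds.
    $counters = '\PhysicalDisk(*)\Disk Read Bytes/sec',
                '\PhysicalDisk(*)\Disk Write Bytes/sec'
    Get-Counter -Counter $counters -SampleInterval 1 -MaxSamples 10 |
        ForEach-Object {
            $_.CounterSamples |
                Group-Object InstanceName |
                ForEach-Object {
                    $sum = ($_.Group | Measure-Object CookedValue -Sum).Sum
                    '{0,-12} {1,14:N0} bytes/sec' -f $_.Name, $sum
                }
        }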
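Regarding post 11: to make the journal idea concrete, here is a purely conceptual sketch, not how CloudDrive actually tracks chunks. The idea is to keep a manifest of chunk hashes next to the cache and, after a crash, send only the chunks whose hash isn't already recorded. The paths, chunk layout, and the Upload-Chunk call are all placeholders.

    # Conceptual only: re-upload just the chunks that are missing from a hash manifest.
    $manifestPath = 'C:\CloudCache\manifest.json'
    $chunkDir     = 'C:\CloudCache\chunks'

    $manifest = if (Test-Path $manifestPath) {
        @(Get-Content $manifestPath -Raw | ConvertFrom-Json)
    } else { @() }
    $known = @($manifest | ForEach-Object { $_.Hash })

    Get-ChildItem $chunkDir -File | ForEach-Object {
        $hash = (Get-FileHash $_.FullName -Algorithm SHA256).Hash
        if ($known -notcontains $hash) {
            # Upload-Chunk $_.FullName              # placeholder for the provider upload call
            $manifest += [pscustomobject]@{ Name = $_.Name; Hash = $hash }
        }
    }
    $manifest | ConvertTo-Json | Set-Content $manifestPath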