Covecube Inc.

Royce Daniel

Royce Daniel last won the day on September 28 2016

About Royce Daniel

  • Birthday 03/09/1973

Profile Information

  • Location
    Seattle, WA
  • Interests
    Racing Motorcycles
    Breaking computers
  1. Nope. Just the same Reddit threads that Drash has already pointed out. Talk about popcorn-eating entertainment...
  2. Ah. I thought you were asking me if I found anything interesting in my search. Well, yeah, that quote was interesting. Imagine if some marketing team at Amazon decided to search through everyone's Cloud Drive account to see what pictures, music, movies, whatever are uploaded. They could index it all and build behavior profiles for targeted marketing campaigns. Imagine some user, like my grandmother, who doesn't know WTF encryption is, thinking, "oh great, for 60 bucks a year I can store all my personal information here". *smacks head* I can imagine a lot of dreadful things people do unaware
  3. @Spider99 That quote is directly from Plex's TOS... it's in there. I wouldn't be uploading any *cough* questionable home movies. You might find yourself online in all your embarrassing glory. Wait... wut? I mean, I know they have the ability to go in and look at your files, but I was under the impression that the "chunks" that are uploaded are encrypted. Is that not the case? I did set up encryption. This is all they will see... chunk files, which are encrypted.
  4. I'm sure at some point there might be some kind of "crack down" because there will be users who abuse the service. Let's face it: if a would-be pirate wants to store and run copy-protected material unencrypted... they are going to get caught. From what I've read, Plex Cloud doesn't encrypt the content you upload via Plex to Amazon Cloud Drive, which means Amazon has the ability to see what you have stored and examine it. If it contains embedded metadata suggesting that it was obtained illegally... you're toast. If I were a rogue user I wouldn't use it. People talk about Amazon Clou
  5. After installing beta build 711 of DrivePool I tried this again for kicks. Same error. However, I did some digging, and the same issue pops up under FlexRaid with a running VM. The person reporting the issue solved it by shutting down the VM... thus releasing the drive. That got me thinking. The error I get when the backup fails basically translates into not being able to create a snapshot. This requires full write access, or the ability to make modifications to the drive's properties. I remembered when I attempted to unmount the DrivePool drive when I first created it and tried to ref
  6. VSS: OK OK I get it. Sucks... So what are you using to back up 50TB?!?!
  7. Not sure if this is relevant to this particular thread, but I was having a similar issue with my "cache" drive filling up. I specifically had the Local cache size set to 1.00 GB and left the Cache type setting at its default, Expandable (recommended). I wanted my local cache to stay at or around 1.00 GB because the only other drive I have in my server is the main OS drive, which is a 160GB SSD, and it's the only drive that was available for me to select as a cache drive in the UI when I created the Cloud Drive volume. I only use the Cloud Drive to store backups, so to have a large local cache for
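To put rough numbers on why a cache pinned to a small SSD fills up: whenever data lands on the Cloud Drive faster than the upload link can drain it, the local cache absorbs the difference. A quick back-of-the-envelope sketch (the write and upload rates here are made-up illustration values, not measurements):

```python
# Sketch: net local-cache growth when writes to the CloudDrive
# outpace uploads. All rates below are hypothetical examples.

def cache_growth_gb(write_mb_s, upload_mb_s, hours):
    """Net cache growth in GB after `hours` of sustained writes."""
    net_mb_s = max(write_mb_s - upload_mb_s, 0)
    return net_mb_s * 3600 * hours / 1024

# Writing a backup at 50 MB/s while uploading at 1.5 MB/s:
# the cache absorbs the difference and blows well past a
# 1.00 GB "Expandable" target until the writes stop.
print(round(cache_growth_gb(50, 1.5, 1), 1))  # GB added per hour -> 170.5
```

Under those assumed rates the cache grows by ~170 GB per hour of sustained writing, which is more than a 160GB OS SSD can hold, regardless of the configured target size.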
  8. Oh... yeah duh! It's practically plain text. I'm just so used to everything already being compressed.
  9. I'm using the Windows Server 2012 R2 Essentials server backup option. It does rely on VSS. Using the info contained in your link I was able to start the backup without error, but, like I said, I have to select all the PoolPart folders manually rather than simply selecting the folders in the DrivePool that I want. Which means I'm backing up all the data including the duplicated files. Raw data plus all the duplicated files is roughly 30TB, vs. just the raw data without the duplicated files, which would only be about 12TB. Some of the folders I have set to x3 but most are x2. It would be nice
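The size blow-up from backing up the PoolPart folders directly is just each folder's raw size times its duplication level. A small sketch of that arithmetic (the folder names and sizes are invented for illustration; only the x2/x3 levels come from the post above):

```python
# Why selecting PoolPart folders backs up every duplicate:
# total = sum of (raw folder size * duplication level).
# Folder names and sizes below are hypothetical examples.

folders = {
    # name: (raw_size_tb, duplication_level)
    "Media":     (9.0, 2),  # most folders at x2
    "Photos":    (2.0, 3),  # some at x3
    "Documents": (1.0, 3),
}

raw = sum(size for size, _ in folders.values())
backed_up = sum(size * dup for size, dup in folders.values())

print(f"raw: {raw} TB, backed up: {backed_up} TB")
```

With a mix like this, ~12TB of raw data really does balloon to somewhere near 30TB on the wire, which matches the numbers quoted above.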
  10. Sorry to resurrect an old discussion - I got this in a forum post from 2 years ago where Christopher was answering a question about why Server 2012 R2 Essentials fails during a backup: http://stablebit.com/Support/DrivePool/1.X/Manual?Section=Using%20Server%20Backup%20to%20Backup%20the%20Pool So this fix worked for me, but I have a HUGE problem with backing up duplicated files. Why? Because I have over 60TB of raw storage. Using Amazon Cloud Drive @ 1.5 MB/s it would take me roughly 2 years to do a complete backup - incrementals will be faster, sure, but damn! That first one is a doozy. I co
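For anyone who wants to sanity-check that estimate, the math is simple: bytes divided by sustained throughput. A sketch, assuming a constant 1.5 MB/s (real throughput varies, and duplicated data pushes the total higher still):

```python
# Back-of-the-envelope full-backup duration at a fixed upload rate.
# Assumes a constant 1.5 MB/s; treat the result as an
# order-of-magnitude estimate, not a promise.

def upload_days(total_tb, rate_mb_s):
    total_mb = total_tb * 1_000_000   # decimal TB -> MB
    seconds = total_mb / rate_mb_s
    return seconds / 86_400           # seconds per day

print(round(upload_days(60, 1.5)))  # days for 60 TB at 1.5 MB/s -> 463
```

That's over 460 days of uninterrupted uploading for 60TB alone, before counting duplicated copies, throttling, or restarts, so "roughly 2 years" for the real-world first pass is entirely believable.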
  11. My DrivePool isn't composed of mismatched file systems. All the drives were formatted ReFS and then joined to the DrivePool. I only mentioned that the resulting DrivePool seemed to show up as a pseudo-NTFS volume. I'm not sure the fix you proposed would work in my case?
  12. MEMORY.DMP - uploading

      C:\Users\Administrator>fltmc

      Filter Name                     Num Instances    Altitude    Frame
      ------------------------------  -------------  ------------  -----
      DfsDriver                               1         405000         0
      msnfsflt                                0         364000         0
      Cbafilt                                 3         261150         0
      DfsrRo                                  0         261100         0
      Datascrn                                0         261000         0
      Dedup                                   5         180450         0
      luafv
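If anyone wants to diff filter lists between machines, `fltmc` output like the listing above is easy to pull apart into (name, instances, altitude) rows. A sketch, assuming the stock whitespace-separated layout (column widths can differ between Windows builds, so this is illustrative, not robust):

```python
# Sketch: parse `fltmc` filter-list output into tuples of
# (filter name, instance count, altitude). Assumes the standard
# two header lines followed by whitespace-separated columns.

sample = """\
Filter Name                     Num Instances    Altitude    Frame
------------------------------  -------------  ------------  -----
DfsDriver                               1         405000         0
Dedup                                   5         180450         0
"""

def parse_fltmc(text):
    rows = []
    for line in text.splitlines()[2:]:  # skip header + separator
        parts = line.split()
        if len(parts) >= 4:             # skip truncated/blank rows
            name = " ".join(parts[:-3])
            rows.append((name, int(parts[-3]), int(parts[-2])))
    return rows

for name, instances, altitude in parse_fltmc(sample):
    print(name, instances, altitude)
```

Sorting the parsed rows by altitude (descending) reproduces the load order the filter manager uses, which is handy when hunting for a filter that interferes with VSS snapshots.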
  13. Yes. I moved all the files from the root of the logical volume into the DrivePoolPart folder and they showed up in my mounted DrivePool drive. Hence my reference to it only taking a few seconds, lending credence to it being just a simple file move on the same drive. Of course, I had to move blocks of files from the individual drive roots outside the DrivePool mount point into the DrivePoolPart folder. I fully understand that ReFS isn't supported yet and that the NTFS volume referenced in Disk Manager is only a pseudo-reference. It's not a true NTFS volume. I'm wondering... when I Right Click o