Covecube Inc.

Royce Daniel

Everything posted by Royce Daniel

  1. Make sure you turn off Automatic updates.
  2. Nope. Just the same Reddit threads that Drash has already pointed out. Talk about popcorn eating entertainment...
  3. Ah. I thought you were asking me if I found anything interesting in my search. Well, yeah, that quote was interesting. Imagine if some marketing team at Amazon decided to search through everyone's Cloud Drive account to see what pictures, music, movies, whatever are uploaded. They could index it all and build behavior profiles for targeted marketing campaigns. Imagine some user, like my grandmother, who doesn't know WTF encryption is, thinking, "oh great, for 60 bucks a year I can store all my personal information here". *smacks head* I can imagine a lot of dreadful things people do unaware...
  4. @Spider99 That quote is directly from Plex's TOS... it's in there. I wouldn't be uploading any *cough* questionable home movies. You might find yourself online in all your embarrassing glory. Wait... wut? I mean, I know they have the ability to go in and look at your files, but I was under the impression that the "chunks" that are uploaded are encrypted. Is that not the case? I did set up encryption. This is all they will see... chunk files, which are encrypted.
  5. I'm sure at some point there might be some kind of a "crack down" because there will be users who will abuse the service. Let's face it, if a would-be pirate wants to store and run copy-protected material unencrypted... they are going to get caught. From what I've read, Plex Cloud doesn't encrypt the content you upload via Plex to Amazon Cloud Drive, which means Amazon has the ability to see what you have stored and examine it. If it contains embedded metadata suggesting that it was obtained illegally... you're toast. If I were a rogue user I wouldn't use it. People talk about Amazon Cloud Drive / Plex Cloud as a private Netflix. I don't see it. Netflix has a shared video library where multiple users can view the same file. If every movie nut in the world has a private collection of 10TB or more... holy data Armageddon, Batman! Amazon has some amazing search algorithms they use to manage their data in various ways. With Amazon's TOS they have the ability to use those algorithms on your "private" movie libraries. Who's to say they aren't de-duping to some extent and linking metadata among users' accounts? It's a multi-tenant environment. Could easily be done. Plex's TOS allows for it too. I'm adding this topic to my daily news search because I think at some point there are going to be some really interesting stories that come out of this...
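The cross-account de-duping speculated about above is trivial for a provider that can see plaintext chunks: hashing each chunk gives a content-addressed index, and any hash that shows up in more than one account links those accounts to the same content. A minimal sketch in Python (the account names and chunk data are made up for illustration):

```python
import hashlib

# Hypothetical uploads from two different accounts (illustrative data).
accounts = {
    "alice": [b"intro credits...", b"movie frame data..."],
    "bob":   [b"movie frame data...", b"home video..."],
}

# Content-addressed index: chunk hash -> set of accounts that uploaded it.
index = {}
for user, chunks in accounts.items():
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        index.setdefault(digest, set()).add(user)

# Any hash seen in more than one account is a dedup / linking candidate.
shared = [h for h, users in index.items() if len(users) > 1]
print(len(shared))  # 1 -- the chunk common to both accounts
```

Client-side encryption, like StableBit CloudDrive's, turns each user's chunks into unique ciphertext, which defeats exactly this kind of indexing.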
  6. After installing the 711 beta build of DrivePool I tried this again for kicks. Same error. However, I did some digging and the same issue pops up under FlexRaid with a running VM. The person reporting the issue solved it by shutting down the VM... thus releasing the drive. That got me thinking. The error I get when the backup fails basically translates into not being able to create a snapshot. This requires full write access or the ability to make modifications to the drive's properties. I remembered when I attempted to unmount the DrivePool drive when I first created it and tried to reformat it from NTFS to ReFS. I learned later that it's not a real NTFS drive but its access is limited and you can't reformat it. Perhaps the API that's trying to create a snapshot or VSS copy doesn't have sufficient rights or access to write a boot file (as the error message in the event log suggests)? Or some other underlying mechanic is preventing the backup from creating a snapshot? I'll be honest, I know very little about how VSS works, but from what I was able to read about it my assumption seems logical. If you recall, in the older days of FAT and FAT32 you couldn't make any changes to the file allocation table while the drive was "in use". Even applications like Partition Magic freaked out if something was using the drive while they attempted to make changes to the drive's partition table. Perhaps VSS is something along those lines... *shrug* Just a thought.
  7. VSS: OK OK I get it. Sucks... So what are you using to back up 50TB?!?!
  8. Not sure if this is relevant to this particular thread, but I was having a similar issue with my "cache" drive filling up. I specifically had a Local cache size set to 1.00 GB and left the Cache type setting at its default, Expandable (recommended). I wanted my local cache to stay at or around 1.00 GB because the only other drive I have in my server is the main OS drive, a 160GB SSD, and it's the only drive that was available for me to select as a cache drive in the UI when I created the Cloud Drive volume. I only use the Cloud Drive to store backups, so a large local cache for storing frequently accessed files doesn't apply to me and isn't needed. When I started my backup it blew through that 1 GB very quickly and continued to grow well beyond the 1 GB setting. My "chunks" are set to 100 MB, and since my upload bandwidth couldn't keep up with how fast the backup was making chunks, naturally it piled up. But I wanted the Cloud Drive to throttle the backup, not the other way around, where disappearing drive space and ultimately low drive space causes the CloudDrive to throttle itself. Here's something that was never suggested in this thread or in the others I came across with a similar issue: my fix was to set the Cache type to Fixed. Now my local cache hovers around 1 - 1.2 GB. Once the chunks are uploaded the cache is flushed and the Local cache size stays capped at where I set it. /cheers
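The fill-up described above is a simple producer/consumer imbalance: with an Expandable cache, the backlog grows at the backup's write rate minus the upload rate. A back-of-the-envelope sketch in Python (the write rate is an assumed figure for illustration; only the 12 Mbps upload number comes from this thread):

```python
# Assumed rates, in megabits per second.
write_rate_mbps = 800    # backup software writing into the CloudDrive (assumed)
upload_rate_mbps = 12    # provisioned upload bandwidth

# Expandable cache: backlog grows at the difference of the two rates.
growth_mbps = write_rate_mbps - upload_rate_mbps
seconds_to_fill_10gb = 10 * 8 * 1000 / growth_mbps  # 10 GB expressed in megabits
print(f"~{seconds_to_fill_10gb:.0f} s to accumulate 10 GB of backlog")

# A Fixed cache blocks writes once the cap is hit, so the backup ends up
# throttled to roughly the upload rate instead of eating the system drive.
```

The exact numbers don't matter; the point is that any sustained write rate above the upload rate will grow an Expandable cache without bound.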
  9. Oh... yeah duh! It's practically plain text. I'm just so used to everything already being compressed.
  10. I'm using the Windows Server 2012 R2 Essentials server backup option. It does rely on VSS. Using the info contained in your link I was able to start the backup without error but, like I said, I have to select all the PoolPart folders manually rather than just simply selecting the folders in the DrivePool that I want. Which means I'm backing up all the data including the duplicated files. Raw data plus all the duplicated files is roughly 30TB, vs. just the raw data without the duplicated files, which would only be about 12TB. Some of the folders I have set to x3 but most are x2. It would be nice if DrivePool was VSS friendly. I meant Amazon Cloud Drive's upload caps specifically. The CloudDrive app is really awesome compared to all the others I've tried. All your stuff is really awesome. It's why I bought the whole package. I LOVE your approach to DrivePool in how you can control the amount of redundancy on a per-folder basis. Well, turns out that I WAS saturating my upload pipe. LOL! I haven't looked at my Internet account in a couple of years. It's never been an issue until now. I just remember getting the fastest I could get my hands on: 150 / 5 Mbps. I did some checking and that is their "advertised" speed, not what the line is provisioned for. It's actually provisioned for 50 (up to 200 burst) Mbps / 12 Mbps. This makes total sense since that is what I'm seeing as an average upload speed... 12.6 on average. So I upgraded my account; after I swap the modem out tomorrow I'll have a provisioned speed of 300 / 30 Mbps. But still, 30TB will take roughly 97 days to complete @ 30 Mbps. If I can solve the issue of only backing up raw data and upload only 12TB, that would only take roughly 39 days @ 30 Mbps. But I did some digging and apparently, in the state that it's in now, the CloudDrive app is capped at 20 Mbps.
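The day counts quoted above check out to first order: divide the data size in bits by the line rate in bits per second. A quick sanity check in Python (decimal terabytes, no protocol overhead, which is why these land a few days under the quoted figures):

```python
def days_to_upload(terabytes: float, mbps: float) -> float:
    """Idealized transfer time in days: decimal TB over a Mbit/s link."""
    bits = terabytes * 1e12 * 8
    return bits / (mbps * 1e6) / 86400

print(f"{days_to_upload(30, 30):.0f} days")  # ~93 days: 30 TB @ 30 Mbps
print(f"{days_to_upload(12, 30):.0f} days")  # ~37 days: 12 TB @ 30 Mbps
```

Binary terabytes and transfer overhead account for the gap between these idealized figures and the 97/39 days above.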
  11. Sorry to resurrect an old discussion - I got this from a forum post from 2 years ago where Christopher was answering a question about why Server 2012 R2 Essentials fails during a backup: http://stablebit.com/Support/DrivePool/1.X/Manual?Section=Using%20Server%20Backup%20to%20Backup%20the%20Pool So this fix worked for me, but I have a HUGE problem with backing up duplicated files. Why? Because I have over 60TB of raw storage. Using Amazon Cloud Drive @ 1.5 MB/s it would take me roughly 2 years to do a complete backup - incremental will be faster, sure, but damn! That first one is a doozy. I considered only backing up the most critical data, but even still that's at least 1 year. I've done some comparison with other providers; sure, there are other cloud storage solutions that don't throttle that severely, but you're either limited to a few hundred gigs, or beyond a couple of terabytes it gets enterprise-level expensive. The only other reliable service I could find was Crash Plan, but there is a huge Reddit thread where people are saying that they are only getting about 1.5 MB/s too, and they claim that their service packages are "unlimited". We haven't even officially arrived at the 4K video era yet, so folks buying movies in 4K and archiving them in digital media servers... do you see my point yet? If we're stuck doing huge backups at 1.5 MB/s and it takes 1-2 years to do a complete backup, oh god. I've gone through all my files and the best I can do is 13.5 TB of un-duplicated data that I NEED to back up. Is it possible to engineer DrivePool to put files tagged as "duplicated" in separate folders outside the PoolPart* folder so we can at least manually select only un-duplicated data? Please don't make me go back to Storage Spaces.
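The same arithmetic at Amazon's throttled rate, with the unit trap made explicit: 1.5 MB/s is megabytes, not megabits. A quick sketch (decimal terabytes; real-world overhead, retries, and incremental churn stretch these numbers further):

```python
def days_at(terabytes: float, megabytes_per_s: float) -> float:
    # Unit check: MB/s (bytes), not Mbps (bits) -- an 8x difference.
    return terabytes * 1e6 / megabytes_per_s / 86400

print(f"{days_at(60, 1.5):.0f} days")    # ~463 days for the full 60 TB
print(f"{days_at(13.5, 1.5):.0f} days")  # ~104 days for the 13.5 TB subset
```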
  12. My DrivePool isn't comprised of mismatched file systems. All the drives were formatted ReFS and then joined to the DrivePool. I only mentioned that the resulting DrivePool seemed to show up as a pseudo NTFS volume. I'm not sure the fix you proposed would work in my case?
  13. MEMORY.DMP - uploading

      C:\Users\Administrator>fltmc

      Filter Name                     Num Instances    Altitude    Frame
      ------------------------------  -------------  ------------  -----
      DfsDriver                               1         405000        0
      msnfsflt                                0         364000        0
      Cbafilt                                 3         261150        0
      DfsrRo                                  0         261100        0
      Datascrn                                0         261000        0
      Dedup                                   5         180450        0
      luafv                                   1         135000        0
      Quota                                   0         125000        0
      npsvctrig                               1          46000        0

      Windows 2012 R2 Essentials upgraded to Standard (build 9600)
      StableBit DrivePool version BETA
      BSOD Message: KMODE_EXCEPTION_NOT_HANDLED (covefs.sys)
  14. Yes. I moved all the files from the root of the logical volume into the PoolPart folder and they showed up in my mounted DrivePool drive. Hence my reference to it only taking a few seconds, lending credence to the idea that it was just a simple file move on the same drive. Of course I had to move blocks of files from the individual drive roots outside the DrivePool mount point into the PoolPart folder. I fully understand that ReFS isn't supported yet and the NTFS volume that's referenced in Disk Manager is only a pseudo reference. It's not a true NTFS volume. I'm wondering... when I right-click on the DrivePool volume and try to enter Properties, it causes my server to BSOD. I have to use the Shared Folders snap-in to create network file shares. Is this what you're referring to in regards to the API not being fully linked, or the fact that it "looks" like an NTFS volume so Windows is being stupid and trying to execute NTFS APIs on an ReFS volume?
  15. O.K. So now I understand the underlying architecture. The StableBit DrivePool is actually a "fake" volume... at least fake from the perspective of Windows Explorer and Disk Manager. It's labeled as an NTFS volume but it's really not. AND you CAN add a bunch of ReFS drives with data to a DrivePool and move that data into the DrivePool without having to re-copy any files. When you add an already formatted volume to a DrivePool it creates a hidden folder named (in my case) PoolPart.5de19635-8ccf-4b71-8cc6-e389a29406c3. Each GUID will be different on each drive that you add to the DrivePool. This PoolPart folder is your DrivePool root directory. Upon first adding a drive with data, all your data will be in the normal root directory of the hard drive (top arrow). Just move all the data into the PoolPart folder, following the layout of your folder hierarchy, on each drive (bottom arrow). The screenshot shows each HD mounted to a folder instead of a drive letter, so your view might be different from mine. I mounted the DrivePool drive to an actual drive letter. To answer my earlier question about having an NTFS virtual volume on top of an ReFS logical volume... it's not a TRUE virtual volume; it's a linked directory hierarchy made to look like a drive volume - similar to DFS? It's a true ReFS volume with all the benefits therein. Unless there are some other fancy things happening under the hood that I can't see, and of course I could be totally wrong here, but I moved 12TB worth of data into a fresh DrivePool in under 5 seconds and it all showed up properly in the mounted DrivePool root. Most storage nerds will tell you that it's not really moving the files around on the storage media but more or less updating the volume's metadata; that's why it's so fast compared to copying, or copying/moving to another hard drive. /cheers
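The seeding procedure described above boils down to a same-volume move of everything at the drive root into the hidden PoolPart.* folder, which is near-instant because only directory metadata changes. A hedged sketch in Python (the folder-name pattern comes from the post above; treat this as illustrative, not an official seeding tool, and back up before trying anything like it):

```python
import os
import shutil

def seed_into_pool(drive_root: str) -> None:
    """Move everything at a drive's root into its hidden PoolPart.* folder.

    A same-volume move only rewrites directory entries, which is why
    terabytes can "move" in seconds. Illustrative sketch only.
    """
    entries = list(os.scandir(drive_root))
    poolpart = next(e.path for e in entries
                    if e.is_dir() and e.name.startswith("PoolPart."))
    for entry in entries:
        if entry.path == poolpart:
            continue  # don't move the pool folder into itself
        # In real use you'd also skip system folders such as
        # "System Volume Information".
        shutil.move(entry.path, os.path.join(poolpart, entry.name))
```

On the same volume, `shutil.move` falls back to `os.rename`, which is exactly the metadata-only operation the post is describing.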
  16. Oh duuuuuhhhh... ok now I feel dumb. I am dumb but that's beside the point. I installed the StableBit Scanner for "Everyone else". I saw the Windows Home Server 2011+ version but overlooked it because I am running Windows Server 2012 R2. Doh.
  17. Check out the screen shot in this post... it's clearly there. http://community.covecube.com/index.php?/topic/2174-damaged-pooled-hdd/&do=findComment&comment=14987
  18. Screenshot tells all. I'm running all the latest Beta versions.
  19. Well, you're limited by your network bandwidth, and currently most people have Gigabit... I'm still waiting for 10 Gbps gear to come down in price. $700 for a decent switch is still kind of steep for home / media streaming / file-server archive use. I'm using link aggregation between servers right now and that is more than sufficient for now... I wish they'd hurry up and fix LAG for Windows 10 clients. bah...
  20. Hmmm... I added an ReFS 1TB drive (no data) to a pool and I'm trying to reformat the virtual volume from NTFS to ReFS but it's not allowing me to dismount the volume. It says that it's in use even after I stopped all the StableBit services... Is this not possible with StableBit?
  21. Thanks for the screenshots. I'm glad you don't appear to be having any issues. My concern is running an NTFS volume on top of ReFS. I'm not sure what DrivePool is using - logical? or virtual? on top of physical. I assume it's using a virtual layer since the DrivePool "partition" doesn't show up in a disk manager. My assumption is that the container for the virtual DrivePool volume resides on top of an ReFS-formatted logical volume. There are known performance issues with storing Hyper-V VHDs on an ReFS volume. If my assumptions are correct this is a similar situation - storing files on a virtual NTFS volume. Also, ReFS won't protect you from bit rot if your files are stored on an NTFS virtual volume, even if it's stored on an ReFS logical volume. I have been running Storage Spaces for a couple of years, and when I did my initial build I ran some performance tests with an ReFS logical volume with NTFS virtual volumes and compared it against an ReFS logical volume with ReFS virtual volumes. The ReFS / ReFS setup had faster read/write & random numbers. It was also faster over a network. I easily saturated my 1 Gb pipe client to server, & a 4 Gb pipe from server to server. I'm wondering if I should also set up DrivePool the same way.
  22. Then at this point my main concern is resolving the BSOD. I can't imagine having an NTFS volume on top of an ReFS volume being a good thing. Sort of defeats the purpose of leveraging ReFS, virtual volume or not. My disk array is a bank of 48 SAS disks all connected via an HBA controller. I hope I'm not running into some kind of compatibility issue. The BSOD's error indicates a kernel panic, not a driver issue. I'm still going to try the steps above to see if I can get around the BSOD. Instead of dumping all 4 drives at once onto the DrivePool, I'm going to add them one at a time and reboot after each addition for good measure, then examine the logs.
  23. Brand new hard drives that have never been used in Storage Spaces. Yes, formatted through disk manager with ReFS, but I noticed that when the pool was done being created and the volume mounted, it was NTFS. It made a pooled volume from the free space of the 4 drives. I tried moving one directory via Explorer from the mount point directory into the DrivePool volume and then used Re-measure... my server BSOD'd. I'm guessing the server didn't like having 4 freshly formatted ReFS volumes with a DrivePool NTFS volume on top of them in the free space. I'm going to try to break the pool back apart and make a DrivePool volume with an empty all-ReFS drive, THEN add the x4 4TB drives back into the pool. I was just wondering if someone else had run into this. Since ReFS support is technically still in beta I thought I would point this issue out.