
Christopher (Drashna)


Everything posted by Christopher (Drashna)

  1. Well, a lot of modern drives (especially those with SandForce-based controllers) do some level of compression/trickery with the data. I actually just had a nice discussion about this at the HSS forums, in regards to BitLocker. So, at the very least, keep both scripts. You can use them to test whether the drive does any compression like this! As for the more than 128GB ... that's not unexpected. Just remember that each time you create a file, NTFS creates a file system entry for it, and the file's data is allocated in whole clusters (the NTFS cluster size, aka allocation unit size, commonly 4k, depending on how the disk is formatted). Since you were using small files, and a lot of them, this is ... well, perfectly normal. Also, I'm not surprised by this at all. Considering that SSDs do have a limited number of writes, anything that can be done to minimize those writes will extend the life of the drive.
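To put rough numbers on that: every file's data is rounded up to a whole number of clusters, so lots of tiny files consume far more than their logical size. A quick Python sketch (the `on_disk_size` helper is my own illustration, assuming the common 4k allocation unit; note that NTFS can store very small files resident inside the MFT record itself, so treat this as an upper bound):

```python
import math

def on_disk_size(file_size: int, cluster_size: int = 4096) -> int:
    """Space actually consumed on the volume: the file's data is
    rounded up to a whole number of clusters (allocation units)."""
    if file_size == 0:
        return 0
    return math.ceil(file_size / cluster_size) * cluster_size

# A million 1k files: ~1 GB of logical data, ~4 GB of clusters consumed.
logical = 1_000_000 * 1024
allocated = 1_000_000 * on_disk_size(1024)
print(logical, allocated)  # 1024000000 4096000000
```

With a 64k cluster size the same files would consume 64 GB, which is why the allocation unit size matters so much for small-file workloads.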
  2. We do have a number of threads about FlexRAID and SnapRAID. If you haven't already, you may want to check them out. http://community.covecube.com/index.php?/topic/52-faq-parity-and-duplication-and-drivepool/&do=findComment&comment=3542 As for unduplicating the data, check the file protection options. Make sure that pool file duplication is not enabled. After doing that, check the folder duplication option. Any folder marked "x1+" means that a subfolder has duplication enabled. Also, keep in mind that we have a metadata folder on the pool that is duplicated x3. This is hard-coded and cannot be changed. It may grow in size if you're using a lot of reparse points (or similar file system links). Also, if the UI is complaining about the duplication, it should be offering to resolve it, and that may be the easiest way to handle it.
  3. That was the first thing I could find, actually. But yeah, it's pretty powerful.
  4. Yes. As for balancing, if you don't want stuff moved around at all... check the specific balancers. The simplest way would be to disable ALL of the balancers except the Ordered File Placement balancer plugin. I say simplest, because the "Prevent Drive Overfill" balancer may be what was causing issues for you in the past (it sounds like it may have been the case). Disabling all but the one you want means that only IT will be used. And since it's the Ordered File Placement balancer, its settings will be the ONLY relevant ones, outside of the File Placement Rules. And speaking of which: http://dl.covecube.com/DrivePoolBalancingPlugins/OrderedFilePlacement/Notes.txt Make sure that you set this, if you're using both the Ordered File Placement balancer and file placement rules. Otherwise ... well, you may notice issues. Which may be the case here. And sorry for not picking up on that earlier, as well.
  5. What OS are you using? If you're using WHS2011, then grab the WSS Troubleshooter and run it on the server: http://wiki.covecube.com/StableBit_DrivePool_Troubleshooting Run the "Restore DrivePool Shares" option. This will re-share the pooled drives and show all of your content in the shares. As for the drives, all the content is in hidden folders, so you won't see it by default. This gives the appearance of an empty drive, while it may actually be quite full. If you're still having issues, could you open a ticket at https://stablebit.com/Contact ? And if needed, we can set up a remote support session for you, and I can get hands-on with your system and get this resolved for you.
  6. Well, I'm sorry to hear about the issues with FlexRAID and iSCSI. As for the parity, it may be a while off, if at all. StableBit CloudDrive is a huge piece of code, and very complex. Parity would be even more complex, so it may be a while before Alex feels up to looking into it, especially as the next project after this is pretty much StableBit FileSafe, and that's not being developed yet. But we do know that a lot of people really are looking for parity, so at the very least, I'll keep on bringing it up to Alex.
  7. It really depends on his usage. I would DEFINITELY recommend a SSD for the system disk. The speed boost you get out of the system makes it all but necessary these days. As for SSD vs HDD for the games... if he plays one or two games exclusively for a while, and then occasionally plays other games... in that case, I would say get the SSD for the games. Install the new and commonly played ones to the SSD, and the older or less frequently played games to a fast HDD. This way, you get the best of both. Additionally, a 10k RPM drive may not be necessary. A SSHD (hybrid) drive may work just as well here. Or even a regular drive. I personally use a Seagate 1TB drive for my games, and none are on my system drive. They load fairly fast, and I get 60fps in most ... FPSs.... And my system is designed to be lower powered (workstation more than gaming PC). Also, another thing to look at is the CPU. Unless you're playing a lot of high end games that ... actually take advantage of the CPU (most really don't, unfortunately), then a Pentium or a Core i3 CPU will serve you well (I'm living that example, actually). Save the money on a CPU that's going to idle most of the time during a game (unless you manually set it to handle PhysX), and spend it on getting ... well, both a SSD and a nice, fast HDD.
  8. Why not try PowerShell? It's much more powerful. [Byte[]]$out = @(); 0..2047 | %{ $out += Get-Random -Minimum 0 -Maximum 256 }; [System.IO.File]::WriteAllBytes("myrandomfiletest", $out) (Note: Get-Random's -Maximum is exclusive, so -Maximum 256 is needed to cover the full 0-255 byte range.)
  9. By default, "Real Time Duplication" is enabled on StableBit DrivePool. Unless you have a specific reason (and some people do), we recommend that you leave it enabled. What Real Time Duplication does is ensure that any time a file is created, modified, moved, copied, etc., the file is written to both destination disks in parallel. This means that there are no issues with locked files, or the contents changing while duplicating, etc. This includes modifying and appending to files. You can run databases, and even Virtual Machines, off of the pool if you wanted. In fact, I believe that Alex (the developer) does so. I've also run MySQL Server off of a pool at one point (mostly to see how well it worked). I moved it back, because ... well, databases run better from a SSD. This really depends on the situation. But for the most part, it will just error out on the file. This is handled this way to ensure that the file contents do not get out of sync, as that could be even more problematic. As vfsrecycle_kid indicates, StableBit DrivePool will evacuate the contents of a disk if StableBit Scanner marks the disk as damaged (unreadable sectors). You can optionally set this for SMART errors as well.
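The keep-copies-in-sync idea can be sketched in a few lines. This is just an illustration of the principle, not DrivePool's actual implementation; the `duplicated_write` helper and its rollback behavior are my own invention:

```python
import os

def duplicated_write(relative_path, data, pool_parts):
    """Illustrative sketch of real-time duplication: write the same
    bytes to every backing disk, and treat the whole operation as
    failed unless every copy succeeds -- so duplicates never drift
    out of sync with each other."""
    written = []
    try:
        for part in pool_parts:
            target = os.path.join(part, relative_path)
            with open(target, "wb") as f:
                f.write(data)
            written.append(target)
    except OSError:
        # Roll back partial copies so the disks stay consistent,
        # then surface the error to the caller.
        for t in written:
            os.remove(t)
        raise
    return written
```

The point of erroring out the whole operation (rather than keeping whichever copy succeeded) is exactly what the post describes: a half-written duplicate is worse than a failed write.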
  10. Well, in StableBit Scanner, we don't really display the raw SMART data. We do our best to interpret all of it and present it in a meaningful way. This way, you can see how many reallocated sectors you have, not just the raw, nearly unreadable data. As for the snapshots, be careful. These definitely do work, but they can also adversely affect performance, especially in the long run. That is because they work by creating differencing virtual hard drives. Meaning that they create a secondary drive, linked to the original, but containing only the changes. Enough of these, and ... well, I'm sure you can see why it could affect performance. Another option is Windows Backup. It uses VHDs to create snapshots of the system as well, but on a dedicated drive... If you're using Windows 8... then you're pretty much screwed, as there is no UI. However, you can still use "WBADMIN" (command line util) to create backups. For instance, my server has the normal backup, and I've added a once-a-week backup additionally, that backs up the system to a completely different drive. This is the command that I run: wbadmin start backup -backupTarget:T:\ -include:C:,H:,W:,Y: -systemState -allcritical -quiet Some of the flags aren't needed (like systemState), but I'd rather be safe. The C:\ drive is the system, H:\ is my Hyper-V storage (deduplicated), W:\ is WDS (PXE/network boot), and Y:\ is my work "drive" for projects. This backs up the system to the T:\ drive. But I could add a full path and the like there, I believe. And you could set up your Windows VM to do this, as well. Just use the Task Scheduler to do this on a schedule.
  11. Yup, unfortunately moves are just copy-then-delete commands. As for file system filters: file system filters ... well, sit "on top" of the file system, and ... filter any and all requests made to the file system. This is actually how antivirus programs implement their real-time scanners. They're just file system filters. This is also how UAC works, some disk encryption programs (DiskCryptor, specifically), and even how the Data Deduplication feature in Server 2012R2 works (it stores the duplicate data in the System Volume Information folder, and "rebuilds" the data when you access it, by intercepting it "in transit" and merging the parts back together). Also, the UAC virtualization that can occur happens this way (luafv). Specifically, a command is sent to the disk (look up info, access, copy, delete, etc). The file system filter intercepts this command and determines what to do with it. Then it passes it on to the actual disk. So, it's conceivable to create a file system filter that will look for file deletes, check the source (network or local), and then actually CHANGE the command (to a move, or ... cancel it altogether) at the time of the command. During the XP era, there used to be half a dozen utilities that actually did this. But I've been looking and can't really find one. I'll keep looking, but I can't promise anything. This is also why antivirus programs can cause performance issues... and even stability issues.... filters are incredibly powerful, and need to work "just right".
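As a rough user-space analogy of that intercept-inspect-forward pattern (real filters are kernel drivers loaded through the filter manager; this Python sketch, including the `filtering_remove` name and its parameters, is purely my own illustration of the idea):

```python
import os

# Keep a handle to the "real" operation, the way a filter sits above
# the actual file system driver.
_real_remove = os.remove

def filtering_remove(path, *, allow_network_deletes=False,
                     is_network_request=False):
    """Intercept a delete request, inspect it, and either veto it or
    forward it to the underlying operation -- the same decision a
    file system filter makes before passing a request down the stack."""
    if is_network_request and not allow_network_deletes:
        raise PermissionError(
            f"delete of {path!r} vetoed: network deletes are disabled")
    _real_remove(path)  # forward to the real operation
```

A kernel minifilter does the same thing one layer down: its pre-operation callback sees the request, and can complete it, fail it, or pass it along.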
  12. I've let Alex know, and we'll see what he says. However, I wouldn't be surprised if the SSD is doing something at the firmware level to "optimize" this. A better way would be to create files with random data. The above uses the same data each time, and it is absolutely possible that the SSD firmware is "cheating" and doing some sort of optimization. Using random data would mean that it would be impossible (or very hard) for it to do something like that.
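A quick way to see why random data defeats that kind of firmware trickery is to compare how well repeated versus random bytes compress. Here zlib stands in for whatever the controller might do internally (that substitution is my assumption; the controller's actual scheme is opaque):

```python
import os
import zlib

# 64 KB of a single repeated byte versus 64 KB of random bytes.
repeated = b"\xAA" * 65536
random_data = os.urandom(65536)

# Repeated data collapses to almost nothing; random data stays
# roughly its original size (plus a little overhead).
print(len(zlib.compress(repeated)))
print(len(zlib.compress(random_data)))
```

A drive that transparently compresses writes would get the same "free" savings on the repeated pattern, which is exactly why a test file built from random data gives a more honest measurement.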
  13. Yes, it's mostly a matter of having enough space to duplicate the data ... and then to "reduplicate" in the case of a failed drive. Also, as I've mentioned before, the Seagate Archive drives work very well with StableBit DrivePool. Especially if you mainly have videos or other files like that. Write performance isn't fantastic (but it's still pretty good, actually!), but if it's a concern, then get the SSD Optimizer balancer and a couple of SSDs. And they're the best price-per-TB ratio on the market (reliably, at least). And yes, having online backup is important as well.
  14. In that case, then yes, formatting with the larger allocation unit size may be a very good idea.
  15. If the folder exists outside of the PoolPart folder, we don't care about it. It's not part of the pool structure, and it is ignored (other than being counted as "Other" data). However, the $RECYCLE.BIN folder is a hidden folder by default. And more importantly, it's marked as a system file. You'll need to enable the "Show hidden files" option, as well as disable the "Hide protected operating system files" option, to be able to see the folder.
  16. .... normally. The default options should be the best for it, as the Server Backup code can be ... finicky sometimes.
  17. And yeah, man, Plex sure is a resource hog. That's part of why I switched to Emby. It's .... got its moments (I've never seen a 54 queue length on a SSD before.... but that was a "one time" deal, because of the switch from the gd library to ImageMagick). But as long as you're comfortable with ZFS and how it works, then whatever product fits your needs!
  18. To restore the client backups, stop the services for it, copy the files over, and start them back up. You may want to run a repair after doing so... but that's it. As for Emby, I'm using the Emby Home Theater app for the HTPC. But I'm using the web UI primarily, and Chromecast. If I was using games and other apps more, I'd consider using the Emby bridge for Kodi. But it works fine as is. The advantage to the Emby Home Theater app is that if you set up Emby to use network shares (or use the path substitution), the home theater app will play the files DIRECTLY instead of transcoding and streaming them. The downside is that you have to install codecs on the system manually. I think this applies to the Kodi bridge as well. As for the Flicr device, that's pretty neat! And yup, you nailed it on the head. DrivePool was designed to "just work" and to be very simple to use. We're glad that this design tenet definitely came across. And Plex and Emby and Kodi are all very similar in that regard. With a lot of settings to dig into, if you really want.
  19. I do go into the StableBit Scanner's surface scan here: http://blog.covecube.com/2014/10/why-using-stablebit-scanner-is-a-good-idea/ It's a .... bit of a read, though. And here is the manual section (from WHSv1's StableBit Scanner version, but the code hasn't changed significantly): http://stablebit.com/Support/Scanner/1.X/Manual?Section=Surface%20Scanner Also a bit of a read. But basically, the surface scan attempts to read from each and every sector on the disk, once a month (configurable, default setting). This means that it makes sure that the data is readable. It doesn't check its value... just whether it's accessible. However, this can (and in some cases does) trigger the drive's built-in error correction routines. Meaning that it may remap the sector, or repair it. This entire process is usually called "data scrubbing". Additionally, if Scanner detects damage, and DrivePool is installed on the same system, it will cause DrivePool to automatically evacuate the contents of the disk. This is done to preserve the integrity of your pool, as this indicates a serious issue, and in a lot of cases ..... can and will spread to more of the disk (it's a physical defect or damage, and usually isn't just affecting one sector). As for ReFS... it's very new, and with time it will get better. That said, the write speeds are still pretty atrocious, from some of the stats I've seen... But it does have the built-in error checking.... which only really works well with a Storage Spaces parity or mirrored array (it will read from another disk to repair the damaged file!). As for disk savings... what you're saving in disk space, you're offsetting in CPU cycles. Parity and similar schemes are computed mathematically across your data. Depending on your system... those extra CPU cycles can grind your system to a halt. And depending on how often this happens, the offset in energy may have paid for that extra disk space.
It's something to at least consider, though it probably won't be that drastic. As for APCd... IIRC, I've seen others with issues with it in the past. So it could very much be the issue.
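The core of a surface scan can be sketched very simply. This is an illustration of the concept, not StableBit Scanner's code; the `surface_scan` helper and the 4096-byte read size are my own assumptions, and it reads a regular file rather than the raw device:

```python
import os

SECTOR = 4096  # read granularity; a real scan uses the device's sector size

def surface_scan(path):
    """Attempt to read every sector-sized chunk of a file, returning
    the offsets that could not be read. This checks readability only,
    not whether the data's contents are correct -- just as described
    above. A failed read is what triggers the drive's own remapping
    and, in Scanner's case, a pool evacuation."""
    bad_offsets = []
    size = os.path.getsize(path)
    with open(path, "rb", buffering=0) as f:
        for offset in range(0, size, SECTOR):
            try:
                f.seek(offset)
                f.read(SECTOR)
            except OSError:
                bad_offsets.append(offset)
    return bad_offsets
```

On a healthy disk the result is an empty list; any offset that lands in the list corresponds to a sector the drive could not return.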
  20. Depends on where you set up the backups from. If you use the dashboard, it will want to dedicate the entire disk to backups. To the point of repartitioning and formatting the disk. You can use "wbadmin.msc" to ... well, get a LOT more control over the backup configuration. In fact, I highly recommend this. And .... you could use that "control" to back up the "X:\PoolPart.xxxxx\ServerFolders\Client Computer Backups" folder for each pooled drive, in fact. However, I would recommend using the "back up to a volume" option. Just use one partition on the backup drive, and copy the client computer backup to that drive as well. That way, you're not worried about messing with partitioning and the like. As for copying over, there are a bunch of options. Microsoft SyncToy is a great free option that's easy to set up. But AllWays Sync, FreeFileSync, SyncBack, etc are all good utilities to do this. If you don't mind a bit of command line, using Robocopy and the Windows Task Scheduler is another great idea.
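If you go the Robocopy route, the behavior you'd want is a one-way mirror. As a rough illustration of what `robocopy <source> <dest> /MIR` does (a simplified Python sketch of the mirroring idea, written by me, not a replacement for the real tool):

```python
import os
import shutil

def mirror(src, dst):
    """One-way mirror in the spirit of robocopy's /MIR switch: copy new
    and updated files from src to dst, and delete files in dst that no
    longer exist in src."""
    os.makedirs(dst, exist_ok=True)
    src_files = set()
    # Pass 1: copy new/changed files from source to destination.
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        os.makedirs(os.path.join(dst, rel), exist_ok=True)
        for name in files:
            rel_path = os.path.normpath(os.path.join(rel, name))
            src_files.add(rel_path)
            s = os.path.join(src, rel_path)
            d = os.path.join(dst, rel_path)
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)
    # Pass 2: remove destination files that vanished from the source.
    for root, _dirs, files in os.walk(dst):
        rel = os.path.relpath(root, dst)
        for name in files:
            rel_path = os.path.normpath(os.path.join(rel, name))
            if rel_path not in src_files:
                os.remove(os.path.join(dst, rel_path))
```

Scheduled via the Windows Task Scheduler, the real `robocopy ... /MIR` command gives you the same effect with far more robustness (retries, logging, NTFS attributes).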
  21. What sort of bit rot are you referring to? The physical medium degrading (which StableBit Scanner specifically detects)? Or the random bit flips "caused by cosmic rays"? I put this in quotation marks because, given the actual probability of this happening .... well, you'll win the lotto first, most likely. Also, modern drives do a LOT of error detection and correction. And they do this invisibly and silently, so you never see it happening. The above may be common in older drives, but anything remotely modern (eg, using SATA, and even a bit before that) is not very prone to these issues. I'm definitely familiar with Plex. I was using it recently, until trying MediaBrowser/Emby again. Emby works a lot better for my usage. But both are good programs! As for the TV tuner stuff, yup. It really sucks when everything is on at the same time. Do you have the manufacturer's software installed? If so, then that could definitely be the cause. It may even be intentional, based on its settings. Worth checking out, and digging into. Personally, I just use the default Windows management for batteries. It works really well, for the most part.
  22. The "electrical" failure of your disk is a very hard thing to diagnose. Other than keen observation/deduction, I'm not sure there is a way to "detect" that... aside from trial and error (aka, pulling out one drive at a time .... which I've had to do before). Well, first, anyone is "eligible" to submit a ticket. We don't care if you have a license or not. We just want to help make sure everything is working well! As for the CableCARD .... I'm sorry to hear that. I used to do the PVR stuff, so I definitely understand. However, IIRC, the HDHomeRun Prime is a network-based CableCARD tuner, and I ... think... it uses UPnP/DLNA for the live TV streams. May be worth checking into. It could get you off of Windows entirely. In fact, have you checked out MediaBrowser (now "Emby")? It supports a number of PVR solutions, as well as an extensive download library. It may be just what you are looking for, and may let you get away from Windows entirely, if you so desire. As for troubleshooting, unless the application has specific logs (and we do), the Event Viewer on Windows is an incredibly helpful diagnostic tool! It is definitely the "go to" thing for figuring out issues. And there is a LOT you can do with it. As for the power plans, I do believe I said this already, but ... it really shouldn't have switched. It's very odd that it did. Because each power plan has "on battery" and "plugged in" options. As for the different plans... well, if you look into them, they have different times... and most importantly, check the "Processor power management" section. That actually makes a big difference, as this option can (and does) throttle the CPU based on the system status (plugged in or not), and can extend battery life and reduce power consumption.
  23. Just to clarify, the Read Striping feature should take care of this, for the most part. At least from a read aspect. And for a bit of detail about the feature (and some related stuff), you should check this out: http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Performance%20Options And here is the section about File Placement Rules: http://stablebit.com/Support/DrivePool/2.X/Manual?Section=File%20Placement
  24. A dedicated SSD for the metadata folder is actually a very good idea! Considering that Plex can be a bit slow about loading this data.... well, "brilliant" is fitting here. Though OneDrive isn't as big of a deal, it is still a good idea.
  25. Nope. Other than maybe with your controller and enclosure. If you haven't formatted the drive, then when you add it to the pool, StableBit DrivePool will format it as a GPT disk. This has the advantage of supporting Advanced Format drives natively, and it supports ... well, up to 16EB volumes (16 MILLION TBs), IIRC. So, no 2TB limit on volume size! Otherwise, you can manually initialize and format the disk in Disk Management (run "diskmgmt.msc"). It will prompt you to initialize any uninitialized disks as soon as it opens. One caveat here: what are you storing on your pool (you don't need to answer)? If you're using mostly large files (like videos, ISOs, the client backup database, etc), then you may want to manually format the drive and use a higher allocation unit size. The downside to doing so is that you may end up with more "wasted" space on the disk (as a 1k file will use a full "cluster", even if it's a 64k cluster). This is what we call "slack space", and it is counted as part of "Other" on your disks. The upsides: larger contiguous chunks of data, which means potentially faster access, and less fragmentation (as each chunk is larger and kept together). This is entirely personal preference, and not strictly required. Personally, I have a couple of Seagate Archive (8TB) drives, and ... well, no issues with them.