Everything posted by Shane

  1. Was referring to DrParis from the screenshot, yes. That error in the log file might be worth submitting to Covecube?
  2. It looks like you are using the special Archive Optimizer plugin? According to that plugin's Notes page, I believe it uses and requires background balancing and may not "play nice" with the other balancers. You may wish to check how you have prioritised it relative to your other balancers and experiment a bit to see whether that is what is disabling manual balancing. http://dl.covecube.com/DrivePoolBalancingPlugins/ArchiveOptimizer/Notes.txt
  3. Were there any matching entries in the DrivePool logs? C:\ProgramData\StableBit DrivePool\Service\Logs
  4. Hmm. Based on those error messages and what you've described, while waiting for Covecube I'd be eyeing the network drivers on the machines involved (client, server or both), checking for excessive memory utilisation (paged vs non-paged pools), and testing whether booting with uTorrent disabled has any effect (keep it disabled long enough that the problem would normally have recurred, so you can tell whether uTorrent was actually the cause).
  5. Here's how I perceive the whole alpha/beta/version ball of string.

     Alpha - the bare skeleton of a program; very crash-prone, full of bugs and lacking many planned features.

     Beta - all major planned features present and minor features still being added; lots of minor bugs, often still some major bugs.

     RC (Release Candidate) - all major features and most/all minor features; only minor/unknown bugs should remain.

     Release - all major features, all minor features, no known bugs (except maybe a couple that are proving really hard to fix).

     Stable - no known bugs, or at least the few remaining known bugs are well-documented edge cases with workarounds.

     Final - can mean Stable, can mean Release, or can mean the last minor revision of a particular version on the roadmap.

     Version - a numerical way of distinguishing the software from the last time it was published, often in the form of a date (e.g. ymd 20120616), an integer (e.g. build# 1234), a revision (e.g. major.minor 2.3) or a combination (e.g. ymd.build, revision.build or y.major.minor.build).

     Roadmap - a list of planned features sorted by planned version, at least in theory.

     For a hypothetical example, Fooware 2014 build 5.4 Beta might be the 5th major version of a program published in 2014, which has seen 4 minor revisions (each adding a minor feature and/or fixing some bugs since the previous revision), and which is still in Beta (has a tendency to crash for at least six different reasons, along with numerous edge-case problems). To further confuse things, Alpha/Beta/etc. can refer to a particular version of a program, to a range of versions of a program, or be a separate tag independent of the version numbering scheme, depending on the preference of the person(s) doing the naming. For example, if you see a roadmap with Fooware 2014.5.3 Stable followed by Fooware 2014.5.4 Beta, it likely means a new minor feature has been added that may have introduced some new bugs.
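     As a toy illustration of that last example (the program name, version string and stage tag are all invented), here's a Python sketch decomposing a hypothetical y.major.minor scheme:

         # Decompose the hypothetical "Fooware 2014.5.4 Beta" version string from above.
         version, stage = "2014.5.4", "Beta"                    # invented example values
         year, major, minor = (int(p) for p in version.split("."))
         print(f"published {year}, major version {major}, minor revision {minor}, stage {stage}")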
  6. Actually, parity support doesn't mandate that the user's content be unreadable on other machines; that depends on the implementation used. The thing is, while it's possible to add even folder-level parity to JBOD-style pools in such a way as to maintain content readability on non-pool machines and still keep things simple for the end user, implementing it is way past non-trivial, and I've no idea whether the performance costs would make it impractical. Sort of like hot fusion. (But if any angel investors are out there with a loose seven figures to punt Covecube's way, maybe Alex would be happy to work on it for a few years.)
  7. Hi Vincent, you've prompted me to write a FAQ on this topic. http://community.covecube.com/index.php?/topic/52-faq-parity-and-duplication-and-drivepool/ The TLDR for your particular question is: the parity block has to be as big as the biggest data block you're protecting, so the moment you increase the size of your biggest parity-protected folder you're going to need a bigger parity block (or mathematical equivalent). A disk doesn't change in size; folders can vary dramatically. So there are more problems to overcome than with a "simple" drive-level parity system, even before ensuring that the parity, duplication and balancing subsystems cooperate.
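     As a toy illustration of that sizing problem (the folder names and sizes are invented for the example), here's a quick Python sketch:

         # Each parity block must be at least as large as the biggest protected block,
         # so growing the biggest folder forces the parity requirement to grow with it.
         def required_parity_size(block_sizes_gb, n_parity=1):
             return max(block_sizes_gb) * n_parity

         folders = {"Photos": 120, "Music": 80, "Docs": 5}    # hypothetical sizes in GB
         print(required_parity_size(folders.values()))        # 120 GB of parity needed
         folders["Photos"] += 40                              # the biggest folder grows...
         print(required_parity_size(folders.values()))        # ...now 160 GB of parity needed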
  8. The topic of adding RAID-style parity to DrivePool was raised several times on the old forum. I've posted this FAQ because (1) this is a new forum and (2) a new user asked about adding folder-level parity, which - to mangle a phrase - is the same fish but a different kettle. Since folks have varying levels of familiarity with parity I'm going to divide this post into three sections: (1) how parity works and the difference between parity and duplication, (2) the difference between drive-level and folder-level parity, and (3) the TLDR conclusion for parity in DrivePool. I intend to update the post if anything changes or needs clarification (or if someone points out any mistakes I've made).

     Disclaimer: I do not work for Covecube/Stablebit. These are my own comments. You don't know me from Jack. No warranty, express or implied, in this or any other universe.

     Part 1: how parity works and the difference between parity and duplication

     Duplication is fast. Every file gets simultaneously written to multiple disks, so as long as all of those disks don't die the file is still there, and by splitting reads amongst the copies you can load files faster. But to fully protect against a given number of disks dying, you need that many times the number of disks. That doesn't just add up fast, it multiplies fast.

     Parity relies on the ability to generate one or more "blocks" of reversible checksums, each equal in size to the largest protected "block" of content. If you want to protect three disks, each parity block requires its own disk as big as the biggest of those three disks. For every N parity blocks you have, any N data blocks can be recovered if they are corrupted or destroyed. Have twelve data disks and want to be protected against any two of them dying simultaneously? You'll only need two parity disks. Sounds great, right? Right. But there are tradeoffs. Whenever the content of any data block is altered, the corresponding checksums within the parity block must be recalculated, and if the content of any data block is corrupted or lost, the corresponding checksums must be combined with the remaining data blocks to rebuild the data. While duplication requires more disks, parity requires more time.

     In a xN duplication system, you xN your disks and each file is simultaneously written to N disks. Any N-1 simultaneous disk failures are always survivable (more may be, depending on which disks die): you replace the bad disk(s) and keep going - all of your data is immediately available. The drawbacks are the geometric increase in required disks and the risk of the wrong N disks dying simultaneously (e.g. with x2 duplication, if two disks die simultaneously and one happens to be storing duplicates of the other's files, those files are gone for good).

     In a +N parity system, you add +N disks; each file's data is written to one disk and its parity checksums are calculated and written to the N parity disks. If any N or fewer disks die, you replace the bad disk(s) and wait while the computer recalculates and rebuilds the lost data - some of your data might still be available, but no data can be changed until it's finished (because parity needs to use the data on the good disks for its calculations). (See the toy XOR sketch below this post for a concrete single-parity example.)
     (Sidenote: "snapshot"-style parity systems attempt to reduce the time cost by risking a reduction in the amount of recoverable data; the more dynamic your content, the more you risk not being able to recover.)

     Part 2: the difference between drive-level and folder-level parity

     Drive-level parity, aside from the math and the effort of writing the software, can be straightforward enough for the end user: you dedicate N drives to parity, each as big as the biggest drive in your data array. If this sounds good to you, some folks (e.g. fellow forum moderator Saitoh183) use DrivePool and the FlexRAID parity module together for this sort of thing. It apparently works very well.

     (I'll note here that drive-level parity has two major implementation methods: striped and dedicated. In the dedicated method described above, parity and content are on separate disks, with the advantages of simplicity and readability and the disadvantage of increased wear on the parity disks and the risk that entails. In the striped method, each disk in the array contains both data and parity blocks; this spreads the wear evenly across all disks but makes the disks unreadable on other systems that don't have compatible parity software installed. There are ways to hybridise the two, but that's even more time and effort.)

     Folder-level parity is... more complicated. Your parity block has to be as big as the biggest folder in your data array. Move a file into that folder, and your folder is now bigger than your parity block - oops. This is a solvable problem, but 'solvable' does not mean 'easy', sort of like how building a skyscraper is not the same as building a two-storey home. For what it's worth, FlexRAID's parity module is (at the time of my writing this post) $39.95, and that's drive-level parity.

     Conclusion: the TLDR for parity in DrivePool

     As I see it, DrivePool's "architectural imperative" is "elegant, friendly, reliable". This means not saddling the end user with technical details or vast arrays of options. You pop in disks, tell the pool, done; a disk dies, swap it for a new one, tell the pool, done; a dead disk doesn't bring everything to a halt and size doesn't matter, done.

     My impression (again, I don't speak for Covecube/Stablebit) is that parity falls into the category of "it would be nice to have for some users, but practically it'd be a whole new subsystem, so unless a rabbit gets pulled out of a hat we're not going to see it any time soon - and it might end up as a separate product even then (so that folks who just want pooling don't have to pay twice as much, or more)".
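     To make Part 1 concrete, here's a minimal Python sketch of single-parity (XOR) recovery, the same principle RAID-4/5 uses. It's purely illustrative - nothing here claims DrivePool or FlexRAID works this way, and multi-parity schemes need more general maths (e.g. Reed-Solomon):

         # Toy single-parity demo: the parity block is the XOR of all (equal-sized) data blocks.
         data = [b"block one!", b"block two.", b"blockthree"]   # three 10-byte "disks"

         def xor_blocks(blocks):
             out = bytearray(len(blocks[0]))
             for block in blocks:
                 for i, byte in enumerate(block):
                     out[i] ^= byte
             return bytes(out)

         parity = xor_blocks(data)   # must be recalculated whenever any data block changes
         rebuilt = xor_blocks([data[0], data[2], parity])   # "disk 1" died; XOR the survivors
         assert rebuilt == data[1]   # the lost block is recovered

     Note the write cost visible even in the toy: every change to any data block means updating the parity block too.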
  9. Hi RogerDodger, it's been less than seven hours since your first post on this subject (I don't work for Covecube, so I don't know whether you've submitted error reports to them directly before today), which means it was 5pm on a Friday in Covecube's timezone (United States eastern seaboard) - it's now almost midnight for them. Please don't expect an instant 24/7 response for software you know is in beta. They're still human and they still need to sleep (and even if they were awake and at work, a beta means it's unlikely yours is the only bug that needs fixing).

     For what it's worth, DrivePool can reveal underlying bugs in the Windows/device layers (e.g. because DrivePool requires multiple drives to operate together as a unified pool, requiring those layers to work harder). That's not DrivePool's fault. So re your comment "The desktop has a black screen for a background. I don't really care, but clearly there is some kind of conflict. When I set the desktop to a picture or theme, it does not last." - DrivePool has nothing to do with your display drivers or the Windows display subsystem. Even if you'd set a picture from the pool as a background, Windows caches that picture as a rendered bitmap in a system directory and loads it from there. I would be testing the RAM, CPU, HDDs, backplane, etc. Do you have any stress-testing software?

     Also, I hope you were uninstalling DrivePool, or at least stopping the DrivePool service, before deleting the C:\ProgramData\StableBit DrivePool folder.

     Finally, your data is still available within the individual drives, as DrivePool keeps it as standard files within hidden PoolPart folders in the root of each drive - so if you urgently need a file in the pool it can still be found that way (I suggest using a program that can quickly and easily search your NTFS drives, e.g. Voidtools' Everything).
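     For anyone wanting to locate those hidden PoolPart folders quickly, here's a minimal Python sketch (it assumes Windows drive letters and the standard "PoolPart..." naming in each drive's root):

         # List hidden PoolPart folders in the root of every mounted drive letter.
         import os
         import string

         for letter in string.ascii_uppercase:
             root = letter + ":\\"
             if not os.path.isdir(root):
                 continue
             try:
                 names = os.listdir(root)   # os.listdir also returns hidden folders
             except OSError:
                 continue                   # skip unreadable drives (e.g. empty card readers)
             for name in names:
                 if name.lower().startswith("poolpart"):
                     print(os.path.join(root, name))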
  10. I've gone from single to dual to quad core, 2GB to 4GB to 8GB, and there's a difference, but the biggest leap (by an order of magnitude) in general Windows performance I ever had was when I changed from mechanical platter to solid state for my system drive. My POST-to-desktop time went from a couple of minutes to 20 seconds, and my icon-littered dual-display desktop is immediately responsive. Now if all someone does with their PC is write letters and play solitaire, that's probably irrelevant. But otherwise... The car analogy I use with my non-tech-savvy friends is, "It's like the difference between a station wagon and a sports car." (backing onto topic, another +1 for "option") (wait, this is the off-topic section... in that case, here's a video - my favourite part is what they do to the drives while the machine is running...)
  11. Thanks for the explanation, Alex. It's a shame that shell extensions are so inefficient; if I have to choose between "works well" and "looks slick" I'll pick the former every time.
  12. I've now posted and pinned a brief FAQ in this forum on the Unduplicated vs Duplicated vs Other vs Unusable terms. lyzanxia, per that FAQ, could you check whether you have folder duplication turned off for one or more of your folders (e.g. in version 2.x, via Pool Options -> File Protection -> Folder Duplication). If you click on the root entry and tap the asterisk key, it will completely expand the tree. If this isn't the problem, then it's possible that you have files in the pool with incompatible permissions. Log files can be found in the "C:\ProgramData\StableBit DrivePool\Service\Logs\Service\" folder. Here's an example where DP found only 1 copy of a file where it was expecting 2 copies:

      DrivePool.Service.exe Warning 0 [CoveFsPool] Incomplete file found: '\MSIa9e16.tmp' (ExpectedNumberOfCopies=2, Found=1) 2013-06-10 02:42:24Z 2037049447
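      If there are a lot of log files to check, a short Python sketch like this can pull out the incomplete-file warnings (the *.log pattern is an assumption; adjust it to match the actual filenames in that folder):

          # Scan the DrivePool service logs for "Incomplete file found" warnings.
          import glob
          import os

          log_dir = r"C:\ProgramData\StableBit DrivePool\Service\Logs\Service"
          for path in glob.glob(os.path.join(log_dir, "*.log")):   # assumed extension
              with open(path, errors="ignore") as f:
                  for line in f:
                      if "Incomplete file found" in line:
                          print(os.path.basename(path) + ": " + line.rstrip())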
  13. The "Other" and "Unusable" sizes displayed in the DrivePool GUI are often a source of confusion for new users. Please feel free to use this topic to ask questions about them if the following explanation doesn't help. Unduplicated: the total size of the files in your pool that aren't duplicated (i.e. exists on only one disk in the pool). If you think this should be zero and it isn't, check whether you have folder duplication turned off for one or more of your folders (e.g. in version 2.x, via Pool Options -> File Protection -> Folder Duplication). Duplicated: the total size of the files in your pool that are duplicated (i.e. kept on more than one disk in the pool; a 3GB file on two disks is counted as 6GB of duplicated space in the pool, since that's how much is "used up"). Other: the total size of the files that are on your pooled disks but not in your pool and all the standard filesystem metadata and overhead that takes up space on a formatted drive. For example, the hidden protected system folder "System Volume Information" created by Windows will report a size of zero even if you are using an Administrator account, despite possibly being many gigabytes in size (at least if you are using the built-in Explorer; other apps such as JAM's TreeSize may show the correct amount). Unusable for duplication: the amount of space that can't be used to duplicate your files, because of a combination of the different sizes of your pooled drives, the different sizes of your files in the pool and the space consumed by the "Other" stuff. DrivePool minimises this as best it can, based on the settings and priorities of your Balancers. More in-depth explanations can also be found elsewhere in the forums and on the Covecube blog at http://blog.covecube.com/ Details about "Other" space, as well as the bar graphs for the drives, are discussed here: http://blog.covecube.com/2013/05/stablebit-drivepool-2-0-0-230-beta/
  14. Unduplicated vs Duplicated vs Other vs Unusable really needs its own sticky (post-it note goes on the monitor to do that later today). Long story short, if you have turned on duplication for the entire pool, have never turned off duplication for any folder in the pool, and you've got enough space on the pooled drives for all your files to be duplicated, the "unduplicated" size should be zero (and if so, you won't even see the "unduplicated" field). "Other" is all the stuff on a disk that isn't part of the pool. To quote from the Covecube blog:
  15. The demo video does look slick. Seems to have very nice Explorer integration, claims XP support, otherwise nothing DrivePool doesn't already have that I can see? No mention of balancers, plugins, etc. IMO, adding Explorer integration would be a good idea when DP 2 hits stable, e.g. with x1/x2/xN tags visible and context-menu-manipulable on the folders directly (though DP's current separate "duplication tree" GUI is great for seeing the overall structure - perhaps the context menu could provide a direct link to it). Is that on the cards, further down the track or still under wraps?
  16. Hi Tom, this is already in Scanner. Right-click any disk in Scanner and tick "Ping Disk". There's also a "Ping" column, which is "the average time to read 1 MB from the disk for the past 10 seconds. Pinging a disk is also an easy way to visually identify a particular disk. Just look at the disk access light for disk activity every 1 second." - from the Scanner changelog (https://stablebit.com/Scanner/ChangeLog?Platform=whs2 and https://stablebit.com/Scanner/ChangeLog?Platform=win) for v2.2.0.2723 onwards. Remember to untick the option when you're done. Or tick more disks and see if you can strobe the LEDs.
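      As a rough illustration of what that metric measures (this is not Scanner's actual implementation, the file path is hypothetical, and a naive buffered read like this can be skewed by OS caching):

          # Crude "ping": time a 1 MB read from a file on the disk in question.
          import time

          def ping_disk(path, mib=1):
              start = time.perf_counter()
              with open(path, "rb") as f:
                  f.read(mib * 1024 * 1024)
              return (time.perf_counter() - start) * 1000.0   # milliseconds

          print("%.1f ms" % ping_disk(r"D:\somefile.bin"))    # hypothetical path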
  17. I do like the new text for the pool options menu. Much more obvious. A couple other things I noticed while showing DrivePool 2 and Scanner to someone whose computers I was fixing (she bought both!): * DP 2's GUI doesn't yet have maximise/minimise/restore buttons, and can't be set to open maximised via shortcut properties. * DP 2's GUI doesn't remember the state (open/closed) of the Performance section between sessions. (obviously we're still in beta, so bugfixes come before polish; this is just FYI for when there's time)