Shane

Moderators
Reputation Activity

  1. Like
    Shane reacted to methejuggler in All In One Balancing Plugin   
    Take a look at the "Releases" link on the right side of the GitHub page. I'll edit the post with a link there too so people can find it more easily.
  2. Like
    Shane got a reaction from BIFFTAZ in Disk Space Equalizer Question   
    There's no harm either way; you could turn it off once it's done what you wanted and then turn it back on if you find it's needed, or just leave it on.
  3. Like
    Shane got a reaction from gtaus in Removing external drives from a pool   
    ... I'd guess that you could use Windows' Device Manager to disable the Covecube Virtual Disk and the Covecube Disk Enumerator, but I wouldn't mess with those unless Stablebit provided instructions.
  4. Thanks
    Shane reacted to methejuggler in All In One Balancing Plugin   
    In an attempt to learn the plugin API, I created a plugin which replicates the functionality of many of the official plugins, but combined into a single plugin. The benefit of this is that you can use functionality from several plugins without them "fighting" each other.
    I'm releasing this open source for others to use or learn from here:
    https://github.com/cjmanca/AllInOne
    Or you can download a precompiled version at:
    https://github.com/cjmanca/AllInOne/releases
    Here's an example of the settings panel. Notice the 1.5 TB drive is set not to contain duplicated or unduplicated files:

    Balance result with those settings. Again, note the 1.5 TB is scheduled to move everything off the disk due to the settings above.

  5. Like
    Shane reacted to Spider99 in Upgrading mirrored drives in drivepool   
    I would go with option 1 - as you only have two drives with duplication they are the "same" - it does not matter which one you choose - VSS will not be copied etc.
    3 - would work but will be slower than 1.
    2 - avoid; cloning is liable to give you problems.
    And yes - shut down any service that's writing to the pool before you start - more for maintaining the best speed.
    Internal copy will be approx 1TB per 3 hrs - give or take - remember speed will vary by file size (lots of small files: very slow; large files: quick) and where the data is on the disk - I suspect that the new 8TB will be quicker than the 4TB, so the speed will depend on the 4TB disk....
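    (For a rough sense of scale: 1 TB per 3 hours works out to about 1,000,000 MB / 10,800 s ≈ 93 MB/s sustained, which is in the right ballpark for large-file transfers on a typical desktop HDD; lots of small files will land well below that.)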
  6. Like
    Shane reacted to methejuggler in Plugin Source   
    I actually wrote a balancing plugin yesterday which is working pretty well now. It took a bit to figure out how to make it do what I want. There's almost no documentation for it, and it doesn't seem very intuitive in many places.
    So far, I've been "combining" several of the official plugins together to make them actually work together properly. I found the official plugins like to fight each other sometimes. This means I can have SSD drop drives working with equalization and disk usage limitations with no thrashing. Currently this is working, although I ended up re-writing most of the original plugins from scratch anyway, simply because they wouldn't combine very well as originally coded. Plus, the disk space equalizer plugin had some bugs that made it easier to rewrite than fix.
    I wasn't able to combine the scanner plugin - it seems to be obfuscated and baked into the main source, which made it difficult to see what it was doing.
    Unfortunately, the main thing I wanted to do doesn't seem possible as far as I can tell. I had wanted to have it able to move files based on their creation/modified dates, so that I could keep new/frequently edited files on faster drives and move files that aren't edited often to slower drives. I'm hoping maybe they can make this possible in the future.
    Another idea I had hoped to do was to create drive "groups" and have it avoid putting duplicate content on the same group. The idea behind that was that drives purchased at the same time are more likely to fail around the same time, so if I avoid putting both duplicated files on those drives, there's less likelihood of losing files in the case of multiple drive failure from the same group of drives. This also doesn't seem possible right now.
  7. Like
    Shane got a reaction from gtaus in Cannot write to pool "Catastrophic Failure (Error 0x8000FFFF)", cannot remove disk from pool "The Media is Write Protected"   
    The drive may have failed. Do you have another computer you could test the drive in? Or if not, could you boot from a Live CD or Live USB to see if it shows up in a different OS?
  8. Like
    Shane got a reaction from gtaus in How to I get automatic notices of replies to my questions?   
    Hi gtaus, I can see that your account is following this thread, so hopefully you'll get notified about this response.
    Maybe check that https://community.covecube.com/index.php?/notifications/options/ is set to your liking?
  9. Thanks
    Shane reacted to cocksy_boy in Cannot write to pool "Catastrophic Failure (Error 0x8000FFFF)", cannot remove disk from pool "The Media is Write Protected"   
    After posting, I found an issue I had missed: the disk was marked as Read Only in disk management. 
    After running DISKPART from cmd, I managed to remove the read-only tag using the command "attributes disk clear readonly", and it appears to be OK now.
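    For anyone who hits the same thing, a minimal DISKPART session for clearing that flag looks roughly like this (the disk number is only an example - check the output of "list disk" for the affected drive first):
    diskpart
    list disk
    select disk 3
    attributes disk clear readonly
    exit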

  10. Like
    Shane got a reaction from gtaus in File Placement, how to add new HDD for only backup files   
    Checking, it seems the ST6000DM003 is an SMR drive. I don't recommend putting it in any pool where you want decent rewrite performance, but if you only want good read performance it's fine.
  11. Like
    Shane got a reaction from gtaus in File Placement, how to add new HDD for only backup files   
    ... Would that 6TB USB HDD happen to be a Seagate Backup Plus Hub by any chance?
    Because I bought one a ways back (P/N 1XAAP2-501) and it behaves exactly the same way you've described. The drive inside might actually be okay, just with a lemon enclosure. Since yours is out of warranty and you're planning to ditch it, consider instead shucking it and using the HDD as an internal drive (run the Seagate long test on it again of course).
    BTW, just FYI: Seagate has used SMR drives in its Backup Plus Hubs, and I don't recommend using SMR drives in a pool, so also check the part number of the drive itself to see whether it's one - if you don't want to open the enclosure to check, a utility like Crystal Disk Info or similar can read it.
  12. Like
    Shane reacted to gtaus in File Placement, how to add new HDD for only backup files   
    Well, my excellent idea has proven successful, but maybe in an unexpected way. I created a 2nd DrivePool for my non-critical \movie\ data using my suspect 6TB USB HDD. Everything seemed to be working fine as I used Teracopy to move 6TB of \movie\ files to the new DrivePool using the verify command to ensure all files were 100% correct upon transfer. Everything worked fine for a day or two. Thinking I was good to go, I was about to add some more older HDDs to that 2nd DrivePool when, for whatever reason, that 2nd DrivePool went offline and I could not access it (only the suspect 6TB USB HDD on the pool at that time). Fortunately for me, I only had that one suspect 6TB HDD on the 2nd DrivePool at that time, so troubleshooting was fast and easy. Turns out, that suspect 6TB HDD has a tendency to shut itself down, go offline, and then throw the DrivePool into an error state. I was able to unplug the 6TB, plug it back in, and then both the drive and the 2nd DrivePool came back to life.
    This happened a couple more times yesterday, with that 6TB USB HDD going offline seemingly at random and requiring me to unplug it and then plug it back in, so I decided it was just not worth the trouble trying to salvage use out of that 6TB HDD anymore. I am currently in the process of using Teracopy with verify to move all the \movies\ files off that 2nd DrivePool (with only the 6TB USB HDD in it) back to my original DrivePool.
    Turns out that creating a 2nd DrivePool using only that suspect 6TB USB HDD was a good idea because I quickly was able to determine there is indeed something wrong with it (despite all the diagnostic programs that scan it and report it is working fine). The only program that correctly determined the HDD was failing was Seagate Seatools using the long generic test (it passed the short tests). Unfortunately, the HDD is out of warranty so I will have to just eat the loss and move on.
    But, I would like to shout out to Stablebit Drivepool because, even though that 6TB HDD is failing, my data seems to be recoverable. I am currently about 50% complete on the file transfer off that suspect HDD and all files are intact. I can guarantee you that if that same suspect HDD had gone bad in my MS Windows 10 Storage Spaces setup, using data packets spread all over the pool, it would have most assuredly crashed the entire Storage Space and all my data would have been lost. How do I know that? Because I used Storage Spaces for years, and after my 3rd catastrophic loss of data due to a HDD failure, I moved over to DrivePool. Despite having 2 and even 3 HDD failure protection on Storage Spaces, I lost entire pools of data when only 1 drive out of 20 drives went bad. I guess I can attest to the fact that when a HDD goes bad in DrivePool, I have been able to recover my data off that failing HDD. For my few folders/files that require duplication, I have 2x or 3x duplication set, and with DrivePool I don't worry that any 1 drive will crash my entire pool.
    Thanks to everyone for their responses and helping me work through this issue.
  13. Like
    Shane got a reaction from gtaus in File Placement, how to add new HDD for only backup files   
    In the File Placement Rules section:
    Your FIRST (uppermost) rule should be that all files matching "\movies\*" are to be placed on all drives INCLUDING that one (check all drives).
    Your LAST (lowermost) rule should be that all files matching "*" are to be placed on all drives EXCLUDING that one (check all the others, leave it unticked, and in your particular case you also want to select "Never allow files to be placed on any other disks" for this rule).
    Any other rules you might have should be placed ABOVE the LAST rule, and should not have that drive checked (and again, you may wish to select "Never allow...").
    This is because FPR checks the rules from uppermost to lowermost until it finds a matching rule for the file being balanced and then uses only that rule.
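    To illustrate the ordering with placeholder drive names (where "6TB" stands for the backup-only drive in question):
    Rule 1 (top):     "\movies\*"  ->  Disk1, Disk2 and 6TB all ticked
    Rules in between: any other rules, with 6TB left unticked
    Rule N (bottom):  "*"  ->  Disk1 and Disk2 ticked, 6TB unticked, "Never allow files to be placed on any other disks" selected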
    NOTE that File Placement is only performed when balancing is triggered instead of in real time; you might wish to use the SSD Optimizer balancer plugin to mark at least one of the other disks as "SSD" so that new files are never even temporarily placed on the 6TB HDD, which is otherwise possible even if you have "Balance immediately" selected in the Settings tab.
  14. Like
    Shane got a reaction from gtaus in File Placement, how to add new HDD for only backup files   
    For whatever it's worth, in the past I have encountered problems with "copier" software silently missing files. Admittedly I was dealing with very large file sets, very long paths, and unicode names, all back when a lot of software would have trouble with just one of those let alone all three, and the less-than-reliable hardware I was (ahem) relying on at the time probably didn't help, but the important takeaway is that if you're working with "irreplaceable" data you might want to stress-test your copier and verify that it is actually doing what it says it's doing.
  15. Like
    Shane reacted to SPACEBAR in WSL2 Support for drive mounting   
    I ran WSL2 + Storage Spaces from August 2019 to March 2020. It worked well. I ended up switching from Storage Spaces to DrivePool because it's really hard to diagnose Storage Spaces when it misbehaves. And when it does fail (and it will), it does so in spectacular fashion. I was bummed out about losing parity functionality, but DrivePool + Snapraid is fine for my use case.
    Anyway, I was able to use DrivePool with WSL2 (Docker) by mounting cifs volumes (windows shares). Here's an example:
    version: '3.7'
    services:
      sonarr:
        build: .
        container_name: sonarr
        volumes:
          - type: bind
            source: C:\AppData\sonarr\config
            target: /config
          - downloads:/downloads
        cap_add:
          - SYS_ADMIN
          - DAC_READ_SEARCH
    volumes:
      downloads:
        driver_opts:
          type: cifs
          o: username=user,password=somethingsecure,iocharset=utf8,rw,nounix,file_mode=0777,dir_mode=0777
          device: \\IP\H$$\Downloads
    Note that I'm building the image. This is because I need to inject cifs-utils into it. Here's the dockerfile:
    FROM linuxserver/sonarr
    RUN \
        apt update && \
        apt install -y cifs-utils && \
        apt-get clean
    There are security considerations with this solution:
    1. Adding SYS_ADMIN capability to the docker container is dangerous
    2. You need to expose your drive/folders on the network. Depending on how your windows shares are configured, this may be less than ideal.
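    If it helps, a typical way to build and start this stack - assuming the compose file and Dockerfile above sit in the same directory - is to run "docker-compose up -d --build".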
    Hope this helps!
  16. Thanks
    Shane reacted to Reid Rankin in WSL 2 support   
    I've been following up on this with some disassembly and debugging to try and figure out what precisely is going wrong. WSL2's "drvfs" is just a 9P2000.L file server implementation (yep, that's the protocol from Plan 9) exposed over a Hyper-V Socket. (On the Windows side, this is known as a VirtIO socket; on the Linux side, however, that means something different and they're called AF_VSOCK.) The 9P server itself is hard to find because it's not in WSL-specific code -- it's baked into the Hyper-V "VSMB" infrastructure for running Linux containers, which predates WSL entirely. The actual server code is in vp9fs.dll, which is loaded by both the WSL2 VM's vmwp.exe instance and a copy of dllhost.exe which it starts with the token of the user who started the WSL2 distro. Because the actual file system operations occur in the dllhost.exe instance they can use the proper security token instead of doing everything as SYSTEM.
    The relevant ETW GUID is e13c8d52-b153-571f-78c5-1d4098af2a1e. This took way too long to find out, but allows you to build debugging logs of what the 9P server is doing by using the tracelog utility.
    tracelog -start p9trace -guid "#e13c8d52-b153-571f-78c5-1d4098af2a1e" -f R:\p9trace.etl
    <do the stuff>
    tracelog -stop p9trace
    The directory listing failure is reported with a "Reply_Rlerror" message with an error code of 5. Unfortunately, the server has conveniently translated the Windows-side NTSTATUS error code into a Linux-style error.h code, turning anything it doesn't recognize into a catch-all "I/O error" in the process. Luckily, debugging reveals that the underlying error in this case is an NTSTATUS of 0xC00000E5 (STATUS_INTERNAL_ERROR) returned by a call to ZwQueryDirectoryFile.
    This ZwQueryDirectoryFile call requests the new-to-Win10 FileInformationClass of FileIdExtdDirectoryInformation (60), which is supposed to return a structure with an extra ReparsePointTag field -- which will be zero in almost all cases because most things aren't reparse points. Changing the FileInformationClass parameter to the older FileIdFullDirectoryInformation (38) prevents the error, though it results in several letters being chopped off of the front of each filename because the 9P server expects the larger struct and has the wrong offsets baked in.
    So things would probably work much better if CoveFs supported that newfangled FileIdExtdDirectoryInformation option and the associated FILE_ID_EXTD_DIR_INFO struct; it looks like that should be fairly simple. That's not to say that other WSL2-specific issues aren't also present, but being able to list directories would give us a fighting chance to work around other issues on the Linux side of things.
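    (If you want to dig into such a trace yourself, the resulting .etl can be converted into something human-readable with the stock tracerpt tool, e.g. "tracerpt R:\p9trace.etl -o R:\p9trace.xml -of XML"; how much of the 9P payload actually decodes will depend on the provider manifest, so treat that as a starting point rather than a guaranteed recipe.)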
  17. Like
    Shane got a reaction from cocksy_boy in Forced out of FlexRaid Transparent raid. Coming to Drivepool + Snapraid, need some infos.   
    Umfriend is correct. The service should be stopped to prevent any chance of balancing occurring during the migration when using that method.
    And that method is fine so long as your existing arrangement is compatible with DrivePool's pooling structure.
    E.g. if you have:
    drive D:\FolderA\FileB moved to D:\PoolPart.someguid\FolderA\FileB
    drive E:\FolderA\FileB moved to E:\PoolPart.someguid\FolderA\FileB
    drive F:\FolderA\FileC moved to F:\PoolPart.someguid\FolderA\FileC
    then your drivepool drive (in this example P: drive) will show:
    P:\FolderA\FileB
    P:\FolderA\FileC
    as DrivePool will presume that FileB is the same file duplicated on two drives.
    As Umfriend has warned, when it next performs consistency checking DrivePool will create/remove copies as necessary to match your chosen settings (e.g. "I want all files in FolderA to exist on three drives"), and will warn if it finds a "duplicated" file that does not match its duplicate(s) on the other drives.
    As to Snapraid, I'd follow Umfriend's advice there too.
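    As a rough command-line sketch of that migration method (the service name and "someguid" below are placeholders - check the Services list for the exact DrivePool service name, and check each drive for its actual hidden PoolPart.* folder):
    net stop "StableBit DrivePool Service"
    move D:\FolderA D:\PoolPart.someguid\FolderA
    move E:\FolderA E:\PoolPart.someguid\FolderA
    move F:\FolderA F:\PoolPart.someguid\FolderA
    net start "StableBit DrivePool Service"
    Because each move stays on the same volume it is a near-instant rename rather than a physical copy.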
  18. Like
    Shane got a reaction from kenwshmt in How to locate file location on specific HDD in DrivePool?   
    I'd suggest a tool called Everything, by Voidtools. It'll scan the disks (defaults to all NTFS volumes) then just type in a string (e.g. "exam 2020" or ".od3") and it shows all files (you can also set it to search folder names as well) that have that string in the name, with the complete path. Also useful for "I can't remember what I called that file or where I saved it, but I know I saved it on the 15th..." problems.
  19. Like
    Shane got a reaction from Adramelramalech in eXtreme bottlenecks   
    In my experience Resource Monitor's reporting of read and write rates can lag behind what's actually happening, making it look like it's transferring more files at any given point than it really is - but that transfer graph is definitely a sign of hitting some kind of bottleneck. It's the sort of thing I'd expect to see from a large number of small files, a network drive over wireless, or a USB flash drive.
    Can you tell us more about what version of DrivePool you're using (the latest "stable" release is 2.2.3.1019), what drives are involved (HDD, SSD, other), how they're hooked up (SATA, USB, other) and if you've made any changes to the Manage Pool -> Performance options (default is to have only Read striping and Real-time duplication ticked)?
    Examining the Performance indicators of DrivePool (to see it, maximise the DrivePool UI and click the right-pointing triangle to the left of the word Performance) and the Performance tab of Task Manager when the bottlenecking is happening might also be useful.
    Hmm. You might also want to try something other than the built-in Windows copier to see if that helps, e.g. FastCopy ?
  20. Like
    Shane got a reaction from Querl28 in how to replace failing HDD   
    Hi Querl28. There's a few different ways.
    The simplest is to install the replacement drive, tell DrivePool to add it to the pool, and then tell DrivePool to remove the old one from the pool. DrivePool will tell you whether it successfully moved all the files on the old drive across (in which case you can then physically remove the old drive) or not (in which case you have to decide what to do about it).
    If you don't have spare ports to add the new drive before removing the old one, but you have enough free space on your other drives in the pool, then you can tell DP to remove the old drive from the pool before you install the new one.
    See also this support page on removing drives.
  21. Like
    Shane reacted to Umfriend in 2nd request for help   
    Use remove. You can move through Explorer, but if you do that you need to stop the DrivePool service first. Moreover, once you start the DP service again, it may try to rebalance files back to other drives, so you need to turn off balancing to prevent that from happening. Also, if you have duplication then you want to disable that first. Yes, it will all take some time but it has, AFAIK, never failed. Quick and dirty though... not that failsafe sometimes. And even cutting/pasting will take quite some time.
  22. Like
    Shane reacted to gtaus in 2nd request for help   
    I have only been using DrivePool for a short period, but if I understand your situation, you should be able to open the DrivePool UI and click on the "Remove" drive for the drives you no longer want in the pool. I have done this in DrivePool and it did a good job in transferring the files from the "remove" drive to the other pool drives. However, given nowadays we have large HDDs in our pools, the process takes a long time. Patience is a virtue.
    Another option is to simply view the hidden files on those HDDs you no longer want to keep in DrivePool, and then copy them all over to the one drive where you want to consolidate all your information. Once you verify all your files have been successfully reassembled on that one drive, you can go back and format those other drives. The main advantage I see with using DrivePool is that the files are written to the HDD as standard NTFS files, so if you decide to leave the DrivePool environment, all those files are still accessible by simply viewing the hidden directory.
    I am coming from the Windows Storage Space system where bits and pieces of files are written to the HDDs in the pool. When things go bad with Storage Spaces, there is no way to reassemble the broken files spread across a number of HDDs. At least with DrivePool, the entire file is written to a HDD as a standard file, so in theory you should be able to copy those files from the pool HDDs over to one HDD and have a complete directory. I used the Duplication feature of DrivePool for important directories.
    Again, I am still learning the benefits of DrivePool over Storage Spaces, but so far, I think DrivePool has the advantage of recovering data from a catastrophic failure, whereas I lost all my data in Storage Spaces. If there is a better way to transfer your DrivePool files to 1 HDD, I would like to know for my benefit as well.
  23. Like
    Shane reacted to vfsrecycle_kid in Drive letters... again   
    You'll want to open diskmgmt.msc and from there right click the drives in an order that does not produce conflicts.
     

     
    See: http://wiki.covecube.com/StableBit_DrivePool_Q6811286
     
    If you don't want WXYZ to be seen at all, then you do not need to give them drive letters (DrivePool will still be able to pool them together) - Instead of clicking "Change" simply click "Remove" when dealing with the drive letters.
     
    Probably the easiest order to do this if you want to keep every drive with drive letters:
     
    Change F to D
    Change E, G, H, I to W, X, Y, Z
    Change J to E (now that E has been freed up)
    Probably the easiest order to do this if you want the pooled drives to have no drive letter:
    Change F to D
    Remove Drive Letters E, G, H, I (see my picture for the Remove Button)
    Change J to E
     
    You should be able to keep DrivePool running during this whole transition phase (you don't need to remove drives from the pools). Personally I'd go with Option 1. While there are Folder Mounts that you could use, I think it would just be easiest to keep everything easily accessible the "normal" way. Plus, without drive letters you won't be able to add non-pooled content to the pooled drives (just in case you wanted to do that).
     
    Hope it helps
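    For those who prefer the command line, the same reassignments can also be done with DISKPART; the volume numbers below are only examples, so run "list volume" first and match them to your own drives (e.g. to change F: to D: and to drop E:'s letter entirely):
    diskpart
    list volume
    select volume 2
    assign letter=D
    select volume 5
    remove letter=E
    exit
    Either way the end result is the same as using Change/Remove in Disk Management.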
  24. Like
    Shane got a reaction from Christopher (Drashna) in FAQ - Parity and Duplication and DrivePool   
    The topic of adding RAID-style parity to DrivePool was raised several times on the old forum. I've posted this FAQ because (1) this is a new forum and (2) a new user asked about adding folder-level parity, which - to mangle a phrase - is the same fish but a different kettle.
    Since folks have varying levels of familiarity with parity, I'm going to divide this post into three sections: (1) how parity works and the difference between parity and duplication, (2) the difference between drive-level and folder-level parity, (3) the TLDR conclusion for parity in DrivePool. I intend to update the post if anything changes or needs clarification (or if someone points out any mistakes I've made).
    Disclaimer: I do not work for Covecube/Stablebit. These are my own comments. You don't know me from Jack. No warranty, express or implied, in this or any other universe.
    Part 1: how parity works and the difference between parity and duplication
    Duplication is fast. Every file gets simultaneously written to multiple disks, so as long as all of those disks don't die the file is still there, and by splitting reads amongst the copies you can load files faster. But to fully protect against a given number of disks dying, you need that many times the number of disks. That doesn't just add up fast, it multiplies fast.
    Parity relies on the ability to generate one or more "blocks" of a series of reversible checksums equal to the size of the largest protected "block" of content. If you want to protect three disks, each parity block requires its own disk as big as the biggest of those three disks. For every N parity blocks you have, any N data blocks can be recovered if they are corrupted or destroyed. Have twelve data disks and want to be protected against any two of them dying simultaneously? You'll only need two parity disks.
    Sounds great, right? Right. But there are tradeoffs.
    Whenever the content of any data block is altered, the corresponding checksums must be recalculated within the parity block, and if the content of any data block is corrupted or lost, the corresponding checksums must be combined with the remaining data blocks to rebuild the data. While duplication requires more disks, parity requires more time.
    In a xN duplication system, you xN your disks; for each file it simultaneously writes the same data to N disks, but so long as p<=N disks die (where 'p' depends on which disks died) you replace the bad disk(s) and keep going - all of your data is immediately available. The drawback is the geometric increase in required disks and the risk of the wrong N disks dying simultaneously (e.g. if you have x2 duplication and two disks die simultaneously, and one happens to be a disk that was storing duplicates of the first disk's files, those are gone for good).
    In a +N parity system, you add +N disks; for each file it writes the data to one disk and calculates the parity checksums which it then writes to N disks, but if any N disks die, you replace the bad disk(s) and wait while the computer recalculates and rebuilds the lost data - some of your data might still be available, but no data can be changed until it's finished (because parity needs to use the data on the good disks for its calculations).
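    (A quick worked comparison, using made-up but representative numbers: with twelve 4TB data disks, surviving any two simultaneous failures costs two extra 4TB parity disks - 14 disks total - whereas getting the same protection from duplication means x3 copies of everything, i.e. 36 disks total.)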
    (sidenote: "snapshot"-style parity systems attempt to reduce the time cost by risking a reduction in the amount of recoverable data; the more dynamic your content, the more you risk not being able to recover)
    Part 2: the difference between drive-level and folder-level parity
    Drive-level parity, aside from the math and the effort of writing the software, can be straightforward enough for the end user: you dedicate N drives to parity that are as big as the biggest drive in your data array. If this sounds good to you, some folks (e.g. fellow forum moderator Saitoh183) use DrivePool and the FlexRAID parity module together for this sort of thing. It apparently works very well.
    (I'll note here that drive-level parity has two major implementation methods: striped and dedicated. In the dedicated method described above, parity and content are on separate disks, with the advantages of simplicity and readability and the disadvantage of increased wear on the parity disks and the risk that entails. In the striped method, each disk in the array contains both data and parity blocks; this spreads the wear evenly across all disks but makes the disks unreadable on other systems that don't have compatible parity software installed. There are ways to hybridise the two, but it's even more time and effort).
    Folder-level parity is... more complicated. Your parity block has to be as big as the biggest folder in your data array. Move a file into that folder, and your folder is now bigger than your parity block - oops. This is a solvable problem, but 'solvable' does not mean 'easy', sort of how building a skyscraper is not the same as building a two-storey home. For what it's worth, FlexRAID's parity module is (at the time of my writing this post) $39.95 and that's drive-level parity.
    Conclusion: the TLDR for parity in DrivePool
    As I see it, DrivePool's "architectural imperative" is "elegant, friendly, reliable". This means not saddling the end-user with technical details or vast arrays of options. You pop in disks, tell the pool, done; a disk dies, swap it for a new one, tell the pool, done; a dead disk doesn't bring everything to a halt and size doesn't matter, done.
    My impression (again, I don't speak for Covecube/Stablebit) is that parity appears to be in the category of "it would be nice to have for some users but practically it'd be a whole new subsystem, so unless a rabbit gets pulled out of a hat we're not going to see it any time soon and it might end up as a separate product even then (so that folks who just want pooling don't have to pay twice+ as much)".
  25. Like
    Shane got a reaction from Tardas-Zib in Glossary - Alpha, Beta, RC, Release, Stable, Final, Version   
    Here's how I perceive the whole alpha/beta/version ball of string.
     
    Alpha - the bare skeleton of a program, very crash-prone, full of bugs and lacking many planned features.
     
    Beta - all major planned features and adding minor features, lots of minor bugs, often still some major bugs
     
    RC (Release Candidate) - all major features, most/all minor features, only minor/unknown bugs should remain
     
    Release - all major features, all minor features, no known bugs (except maybe a couple that are proving really hard to fix)
     
    Stable - no known bugs, or at least the few remaining known bugs are well-documented edge-cases with workarounds
     
    Final - can mean Stable, can mean Release, can mean last minor revision of a particular version on the roadmap
     
    Version - a numerical way of distinguishing that the software is different from the last time it was published, often in the form of a date (e.g. ymd 20120616), integer (e.g. build# 1234) or revision (e.g. major.minor 2.3) or combination (e.g. ymd.build or revision.build or y.major.minor.build)
     
    Roadmap - a list of planned features sorted by planned version, at least in theory.
     
    For a hypothetical example, Fooware 2014 build 5.4 Beta might be a program published in the year 2014, the 5th major version of that program, which in turn has seen 4 minor revisions (added another minor feature and/or fixed some bugs since the 3rd minor revision), and which is still in Beta (has a tendency to crash for at least six different reasons along with numerous edge case problems).
     
    To further confuse things, Alpha/Beta/etc can refer to a particular version of a program, a range of versions of a program, or be a separate tag independent of the version numbering scheme, depending on the preference of the person(s) doing the naming. For example, if you see a roadmap with Fooware 2014.5.3 Stable followed by Fooware 2014.5.4 Beta, this is likely to mean that a new minor feature has been added that may/has introduced some new bugs as well.