Posts posted by Shane

  1. CloudDrive works by creating a virtual reproduction of a basic volume and drive on the remote cloud service, down to and including the sector level; its consumption of actual physical space starts at a minimum point and expands up to the designated capacity as data is written to it, much like the VHDX virtual volumes used in Hyper-V and similar environments.

    Thus when you delete a file on the CloudDrive drive, the drive itself does not shrink on the cloud service, just as deleting a file on a physical drive does not shrink the drive inside your computer.

    Similarly, the sectors occupied by deleted files remain available for re-use by new files (or for recovery after an accidental deletion), and when you delete a file the drive's reported free space goes up just as it would on a physical drive inside your computer.

    If you wish to free up actual space on your cloud provider's service for non-drive purposes, use the Resize option of CloudDrive to shrink the drive; CloudDrive will automatically follow the Resize with a Cleanup to safely release any used space on the cloud provider's service that is no longer required to support the drive.

    To pre-answer "why can't CloudDrive automagically free up de-allocated space in the background and/or without resizing": I'd presume that doing so safely would require significant additional coding (I know VHDX has an option to compact without resizing, but AFAIK not while the volume is writable!), and the task itself would slow down the drive for users while it was happening, so IMO it becomes a question of whether StableBit can get enough ducks lined up in a row to make it worth adding rather than doing something else.

    But if you'd really like to have it do that, you can submit feature requests directly to StableBit via the Contact form.
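    For intuition, here's a tiny Python sketch of the thin-provisioning behaviour described above - a toy model only, not CloudDrive's actual implementation: provider usage only grows as sectors are first written, deleting merely marks sectors as free for re-use, and only an explicit resize/cleanup hands space back to the provider.

```python
# Toy model of a thin-provisioned virtual drive (illustrative only,
# not how StableBit CloudDrive is actually implemented).

class ThinDrive:
    def __init__(self, capacity_sectors):
        self.capacity = capacity_sectors   # designated (maximum) size
        self.allocated = set()             # sectors backed by real provider storage
        self.in_use = set()                # sectors currently holding live file data

    def write(self, sectors):
        # Writing allocates provider space the first time a sector is touched.
        self.in_use |= sectors
        self.allocated |= sectors          # provider usage only ever grows here

    def delete(self, sectors):
        # Deleting frees space *inside* the drive but releases nothing on the provider.
        self.in_use -= sectors             # sectors stay allocated, ready for re-use

    def resize_and_cleanup(self, new_capacity_sectors):
        # Only an explicit Resize (followed by Cleanup) returns space to the provider.
        self.capacity = new_capacity_sectors
        self.allocated = {s for s in self.allocated if s < new_capacity_sectors}

d = ThinDrive(capacity_sectors=1000)
d.write(set(range(300)))      # provider usage grows to 300 sectors
d.delete(set(range(300)))     # drive free space grows; provider usage is still 300
d.resize_and_cleanup(100)     # shrinking the drive reclaims space on the provider
print(len(d.allocated))       # -> 100
```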

  2. I believe it's used when DrivePool has a choice of drives to pick from; whether a drive has a seek penalty will influence which is chosen.

    E.g. if you tell it to open a file (e.g. a movie) that's stored on more than one drive in the pool, it will look at which of those drives are busy and which have no seek penalty. So if your file was on an SSD and an HDD, it would prefer to open it from the SSD unless the SSD was already busy with something else.

    The update means this now also applies to nested pools; in such situations (e.g. a pool consisting of a pool of SSDs and a pool of HDDs) the combined pool can now know which of its constituent pools have no seek penalty.
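    As a rough illustration of that selection logic (a simplified sketch of the idea, not DrivePool's actual algorithm):

```python
# Simplified illustration of picking a read source when a file exists on
# several pool members (not DrivePool's actual code).

def pick_read_source(candidates):
    """candidates: list of dicts like
       {"name": "SSD pool", "seek_penalty": False, "busy": False}"""
    # Prefer an idle member with no seek penalty (i.e. SSD-like),
    # then an idle one with a seek penalty, then whatever is left.
    def rank(c):
        return (c["busy"], c["seek_penalty"])
    return min(candidates, key=rank)["name"]

print(pick_read_source([
    {"name": "HDD pool", "seek_penalty": True,  "busy": False},
    {"name": "SSD pool", "seek_penalty": False, "busy": True},
]))  # -> "HDD pool" (the SSD pool is busy, so the idle HDD pool wins)
```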

  3. 4 hours ago, techguy1 said:

    To make sure, I also tested going into drive 1 and drive 2 directly via the hidden PoolPart folder to directly pull File A and File B off 2 different drives at the same time.

    Windows shows 2 read operations going simultaneously and ... my speed caps out at 250-270 MB/s no matter how many different drives I read off of.

    If you have drives X and Y comprising pool Z, then directly accessing X:\PoolPart or Y:\PoolPart yourself rather than through Z:\ means you're not using DrivePool - so if you're still getting the same capped total speed then the problem isn't DrivePool.

    You might need to look at the USB 3.x controller you're using; perhaps there is an issue there or a driver update available? What is the brand/model of the DAS?
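    If you want to reproduce that test outside of Explorer, a small Python sketch like the one below (the paths are placeholders for large files on two different physical drives) reads both files in parallel and reports the combined throughput; if the total still tops out around the same figure, the bottleneck is below DrivePool (e.g. the controller or enclosure).

```python
# Read two files from two different physical drives at the same time and
# report combined throughput. The paths are placeholders - point them at
# large files inside each drive's hidden PoolPart folder.
import threading, time

FILES = [r"X:\PoolPart.xxxx\FileA.bin", r"Y:\PoolPart.yyyy\FileB.bin"]
CHUNK = 8 * 1024 * 1024  # 8 MiB reads

totals = [0] * len(FILES)

def reader(index, path):
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            totals[index] += len(data)

start = time.perf_counter()
threads = [threading.Thread(target=reader, args=(i, p)) for i, p in enumerate(FILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

print(f"Read {sum(totals) / 1e6:.0f} MB total in {elapsed:.1f} s "
      f"= {sum(totals) / 1e6 / elapsed:.0f} MB/s combined")
```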

  4. Q. "I use File Juggler for this, as I saw no way at all to do this via balancing rules (file date, file size is not present). Unless I'm mistaken?"

    A. Correct, DrivePool's balancing doesn't support balancing by file date or file size, only by file name.

    Q. "I have mounted the SSD and the HDD pool to empty NTFS folders, and file juggler interacts with those. Will this mess up my pools at all? I would assume not?"

    A. If File Juggler is pointed at a pool, there shouldn't be any issue. If it is pointed at a poolpart (one of the hidden folders that comprise a pool), it could mess up DrivePool's measuring, which could in turn affect balancing; if you find this happening and File Juggler can run a command or script after it completes a juggle, you may wish to have it tell DrivePool to perform a re-measure (DrivePool may also automatically initiate a re-measure if it notices the discrepancy).

    Q. "Regarding my HDD pool, I would like it if the balancing is only done with newly written files. So no files get moved from one disk to another. Is this possible?"

    A. Given what you've described, for the "HDD Pool" I would remove the Disk Space Equalizer balancer and suggest using Prevent Drive Overfill instead.

  5. I can't comment officially as I'm just a volunteer mod (see my sig below). Having another look, the Notifications menu entry in DrivePool being missing when it's not linked to the Cloud service seems like a bug?

    New versions of DrivePool, Scanner and Cloud were just released yesterday, so check whether those make a difference; if they don't, I'd suggest opening a support request with StableBit directly.

  6. VapechiK is correct; StableBit's DrivePool is not a parity RAID system, so it will not "self-heal" a damaged file.

    StableBit's Scanner is able to detect and attempt to repair damaged files, and if you have that plus DrivePool's duplication you can manually replace a non-repairable file with its good duplicate when alerted.

    Some users combine DrivePool with SnapRAID to get parity healing capability (albeit not fully automated).

    As VapechiK indicates, you can also pool sets of RAID volumes to let those provide duplication/parity.

  7. DrivePool does not automatically remove empty folders from poolparts during balancing (basically for redundancy reasons; some pool metadata is stored as alternate data streams (ADS) attached to the folders, and the "space cost" of this is normally very low).

    The upshot is that so long as the "abracadabra" folder tree shows 0 bytes (when viewed as an administrator) and does not contain any hidden system folders (e.g. $RECYCLE.BIN or System Volume Information), it can be safely removed manually while the DrivePool service is stopped; the only thing you'd "lose" is a little extra redundancy.
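    If you'd like to double-check before deleting anything, a small Python sketch along these lines (the path is a placeholder; run it from an elevated prompt so hidden items are visible) will confirm the tree holds zero bytes of file data and none of the usual hidden system folders:

```python
# Sanity-check that a folder tree inside a PoolPart is safe to remove:
# it must contain no file data and no hidden system folders.
# The path below is a placeholder - substitute your own.
import os

ROOT = r"D:\PoolPart.xxxx\abracadabra"
SYSTEM_DIRS = {"$recycle.bin", "system volume information"}

total_bytes = 0
suspicious = []

for dirpath, dirnames, filenames in os.walk(ROOT):
    for d in dirnames:
        if d.lower() in SYSTEM_DIRS:
            suspicious.append(os.path.join(dirpath, d))
    for f in filenames:
        total_bytes += os.path.getsize(os.path.join(dirpath, f))

if total_bytes == 0 and not suspicious:
    print("Tree is empty (0 bytes, no system folders) - safe to remove manually.")
else:
    print(f"Not empty: {total_bytes} bytes of files, system folders found: {suspicious}")
```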

     

  8. If you reduce or turn off Duplication (for a folder or the pool) then DrivePool simply goes through and deletes the excess instances.

    If you have some files that you wish to keep duplicated then you can manage duplication on a per-folder (or folder tree) basis, e.g. I have a bunch of files I don't care about in folders kept at x1, some I care about in folders at x2, and some that I absolutely want safe in folders at x3 (and I have an external backup of everything anyway).

    There's also SnapRAID, which takes a different approach (it uses parity drives) but can be used in conjunction with DrivePool; each has its pros and cons.

  9. A seek penalty (in the context of storage devices) refers to a storage device needing additional time to perform a seek (begin retrieving data), and the OS can request a storage device to report whether or not it has a seek penalty (or how big it is). Mechanical hard drives have a seek penalty - they need time for the platter(s) to spin and for the head(s) to move into position before they can begin accessing data. Solid state drives (and RAM drives) usually do not (or it's very small). You can get edge cases where a drive might not itself have a seek penalty but the path to it does (e.g. an SSD being virtually mounted over a slow network).

    TLDR it can be a reasonably accurate indicator as to whether a device is a HDD or SSD (or at least close enough for practical purposes).
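    For the curious, on Windows that query is exposed via DeviceIoControl with IOCTL_STORAGE_QUERY_PROPERTY and the StorageDeviceSeekPenaltyProperty property; here's a rough Python (ctypes) sketch - adjust the physical drive number, and it may need to be run as administrator:

```python
# Ask Windows whether a physical drive reports a seek penalty, via
# IOCTL_STORAGE_QUERY_PROPERTY / StorageDeviceSeekPenaltyProperty.
# Rough sketch - adjust the drive number; may need administrator rights.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.DeviceIoControl.argtypes = [
    wintypes.HANDLE, wintypes.DWORD, ctypes.c_void_p, wintypes.DWORD,
    ctypes.c_void_p, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD), ctypes.c_void_p]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

IOCTL_STORAGE_QUERY_PROPERTY = 0x2D1400
StorageDeviceSeekPenaltyProperty = 7   # STORAGE_PROPERTY_ID value
PropertyStandardQuery = 0
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value

class STORAGE_PROPERTY_QUERY(ctypes.Structure):
    _fields_ = [("PropertyId", wintypes.DWORD),
                ("QueryType", wintypes.DWORD),
                ("AdditionalParameters", ctypes.c_ubyte * 1)]

class DEVICE_SEEK_PENALTY_DESCRIPTOR(ctypes.Structure):
    _fields_ = [("Version", wintypes.DWORD),
                ("Size", wintypes.DWORD),
                ("IncursSeekPenalty", ctypes.c_ubyte)]

def has_seek_penalty(drive_number=0):
    handle = kernel32.CreateFileW(rf"\\.\PhysicalDrive{drive_number}",
                                  0, 3, None, 3, 0, None)  # share r/w, OPEN_EXISTING
    if handle == INVALID_HANDLE_VALUE:
        raise OSError("Could not open the drive (try an elevated prompt)")
    query = STORAGE_PROPERTY_QUERY(StorageDeviceSeekPenaltyProperty, PropertyStandardQuery)
    desc = DEVICE_SEEK_PENALTY_DESCRIPTOR()
    returned = wintypes.DWORD(0)
    ok = kernel32.DeviceIoControl(handle, IOCTL_STORAGE_QUERY_PROPERTY,
                                  ctypes.byref(query), ctypes.sizeof(query),
                                  ctypes.byref(desc), ctypes.sizeof(desc),
                                  ctypes.byref(returned), None)
    kernel32.CloseHandle(handle)
    if not ok:
        raise OSError("DeviceIoControl failed")
    return bool(desc.IncursSeekPenalty)

print("Incurs seek penalty:", has_seek_penalty(0))  # True -> HDD-like, False -> SSD-like
```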

  10. After a reboot, does manually creating a new folder or copying new data into each of the hidden poolpart folders cause the error on any drive(s), or is it only when creating/copying in the virtual pool drive?

    Do you have particularly-paranoid antivirus software, and does (temporarily) disabling and/or uninstalling it make the problem disappear?

    P.S. I've found that Q5510455 can fix most NTFS permission errors, but unfortunately there are still a few it won't. See here for an alternative, more thorough method.

  11. If the content I want to put into the pool is already on one of the drives (about to be) in the pool, I prefer to (add the drive and) manually move the content into the poolpart. As you noted, it's almost instant.

    One thing I do to reduce the risk of accidentally conflicting files or folders across poolparts is to create a unique folder to move the content into. E.g. if I have a pool "P:\" then - after I've stopped the service - I create a unique folder in the poolpart (e.g. "D:\poolpart.xyz\123", where there wasn't previously a "P:\123") and move what I want into that unique folder; then I can start the service back up and re-measure, and any further moving can be done via the pool rather than the poolpart (I don't even have to wait for the re-measure to finish).
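    As a rough sketch of those steps in Python (the paths and the unique folder name are placeholders; check the pool first, stop the DrivePool service before running it, then start the service and re-measure afterwards):

```python
# Seed a pool by moving existing content into a uniquely-named folder inside a
# PoolPart. Paths/names below are placeholders. Before running: check the pool
# doesn't already contain a folder with the chosen name, then stop the DrivePool
# service; afterwards start the service again and re-measure.
import os, shutil

POOLPART = r"D:\PoolPart.xxxx"   # hidden poolpart on the drive holding the content
SOURCE   = r"D:\Media"           # existing content to seed into the pool
UNIQUE   = "123"                 # a name that doesn't already exist anywhere in the pool

target = os.path.join(POOLPART, UNIQUE)
if os.path.exists(target):
    raise SystemExit(f"{target} already exists - pick a different unique folder name")

os.makedirs(target)
# Same volume, so this is effectively a rename and completes almost instantly.
shutil.move(SOURCE, os.path.join(target, os.path.basename(SOURCE)))
print(f"Moved {SOURCE} into {target}; start the service and re-measure the pool.")
```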

  12. I believe the short answer is that single-file-at-a-time balancing was by far the simplest and safest to code*, so guaranteed reliability (and more time to code other things) was chosen over raw speed.

    *In my very limited and outdated experience, multi-threaded operations are fantastic when you can just tick a checkbox in a compiler that can safely (hah) optimise it all for you, and a complex pile of race conditions when you have to write it yourself.

  13. Moving the files into the poolpart while the service is stopped prevents any risk of DrivePool attempting to move them at the same time as you are.

    Personally I haven't found it necessary to reset DrivePool's settings afterwards as per the KB article; instead I just use Manage Pool -> Re-measure (and, if I'm using duplication, make sure it is appropriately set for the new folders and then run Troubleshooting -> Recheck duplication), but your mileage may vary.
