Posts posted by Shane
-
Other than perhaps reporting it via opening a support ticket, not that I'm aware of.
-
Yes, the Cloud app does seem to be incorrectly adding them together as separate pools even though it seems to have detected there's a hierarchical arrangement.
-
I believe it's used when DrivePool has a choice of drive(s) to pick; a seek penalty will influence which are chosen.
E.g. if you tell it to open a file (e.g. a movie) that's stored on more than one drive in the pool, it will look at which of those drives are busy and which have no penalty. So if your file was on a SSD and a HDD it would prefer to open it from the SSD unless the SSD was already busy with something else.
The update means this now also applies to nested pools; in such situations (e.g. a pool consisting of a pool of SSDs and a pool of HDDs) the combined pool can now know which of its constituent pools have no seek penalty.
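As a loose illustration of that kind of choice (a simplified sketch, not DrivePool's actual code; the `Drive` class and its fields are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    seek_penalty: bool  # True for HDDs, False for most SSDs
    busy: bool          # currently servicing other I/O

def pick_read_source(copies: list) -> Drive:
    """Pick which copy of a duplicated file to read from, preferring
    idle drives without a seek penalty (i.e. SSDs over HDDs)."""
    # Sort so that drives which are (not busy, no seek penalty) come first;
    # False sorts before True in Python.
    return sorted(copies, key=lambda d: (d.busy, d.seek_penalty))[0]

ssd = Drive("SSD pool", seek_penalty=False, busy=False)
hdd = Drive("HDD pool", seek_penalty=True, busy=False)
print(pick_read_source([hdd, ssd]).name)  # prefers the SSD when both are idle
```

With nested pools, the same comparison can simply treat each sub-pool as one of the candidate "drives".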
-
If the SSD was tested on the same port on the PC as the Syba then that would rule out the PC (at least in theory; in practice it could still be a combination of the interaction between the particular chipsets), leaving only the Syba as the common factor. Otherwise while I too would suspect the Syba here, we couldn't entirely rule out the PC.
-
4 hours ago, techguy1 said:
To make sure, I also tested going into drive 1 and drive 2 directly via the hidden PoolPart folder to directly pull File A and File B off 2 different drives at the same time.
Windows shows 2 read operations going simultaneously and ... my speed caps out at 250-270 MB/s no matter how many different drives I read off of.
If you have drives X and Y comprising pool Z, then directly accessing X:\PoolPart or Y:\PoolPart yourself rather than through Z:\ means you're not using DrivePool - so if you're still getting the same capped total speed then the problem isn't DrivePool.
You might need to look at the USB 3.x controller you're using, perhaps there is an issue there / a driver update available? What is the brand/model of the DAS?
-
I know that Windows will always check the available space against the total size of the folders/files when copying them, or when moving them between different drives, but I'm surprised that's happening when moving files within the same pool, as I'm under the impression that should appear to Windows to be the same "drive".
-
Q. "I use File Juggler for this, as I saw no way at all to do this via balancing rules (file date, file size is not present). Unless I'm mistaken?"
A. Correct, DrivePool's balancing doesn't support balancing by file date or file size, only by file name.
Q. "I have mounted the SSD and the HDD pool to empty NTFS folders, and file juggler interacts with those. Will this mess up my pools at all? I would assume not?"
A. If the File Juggler program is pointed at a pool, there shouldn't be any issue. If the File Juggler program is pointed at a poolpart (one of the hidden folders that comprise a pool) then it could mess up DrivePool's measuring, which could in turn affect balancing. If you find this is happening and File Juggler can run a command or script after it completes a juggle, you may wish to have it order DrivePool to perform a re-measure (DrivePool may also automatically initiate a re-measure if it notices the discrepancy).
Q. "Regarding my HDD pool, I would like it if the balancing is only done with newly written files. So no files get moved from one disk to another. Is this possible?"
A. Given what you've described, for the "HDD Pool" I would remove Disk Space Equalizer and would suggest using Prevent Drive Overfill instead.
-
I can't comment officially as I'm just a volunteer mod (see my sig below). Having another look, the Notifications menu entry in DrivePool being missing when it's not linked to the Cloud service seems like a bug?
New versions of DrivePool, Scanner and Cloud were just released yesterday so check if those make a difference, and if they don't then I'd suggest opening a support request with StableBit directly.
-
Were you able to figure this out? I also use Firefox and save to a network share (well, a mapped drive of a network share) that is a DrivePool pool on the other end but haven't encountered this error, fwiw.
-
Drive might still be usable - sometimes you get lucky and the bad sectors don't breed, and if so chkdsk /r will at least mark those sectors to not be used by the file system - but if you don't want the risk that you'll end up with more and more bad sectors then yeah wipe and bin/rma the drive.
-
Perhaps a combination of CyLog's FillDisk (does what it implies: fills a disk with files full of zeroes) and StableBit's Scanner (scans drive sectors to verify they're readable)?
There's also utilities such as Seagate Tools and HDDScan.
-
VapechiK is correct; StableBit's DrivePool is not a parity RAID system, it will not "self-heal" a damaged file.
StableBit's Scanner is able to detect and attempt to repair damaged files, and if you have that plus DrivePool's duplication you can manually replace a non-repairable file with its good duplicate when alerted.
Some users combine DrivePool with SnapRAID to get parity healing capability (albeit not fully automated).
As VapechiK indicates, you can also pool sets of RAID volumes to let those provide duplication/parity.
-
DrivePool does not automatically remove empty folders from poolparts during balancing (basically for redundancy reasons; some pool metadata is stored as alternate data streams attached to the folders, and the "space cost" of this is normally very low).
The upshot is that so long as the "abracadabra" folder tree is showing 0 bytes (as an administrator) and does not contain any hidden system folders (e.g. $recycle.bin or system volume information) then it can be safely removed manually while the DrivePool service is stopped; the only thing you'd "lose" is some extra redundancy.
-
If you reduce or turn off Duplication (for a folder or the pool) then DrivePool simply goes through and deletes the excess instances.
If you have some files that you wish to keep duplicated then you can manage duplication on a per-folder (or folder tree) basis, e.g. I have a bunch of files I don't care about in folders at x1, some I care about in folders at x2 and some that I absolutely want safe that I keep in folders at x3 (and I have a backup of everything anyway externally).
There's also SnapRAID which takes a different approach (it uses parity drives) but can be used in conjunction with DrivePool. Pros and cons.
-
A seek penalty (in the context of storage devices) refers to a storage device needing additional time to perform a seek (begin retrieving data), and the OS can request a storage device to report whether or not it has a seek penalty (or how big it is). Mechanical hard drives have a seek penalty - they need time for the platter(s) to spin and for the head(s) to move into position before they can begin accessing data. Solid state drives (and RAM drives) usually do not (or it's very small). You can get edge cases where a drive might not itself have a seek penalty but the path to it does (e.g. a SSD being virtually mounted over a slow network).
TLDR it can be a reasonably accurate indicator as to whether a device is a HDD or SSD (or at least close enough for practical purposes).
-
If it's none of the above, I could only speculate blindly. Perhaps some file filter driver from another app? Some weird issue in the JBOD firmware? You could also try resetting DrivePool's settings and application state if something in there has come unglued. If you end up still stuck, I'd suggest contacting StableBit.
-
It asks you whether you want to #1 have it use the most recent versions of the file(s) or #2 manually fix it yourself.
-
After a reboot, does manually creating a new folder or copying new data into each of the hidden poolpart folders cause the error on any drive(s), or is it only when creating/copying in the virtual pool drive?
Do you have particularly-paranoid antivirus software, and does (temporarily) disabling and/or uninstalling it make the problem disappear?
P.S. I've found that Q5510455 can fix most NTFS permission errors but unfortunately there are still a few it won't. See here for an alternative, more thorough method.
-
If the content I want to put into the pool is already on one of the drives (about to be) in the pool, I prefer to (add the drive and) manually move the content into the poolpart. As you noted, it's almost instant.
One thing I do to reduce any risk of accidentally conflicting files or folders across poolparts is to create a unique folder to move the content into. E.g. if I have a pool "P:\" then - after I've stopped the service - I create a unique folder in the poolpart (e.g. "D:\poolpart.xyz\123" where there wasn't previously a "P:\123") and move what I want into that unique folder. Then I can start the service back up and remeasure; any further moving can be done via the pool rather than the poolpart, and I don't even have to wait for the remeasure to finish.
-
I believe the short answer is that single-file-at-a-time balancing was by far the simplest and safest to code*, so guaranteed reliability (and more time to code other things) was chosen over raw speed.
*In my very limited and outdated experience, multi-threaded operations are fantastic when you can just tick a checkbox in a compiler that can safely (hah) optimise it all for you, and a complex pile of risk conditions when you have to write it yourself.
-
Moving the files into the poolpart while the service is stopped prevents any risk of DrivePool attempting to move the files while you're moving the files.
Personally I haven't found it necessary to reset DrivePool's settings afterwards as per the KB article; instead I just use Manage Pool -> Re-measure (and, if I'm using duplication, make sure it's appropriately set for the new folders and then use Troubleshooting -> Recheck Duplication), but your mileage may vary.
-
Hi toyoda, as I understand it a license transfer can be triggered by a significant change in hardware or OS. If that's not happening then you'd need to contact StableBit directly for support so they can figure out what is setting it off.
I delete files on the Cloud drive, but the changes are not reflected in the software
CloudDrive works by creating a virtual reproduction of a basic volume and drive on the remote cloud service, down to and including the sector level. Its consumption of actual physical space starts at a minimum point and expands up to the designated capacity as data is written to it, much like the VHDX virtual disks used in Hyper-V and similar environments.
Thus when you delete a file on a CloudDrive drive, the drive itself does not shrink on the cloud service, any more than a physical drive inside your computer shrinks when you delete a file on it.
Similarly, just like a physical drive, the sectors occupied by the data of deleted files remain available for re-use by new files (or for recovery in case of accidental deletion); when you delete a file on the drive, the "free" space of the drive itself goes up just like on a physical drive inside your computer.
If you wish to free up actual space on your cloud provider's service for non-drive purposes, use the Resize option of CloudDrive to shrink the drive; CloudDrive will automatically follow the Resize with a Cleanup to safely free up any excess space on the cloud provider's service that is no longer required to support the drive.
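The "grows but doesn't shrink" behaviour can be shown with a toy model (purely illustrative, nothing like CloudDrive's real internals):

```python
class ToyCloudDrive:
    """Toy model: a virtual drive whose on-provider footprint only
    grows as sectors are written, and only shrinks on resize+cleanup."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.allocated = set()   # sectors ever written (stored remotely)
        self.in_use = set()      # sectors holding live file data

    def write(self, sector: int):
        self.allocated.add(sector)
        self.in_use.add(sector)

    def delete(self, sector: int):
        self.in_use.discard(sector)  # sector freed for reuse by new files...
        # ...but self.allocated is unchanged: no provider space returned

    def provider_usage(self) -> int:
        return len(self.allocated)

    def resize_and_cleanup(self, new_capacity: int):
        self.capacity = new_capacity
        # cleanup releases allocated-but-unused sectors back to the provider
        self.allocated = {s for s in self.allocated if s in self.in_use}

d = ToyCloudDrive(100)
for s in range(10):
    d.write(s)
for s in range(10):
    d.delete(s)
print(d.provider_usage())   # still 10: deleting files doesn't shrink it
d.resize_and_cleanup(50)
print(d.provider_usage())   # 0 after resize + cleanup
```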
To pre-answer "why can't CloudDrive automagically free up de-allocated space in the background and/or without resizing": I'd presume that doing so safely would require significant additional coding (I know VHDX has an option to compact without resizing, but AFAIK not while writable!), and the task itself would slow down the drive while it was happening, so IMO it becomes a question of whether StableBit can line up enough ducks to make adding this worth doing rather than something else.
But if you'd really like to have it do that, you can submit feature requests directly to StableBit via the Contact form.