Reputation Activity
-
Shane got a reaction from ZagD in Can I use Drivepool inside a VM?
FWIW when I ran DrivePool in a MS Hyper-V VM a few times late last year, to mess with various pool setups, it worked without issue.
-
Shane reacted to FONMaster in reinstalling, but installer says reboot pending.
Mission accomplished! Thanks for the steer, Shane!
-
Shane reacted to RoadHazard386 in Making files visible on OneDrive Personal
Never mind. 😉 To accomplish my particular task, I don't actually need to make the files visible in Windows File Explorer; I just need to tweak OneDrive's settings to de-select the "OneDrive\Apps\StableBit Cloud Drive" folder. Now OneDrive will stop syncing it to each and every PC with OneDrive installed.
To free up the space it's already wasted, I then select the same folder in Windows File Explorer, right-click it, and choose "free up space."
-
Shane got a reaction from Shooter3k in Change settings.json to allow more concurrent disks?
Not that I know of. Perhaps make a feature request via the contact form?
-
Shane reacted to Alex in How to access your cloud drive data after May 15 2024 using your own API key
This post will outline the steps necessary to create your own Google Drive API key for use with StableBit CloudDrive. With this API key you will be able to access your Google Drive based cloud drives after May 15 2024 for data recovery purposes.
Let's start by visiting https://console.cloud.google.com/apis/dashboard
You will need to agree to the Terms of Service, if prompted.
If you're prompted to create a new project at this point, then do so. The name of the project does not matter, so you can simply use the default.
Now click ENABLE APIS AND SERVICES.
Enter Google Drive API and press enter.
Select Google Drive API from the results list.
Click ENABLE.
Next, navigate to: https://console.cloud.google.com/apis/credentials/consent (OAuth consent screen)
Choose External and click CREATE.
Next, fill in the required information on this page, including the app name (pick any name) and your email addresses.
Once you're done click SAVE AND CONTINUE.
On the next page click ADD OR REMOVE SCOPES.
Type Google Drive API into the filter, press Enter, and check Google Drive API - .../auth/drive
Then click UPDATE.
Click SAVE AND CONTINUE.
Now you will be prompted to add email addresses that correspond to Google accounts. You can enter up to 100 email addresses here. You will want to enter all of your Google account email addresses that have any StableBit CloudDrive cloud drives stored in their Google Drives.
Click ADD USERS and add as many users as necessary.
Once all of the users have been added, click SAVE AND CONTINUE.
Here you can review all of the information that you've entered. Click BACK TO DASHBOARD when you're done.
Next, you will need to visit: https://console.cloud.google.com/apis/credentials (Credentials)
Click CREATE CREDENTIALS and select OAuth client ID.
You can simply leave the default name and click CREATE.
You will now be presented with your Client ID and Client Secret. Save both of these to a safe place.
Finally, we will configure StableBit CloudDrive to use the credentials that you've been given.
Open C:\ProgramData\StableBit CloudDrive\ProviderSettings.json in a text editor such as Notepad.
Find the snippet of JSON text that looks like this:
"GoogleDrive": { "ClientId": null, "ClientSecret": null } Replace the null values with the credentials that you were given by Google surrounded by double quotes.
So for example, like this:
"GoogleDrive": { "ClientId": "MyGoogleClientId-1234", "ClientSecret": "MyPrivateClientSecret-4321" } Save the ProviderSettings.json file and restart your computer. Or, if you have no cloud drives mounted currently, then you can simply restart the StableBit CloudDrive system service.
Once everything restarts you should now be able to connect to your Google Drive cloud drives from the New Drive tab within StableBit CloudDrive as usual. Just click Connect... and follow the instructions given.
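If you'd rather restart just the service than the whole computer, here's a minimal PowerShell sketch (run as Administrator; "CloudDriveService" is my assumption for the service name - verify the exact name in services.msc first):
# Restart the StableBit CloudDrive system service (name assumed; check services.msc)
Restart-Service -Name "CloudDriveService"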
-
Shane got a reaction from igobythisname in Remove error "Not enough space on the disk"
"?-did i go about this correctly? (i know)i have been impatient to let it duplicate/rebalance while i'm trying to complete my drives swap"
"?-why did it give me the not enough space error when i added a new 18tb?"
I might guess that if you had "duplicate files later" checked, it may not have had time to do that after the first 10tb was removed and before the second 10tb was removed, so it had to duplicate 2x10tb when at that point it only had 1x18tb to go to? And/or did you have any File Placement rules that might've interfered? The only other thing I can think of is something interrupting the enclosure, which DrivePool interpreted as not enough space.
"?-why does the 2nd 10tb only read in my Sabrent enclosure but not when I install it in my tower?"
No idea; DrivePool shouldn't have any problem with a poolpart drive being moved to a different slot (assuming DP was shut down before moving it and started up after moving it). When you say it didn't show up, do you mean "it didn't show up in DrivePool" or do you mean "it didn't show up at all, e.g. not in Windows Disk Management"? Because the latter would indicate a non-DP problem.
-
Shane reacted to igobythisname in Remove error "Not enough space on the disk"
I see what you mean about the 2x10tb only having 1x18tb to go to.. the 2nd 10tb successfully removed, yay!
As for the drive not showing up in my tower: it was not showing up in BIOS or Disk Management; turns out it was the 3.3v pin on a shucked WD white label that was the issue.. I covered up the power pin with tape and all is well now. Thank you so much for your response.
Now I'm in the process of removing the 18tbs and 10tbs and reformatting them, so I can increase the allocation unit size from 4KB to 64KB.
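For reference, a minimal PowerShell sketch of that reformat (the drive letter X is hypothetical, and this erases the drive; add -Full if you want a long format rather than a quick one):
# Format drive X: as NTFS with a 64KB allocation unit (destructive!)
Format-Volume -DriveLetter X -FileSystem NTFS -AllocationUnitSize 65536 -Force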
-
Shane got a reaction from bobloadmire in Optimizing SSD cache with drive pool, trying to keep most recent files in cache for faster reads.
"1. Can I specify to keep the most recent files in the SSD cache for faster reads, while also duplicating them to archive disks during the nightly schedule?"
No; files can be placed by file and folder name, not by file age. You could try requesting it via the contact form? Not sure whether it'd work better as a plug-in or a placement rule (maybe the latter as an option when pattern matching, e.g. "downloads\*.*" + "newer than 7 days" = "keep on drives A, B, D").
"2. can I specify some folders to always be on the SSD cache?"
Yes; you can use Manage Pool -> Balancing -> File Placement feature to do this. You would also need to tick "Balancing plug-ins respect file placement rules" and un-tick "Unless the drive is being emptied" in Manage Pool -> Balancing -> Settings. If you're using the Scanner balancing plug-in, you should also tick "File placement rules respect real-time file placement limits set by the balancing plug-ins".
-
Shane got a reaction from inedibleoatmeal in Writing zeros to free space in pool
If you mean Microsoft's cipher /w, it's safe in the sense that it won't harm a pool. However, it will not zero/random all of the free space in a pool that occupies multiple disks unless you run it on each of those disks directly rather than on the pool (cipher /w operates by writing to a temporary file until the target disk is full, and DrivePool will return "full" once that file fills the first disk in the pool to which it is being written).
You might wish to try something like CyLog's FillDisk instead (that writes smaller and smaller files until the target is full), though disclaimer I have not extensively tested FillDisk with DrivePool (and in any case both programs may still leave very small amounts of data on the target).
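For example, a minimal PowerShell sketch of running it against each pooled disk directly (the drive letters here are hypothetical - substitute your own pooled drives):
# Wipe free space on each pooled disk in turn (cipher /w writes zeros, then 0xFF, then random data)
foreach ($drive in 'D:\', 'E:\', 'F:\') { cipher "/w:$drive" }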
-
Shane reacted to fleggett1 in Drive question.
I heard back from Gillware. They want $1200 AND another drive that's at least 20 TB. And they can only "guarantee" that they'll be able to recover around 80% of the disk.
JFC. It's like they think the drive has been at the bottom of the sea for the past year. I was thinking $400 tops.
I may never again go anywhere near diskpart. If I had known things were going to be this expensive, I would've tried using something like EaseUS first, on my own.
Ughughughughughugh.
-
Shane reacted to fleggett1 in Drive question.
I'm still restoring the pool, which'll take another couple of days (no word yet from Gillware), but I got off my duff and took the Sabrent apart. I have to say that it's really impressive on the inside, with almost everything some manner of metal. Only the front doors and a handful of other parts are plastic. Sabrent says that, unpopulated, it weighs 17.8 lbs, so it would definitely kill you if someone dropped it on your head, even drive-less.
The PSU is a very cute 3.5" x 5" x 1.25" device that should be up to the task since it only has to have enough oomph for 10 drives along with the two 120mm "Runda" RS1225L 12L fans. As you can see from the label, it's a 399.6W "Mean Well" EPP-400-12 unit when cooled adequately. Since a normal 3.5" drive only consumes around 9.5W at load, that's only 95W needed for the drives, with the rest for the fans and the supporting PCB.
So, assuming the PSU isn't outright lying, I don't think that's the issue. I checked mine for obvious problems, like expanding or leaking caps, but didn't see anything. The caps themselves look to be of pretty decent quality and the bottom circuit board appears clean. The unit has what looks to be a rheostat, but I have no idea what it controls.
If I had to take a very wild guess, I would say there's something amiss with the backplane, as that seems to be a common source of consternation with low(er)-grade server-style boxes, like the Norco. It is interesting that I only started seeing drive dropouts once I had populated all ten bays, so maybe it works fine with only 8 or 9 bays running.
So far, the two Terras are running just fine. I'm praying to all sorts of gods that it remains so.
-
Shane reacted to Martixingwei in Google drive not supported after may 2024? Why?
I believe currently the only way is to download everything to local storage and use the converter tool that comes with CloudDrive to convert it to another mountable format.
-
Shane reacted to fleggett1 in Drive question.
Oh, hrrmmm, interesting. It looks like FreeFileSync did the job, but its UI looks like it was done by a madman, so I might try Robocopy next time. That fileclasses link is EXTREMELY helpful.
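For anyone curious, a minimal robocopy sketch for mirroring one drive to another (the paths and options here are hypothetical, and /MIR deletes destination files that aren't in the source, so double-check both paths before running it):
# Mirror D:\ to E:\, skipping system folders, logging to a file
robocopy D:\ E:\ /MIR /R:2 /W:5 /XD "System Volume Information" '$RECYCLE.BIN' /LOG:C:\robocopy-mirror.log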
A bit more on the pool reconstruction front. I really needed more than six drive bays to work with, so I bloodied a credit card and bought another Terra. FFS mirrored everything on the old pool to the three new Seagates seemingly fine. Oh, but before that, the Seagates long formatted successfully, so I'm considering them good to go.
I'm currently long formatting three of the old pool drives, which'll take another 24 hours. Once that's done, I'll fire-up the second Terra with old pool drives and copy everything over from them to the drives that I'm in the process of long formatting (presuming they pass).
Gillware has the drive. I stressed to my contact that the drive should be fine electronically and mechanically, so they shouldn't have to take it apart. I'm HOPING this will lower the cost of the recovery substantially. You would think restoring everything from a simple diskpart clean should be a cakewalk for a professional recovery service, but we'll see.
Incidentally, I was looking over Terramaster's line of products and they are all-in on NAS devices, with their flagship product supporting 24 drives. I wish they would offer a 10- or even 8-bay DAS box, but then you're back to needing a beefy PSU. I still intend to take apart the Sabrent assuming it's not a nightmare to do so.
That's currently all the news that's fit to print. More to come!
-
Shane got a reaction from Mav1986 in Google drive not supported after may 2024? Why?
Once you've obtained your own API key from Google, the file to edit is C:\ProgramData\StableBit CloudDrive\ProviderSettings.json
I also suggest making a backup copy of that file before you make any changes to it.
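For example, in PowerShell:
# Make a backup copy of ProviderSettings.json before editing it
Copy-Item 'C:\ProgramData\StableBit CloudDrive\ProviderSettings.json' 'C:\ProgramData\StableBit CloudDrive\ProviderSettings.json.bak'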
You may then need to restart CloudDrive. Per Christopher's post, "The safest option would be to detach the drive, change API keys, and re-attach. However, you *should* be able to just re-authorize the drive after changing the API keys."
I suspect 350 Mbps is the best you'll get.
-
Shane reacted to fleggett1 in Drive question.
Rossmann said they couldn't do anything with the drive, which REALLY surprised me, so they referred me to Gillware Data Recovery. I need to talk to them first before sending the drive in to make sure the attempt will cost a reasonable sum, which I'll try to do tomorrow. I guess the moral of the story is not to futz around with diskpart unless you know for damn sure what you're doing.
I received the Terramaster. It looks like a typical grey internal drive cage, just a little wider and without any mounting points. It uses screw-in drive sleds, unlike the Sabrent, which is a PITA, but c'est la vie. However, it also came with some paper-like inserts that go in-between the drives and the bottom of the sleds, which I've never seen. The boxed instructions don't say why they're required and the online product guide doesn't even mention them, so I can only guess their purpose (electrical insulators?). Bizarre. It also doesn't have an internal PSU as such, but came with a laptop-style 12V barrel plug brick power converter. Unlike the Sabrent, the drives sit vertically, which makes me a bit uncomfortable (I might lay it on its side). There is only one USB-C connector.
There has been some talk on the Terramaster forums about people needing a powered USB hub with this unit. I don't know why this would be required when directly attached to the USB-C port on a motherboard, but maybe that's been my problem all along. If so, I'll consider slitting my wrists later.
I'm in the process of long-formatting three 18 TB Seagate Exos simultaneously. If that passes, I'll continue trying to rebuild the pool. This is my last attempt at doing so. If the formats fail, I'm giving up, at least for a while. Maybe in the far-flung future I'll get a proper tower with a 10,000 watt PSU and stuff it with drives, but I really, REALLY hope the Terramaster comes through. I have confirmed that Windows recognizes all six drives, but the Sabrent did the same until I started encountering those drive dropouts, so longevity will determine the winner.
More to come!
-
Shane reacted to fleggett1 in Drive question.
I haven't heard from Rossmann yet. If I don't get a reply by Wednesday, I'll voice call them.
I did order the 6-bay Terramaster, which should be here tomorrow. If/when I'm satisfied it's running as advertised, I'll rebuild the pool and copy over what I can. Assuming everything goes smoothly (har har), I'll reinstall everything and restore from config backups, which I've been holding off on doing until I can figure things out.
I really, REALLY hope the Terramaster "just works". Since it only accepts six drives, the power demand should be a lot less than the Sabrent. I'd still like to disassemble the Sabrent to see what sort of PSU it's using. I'm 99.9% sure it's something custom, but you never know until you actually eyeball things.
I'll report back in a few days. Wish me luck.
-
Shane reacted to fleggett1 in Drive question.
I think I'm gonna give up on having a pool. Maybe a computer. I was trying to clear that 18 TB Exos out using diskpart. This Exos has the label ST1800NM. Another 20 TB Exos that I had bought a few months ago has the label ST2000NM.
I was tired, bleary-eyed, more than a little frustrated with all these problems, wasn't thinking 100% straight, and selected the ST2000NM in diskpart, and cleaned it.
Problem is this drive had gigabytes of data that was critical to the pool. GIGABYTES. I still can't believe I made such a simple, rookie, and yet devastating mistake.
My God.
I don't know if any of the file table can be salvaged, as I just did a simple "clean" and not a "clean all", but I've got a recovery request into Rossmann Repair Group to see if they can do anything with it. I know there are some tools in the wild that I could probably run myself on the drive, but I don't trust myself to do much of anything atm.
I should never have thrown money at an AM5 system. I also probably should've stayed away from the Sabrent and anything like it. Instead, I should've done what any sane person would've done and assembled a proven AM4 or Intel platform in a full tower and attached the drives directly to the motherboard. Yeah, the cable management would've been a nightmare, but literally anything would be better than this. My goal of staving off obsolescence as much as possible has instead kicked me in the teeth while I was already lying prone in a ditch.
If, by some miracle, Rossmann is able to recover the data, I'm going to take a long and hard look at my PC building strategy. Hell, maybe I'll throw money at a prebuilt or one of those cute HDMI-enabled NUCs that're all the rage. I just know that I'm exhausted and am done with all of this, at least for the time being.
-
Shane got a reaction from MrPapaya in Real-time/deferred duplication settings question...
I think that's not currently possible. It does sound useful and you might wish to submit a feature request (e.g. perhaps "when setting per-folder duplication, can we please also be able to set whether duplication for a folder is real-time or scheduled") to see if it's feasible?
-
Shane reacted to MrPapaya in Source and destination backup drives
I'm sure you figured it out already... From the images you posted, it just looks like a simple change is needed.
The pool called ORICO BOX is fine as is. The one in the first image is not correct. You should have:
A pool that has 12TB1 & 12TB2 with NO duplication set (let's give it drive letter W:).
A pool called ORICO BOX, containing the assorted drives, with NO duplication set (let's call it drive letter X:).
Now, drive W: essentially has 24TB of storage since anything written to W: will only be saved to ONE of the two drives. You can set the balancing plugin to make them fill up equally with new data.
Drive X: essentially has 28TB of storage since anything written to X: will only be saved to ONE of the five drives.
At this point, you make ANOTHER new pool; let's call it Z:. In it, put Drivepool W: and Drivepool X:. Set the duplication settings to 2x for the entire pool. Remember, you are only setting Drivepool Z: to 2x duplication; no other drivepools need to be changed.
What this should do (if I didn't make a dumb typo): any file written to drive Z: will have one copy stored on either 12TB1 OR 12TB2, AND a duplicate copy will be stored on ONE of the five Orico Box drives. You must read & write your files on drive Z: to make this magic happen. Draw it out as a flowchart on paper (or see the sketch below) and it is much easier to visualise.
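Sketched as a tree (drive letters as assumed in this example):
Z:  (pool, duplication x2)
├── W:  (pool, no duplication)
│   ├── 12TB1
│   └── 12TB2
└── X:  ORICO BOX (pool, no duplication)
    └── the five assorted drives (each file lands on ONE of them)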
-
Shane got a reaction from roirraWedorehT in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11
FWIW, digging through Microsoft's documentation, I found these two entries in the file system protocols specification:
https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/2d3333fe-fc98-4a6f-98a2-4bb805aff407
https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/98860416-1caf-4c80-a9ab-8d61e1ccf5a5
In short, if a file system cannot provide a file ID that is both unique within a given volume and stable until deleted, then it must set the field to either zero (indicating the file system does not support file IDs) or maxint (indicating the file system cannot give a particular file a unique ID), as per the specification.
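If you want to see what a volume reports for a given file, a quick sketch using the built-in fsutil (run from an elevated PowerShell prompt; the path is hypothetical):
# Query a file's ID, rename the file, then query it again;
# on NTFS the ID should survive the rename
fsutil file queryfileid D:\example.txt
Rename-Item D:\example.txt example2.txt
fsutil file queryfileid D:\example2.txt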
-
Shane got a reaction from roirraWedorehT in Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11
MitchC, first of all thank you for posting this! My (early a.m.) thoughts:
(summarised) "DrivePool does not properly notify the Windows FileSystem Watcher API of changes to files and folders in a Pool." If so, this is certainly a bug that needs fixing. Indicating "I changed a file" when what actually happened was "I read a file" could be bad or even crippling for any cohabiting software that needs to respond to changes (as per your example of Visual Studio), as could neglecting to say "this folder changed" when a file/folder inside it is changed.
(summarised) "DrivePool isn't keeping FileID identifiers persistent across reboots, moves or renames." Huh. Confirmed, and as I understand it the latter two should be persistent @Christopher (Drashna)? However, attaining persistence across reboots might be tricky given a FileID is only intended to be unique within a volume while a DrivePool file can at any time exist across multiple volumes due to duplication and move between volumes due to balancing and drive replacement. Furthermore as Microsoft itself states "File IDs are not guaranteed to be unique over time, because file systems are free to reuse them". I.e. software should not be relying solely on these over time, especially not backup/sync software! If OneDrive is actually relying on it so much that files are disappearing or swapping content then that would seem to be an own-goal by Microsoft. Digging further, it also appears that FileID identifiers (at least for NTFS) are not actually guaranteed to be collision-free (it's just astronomically improbable in the new 64+64bit format as opposed to the old but apparently still in use 16+48bit format).
(quote) "the FileID numbers given out by DrivePool are incremental and very low. This means when they do reset you almost certainly will get collisions with former numbers." Ouch. That's a good point. Any suggestions for mitigation until a permanent solution can be found? Perhaps initialising DrivePool's FileID counter using the system clock instead of initialising it to zero, e.g. at 100ns increments (FILETIME) even only an hour's uptime could give us a collision gap of roughly thirty-six billion?
(quote) "I would avoid any file backup/synchronization tools and DrivePool drives (if possible)." I disagree; rather, I would opine that any backup/synchronization tool that relies solely on FileID for comparisons should be discarded (if possible); a metric that's not reliable over time should ipso facto not be trusted by software that needs to be reliable over time. EDIT 2024-10-22: However, as MitchC has pointed out, determining whether your tools are using FileID can be difficult and the risk of finding out the hard way is substantial.
Incidentally, on the subject of file hashing I recommend ensuring Manage Pool -> Performance -> Read striping is un-ticked as I've found intermittent hashing errors in a few (not all) third-party tools when this is enabled; I don't know why this happens (maybe low-level disk calls that aren't compatible with non-physical volumes?) but disabling read-striping removes the problem and I've found the performance hit is minor.
-
Shane got a reaction from Tiemmothi in FAQ - Unduplicated vs Duplicated vs Other vs Unusable
The "Other" and "Unusable" sizes displayed in the DrivePool GUI are often a source of confusion for new users. Please feel free to use this topic to ask questions about them if the following explanation doesn't help.
Unduplicated: the total size of the files in your pool that aren't duplicated (i.e. exist on only one disk in the pool). If you think this should be zero and it isn't, check whether you have folder duplication turned off for one or more of your folders (e.g. in version 2.x, via Pool Options -> File Protection -> Folder Duplication).
Duplicated: the total size of the files in your pool that are duplicated (i.e. kept on more than one disk in the pool; a 3GB file on two disks is counted as 6GB of duplicated space in the pool, since that's how much is "used up").
Other: the total size of the files that are on your pooled disks but not in your pool, plus all the standard filesystem metadata and overhead that take up space on a formatted drive. For example, the hidden protected system folder "System Volume Information" created by Windows will report a size of zero even if you are using an Administrator account, despite possibly being many gigabytes in size (at least if you are using the built-in Explorer; other apps such as JAM's TreeSize may show the correct amount).
Unusable for duplication: the amount of space that can't be used to duplicate your files, because of a combination of the different sizes of your pooled drives, the different sizes of your files in the pool and the space consumed by the "Other" stuff. DrivePool minimises this as best it can, based on the settings and priorities of your Balancers.
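A hypothetical worked example: a pool of one 10TB drive and one 2TB drive with x2 duplication on everything. Each duplicated file must occupy two different drives, so once the 2TB drive is full nothing more can be duplicated, and roughly 10TB - 2TB = 8TB of the larger drive would show as unusable for duplication (ignoring "Other").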
More in-depth explanations can also be found elsewhere in the forums and on the Covecube blog at http://blog.covecube.com/
Details about "Other" space, as well as the bar graphs for the drives, are discussed here: http://blog.covecube.com/2013/05/stablebit-drivepool-2-0-0-230-beta/
-
Shane got a reaction from Bear in Running Out of Drive Letters
Pretty much as VapechiK says. Here's a how-to list based on your screenshot at the top of this topic:
1. Create a folder, e.g. called "mounts" or "disks" or whatever, in the root of any physical drive that ISN'T your drivepool and IS going to be always present:
   - You might use your boot drive, e.g. c:\mounts
   - You might use a data drive that's always plugged in, e.g. x:\mounts (where "x" is the letter of that drive)
2. Create empty sub-folders inside the folder you created, one for each drive you plan to "hide" (remove the drive letter of). I suggest a naming scheme that makes it easy to know which sub-folder is related to which drive:
   - You might use the drive's serial number, e.g. c:\mounts\12345
   - You might have a labeller and put your own labels on the actual drives, then use that for the name, e.g. c:\mounts\501
3. Open up Windows Disk Management and, for each of the drives: remove any existing drive letters and mount paths, then add a mount path to the matching empty sub-folder you created above (a command-line equivalent is sketched below).
4. Reboot the PC (doesn't have to be done straight away but will clear up any old file locks etc).
That's it. The drives should now still show up in Everything, as sub-folders within the folder you created, and in a normal file explorer window the sub-folder icons should gain a small curved arrow in the bottom-left corner as if they were shortcuts.
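If you prefer doing step 3 from the command line, a minimal sketch using the built-in mountvol (run elevated; the drive letter, folder name and volume GUID here are all hypothetical):
# List volume GUIDs so you can identify the drive you want to hide
mountvol
# Remove the drive's existing letter
mountvol X:\ /D
# Mount the volume into the empty sub-folder instead
mountvol C:\mounts\12345 \\?\Volume{01234567-89ab-cdef-0123-456789abcdef}\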
P.S. And speaking of shortcuts I'm now off on a road trip or four, so access is going to be intermittent at best for the next week.
-
Shane reacted to Elijah_Baley in Windows update caused a problem
This is mainly an informational post. This is concerning Windows 10.
I have 13 drives pooled and I have every power management function set so as to not allow Windows to control power or in any way shut down the drives or anything else. I do not allow anything on my server to sleep either. I received a security update from Windows about 5 days ago.
After the update I began to receive daily notices that my drives were disconnected.
Shortly after any of those notices (within 2 minutes) I received a notice that all drives had been reconnected. There were never any errors resulting from whatever triggered the notices.
I decided to check and I found that one of my USB controllers had its power control status changed. I changed it back to not allowing Windows to control its power and I have not received any notices since.
I do not know for sure but I am 99% sure that the Windows update toggled that one controller's power control status to allow windows to turn it off when not being used.
I cannot be 100% sure that I have had it always turned off but, until the update, I received none of the notices I started receiving after the update.
I suggest, if anyone starts receiving weird notices about several drives becoming lost from the pool, that you check the power management status of your drives. Sometimes Windows updates are just not able to resist changing things. They also introduce gremlins. You just have to be careful to not feed them after midnight and under no circumstances should you get an infested computer wet.
-
Shane reacted to DebrodeD in File Placement Rules & SSD Optimizer don't work together
Future reference for anyone else who runs into this issue, I fixed it with the following settings:
Under file placement settings, uncheck "unless drive is being emptied", but leave "file placement rules respect real-time..." checked. This is important because the SSD optimizer empties the drive, which is why it was overriding file placement rules.
In file placement rules, Folder A should have the desired archive drive checked as well as the SSD cache drive.