All Activity


  1. Past hour
  2. I replaced my Hades Canyon Windows 10 PC with a MeLE 4C mini PC running Windows 11. I deactivated the DrivePool license on the Windows 10 machine and, after installing DrivePool on the MeLE 4C, activated it there. I have a four-bay DAS with two 14TB drives and two 16TB drives, split into two pools (we'll call them the 28TB pool and the 32TB pool). I connected the DAS to the mini PC and at first File Explorer could only see three of the four drives, but after plugging the DAS into the USB-C port I can see all the media files on both pools. The problem is that when I double-click a media file on the 28TB pool, the app says "file not found" even though I can see the file under the DrivePool drive letter in File Explorer. I can play all files in the larger 32TB pool. I moved the DAS back to the Windows 10 machine for regression testing, and even though I deactivated the DrivePool license there, I can still play any file from either pool. I do not know if this is a mini PC problem, a Windows 11 problem, or if DrivePool is doing something strange. I didn't include my Activation ID because I didn't think it would be needed to troubleshoot this.
  3. Today
  4. hello again lol. i'm going to ASSUME you are NOT using duplication on your DrivePool; i gather this from your previous post where you said you had attempted to use SnapRAID to no avail. Shane is of course correct about disabling any file placement rules pertaining to the iSCSI drive BEFORE attempting the below. i would:
     1. show hidden files and folders, and STOP the DrivePool service.
     2. navigate to the PoolPart folder on the iSCSI drive, select all folders/files, right-click CUT, and go up and PASTE on the root of the drive.
     3. reopen the DP GUI and remeasure the pool (important). the data you moved out of the pool will now show as 'other', but it will still exist on the root of the iSCSI drive (just not in the 'pool').
     4. STOP the DrivePool service again. open a File Explorer instance for each of the underlying pooled drives you wish to move the unpooled iSCSI data to. right-click COPY and PASTE, distributing the data from the root of the iSCSI drive into the PoolPart folders on whatever local drives in the pool have space to take it. i say 'COPY' because you want to compare the copied data to the source, right? once you are satisfied a bit-for-bit copy has occurred, you can gradually right-click SHIFT + DELETE all data from the iSCSI drive.
     5. when done, REMEASURE the pool and REBOOT.
  keep in mind that, even while the DrivePool service is stopped, the DrivePool kernel driver is still enabled and the pool still shows up in File Explorer as one big honkin' drive. this means all drives comprising your pool are still considered part of the 'conglomerate whole', and traditional drag-n-drop practices may no longer apply between the underlying drives. it can be a mindf**k for sure. HAVE FUN cheers
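The copy-then-verify idea in step 4 above (copy into the destination PoolPart, confirm a bit-for-bit match, only then delete the source) can be sketched as a small helper. This is a hypothetical illustration, not part of DrivePool; the function name and the choice of SHA-256 are my own assumptions:

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file, read in 1 MiB chunks so large media files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src: Path, dst: Path) -> bool:
    """Copy the tree at src into dst, then hash-compare every file.

    Returns True only if every source file has an identical copy at the
    destination; delete the originals only after this returns True.
    """
    shutil.copytree(src, dst, dirs_exist_ok=True)
    for f in src.rglob("*"):
        if f.is_file():
            twin = dst / f.relative_to(src)
            if not twin.is_file() or file_hash(f) != file_hash(twin):
                return False
    return True
```

A file manager's "compare" step does the same job; the point is simply never to SHIFT+DELETE the source until the verification has passed.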
  5. Removing a drive doesn't make the pool read-only during the removal. It does prevent removing other drives (they'll be queued for removal instead) and I believe it may prevent some other background/scheduled tasks, but one should still be able to read and write files normally to the pool. Only problem I can think of is if you're removing drive X and you've got a file placement rule that says files can only be put on X; I'd assume you'd have to change or disable that rule.
  6. Yesterday
  7. I'm trying to move some files around on my drive pool, and I need to evacuate a large iSCSI drive. The proper process of removing the drive will lock my pool up for a few days, but I want to continue writing data to the pool while the removal is happening. Can I avoid this by manually going into the PoolPart and moving the files to another drive's PoolPart? Will that freak out DrivePool? I've already excluded the drive I'd like to remove from file placement, along with disabling any automatic balancing.
  8. Not that I know of. Perhaps make a feature request via the contact form?
  9. Did you get this figured out? I swapped out 2 drives this week, but I did them one at a time. Same here: the first drive removed quickly and easily, but the 2nd drive took ~5 days after a remove error I received at around the 36-hour mark.
  10. is there any way to make it more than 1 disk at a time?
  11. Non-real-time duplication is scheduled, so one disk at a time. When that runs is controlled by FileDuplication_DuplicateTime in the Settings.json file.
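For reference, that setting lives in DrivePool's Settings.json. The exact shape of the entry below is an assumption (check your own file for the format it actually uses); the idea is that the `Override` value, when set, replaces the default scheduled time:

```json
{
  "FileDuplication_DuplicateTime": {
    "Default": "02:00",
    "Override": null
  }
}
```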
  12. sorry, I was referring to non-real-time duplication. Are there any settings/options for when that runs, or is that the same thing?
  13. As far as I can tell real-time tasks (e.g. real-time duplication) are as concurrent as they need to be (e.g. 2x duping = simultaneous writes to two disks, 10x duping = to ten disks, etc) while scheduled tasks (e.g. balancing, duplication checking, etc, including manually initiated tasks) are one disk at a time (at least for writes).
  14. Last week
  15. Hello, brand new to this software. I've noticed this question gets asked indirectly a lot but I haven't been able to find a definitive answer yet. When the "duplicating" step is running, what option in the settings.json file allows for more than 2 disks to be used at the same time? I've tried changing the settings to all sorts of high numbers but still only 2-4 disks are being used concurrently (when there are 10 disks total in the pool). Once that's changed, is a reboot required? A restart of the duplication step? A restart of the software? At what point do the updated settings take effect? Thanks for any information you're able to provide, and sorry about the noob question.
  16. hello, i'd like a "file placement rule" that can put files "smaller/bigger than" a given size on "selected drives", to better optimize space and speed. thank you, i appreciate this product.
  17. This post will outline the steps necessary to create your own Google Drive API key for use with StableBit CloudDrive. With this API key you will be able to access your Google Drive based cloud drives after May 15 2024 for data recovery purposes.

  Let's start by visiting https://console.cloud.google.com/apis/dashboard
  - You will need to agree to the Terms of Service, if prompted.
  - If you're prompted to create a new project at this point, then do so. The name of the project does not matter, so you can simply use the default.
  - Now click ENABLE APIS AND SERVICES.
  - Enter Google Drive API and press enter.
  - Select Google Drive API from the results list.
  - Click ENABLE.

  Next, navigate to https://console.cloud.google.com/apis/credentials/consent (OAuth consent screen):
  - Choose External and click CREATE.
  - Fill in the required information on this page, including the app name (pick any name) and your email addresses. Once you're done click SAVE AND CONTINUE.
  - On the next page click ADD OR REMOVE SCOPES.
  - Type Google Drive API into the filter (enter) and check Google Drive API - .../auth/drive
  - Then click UPDATE, and click SAVE AND CONTINUE.
  - Now you will be prompted to add email addresses that correspond to Google accounts. You can enter up to 100 email addresses here. You will want to enter all of your Google account email addresses that have any StableBit CloudDrive cloud drives stored in their Google Drives. Click ADD USERS and add as many users as necessary.
  - Once all of the users have been added, click SAVE AND CONTINUE.
  - Here you can review all of the information that you've entered. Click BACK TO DASHBOARD when you're done.

  Next, you will need to visit https://console.cloud.google.com/apis/credentials (Credentials):
  - Click CREATE CREDENTIALS and select OAuth client ID.
  - You can simply leave the default name and click CREATE.
  - You will now be presented with your Client ID and Client Secret. Save both of these to a safe place.
  Finally, we will configure StableBit CloudDrive to use the credentials that you've been given.

  Open C:\ProgramData\StableBit CloudDrive\ProviderSettings.json in a text editor such as Notepad, and find the snippet of JSON text that looks like this:

      "GoogleDrive": {
          "ClientId": null,
          "ClientSecret": null
      }

  Replace the null values with the credentials that you were given by Google, surrounded by double quotes. For example:

      "GoogleDrive": {
          "ClientId": "MyGoogleClientId-1234",
          "ClientSecret": "MyPrivateClientSecret-4321"
      }

  Save the ProviderSettings.json file and restart your computer. Or, if you have no cloud drives mounted currently, you can simply restart the StableBit CloudDrive system service.

  Once everything restarts you should be able to connect to your Google Drive cloud drives from the New Drive tab within StableBit CloudDrive as usual. Just click Connect... and follow the instructions given.
  18. If the main goal is to have the cloud storage act as an offsite backup that doesn't slow down the local storage, instead of a single pool with local and cloud disks I'd suggest separate pools for local and cloud with a backup tool updating the latter from the former. Edit: Otherwise, with only a single pool of mixed local and cloud drives, ensure the cloud drive local cache is set to expandable and is on a large enough disk to handle typical write volumes. Per the manual: "An expandable cache will try not to consume all of the disk space on the local cache drive. It will always try to keep at least 5 GB free. As the free space on the cache drive approaches 5 GB, in order to continue to maintain some free space on the local cache drive, writes to the cloud drive will get progressively slower."
  19. I've been trying to set up a 2x duplication system and am running into performance issues. I currently have 2x 8TB drives and would like to also use Azure cloud storage. I have successfully set this up, but it seems to be slow because DrivePool writes to both the local and cloud drives at the same time. What would be the best configuration for using cloud storage as an offsite backup without sacrificing performance?
  20. i see what you mean about the 2x10tb only having 1x18tb to go to.. the 2nd 10tb successfully removed, yay! as for the drive not showing up in my tower: it was not showing up in BIOS or Disk Management; turns out it was the 3.3v pin on a shucked WD white label that was the issue.. i covered up the power pin with tape and all is well now. thank you so much for your response. now i'm in the process of removing the 18tbs and 10tbs and reformatting them, so i can increase the allocation unit size from 4KB to 64KB
  21. "?-did i go about this correctly? (i know)i have been impatient to let it duplicate/rebalance while i'm trying to complete my drives swap" "?-why did it give me the not enough space error when i added a new 18tb?" I might guess that if you had "duplicate files later" checked then it may not have had time to do that after the first 10tb was removed and before the second 10tb was removed, so it had to duplicate 2x10tb when at that point it only had 1x18 to go to? And/or did you have any File Placement rules that might've interfered? Only other thing I can think of is something interrupting the enclosure and DrivePool thought that meant not enough space. "?-why does the 2nd 10tb only read in my Sabrent enclosure but not when I install it in my tower?" No idea; DrivePool shouldn't have any problem with a poolpart drive being moved to a different slot (assuming DP was shut down before moving it and started up after moving it). When you say it didn't show up, do you mean "it didn't show up in DrivePool" or do you mean "it didn't show up at all, e.g. not in Windows Disk Management"? Because the latter would indicate a non-DP problem.
  22. fleggett1

    Drive question.

    I told Gillware that I couldn't afford the recovery and to just ship the drive back. Cost me $30 in shipping fees. I'll try some recovery tools on my own, like EaseUS. If that doesn't work, I know there are less expensive companies to consider, as from what I understand, Gillware is one of the priciest operations doing this sort of stuff.
  23. Sorry for the inconvenience; I searched the forum last night and this morning before deciding to post this, and haven't really seen anyone post this yet.

  I have 5 drives in a Sabrent 5-bay USB-C enclosure that I created my drivepool with: 2x 10tb, 2x 12tb, 1x 14tb. I want to remove the 2x 10tb and install 2 new 18tb drives. I successfully removed 1x 10tb (it didn't take much time; I had the middle of the 4 boxes checked in the remove options) and installed 1x 18tb. Now it's time to remove the 2nd 10tb, and it's been a weeks-long struggle for me.... it took like 36hrs and I got an error: "There is not enough space on the disk". How? I just installed an 18tb and I see the pool has barely put like 1/10 of the data on it.

  Then I thought: I have 2x 4tb drives in my tower (not a part of any pool), and StableBit Scanner has been telling me they're failing, so I'll remove them and install the 2x 10tb I'm trying to remove from the enclosure. Since I successfully removed the 1x 10tb from the pool, I figured I'd just pull the 2nd 10tb too, put both in my tower, and add them back to the pool. I tried that and my computer only reads the one I successfully removed; the 2nd 10tb is not showing up (I did read in my prior search that this may be because it's in read-only mode..?). I thought my computer would read the 2nd 10tb and that drivepool would identify it and carry on like it's in the pool (all I did was take it out of my enclosure and put it in my tower). So now I've put the (error'd) 10tb back in my enclosure, added the (removed) 10tb to my tower and added it to the pool, so fingers crossed it won't tell me there is not enough space on the disk... I'm at 85.7% on the removal as of writing this.

  ?-did i go about this correctly? (i know i have been impatient about letting it duplicate/rebalance while trying to complete my drive swap)
  ?-why did it give me the not-enough-space error when i added a new 18tb?
  ?-why does the 2nd 10tb only read in my Sabrent enclosure but not when I install it in my tower?

  Thanks in advance!
  24. After some more testing I can confirm cipher /w run on the pool only erases the free space on one drive in the pool before stopping, while filldisk run on the pool appears to leave anywhere from a few megabytes to over a gigabyte untouched on each of the disks in the pool. Filldisk also reacted similarly when run on individual drives via a mounted path (e.g. E:\Disks\MountedDrive) rather than via a drive letter, which is something to keep in mind if you use the former method to directly access your drives. Based on observing how DrivePool operated, I would recommend free-space cleaning tools only be run directly on the individual drives.

  - CyLog's FillDisk: writes only zeroes, so three times faster than cipher /w. Doesn't remove the files it creates after it's aborted or finished, so you can effectively "pause" and you can see for yourself whether the free space is zero afterwards. In testing it did not completely wipe the free space on drives accessed via mounted paths. May not scrub deleted files that are very small (1KB) due to how NTFS works.
  - Microsoft's Cipher: writes zeroes, then ones, then random bytes, so more thorough but three times slower than filldisk. Leaves an empty EFSTMPWP folder on the target afterwards, so it's hard to check if it gets every last byte, and there is a warning in the MS documentation about it potentially missing files of 1KB or less in size. Worked on mounted paths.
  - SysInternals' SDelete: like Cipher, but apparently has the 1KB file issue solved.
  - TRIM: any SSD with TRIM functionality can do this automagically (and thoroughly). A Windows command to manually trigger this is defrag volume /o - e.g. "defrag e: /o".

  If you are concerned about thieves scrounging through your Windows disks after stealing the machine, I'd recommend BitLocker and a long passphrase.
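The "cipher stops after one drive" behaviour can be shown with a toy model. This is purely an illustration, not DrivePool's actual code; the assumptions are that new files land on the disk with the most free space and that a single growing file cannot span disks:

```python
# Toy model of free-space wiping on a pool of disks, where each list
# entry is one disk's remaining free space in GB. Placement assumption:
# new files go to the disk with the most free space.

def wipe_single_file(disks):
    """cipher /w style: one temp file grows until its own disk is full.

    The pool reports "disk full" as soon as that one disk fills, so the
    tool stops and every other disk keeps its deleted data.
    """
    target = max(range(len(disks)), key=lambda i: disks[i])
    disks[target] = 0  # only this disk's free space gets scrubbed
    return disks

def wipe_many_files(disks, chunk=1):
    """FillDisk style: many small files, each placed independently,
    so the writes eventually reach every disk in the pool."""
    while any(free >= chunk for free in disks):
        target = max(range(len(disks)), key=lambda i: disks[i])
        disks[target] -= chunk
    return disks
```

For example, `wipe_single_file([40, 100, 60])` leaves `[40, 0, 60]`, while `wipe_many_files([40, 100, 60])` drives every entry to zero, which matches the observed difference between the two tools when run against the pool drive letter.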
  25. thank you, clear and concise. Re #1, it seems like it shouldn't be too hard to implement, and it would be a very powerful feature.
  26. "1. Can I specify to keep the most recent files in the SSD cache for faster reads, while also duplicating them to archive disks during the nightly schedule?" No; files can be placed by file and folder name, not by file age. You could try requesting it via the contact form? Not sure whether it'd work better as a plug-in or a placement rule (maybe the latter as an option when pattern matching, e.g. "downloads\*.*" + "newer than 7 days" = "keep on drives A, B, D"). "2. can I specify some folders to always be on the SSD cache?" Yes; you can use Manage Pool -> Balancing -> File Placement feature to do this. You would also need to tick "Balancing plug-ins respect file placement rules" and un-tick "Unless the drive is being emptied" in Manage Pool -> Balancing -> Settings. If you're using the Scanner balancing plug-in, you should also tick "File placement rules respect real-time file placement limits set by the balancing plug-ins".
  27. 2 questions. 1. Can I specify to keep the most recent files in the SSD cache for faster reads, while also duplicating them to archive disks during the nightly schedule? Essentially files would be duplicated at night during down time, and files wouldn't be deleted from the SSD cache. However, the cache would operate on a first-in/first-out basis, so that the most recent files would be kept in the cache for faster access; once the oldest file is to be replaced by the newest file, it would just get deleted from the cache, since it would have already been placed on the archive disk. 2. can I specify some folders to always be on the SSD cache? for instance my Plex database would be great to have permanently in the SSD cache. I have no need to duplicate this database, as it can always be rebuilt easily. Thanks!
  28. Earlier
  29. The 12 Gb/s throughput for the Dell controllers mentioned on the datasheet is total per controller, so if you operated on multiple drives on the same controller simultaneously I'd expect it to be divided between the drives.

  Having refreshed my memory on PCIe's fiddly details so I can hopefully get this right, there are also limits on total lanes direct to the CPU and via the mainboard; e.g. the first slot(s) may get all 16 direct to the CPU(s) while the other slots may have to share a "bus" of 4 lanes through the chipset to the CPU. So even though you might have a whole bunch of individually x4, x8, x16 or whatever speed slots, everything after the first slot(s) is going to be sharing that bus to get anywhere else (note: the actual number of direct and bus lanes varies by platform). So you'd have to compare read speeds and copy speeds from each slot and between slots, because copying from slotA\drive1 to slotA\drive2 might give a different result than slotA\drive1 to slotB\drive1, or slotB\drive1 to slotC\drive1... and then do that all over again with simultaneous transfers to see where, exactly, the physical bottlenecks are between everything.

  As far as DrivePool goes, if C was your boot drive and D was your pool drive (with x2 real-time duplication) and that pool consisted of E and F, then you could see DrivePool's duplication overhead by getting a big test file and first copying it from C to D, then from C to E, then simultaneously from C to E and C to F. If the drives that make up your pool are spread across multiple slots, then you might(?) also see a speed difference between duplication within the drives on a slot and duplication across drives on separate slots. If you do, then consider whether it's worth it to you to use nested pools to take advantage of that.

  P.S. Applications can have issues with read-striping, particularly some file hashing/verification tools, so personally I'd either leave that turned off or test extensively to be sure.
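To put rough numbers on the sharing described above: SAS 12 Gb/s works out to about 1.5 GB/s per controller link, and PCIe 3.0 delivers roughly 0.985 GB/s per lane after encoding overhead (PCIe 4.0 roughly double). A quick back-of-the-envelope helper, with the caveat that these are approximate effective rates, not guarantees:

```python
# Rough bandwidth arithmetic for shared controllers and PCIe slots.
# Per-lane figures are approximate effective one-direction rates (GB/s).
PCIE_GBPS_PER_LANE = {3.0: 0.985, 4.0: 1.969}

def pcie_slot_gbps(gen: float, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe slot (GB/s)."""
    return PCIE_GBPS_PER_LANE[gen] * lanes

def per_drive_gbps(controller_gbps: float, active_drives: int) -> float:
    """If a controller's total throughput is shared, each of N busy
    drives gets at most an equal slice of it."""
    return controller_gbps / active_drives
```

So a 12 Gb/s (~1.5 GB/s) controller split across 4 simultaneously busy drives leaves each drive at most ~0.375 GB/s, and a chipset "bus" of 4 PCIe 3.0 lanes (~3.9 GB/s) is the ceiling for everything hanging off the secondary slots combined, which is why the slot-by-slot testing above is worth doing.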
  30. If you mean Microsoft's cipher /w, it's safe in the sense that it won't harm a pool. However, it will not zero/random all of the free space in a pool that occupies multiple disks unless you run it on each of those disks directly rather than on the pool (as cipher /w operates by writing to a temporary file until the target disk is full, and DrivePool will return "full" once that file fills the first disk to which it is being written in the pool). You might wish to try something like CyLog's FillDisk instead (it writes smaller and smaller files until the target is full), though disclaimer: I have not extensively tested FillDisk with DrivePool (and in any case both programs may still leave very small amounts of data on the target).
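The "smaller and smaller files" strategy mentioned above can be sketched abstractly. This is a toy illustration of the approach, not CyLog's actual code; the `write` callback is a stand-in for real zero-filled file writes and raises OSError when the target runs out of space:

```python
def fill_with_shrinking_chunks(write, start_size=1 << 20):
    """FillDisk-style free-space filler (illustrative sketch only).

    write(n) writes n bytes of zeroes and raises OSError once the target
    is full. Each time a write fails, the chunk size is halved; the loop
    stops when even a 1-byte write fails. Returns total bytes written.
    """
    size = start_size
    total = 0
    while size >= 1:
        try:
            write(size)
            total += size
        except OSError:
            size //= 2
    return total
```

Because the chunk size halves all the way down to a single byte, this greedy loop fills every last byte of whatever space the write target exposes; any residue left on pooled drives would then come from how DrivePool places the files, not from the shrinking strategy itself.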