Search the Community
Showing results for tags 'drives'.
Found 6 results
How old are your drives?
KingfisherUK posted a question in Hardware

I recently ordered some new hardware from a supplier I've used for many years, and when looking back through my order history, I noticed that I had purchased four 2TB Samsung HD204UI drives from them 10 years ago - three of these drives are still going strong in my file server! I've had numerous drives fail on me over the years, yet these ones just keep going! This got me thinking - how old is the oldest drive you have in your PC/server? Whilst I now know these HD204UIs are 10 years old, StableBit Scanner reports them as around 7 years 300 days old, so I assume that's their "power on" time. (Before anyone asks, yes, I do have some redundancy in my system and have spare drives on standby just in case!)
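The gap between the purchase age (10 years) and Scanner's reported age (~7 years 300 days) is consistent with Scanner reading the drive's SMART power-on hours counter rather than calendar age. A minimal sketch (Python; the hour count below is a hypothetical value chosen to match the reported age) of converting power-on hours into a years/days figure:

```python
def power_on_age(hours: int) -> tuple[int, int]:
    """Convert a SMART power-on-hours count into (years, days), using 365-day years."""
    days = hours // 24
    return days // 365, days % 365

# A drive reported as ~7 years 300 days would correspond to roughly:
hours = (7 * 365 + 300) * 24   # 68,520 power-on hours
print(power_on_age(hours))     # (7, 300)
```

A drive that is powered off (or spun down, depending on firmware) part of the time will always show a power-on age well below its calendar age, which would explain the ~2-year difference the poster sees.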
VM Windows 7 drives no longer appearing in DP 'non-pooled' list
disposablereviewer posted a question in General

Before I start, let me say that I did my best due diligence and read through and utilized these three related forum posts, but none of them helped me resolve my issues:

http://community.covecube.com/index.php?/topic/92-some-drives-not-recognized-by-drivepool/
http://community.covecube.com/index.php?/topic/1061-some-external-usb30-drives-not-recognized-by-drivepool/
http://community.covecube.com/index.php?/topic/1112-usb-hdds-no-longer-showing-in-drivepool-ui/

I am running the trial and doing extensive testing in a virtual machine before spending the money on a license for the DP software. I initially had no issues getting set up and running, and it was a smooth process. However, I started playing around with removing drives, deleting and re-adding VDIs in VirtualBox, etc., and now I am having issues with DrivePool. Here is my current state after trying everything else I could think of, including the suggestions from the above threads. My attempt here was to start with a clean slate:

1. I removed all drives in DP.
2. I uninstalled DP completely, including deleting the associated AppData and ProgramData folders for DP. I also searched the entire registry and removed any references to DP or StableBit.
3. Rebooted.
4. Ran DISKPART clean on all test VDI drives to wipe them clean and leave them uninitialized.
5. Shut down.
6. In VirtualBox, I removed all the VDI drives and deleted them completely off the host machine.
7. Booted up, verified all the drives were gone, shut down.
8. In VirtualBox, I added all-new VDI drives with new names/IDs to my virtual machine, similar in size to what I did originally when things worked the first time.
9. Booted up, went to Computer Management > Storage, and initialized all drives as GPT (note that I also did all these steps with the drives as MBR and had the same results).
10. Installed the latest beta of DP (I've gone through this whole process several times and did it all with the stable version as well, with the same results).
11. Loaded up DP with Computer Management in the background (screenshot 1.jpg). Note that there are 7 "VBOX HARDDISK" options to choose from in non-pooled.
12. Clicked Add on the first "VBOX HARDDISK" in the list (screenshot 2.jpg). New drive E created, new "drivepool" G created as expected.
13. Clicked to remove E from the pool (screenshot 3.jpg).
14. After removal, drive E no longer appears in the non-pooled list of drives, as expected, and there's 1 fewer drive to select from (6 instead of 7) (screenshot 4.jpg). I cannot get drives to reappear in DP once they've been formatted and are available for use in the OS.
15. Used Computer Management to reformat E as "M" and format all the other drives "G" through "L" (screenshot 5.jpg). The only drive that appears in the non-pooled list is L, and all the others are still missing.

Additional thoughts:
- The first time through, when it worked for me, I was only using 6 drives instead of 7. It could be that L is only appearing because it's the 7th drive and therefore "new" to DP.
- The first time through, I gave the drives a letter and everything worked. I removed them all and decided to use mount links instead to hide the drives in 'My Computer'. This also worked without an issue.
- The problem may have come from the fact that I was reformatting, reinitializing, DISKPART cleaning, deleting and re-adding VDIs in VirtualBox, etc. without removing drives from DP pools first. I did receive some missing drive errors before doing the clean slate steps above.

Other things I've tried:
1. DISKPART clean and uniqueid on the drives.
2. Using mount links instead of drive letters.
3. Resetting all settings in DP.
4. Nuking all drives from VirtualBox and adding new ones.

Obviously something is funky/buggy with DP here, since the drives all work just fine within Windows after being formatted. I am doing this testing in a VM specifically to try to break it or find issues.
I don't know if it's only a virtual machine issue, but this isn't filling me with a lot of confidence for use with my real data and drives on my main box. I obviously won't be able to do most of the steps above with real data without losing everything. I have attached a log that I ran while doing some of the above (no errors, so not sure how helpful that will be). I have a drivepoolservice.dmp as per the suggestion in another thread, but apparently it's huge, so I can't upload it here... I will try to upload it using this link: https://stablebit.com/contact

EDIT: The dump file zipped with 7z is still over 50MB, and the file size limit on the contact page is 10MB. Not much I can do.

DrivePool.Service-2017-02-02.log
Recommendations for achieving Pool of Mixed NAS and Local Drives
lotsofdrives posted a question in General

Greetings, I have two NAS devices plus a few local hard drives, and I'd like to aggregate them all together in one pool (i.e., so that they show up as a single drive in Windows). From reading through the website and forum, it seems like this may be possible, but in my initial experiments with DrivePool and CloudDrive I'm having trouble achieving it. I can create a pool of the local drives in DrivePool, and I can create a drive from a shared folder using CloudDrive, but I'm not seeing that CloudDrive location show up in DrivePool.

From my initial looking, I think I'd prefer to just use DrivePool if possible, as it seems to have the sorts of features I'm interested in (e.g., I know that I want to use the Ordered File Placement plugin). Ideally, I'd like to be able to just enter a UNC path for a shared folder on each NAS in order to add it as a pooled data location in DrivePool. I would be fine with mapping the NAS drives as well, though that doesn't seem to do anything. I'm trying out DrivePool 18.104.22.1681 x64 and CloudDrive 22.214.171.1244 x64.

The background on all of this is that I have CrashPlan installed on this computer, and I want to create a pooled location to point CrashPlan to for storing data that will be backed up TO this computer from another computer (using CrashPlan computer-to-computer backup). CrashPlan only supports selecting a single location for this scenario, but since I have several old NASes plus some local hard drives, I'd like to pool them all into one drive to use as my CrashPlan repository. For those that see the value in multiple offsite backups, you'll appreciate knowing that I also back up to CrashPlan's servers as well. Thanks in advance for any help or advice on all this!
How can I optimize DP to minimize drive overheats?
John K Fisher posted a question in General

This isn't really a DrivePool question directly, but my computer tends to have drive overheats, which isn't good. Obviously the real fix is hardware (fans, etc.), but in the interim, would it be better to have DP fill my biggest drive, then the next, then the next, and so on, instead of spreading files across all the drives? That would let the smaller/older drives at the end of the list sit unused and idle, not generating heat in the case. Or is it best to spread files out evenly among all the drives, even though the case seems to have trouble keeping up? If that makes sense. Which it may not. Basically, acknowledging that this is a hardware issue and not a DP issue, how can I optimize DP to minimize drive overheats?
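The "fill one drive, then the next" behavior the poster describes is what DrivePool's Ordered File Placement plugin provides. Conceptually, each new file goes to the first drive in a preferred order that still has room, so drives later in the list stay idle and cool. A rough sketch of that placement logic (not DrivePool's actual code; drive names and the reserve parameter are illustrative):

```python
def pick_drive(drives, file_size, reserve=0):
    """Return the first drive in priority order with enough free space.

    `drives` is an ordered list of (name, free_bytes); `reserve` is space
    to keep free on each drive. Returns None if the file fits nowhere.
    """
    for name, free in drives:
        if free - reserve >= file_size:
            return name
    return None

# First drive has only ~500 GB free, so a 600 GB file spills to the next one.
drives = [("big_4tb", 500_000_000_000), ("old_2tb", 1_900_000_000_000)]
print(pick_drive(drives, 600_000_000_000))  # old_2tb
```

The trade-off versus even balancing: ordered placement concentrates both heat and wear on the front of the list, and read throughput for new files is limited to one drive, so it helps a hot case at the cost of working the first drive harder.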
How I replaced my 8-bay Cavalry enclosure on the cheap

I had maxed out my nearly 4-year-old 8-bay Cavalry enclosure (EN-CADA8B-SD) with 2TB drives and was beginning to have issues somewhere in the storage chain (eSATA port multiplier card, drivers, eSATA cables, Cavalry enclosure, etc.). After troubleshooting off and on for a couple of weeks or longer, I was ready to replace it with something less problematic that wasn't limited to 2TB drives. I didn't, however, want to spend half a grand on something that came with RAID functionality I knew I'd never use. For the time being, I wanted to continue using my existing drives and just replace the enclosure and the connection to it.

The enclosure was hooked to a Dell Inspiron 3847 (4th Gen Core i7, 16GB, running Win 8.1). Duties for the machine include Plex, Subsonic, DNS updating, CrashPlan backups, SyncBack backups from the web, and of course, storage. The machine sits out of sight in the basement, so going with something less pleasing to the eye was fine.

I ended up going with a pair of Rosewill 4-drive cages and a couple of IO Crest SATA cards for connecting them to the Dell - which only has 2 PCIe x1 slots and a single x16 slot. At $45 for each of the Rosewills and $34 each for the IO Crests, I was almost ready to go. The Rosewill cages came with SATA cables long enough to reach from the inside of the case to the cages sitting right behind the Dell. Power to the cages was supplied by an extra power supply I had lying around. A quick short from the green wire to any black one will make the power supply think it's hooked to a motherboard and power on; I used a paper clip to accomplish that. If you go this route, keep in mind you'll need about 10W per drive to be on the safe side for power requirements.

If you're interested in doing something like this, what you get is four drives' worth of connectivity for about $80. There's no port multiplier functionality going on; one drive connects to one port on a card.
If you have extra SATA ports, you can skip the card purchase. If you have extra Molex power connectors for the Rosewill cages, you can go a little less ghetto than I did and skip the power supply lying on the desk. You are limited in performance to what a PCIe x1 slot can handle (about 240MB/sec, if I remember correctly), which seems fine, but remember, you're running four drives off of that, and 240MB/sec is theoretical; real-world performance will be lower.

All in all, I definitely consider it a win. For not much out of pocket, I replaced the Cavalry, gained the ability to use larger drives, and also the ability to buy another card and cage for a total of 12 drives. Not bad for an initial outlay of about $160. Hope this helps someone out there looking to do something similar.
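The back-of-the-envelope numbers in the write-up (10W per drive on the safe side, ~240MB/sec for a PCIe x1 link shared by four drives) work out as follows; the figures below are just the post's own estimates, not measured values:

```python
drives_per_card = 4
watts_per_drive = 10      # safe-side estimate from the post
pcie_x1_mb_s = 240        # author's recollection of PCIe x1 throughput

# Power budget per 4-drive cage, and worst-case per-drive bandwidth
# if all four drives transfer at once through the shared x1 link.
print(drives_per_card * watts_per_drive)   # 40 (watts per cage)
print(pcie_x1_mb_s / drives_per_card)      # 60.0 (MB/s per drive)
```

In practice, drives in a media-storage role rarely stream simultaneously at full speed, so the shared link is less of a bottleneck than the worst-case 60MB/s figure suggests.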
One or More Disks are Missing

I've gotten this message a few times, and am just curious: when it happens, what should I do? I'm still testing DrivePool out, and I love the concept, but I just want to make sure things work the way I expect before purchasing it and "moving to production" with it. So, ignoring for the moment "why" it happens: when it does, should I reboot, or disconnect/reconnect the drive, or what?

Also, when it says "ST2000DM 001-1CH164 SCSI Disk Device (S/N: --------) 34°C - 1.82 TB (Pool: DrivePool (R:\))" is missing, what's the recommended way of determining which physical disk that is, given that I have several in a drive bay, two or three externals, and an internal drive or two? Thanks for your time
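Since the "missing disk" message includes the drive's serial number, one low-tech approach is to keep a small inventory mapping each serial to its physical location (bay, enclosure, etc.) and look the reported serial up. A hypothetical sketch (the serials and locations below are invented for illustration):

```python
# Hypothetical inventory: serial number -> physical location.
# On Windows, serials can be listed to build this table, e.g. via
# "wmic diskdrive get model,serialnumber" or PowerShell's Get-Disk.
INVENTORY = {
    "W1E2ABCD": "drive bay, slot 1",
    "W1E2EFGH": "external USB enclosure",
}

def locate(serial: str) -> str:
    """Map a serial from a 'disk missing' message to a physical location."""
    return INVENTORY.get(serial, "unknown - check drive labels or Device Manager")

print(locate("W1E2ABCD"))  # drive bay, slot 1
```

Labeling each drive caddy with its serial when it is first installed makes this lookup trivial later, when the drive is the one that has stopped responding.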