  • 0

How many HDDs can an EVGA 1600W T2 support?


billis777

Question

It comes with cables that connect up to 14 HDDs. Can I connect more with splitters?

What is the safest number of HDDs per 6-pin SATA PSU port?

Also, what cables do you recommend getting to give me access to more HDDs?

EVGA only has 6-pin to 4 SATA port cables (three of them) and scares people away from using different brands.

To connect more than 14 HDDs I will need to use 6-pin to 5 (or more) SATA port cables.

Can you recommend some good cables for my PSU so it can support as many HDDs as possible?


Recommended Posts

  • 1

I guess it would also help if you could tell us what it is you want to run off of the PSU. The guidance by PocketDemon seems sensible but it does depend on the use case.

As a side note, I *think* that if you were running many SSDs, then the 5V rail might become an issue as SSDs, AFAIK, run fully off of the 5V rail, as opposed to HDDs which run off of the 12V rail for the motor and the 5V rail for the electronics.

Another thing: do you already have the 8TB Archive HDDs? If not, then why would you choose those? I have some experience with these drives, but when I bought them (early 2015) the price difference between those and regular HDDs was big. Nowadays it is small, and it is more likely that you can find cheaper (and/or larger) alternatives. Aside from price, there is no reason I can think of to choose those.


  • 1

Yeah, so I agree that it is very likely that a 1600W PSU will be way overkill. A good indication of what OP wants to run off of the PSU would help in providing a bit more specific advice.

@billis777:
1. Yes, you can connect more HDDs with splitters.
2. I would advise dividing the number of HDDs evenly over the separate cables coming from the PSU if you connect more than 16 HDDs, but that is just a just-to-be-sure thing. AFAIK, 5V is all one rail, whereas 12V is sometimes divided between various rails (not in the EVGA's case it seems, but it is with my be quiet! Straight Power 11 750W).
3. Any SATA splitter will do, I guess. Something like this: https://www.startech.com/Cables/Computer-Power/Internal/4-SATA-Power-Y-Splitter-Cable-Adapter~PYO4SATA - but there are cheaper options.
4. Don't get scared. But I would consider using the supplied cables and adding splitters instead of trying to find more expensive 6-pin cables with more than 4 SATA power connectors.

 

 


  • 1

I'm late to the convo but from my personal experience...

I've got a 1600W Titanium EVGA PSU that powers an ROG Zenith Extreme mobo with a 2950X OC'd to 4.1GHz. It also powers 2x Titan Xp Nvidia cards in SLI.

Attached, I have 2 SSDs, 3 NVMe drives, 6 Blu-ray drives, and various fans.

Three HBA cards (an LSI 9201-16i and two Intel expanders).

I also have 37 Green drives from 3TB-12TB and 3 SAS drives.

My PSU is able to support all that.

Now...that being said, I can't plug the vacuum into the wall in that loft because that'll trip the switch since my system is likely pulling max from the wall.


  • 0

It depends on whether you can add a spin-up delay or not - as the big power draw comes from the drives' initial spin-up on the 12V line.

Now, whilst every drive will have its own power draw (& there's no info on what you're looking at using), it's going to be easiest to use https://www.45drives.com/wiki/index.php?title=Start_up_Power_Draw as an example, since they've used a mix of consumer drives & high-power-draw enterprise drives in their testing.

 

So, with 15 consumer drives spinning up together (no spin up delay), this tested to be in the order of 425W - so, with the 12V line on the PSU being ~1600W, this would limit it to a max of around 55-56 drives; assuming that there's no other draw from the PSU of course.

 

If, on the other hand, you can add a spin up delay between the drives (both on boot & after sleep) & were using high power draw enterprise drives - which, ttbomk, needs the drives, RAID/HBA card(s), expander(s) & backplane(s) (as appropriate to your build) to support it - you're instead looking at the 5V line being the limiting factor...

So, with a 120W max on the 5V line from the PSU - & their 15 drives drawing <=8.5W on the 5V line with random reads from all drives - you're looking at in the order of 210 drives being randomly read from simultaneously; again assuming that there's no other power draw.
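Just to put that arithmetic into a quick Python sketch - purely illustrative, using the example figures above & assuming the ~1600W 12V / 120W 5V rails with no other load on them:

# No spin-up delay: 15 consumer drives spinning up together drew ~425W on 12V.
spinup_w_per_drive = 425 / 15                  # ~28.3W each at spin-up
print(int(1600 // spinup_w_per_drive))         # -> 56 drives (12V limited)

# Staggered spin-up: the 15 enterprise drives drew <=8.5W total on 5V (random reads).
read_w_per_drive = 8.5 / 15                    # ~0.57W each on 5V
print(int(120 // read_w_per_drive))            # -> 211 drives (5V limited)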

 

Okay, so these are only example figures - so obviously you'd need to check the 5 & 12V max draws of the drives you're going to be using - but unless you've either got some uber extreme drive requirements or are also powering other components that have a high power draw then the PSU you're looking at would be complete overkill & a total waste of money.

Well, for something like a 24 bay media server (so reasonably low total consumption from the other components - & using a RAID/HBA & expander combo with spin up delay; & usually a case with a backplane), the general recommendation is for a decent 450-550W PSU to give plenty of headroom... …&, tbh, choosing a 500-550W one is generally because of there being very few decent 450W ones.

 

As to splitters, all I can suggest is to buy something basic from somewhere fairly respectable, since I have had a couple of cheap eBay ones burn out & kill things over the years; but there's no need for anything bespoke - the normal way to increase the count would be to use a handful of molex-to-molex &/or molex-to-SATA splitters...

...however I do wonder what case you're going to be using with stupid numbers of drive bays that doesn't have a backplane - where you'd be powering the drives via molex instead?


  • 0

Oh, & it might be helpful for me to explain how to calculate the max wattage - since most HDD specs give the max values in amps.

So, taking the WD Red drives as an example using the info from the data sheet - https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-red-hdd/data-sheet-western-digital-wd-red-hdd-2879-800002.pdf

This gives a max of 1.85A on the 12V line for the 8TB model - so, multiplying the two together gives a 22.2W max per drive - & with a 1600W max draw from the PSU on the 12V line then that'd be ~72 drives with no spin up delay (& no other 12V draw)...

...with the 10TB having a max of 1.79A - this gives a 21.48W max - & so ~74 drives...

…& the others being 1.75 - gives a 21W max - & so ~76 drives.
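If it helps, here's the same working as a small Python sketch (illustrative only - swap in the max 12V amps from the data sheet of whatever drives you're actually using):

# Watts from amps (P = I x V), then drives per rail; assumes ~1600W available
# on the 12V line, no spin-up delay & no other 12V load.
def max_drives(amps_12v, volts=12.0, rail_w=1600.0):
    return int(rail_w // (amps_12v * volts))

for model, amps in (("WD Red 8TB", 1.85), ("WD Red 10TB", 1.79), ("other capacities", 1.75)):
    print(model, round(amps * 12, 2), "W ->", max_drives(amps), "drives")
# -> 22.2W -> 72 drives, 21.48W -> 74 drives, 21.0W -> 76 drives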


  • 0
42 minutes ago, Umfriend said:

One thing to note is that a PSU may be able to deliver 1600W, but that typically does not mean you can get 1600W at any voltage or rail. You need to look at the specs of the PSU combined with the HDDs, as PocketDemon did.

How many HDDs are you looking to connect?

Whilst that's a fair point overall - I had believed that I'd covered it in context by talking about the relative limitations of the specific rails within both spin up delay & non spin up delay environments - where they logically had to be PSU specific.

So what I'd done first was to look up the specs of the PSU - https://www.evga.com/products/product.aspx?pn=220-T2-1600-X1 - which states that it can provide 1599.6W on the 12V rail (there was no point in not just calling it 1600W) & 120W on the 5V.

Likewise, just to work things the other way using the stated amperage on the 12V line then, using the 10TB WD Red example - 1.79A x 74 drives = 132.46A... …& it's rated for 133.3A - so it works either way around.
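Or, as a couple of lines of Python working purely in amps:

# Cross-check in amps: 74 of the 10TB WD Reds vs the 133.3A rating on the 12V rail.
total_amps = 74 * 1.79
print(round(total_amps, 2), "A of 133.3A ->", "fits" if total_amps <= 133.3 else "doesn't fit")
# -> 132.46 A of 133.3A -> fits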

 


  • 0
2 minutes ago, Umfriend said:

Yes, sorry, I did not actually look at the specs.

No problem at all... But, as I hope I got across, you're completely correct that the OP should check out the exact specs of any PSU that they're considering - as my example of "a decent 450-550W PSU" was a little vague...

...so imho at least then you've materially added to the thread. :)


  • 0

Thanks for the valuable info PocketDemon. My concern is that the 5V rail of the EVGA 1600W is at 24A and most hard drives use around 2A - would that limit me to 12 HDDs then?

On the EVGA PSU, HDDs can only be plugged into the 6-pin ports, which are 5V.

The HDDs I plan on using from now on are the Seagate 8TB Archive drives.

I'm also thinking of getting the LSI 9305-24i HBA card - what size of fan does it support, do you know?

 


  • 0
1 hour ago, billis777 said:

Thanks for the valuable info PocketDemon. My concern is that the 5V rail of the EVGA 1600W is at 24A and most hard drives use around 2A - would that limit me to 12 HDDs then?

On the EVGA PSU, HDDs can only be plugged into the 6-pin ports, which are 5V.

The HDDs I plan on using from now on are the Seagate 8TB Archive drives.

I'm also thinking of getting the LSI 9305-24i HBA card - what size of fan does it support, do you know?

 

Running through -

1. I've no idea where you're getting a typical HDD current draw of 2A on the 5V rail from, as even the couple of 2.5" 10K & 3.5" 15K drives I have here are rated as needing less than 1A on the 5V line.

Just thinking aloud, but perhaps you're looking at the wattage instead - as, going back to the WD Reds as an example, the 10TB is rated at 400mA (ie 0.4A) on the 5V line - & multiplying that by 5V would give 2W.

 

2. So you're meaning the 5x 6-pin ports on the PSU... ...however the cables that connect to them terminate in SATA &/or MOLEX connectors.

Now the standard cable list for the PSU includes -

SATA (550mm+100mm+100mm+100mm) x3 = 12 SATA drives

SATA (550mm+100mm) / MOLEX (+100mm+100mm) x1 = 2 SATA drives + 2 spare MOLEX

MOLEX (550mm+100mm+100mm) x1 = 3 MOLEX

- which are the 5 cables that you can connect to the 5x 6-pin ports on the PSU of course.

 

So, that obviously gives you 14 SATA drive power connectors to start with...

...& you can then add more by using either - MOLEX-to-SATA splitters...

...or MOLEX-to-MOLEX splitters AND MOLEX-to-SATA splitters coming off them...

...or by using the MOLEX connector (with or without splitters) to connect to a backplane.

 

3. Okay, so we look at the data sheet for the 8TB Seagate Archive drives - https://www.seagate.com/files/www-content/product-content/hdd-fam/seagate-archive-hdd/en-us/docs/100818689b.pdf

...& at figure 2.7.1 it lists the 5V operating amps at 0.33 - which, given the 24A max, would give a limit on the 5V line of 72 drives all spun up simultaneously; assuming no other 5V draw...

...so again I've no idea where you're getting 2A from.
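Or, in one line of Python with those Archive figures (again assuming no other 5V draw):

# 0.33A operating per 8TB Archive drive vs the PSU's 24A 5V rail
print(int(24 // 0.33), "drives spun up simultaneously")   # -> 72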

 

4. That's one mighty large heatsink on the 9305-24i!!!

Right, I can't immediately see anything that will tell me the complete layout under the h/s...

...however, since the official Broadcom page (https://www.broadcom.com/products/storage/host-bus-adapters/sas-9305-24i) explicitly describes it as being a single IOC, & a quick Google image search shows nothing unusual on the back of the PCB, it should only be the area above the main chip that needs a fan.

(ie the area within the 4 retaining pins)

So I personally have used Noctua NF-A4x10 FLX fans (with some self-tapping screws of the correct size for the respective heatsinks of course) for years - & they're uber reliable...

...but there are variants of the 40mm Noctua fan which might better suit your needs https://noctua.at/en/products/fan

Tbh though, unless you've got the facility to add a thermal sensor then that'd write off all of the PWM fans...

...there's no real value in having a 5V one...

…& the 10mm one gives more clearance than a 20mm one.

 

Otherwise, as before, imho the PSU you're looking at is complete overkill unless you've got some completely separate major power usage (GFX cards in SLI or something) that you've not listed.


  • 0
3 hours ago, Umfriend said:

I guess it would also help if you could tell us what it is you want to run off of the PSU. The guidance by PocketDemon seems sensible but it does depend on the use case.

As a side note, I *think* that if you were running many SSDs, then the 5V rail might become an issue as SSDs, AFAIK, run fully off of the 5V rail, as opposed to HDDs which run off of the 12V rail for the motor and the 5V rail for the electronics.

Another thing: do you already have the 8TB Archive HDDs? If not, then why would you choose those? I have some experience with these drives, but when I bought them (early 2015) the price difference between those and regular HDDs was big. Nowadays it is small, and it is more likely that you can find cheaper (and/or larger) alternatives. Aside from price, there is no reason I can think of to choose those.

Whilst you're certainly correct that consumer SSDs don't use the 12V line & have a max 5V power draw that's higher than that for HDDs, just to give a rough idea...

...then you're looking at a peak of around ~5W on the 5V line - which, dividing Watts by Volts gives a max of ~1A.

So yeah, 2-3x higher than a typical 3.5" HDD draw, but that'd still be looking at a safe 22-23 SATA SSDs on that PSU - assuming no other 5V power draw.
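To put rough numbers on that (only a sketch - you'd want to check the spec sheets of the actual SSDs):

# Peak consumer SSD draw of ~5W on 5V -> I = P / V = ~1A each, vs the 24A 5V rail.
amps_per_ssd = 5.0 / 5.0
print(24 / amps_per_ssd)   # -> 24.0 at the absolute limit, so a safe 22-23 with a bit of headroom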

 

[Edit - just for clarity, whilst, tbh, I don't imagine you'd have many people trying to run that many consumer SSDs together, the amount of time that you'd actually be running at ~5W/~1A is minimal - & clearly far less time than the same usage on a HDD...

...however, whilst it's very unlikely that you'd ever have all of the SSDs hitting that point at the same time - unless, I guess, you had them all in a single R5 array or something equally weird - it's reasonable to calculate for the worst case scenario... …& this applies equally when looking at all of the components that are being powered.]

 

For the sake of completion though, the one other time that I'm aware of that there could be a significantly higher 5V draw is with consumer 2.5" drives without a spin up delay.

So, whilst the enterprise 2.5" drives use a mix of 5V & 12V, the consumer 2.5" drives also only use 5V; & taking the 5TB Barracuda as an example, the datasheet gives a 1.2A draw on start up - https://www.seagate.com/www-content/product-content/barracuda-fam/barracuda-new/files/barracuda-2-5-final-ds1907-1-1609gb.pdf

...which, using the OP's PSU & no start up delay then would limit to an absolute max of 20 drives; assuming no other 5V power draw.

 

Now, the key thing to note about this datasheet is that, whilst that figure is in Amps, it then gives the running power usage in Watts...

…& 2.1W/5V = 0.42A...

...so, assuming you could stagger the initial spin up of the drives, this takes us back to a stupid number of drives - 57 to be precise - assuming no other 5V power draw.
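Side by side, as a quick Python sketch (just the Barracuda datasheet figures against the 24A 5V rail, no other 5V draw):

# 2.5" 5TB Barracuda: 1.2A at spin-up vs 2.1W (= 0.42A) when running, 5V only.
print(int(24 // 1.2), "drives with no spin-up delay")           # -> 20
print(int(24 // (2.1 / 5)), "drives with staggered spin-up")    # -> 57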

Okay, why someone would want to use a load of consumer 2.5" drives like this would be a bit of a puzzle - but it's just about trying to make the thing as complete as possible.

 

Oh, & obviously if someone were using ancient drives of whatever sort then they'd also need to check the current on the respective lines...

...but in that everything's just example figures then if you're looking to limit the PSU to being something commensurate to your requirements then you should be checking the specs of what you're buying anyway.

 

As to the 8TB Archives, they wouldn't be my choice either.

Whilst I've got an array of drives, the best value in the UK has been buying 10TB WD Elements external drives from Amazon & shucking...

...though obviously this voids the warranty & I know that the latest stock they have in (since they dropped below £200) needed the 3.3V pin mod before they'd be detected by the backplane in my 24 bay SAS DAS - but 33 metres of Kapton tape was only £1.54 delivered.

(due to my usage, I use external SAS to connect between the computer with the raid card in & the server case with the expander in - hence it's a SAS DAS)


  • 0
6 minutes ago, Umfriend said:

Yeah, so I agree that it is very likely that a 1600W PSU will be way overkill. A good indication of what OP wants to run off of the PSU would help in providing a bit more specific advice.

@billis777:
1. Yes, you can connect more HDDs with splitters.
2. I would advise dividing the number of HDDs evenly over the separate cables coming from the PSU if you connect more than 16 HDDs, but that is just a just-to-be-sure thing. AFAIK, 5V is all one rail, whereas 12V is sometimes divided between various rails (not in the EVGA's case it seems, but it is with my be quiet! Straight Power 11 750W).
3. Any SATA splitter will do, I guess. Something like this: https://www.startech.com/Cables/Computer-Power/Internal/4-SATA-Power-Y-Splitter-Cable-Adapter~PYO4SATA - but there are cheaper options.
4. Don't get scared. But I would consider using the supplied cables and adding splitters instead of trying to find more expensive 6-pin cables with more than 4 SATA power connectors.

 

 

All perfectly sensible imho - & I'd agree that Startech would be 'a' reputable supplier for splitters.

Yeah, I guess my approach was to show how the OP could calculate things (& the importance of working to either Watts or Amps) - but certainly, if they get back with what they're proposing as a complete build, it'd be feasible to look at finding a more commensurate PSU for them... ...though we'd also need to know where they're located - as comparative pricing can vary by region.

Well, it'd be better to have a bit of extra cash towards something else in the build - or beer or whatever.

 

The one other thing I've thought of is that, having decided on a PSU, it might be worthwhile contacting the manufacturer to see about getting alternative cables to better meet what you need & reduce the number of splitters.

Now, whilst YMMV of course, I've had Corsair in the EU (at least historically, some of their PSUs were built by decent suppliers - I've not looked for a while though) post extras out to me a couple of times in the past for nothing.


  • 0

Thank you so much for all the info PocketDemon.

On the LSI 9305-24i card, I measured the distance where the fan goes diagonally and it was a little less than 2 inches.

A 40mm fan doesn't look like it would fit there. I contacted Broadcom but still haven't heard back from them. I'm thinking of buying a 40mm fan and trying it out, but just by looking at the heatsink, the 4 holes where the fan goes don't make a perfect square like most fans are, but a rectangle. Are there any rectangular fans out there, or are the holes like that so we can mount two 40mm fans side by side using only 2 screws on each fan?

Also, I read somewhere that 8-pin PSU VGA-to-SATA adapters exist that convert the 12V to 5V so we can power HDDs there too, but I couldn't find such cables by googling. Are you familiar with such cables?


  • 0

With the fan, you're not looking at mounting it using the 4 non-return pins that hold the h/s on - but instead using self-tapping screws, of a size that will screw between the fins (cutting a thread into them) on the h/s...

…& it's completely irrelevant if the fan is square to the pins or at a 45 degree angle or somewhere in between - solely that it's centred roughly in the middle of the IOC.

Obviously this modifies the card - however, from personal experience of re-selling a couple of older cards previously, buyers have seen a fan already being fitted as a major plus... ...as it's a far more elegant solution than trying to mount a large fan angled toward the h/s - & the PCIe slot coolers really aren't man enough for the job in my experience.

Now, having a quick check in my previous eBay purchases, the last ones I bought (which were for mounting 2 of the 40x10mm fans on, respectively, a 9271-8i & a Chenbro expander) were - Countersunk Cross Head Self-Tapping Screw M3X15mm...

...but I can't 100% promise that these will be appropriate since I don't know the gap size between the fins... But you can measure this & find out - & obviously if you bought a 40x20mm fan you'd want them to be 25mm long, not 15mm.

[Edit]

& they need to be countersunk as that's what those fans need.

[End Edit]

 

Otherwise, it wouldn't surprise me if adapters like that existed - well, a 2-second Google search shows there are certainly the components to wire one up yourself...

...however, unless you are now looking at buying a really underpowered PSU where there was insufficient current on the 5V line (for the HDDs) then I honestly can't see the point.

Well, back before modular PSUs, all we had was the world of splitters to connect lots of HDDs - & whilst there'd be no sense in trying to run all 24 drives off of the same cable for no good reason, sharing them reasonably between what's there is fine.

[Edit]

Yeah, unless something's materially changed in what you're proposing, my honest opinion is that what you're asking here would be finding a solution to something that simply isn't a problem.


  • 0

You saved me from a lot of frustration with the info you gave me about the LSI card.

I plan to build an open-air frame and I'm thinking of custom mounting a fan next to it; I have a lot of options for mounting fans on the frame I'm planning to build using 2020 T-slot extrusions.

I was also wondering, since I will be using the LSI card on an open frame, is a cooling fan really required? Will this card get really hot when copying files to a single HDD?

Most of my HDDs are full and will just be sitting there, so only one HDD right now will be doing work. Do those HBA cards only get really hot when operating multiple HDDs at the same time, or do they always get really hot regardless of what you do?

Also, what software do you recommend for monitoring temps?

I'm also thinking of getting a PWM fan hub - which one do you recommend?

I see some on Amazon listed at 12V - do they still get plugged into the PSU's 6-pin 5V SATA power ports?


  • 0

Now, the manual for the HBA you were talking about states "Minimum airflow: 200 linear feet per minute at 55 °C inlet temperature"... ...which is the same as my RAID card.

Beyond that, all I can say is that, even with water cooling the CPU & GPU (& an external rad) - so most of the heat's already taken out of the case & the case fans are primarily cooling the mobo, memory, etc - I've had issues without direct cooling with all of my previous LSI RAID cards; both in terms of drives dropping out & BSODs, without there being exceptional disk usage.

(it's not that I'm running huge R50 arrays or something - primarily that I simply prefer using a RAID card, vs a HBA, in terms of the cache & BBU options)

Similarly, the Chenbro expander I have - which, other than the fans, drives, cables, MOLEX-to-PCIE (to power the card) & PSU, is the only thing in the server case - came with a fan attached which failed; & again I had issues... ...so it's now got one of the Noctua fans on instead.

So, whilst you 'could' try it without & see, personally I would always stick a fan on something like this.

 

I couldn't advise you on monitoring for PWM as that's not how I do things - since I'd far rather have the system be stable irrespective of whether or not I was in a particular OS.

Well, not that dissimilarly, whilst the rad fans are PWM, for me it's about creating a temp curve within the bios for the CPU (& hence, by default, the GPU), & so it's entirely OS independent.

So, whilst I couldn't recommend anything specific, 'if' I were looking for a fan controller then I'd want something which I could connect a thermal sensor to (& attach that to the h/s above the IOC) AND I could set the temp limit solely with the controller.


  • 0

Yeah, assuming that they're the right diameter for the fins (the gaps have to be a little smaller than the diameter of the screws), then it's just a screwdriver job.

Naturally though, a bit like fitting a CPU fan/block, you'd want to work around the screws rather than tightening each one to the fullest extent individually - as this significantly helps in terms of the overall positioning of all of the screws & the fan.

Oh, & it's a 'rest the card on something on a table' job - as it's easy to get the screws off vertical unless you can push straight down.

 


  • 0

Sounds good. One more thing - my motherboard (P8Z77-V LK) has an option in the BIOS to make the HDDs hot-swappable. What are the advantages, and why is it not on by default?

Is hot swap better left off in some cases?

The HDDs I plan on using from now on are the 8TB Archive drives by Seagate. If I have them set as hot swap in the BIOS, can I unplug them while they're spinning without risking damage?


  • 0

The hot swap option on the motherboard would only apply to drives connected to the onboard Intel controller - not the HBA card.

Now, ttbomk (as I don't see any reason for using it), the Intel hot swap option works the same as 'safe removal' with USB - so it'll treat the drive as being an external drive & you then have the option of ejecting it from the taskbar... ...if you randomly pull it out you'd stand a chance of data loss, as there's obviously not a BBU-backed cache.

 

In terms of DP though, in my experience, having missing drives in a pool prevents writes to that pool until you add a replacement drive, assign it to the pool & remove the missing drive as being part of the pool... ...or you switch the machine off, reinsert the original & turn it on again.

&, whilst I haven't tried this with hot swapping, that's the way it's worked for me with having a drive from a pool in a USB dock, when (without duplication) swapping one drive for a larger capacity one...

...as, whilst DP can recognise that a drive's suddenly gone missing, it doesn't appear to be able to tell that it's been added again - with that check seemingly only happening at boot.

 

Anyway, irrespective of whether you were looking at hot swapping on the onboard controller or the HBA...

...if you had, say, a couple of drives which you were looking at spasmodically backing stuff onto & then storing elsewhere, I would personally look at an external USB3 dock instead.

 


  • 0

Thanks for the valuable info!

I recently had an HDD failure and had to unplug the HDD and plug it into an external hub on a laptop to try to recover the data (as it would take days), and I was still able to use DrivePool no problem.

By the way, if an HDD has duplicate data (the same file) as another HDD in the drive pool, does DP detect and delete the duplicate file automatically? If not, is there an option to make it do so automatically?


  • 0
On 3/28/2019 at 3:48 PM, billis777 said:

By the way, if an HDD has duplicate data (the same file) as another HDD in the drive pool, does DP detect and delete the duplicate file automatically? If not, is there an option to make it do so automatically?

I'm not quite sure what you're meaning here.

Well, if for example, you had 2x duplication enabled for a folder (so it was stored twice on 2 drives) & then disabled it, it would delete one copy of each file & sub-folder.

Whereas if, instead, you're meaning that you chose to copy (not move) a file from one folder to a 2nd folder on the pool - then it would exist in 2 folders x the duplication for those folders... ...& you'd have to manually delete it from one or other folder if you only wanted 1 copy x the duplication.


  • 0

Thanks for the info PocketDemon. Can I use CCleaner's duplicate file finder to safely find and delete duplicate files I made by mistake from DrivePool?

Another concern I have: if I have more hard drives than drive letters, will that be a problem with Windows 10 or DrivePool?

 


  • 0
6 hours ago, billis777 said:

Thanks for the info PocketDemon. Can I use CCleaner's duplicate file finder to safely find and delete duplicate files I made by mistake from DrivePool?

Another concern I have: if I have more hard drives than drive letters, will that be a problem with Windows 10 or DrivePool?

 

Taking these the other way around as I think it possibly makes better sense -

Whilst I suppose you could assign every drive a drive letter & then run out of them... ...what I do is to not assign drive letters to any of the drives in a pool, but instead only to each pool.

Yeah, this is one of the reasons for choosing to use DP or RAID (inc JBOD & R0 which aren't technically RAID of course) or Storage Spaces or...

...& I can personally see no value whatsoever in being able to randomly see the contents of each individual drive in a pool constantly... …& it doesn't affect any other s/w that I'm using (ie for defragging or virus scans or whatever)… ...though I do still name all of the drives (in Computer Management &, where appropriate, in the RAID & expander cards s/w) for organisational & maintenance purposes.

 

Well, providing you tell it to only search the drive letter assigned to the pool (as opposed to all of the drives within the pool together, as this would clearly cause issues if you're using DP's duplication), I can see no reason whatsoever why you couldn't use a 3rd party app to search for duplicates - though, quite obviously, you need to manually verify what you're deleting.
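& just to illustrate what any duplicate finder is basically doing under the hood, here's a rough Python sketch (not a recommendation over CCleaner - & the P:\ pool drive letter is purely a made-up example):

import hashlib, os
from collections import defaultdict

POOL = "P:\\"                 # hypothetical drive letter assigned to the pool
seen = defaultdict(list)      # hash -> list of paths with identical contents

for root, _dirs, files in os.walk(POOL):
    for name in files:
        path = os.path.join(root, name)
        h = hashlib.sha256()
        try:
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)          # hash in 1MB chunks
        except OSError:
            continue                         # skip unreadable files
        seen[h.hexdigest()].append(path)

for paths in seen.values():
    if len(paths) > 1:
        print("Duplicates:", *paths)         # review & delete manually

But again - whatever tool you use, check what it's flagging before deleting anything.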
