Everything posted by Umfriend

  1. You could create a Pool from disks B, C and D, no duplication; let's call this Pool Q. Then you can create a Pool P that consists of disk A and Pool Q, with x2 duplication. That ensures duplication with one copy always on disk A. Personally, I would create a Pool R with just disk A in it, so that it is easy to add another speedy disk later, and build Pool P from Pool R and Pool Q instead. I would want each of Pool Q and R to have enough free space that the failure of one disk can easily be corrected by DP re-duplicating the affected files. I think StableBit has a balancer plug-in that prioritises certain disks in a Pool; that might help with allocation between disks B, C and D.
  2. Disk not seen

     Screenshots might help. 31% overhead seems a bit much, as in impossible. Perhaps the counter does not include certain hidden directories; I have had the System Volume Information folder use loads of data.
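     If you want to check where the space actually goes, a quick sketch along these lines can help (run it elevated so it can see into protected folders like System Volume Information; the drive letter is just an example):

         # Sum actual on-disk usage per top-level folder of one pool drive,
         # including hidden folders, to compare against what Explorer reports.
         import os

         ROOT = "D:\\"  # example: one of the pooled drives

         def folder_size(path):
             total = 0
             for dirpath, dirnames, filenames in os.walk(path, onerror=lambda e: None):
                 for name in filenames:
                     try:
                         total += os.path.getsize(os.path.join(dirpath, name))
                     except OSError:
                         pass  # skip files we cannot stat (in use, no permission, ...)
             return total

         for entry in os.scandir(ROOT):
             if entry.is_dir(follow_symlinks=False):
                 print(f"{entry.name:40s} {folder_size(entry.path) / 1024**3:10.2f} GB")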
  3. Chris is no fan of Highpoint; you can search for Highpoint on this forum and find some of his posts on this. But perhaps the 750 is different. The LSI card won't increase the performance of the Archive (or any other) HDD. Perhaps you were benefiting from the cache for a bit, or maybe it was writing to more than one HDD at the same time, or some such. How did you measure this?
  4. There is a difference between using VSS and supporting it. If you use VSS, then VSS will do all sorts of stuff for you and in turn use, for instance, NTFS. NTFS must therefore be able to support the functions that VSS uses. It is not that easy to know what functions VSS uses, and that is what you need to know in order to support it. As a, probably faulty, analogy: say you want to program something for your video card to do. You need to make calls to the, say, nVidia driver. These will be documented, as nVidia wants people to program for their drivers. Now suppose that driver makes calls to the OS and you want to write your own OS on which that nVidia driver runs. Then you need to know what that driver needs from the OS and facilitate that. However, what that driver needs from the OS may not be documented. And there is a really good reason not to want to back up the entire volume. Let's say you have a 4 x 4TB Pool, duplicated. You may have 6TB of (unduplicated) data in that Pool. If you then try a BMR (bare-metal restore), you would need one HDD of at least 6TB to be able to restore that _or_ the recovery software must be able to load/support DrivePool as well. I don't know what you use, but the Windows Server Backup recovery software won't do that. So yes, I back up the underlying disks.
  5. I did briefly look but found no such trigger, unfortunately. The closest I got was https://sumtips.com/how-to/run-program-windows-wakes-up-sleep-hibernate/ where you could define a task that first stops and then starts the service, but that may be too late/slow.
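     For what it is worth, the stop/start part of that can be as simple as the sketch below, hooked to a Task Scheduler task that triggers on the Power-Troubleshooter "resumed from sleep" event and runs elevated. "DrivePoolService" is my guess at the service name, so check the exact name in services.msc first:

         # Stop and then start the DrivePool service after the machine wakes up.
         import subprocess
         import time

         SERVICE = "DrivePoolService"  # assumption: verify the exact service name

         subprocess.run(["net", "stop", SERVICE], check=False)
         time.sleep(5)  # give the service a moment to fully stop
         subprocess.run(["net", "start", SERVICE], check=False)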
  6. So in the Dashboard where you see the shared folders, IIRC, you select one and then in the options on the right of the Dashboard ("Music Tasks"?) there is an option to Move the folder. I have been on WSE2016 since the end of December, used to rock WHS2011, but I can't precisely remember the steps or make screenshots. Edit: You can do this from the Server desktop or from a client through the Dashboard. Personally I always go to the desktop through RDP.
  7. So my guess is that with the clean install, the Dashboard has the standard shared folders back on the C:\ drive again. I think that if you move these to the Pool, then all of a sudden these will be fine again. Try it with one folder that you backed up somehow first (because although it is unlikely, I am not 100% sure moving a shared folder will not overwrite at the target location). Also, you may want to reset NTFS permissions: http://wiki.covecube.com/StableBit_DrivePool_Q5510455
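     The wiki is the authoritative guide for that last step, but for reference, one common way to reset NTFS permissions is to take ownership and then reset the ACLs, roughly like the sketch below (run elevated; the path is just an example):

         # Take ownership of a pooled folder recursively, then reset its NTFS
         # permissions to the inherited defaults.
         import subprocess

         TARGET = r"P:\ServerFolders\Music"  # example path, adjust to your setup

         # "/D Y" answers the ownership prompt; on non-English Windows the letter may differ.
         subprocess.run(["takeown", "/F", TARGET, "/R", "/D", "Y"], check=True)
         subprocess.run(["icacls", TARGET, "/reset", "/T"], check=True)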
  8. If there is x2 duplication then you need two SSDs for the SSD Optimizer / write cache. I would advise against using an OS drive for data. But certainly you could do what you propose with two SSDs. It can even be done with one if you use hierarchical Pools, but I would advise against that too. What I do not know is whether you can use two SSDs as cache _and_ designate the same SSDs as targets for, say, *.db files. Chris would know, I guess.
  9. I am not certain, but to the extent a program postpones execution until a write is completed, I would expect it to receive the write-finished confirmation only when all duplicates are written. One thing to remember is that in DP there is no primary/secondary idea; all duplicates are equal. Well, I think DP is intended to support this, but it may not be that straightforward to get working.
  10. I don't think that is correct. It seems to me that you can cause DP to place all files that conform to *.db on a single HDD/SSD (that is part of the Pool though). This can be done through the File Placement options: https://stablebit.com/Support/DrivePool/2.X/Manual?Section=File Placement And that would allow for the use of an SSD as part of the Pool dedicated to *.db files only. It is a bit complex to set up; I can't say I would look forward to setting it up, but it is intended to support what you want, I think. Of course, if you have duplication then it is a bit more complex, as you need two SSDs for this to work, unless the .db files only need to be read fast and writes can be "slow". I am curious though as to what program you are using that has these *.db files.
  11. Doh, yeah. I wonder whether going to sleep is an event that can trigger a task in task scheduler. If so, you could stop the service when going to sleep and start it when waking up perhaps.
  12. AFAIK, there is no reason why programs could not or should not be located on a DrivePool, so perhaps you could relocate all of them to DP?
  13. What you could try is to set the DrivePool service to a delayed start.
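     If you prefer doing that from a script rather than through services.msc, something like this should do it (run elevated; "DrivePoolService" is my guess at the service name, so verify it first):

         # Set the service to delayed automatic start.
         import subprocess

         SERVICE = "DrivePoolService"  # assumption: check the exact name in services.msc
         # Note: sc expects "start=" and the value as separate tokens.
         subprocess.run(["sc", "config", SERVICE, "start=", "delayed-auto"], check=True)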
  14. In a way I am lucky I only have a 1Gb network.
  15. I doubt StableBit would want to go the RAM-cache route because of the risk of any system failure causing the loss of (more) data (compared to SSD cache or normal storage). I don't use it, but I know there are people here that successfully use the SSD cache. And it really depends on what SSD you are using; if it is a SATA SSD then you would not expect the 10G to be saturated. In any case, @TeleFragger (OP) does use duplication, so he/you will need two SSDs for this to work.
  16. Is the pool you are copying to duplicated? If so then you will need more SSDs because DP will want to write multiple copies right away.
  17. Yeah, so I agree that it is very likely that a 1600W PSU will be way overkill. A good indication of what the OP wants to run off of the PSU would help in providing more specific advice. @billis777:

     1. Yes, you can connect more HDDs with splitters.
     2. I would advise trying to divide the HDDs evenly over the separate cables coming from the PSU if you connect more than 16 HDDs, but that is just a to-be-sure thing. AFAIK, 5V is all one rail, whereas 12V is sometimes divided between various rails (not in the EVGA case it seems, but with my be quiet! Straight Power 11 750W it is).
     3. Any SATA splitter will do, I guess. Something like this: https://www.startech.com/Cables/Computer-Power/Internal/4-SATA-Power-Y-Splitter-Cable-Adapter~PYO4SATA but there are cheaper options.
     4. Don't get scared. But I would consider using the supplied cables and adding splitters instead of trying to find more expensive 6-pin to more-than-4 SATA power port cables.
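     To put some rough numbers on point 2, here is a back-of-the-envelope sketch. The per-drive currents are assumptions for a typical 3.5" HDD (spin-up draw on 12V is the worst case, check your drive's datasheet) and the rail ratings are examples you should read off your own PSU's label:

         # Estimate per-rail current draw for a stack of HDDs at spin-up.
         HDD_COUNT       = 16
         AMPS_12V_SPINUP = 2.0    # assumed 12V draw per HDD at spin-up
         AMPS_5V         = 0.7    # assumed 5V draw per HDD (electronics)

         RAIL_12V_AMPS = 100.0    # example: combined 12V rating of the PSU
         RAIL_5V_AMPS  = 20.0     # example: 5V rating of the PSU

         need_12v = HDD_COUNT * AMPS_12V_SPINUP
         need_5v  = HDD_COUNT * AMPS_5V

         print(f"12V: need ~{need_12v:.0f} A of {RAIL_12V_AMPS:.0f} A available")
         print(f" 5V: need ~{need_5v:.1f} A of {RAIL_5V_AMPS:.0f} A available")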
  18. I guess it would also help if you could tell us what it is you want to run off of the PSU. The guidance by PocketDemon seems sensible, but it does depend on the use case. As a side note, I *think* that if you were running many SSDs, the 5V rail might become an issue, as SSDs, AFAIK, run fully off of the 5V rail, as opposed to HDDs, which run off of the 12V rail for the motor and the 5V rail for the electronics. Another thing: do you already have the 8TB Archive HDDs? If not, then why would you choose those? I have some experience with these drives, but when I bought them (early 2015) the price difference between those and regular HDDs was big. Nowadays it is small, and it is more likely that you can find cheaper (and/or larger) alternatives. Aside from price, there is no reason I can think of to choose those.
  19. There is no database in DP so if files were lost there is no telling which (unless you have a file listing / database yourself somewhere). However, no files that were duplicated should be lost.
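     If you want such a listing, a small script run now and then is enough, something like this sketch (paths are examples):

         # Dump every file path in the pool to a dated text file, so there is
         # something to compare against if files ever go missing.
         import os
         import datetime

         POOL_ROOT = "P:\\"  # example: the pool drive letter
         OUT = f"pool-listing-{datetime.date.today()}.txt"

         with open(OUT, "w", encoding="utf-8") as f:
             for dirpath, dirnames, filenames in os.walk(POOL_ROOT):
                 for name in filenames:
                     f.write(os.path.join(dirpath, name) + "\n")

         print(f"Wrote listing to {OUT}")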
  20. Yes, sorry, I did not actually look at the specs.
  21. One thing to note is that a PSU may be able to deliver 1600W, but that typically does not mean you can get 1600W at any voltage or rail. You need to look at the specs of the PSU combined with the HDDs, as PocketDemon did. How many HDDs are you looking to connect?
  22. No, don't remove them in the GUI. That would cause DP to try to move files off of the HDDs. You do not want to change your Pool at all; you just want to migrate it to a new environment. So: "just do my new Windows HS install, install DrivePool and then physically re-connect the drives?" -> Exactly right.
  23. In this case, it is a physical disconnect. It is not really necessary, but it ensures that you do not accidentally select a Pool HDD to install the OS on...
  24. Hmmm, with that I can't help you, I'm afraid. I am sure you run Scanner as well. How are the HDDs connected? Do you keep a log of the errors the drives throw up?