Covecube Inc.

Umfriend

Members
  • Content Count

    936
  • Joined

  • Last visited

  • Days Won

    48

Everything posted by Umfriend

  1. Yes, you did state that. I just had not taken into account that the plug-in might not be available at some point. However, it seems to me that if you physically disconnect the Pool prior to re-install, then do the re-install, install DP AND install the plug-in and _then_ attach the Pool, it should work fine. Can't be sure, but that is what I would think.
  2. Hmm, I had not thought about plug-ins that are not installed by default. I can't be sure but in the screenprint, is that right after re-install and prior to the installation of the SSD Optimizer plug-in? If so, then I notice that the plug-in is in fact listed, so DP did pick up something at least. Perhaps the unavailability caused default balancing settings then. Did you also use File Placement Rules and, if so, were they intact or not?
  3. No need to first activate the trial version. If you have a Pool, you can actually physically disconnect it and move it over to another PC. If DP is installed there, it will recognise the Pool and the settings and you are done! So yes. Personally, I would physically disconnect all drives before installing Windows (just to ensure you do not select the wrong disk and to avoid any funny placement of boot partitions), install Windows, reconnect the Pooled HDDs and install DP. That's all.
  4. If you are using Seagate Archive HDDs, then this may well be because you use Seagate Archive HDDs. I have had cases where I would get high write speeds for a long, long time on these, but that basically required the HDD to be rather empty and the writes to be sequential. If the HDD has some data on it, then this slowdown does indeed occur. Having said that, I have never seen writes to the Archives be fast, slow and then fast (as in > 40MB/s) again, so something else may be going on as well.
  5. As I said, afaik the settings are stored and applied at connect time. OP has not reacted but if there is a thread or experience that refutes that then I would be very interested.
  6. It must be me but afaik, the settings are stored in the pool itself and will apply as soon as you connect the pool. Do you have a thread where it says otherwise?
  7. The thing is, the question was not to keep files together on a disk, it was to not have them on disk D to the extent possible. File Placement Rules may do that for you but I am not sure once disks B & C are full. And keeping files on the same HDD using FPR will not work with a *.* rule. You would actually need to work out which folders to place where etc. and it could cause folder X, targeted for C, being split between Disk B and D when C is full.
  8. Yeah, but read the notes. Depending on the setting, as I read it, it will either work only at the time of file creation or actually move existing files. The latter would be nice for you, I think: if and when you delete files from HDD E and/or F, it can then empty G.
  9. Yes, that is the idea. I don't think that file placement rules are the best way to deal with this for you. Rather, I would try the Ordered File Placement balancer. It is not installed by default; you'll need to download and install a plug-in (https://stablebit.com/DrivePool/Plugins). Read the notes carefully as the default behaviour is not what you want, but it has the options to get it suited for you, I think. Caveat: I have not used this plug-in. Oh, and one more thing: I would configure duplication first, so set up, set duplication, stop service, move files, start service, wait.
  10. None of the drives are in a Pool yet? Then it is simple: No. In fact, DP will not delete any data on any HDD when you add it to the Pool, and such data will not show up in the Pool(s). Imagine a disk D:\ with data on it in the root and a folder D:\Data. When you add that disk to a Pool, DP will create a hidden PoolPart.* folder. So you would have: D:\, D:\Data\ and D:\PoolPart.*\. Only what is in the PoolPart.* folder is part of the Pool, and everything that has to do with duplication and balancing only applies to the contents of that folder. The root and D:\Data will not be affected.
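As a sketch of how to see this for yourself from a command prompt (the PoolPart folder is hidden, so a plain dir will not show it; the drive letter D: is just the example from above):

```shell
:: List everything on D:, including hidden entries, to see the PoolPart.* folder
dir /a D:\

:: List only the hidden entries; the PoolPart folder has a GUID-style suffix
dir /a:h D:\
```

Only files you move or create inside that PoolPart.* folder become part of the Pool; everything else on the disk is left alone.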
  11. You could create a Pool from disks B, C and D, no duplication; let's call this Pool Q. Then you can create a Pool P that consists of Disk A and Pool Q, duplication x2. That ensures duplication with one copy on disk A. Personally, I would create a Pool R with disk A so that it is easy to add another speedy disk. I would want each of Pool Q and R to have enough free space so that the failure of one disk can easily be corrected by DP. I think Stablebit has a plug-in that prioritises certain disks in a Pool; that might help for allocation between disks B, C and D.
  12. Umfriend

    Disk not seen

    Screenprints might help. 31% overhead seems a bit much, as in impossible. Perhaps the counter does not include certain hidden directories. I have had the System Volume Information folder using loads of data.
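If the System Volume Information folder is the suspect, a sketch of how to check from an elevated command prompt (Volume Shadow Copies are the usual culprit there; the drive letter and size below are just examples):

```shell
:: Show how much space Volume Shadow Copies are using on each volume
vssadmin list shadowstorage

:: Optionally cap the space VSS may use (example: limit to 10 GB on D:)
vssadmin resize shadowstorage /for=D: /on=D: /maxsize=10GB
```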
  13. Chris is no fan of Highpoint. You can search for Highpoint on this forum and find some of his posts on this. But perhaps the 750 is different. The LSI card won't increase the performance of the Archive (or any other) HDD. Perhaps you were benefiting from the cache for a bit or maybe it was writing to more than one HDD at the same time or somesuch. How did you measure this?
  14. There is a difference between using VSS and supporting it. If you use VSS, then VSS will do all sorts of stuff for you and in turn use, for instance, NTFS. NTFS must therefore be able to support functions that VSS uses. It is not that easy to know what functions VSS uses, and that is what you need to know to support it. As a, probably faulty, analogy, say you want to program something for your video card to do. You need to make calls to the, say, nVidia driver. These will be documented as nVidia wants people to program for their drivers. Now suppose that driver itself makes calls to the OS, and you would need to know exactly which calls those are in order to support them.
  15. I did briefly look but found no such trigger unfortunately. The closest I got was https://sumtips.com/how-to/run-program-windows-wakes-up-sleep-hibernate/ where you could define a task that first stops and then starts the service, but that may be too late/slow.
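For completeness, a hedged sketch of what such a task could look like: on resume, Windows logs Event ID 1 from the Microsoft-Windows-Power-Troubleshooter provider in the System log, and a scheduled task can trigger on that event. The service name "DrivePool.Service" below is an assumption; check the actual name with `sc query`:

```shell
:: Create a task that restarts the DrivePool service when the system wakes
:: (System log, provider Microsoft-Windows-Power-Troubleshooter, Event ID 1).
:: The service name is an assumption - verify it first with: sc query
schtasks /Create /TN "RestartDrivePoolOnWake" /RU SYSTEM ^
  /SC ONEVENT /EC System ^
  /MO "*[System[Provider[@Name='Microsoft-Windows-Power-Troubleshooter'] and EventID=1]]" ^
  /TR "cmd /c net stop DrivePool.Service & net start DrivePool.Service"
```

As noted above, there is no guarantee this runs early enough after wake to help.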
  16. So in the dashboard where you see the shared folders, IIRC, you select one and then in the options on the right of the dashboard ("Music Tasks"?) there is an option to Move the folder. I'm on WSE2016 since end of December, used to rock WHS2011 but I can't precisely remember or make pics. Edit: So you can do this from the Server desktop or from a client through the dashboard. Personally I always go through the desktop through RDP.
  17. So my guess is that with the clean install, the Dashboard has the standard shared folders back on the C:\ drive again. I think that if you move these to the Pool, then all of a sudden these will be fine again. Try it with one folder that you backed up somehow first (because although it is unlikely, I am not 100% sure moving a shared folder will not overwrite at the target location). Also, you may want to reset NTFS permissions: http://wiki.covecube.com/StableBit_DrivePool_Q5510455
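The linked wiki describes the full procedure for resetting permissions; as a rough sketch of the kind of commands involved (elevated prompt; take ownership first, then reset ACLs to inherited defaults; P:\ is just an example drive letter):

```shell
:: Take ownership of everything on the drive, recursively, answering Yes to prompts
takeown /F P:\ /R /D Y

:: Reset NTFS permissions to inherited defaults, recursively, continuing on errors
icacls P:\ /reset /T /C
```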
  18. If there is x2 duplication then you need two SSDs for the SSD Optimizer/write cache. I would advise against using an OS drive for data. But certainly you could do what you propose with 2 SSDs. It can even be done with one if you use hierarchical Pools, but I would advise against that too. What I do not know is whether you can use two SSDs as cache _and_ designate the same ones as targets for, say, *.db files. Chris would know, I guess.
  19. I am not certain but to the extent a program postpones execution until a write is completed, I would expect it to receive the write-finished confirmation only when all duplicates are written. One thing to remember is that in DP, there is no primary/secondary idea. All duplicates are equal. Well, I think DP is intended to support this but it may not be that straightforward to get to work easily.
  20. I don't think that is correct. It seems to me that you can cause DP to place all files that conform to *.db on a single HDD/SSD (that is part of the Pool though). This can be done through the File Placement options. https://stablebit.com/Support/DrivePool/2.X/Manual?Section=File Placement And that would allow for the use of an SSD as part of the Pool dedicated to *.db files only. It is a bit complex to set up; I can't say I would look forward to setting this up, but it is intended to support what you want, I think. Of course, if you have duplication then it is a bit more involved (you would need two such targets).
  21. Doh, yeah. I wonder whether going to sleep is an event that can trigger a task in task scheduler. If so, you could stop the service when going to sleep and start it when waking up perhaps.
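It turns out entering sleep is logged too: Windows writes Event ID 42 from the Microsoft-Windows-Kernel-Power provider to the System log when the system starts to suspend, so a task can in principle trigger on it. A sketch, with the service name being an assumption:

```shell
:: Stop the DrivePool service when the machine begins entering sleep
:: (System log, provider Microsoft-Windows-Kernel-Power, Event ID 42).
:: The service name is an assumption - verify it first with: sc query
schtasks /Create /TN "StopDrivePoolOnSleep" /RU SYSTEM ^
  /SC ONEVENT /EC System ^
  /MO "*[System[Provider[@Name='Microsoft-Windows-Kernel-Power'] and EventID=42]]" ^
  /TR "net stop DrivePool.Service"
```

Whether the task actually completes before the machine suspends is uncertain; the event fires as sleep begins, not in advance.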
  22. AFAIK, there is no reason why programs could or should not be located on a DrivePool so perhaps you could relocate all to DP?
  23. What you could try is to set the DrivePool service to a delayed start.
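A sketch of how that could be done from an elevated command prompt; the exact service name is an assumption, so check it first:

```shell
:: Find the actual DrivePool service name
sc query state= all | findstr /i drivepool

:: Set it to delayed automatic start (the space after "start=" is required by sc)
sc config "DrivePool.Service" start= delayed-auto
```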
  24. In a way I am lucky I only have a 1Gb network.
  25. I doubt Stablebit would want to go the RamCache route because of the risk of any system failure causing the loss of (more) data (compared to SSD Cache or normal storage). I don't, but I know there are people here that successfully use the SSD Cache. And it really depends on what SSD you are using. If it is a SATA SSD, then you would not expect the 10G to be saturated. In any case, @TeleFragger (OP) does use duplication, so he/you will need two SSDs for this to work.