Covecube Inc.

Umfriend

Members
  • Content Count: 632
  • Joined
  • Last visited
  • Days Won: 34

Umfriend last won the day on March 16

Umfriend had the most liked content!

About Umfriend

  • Rank: Advanced Member

Recent Profile Visitors

567 profile views
  1. If you are using Seagate Archive HDDs, then this may well be because you use Seagate Archive HDDs. I have had cases where I got high write speeds on these for a long time, but that basically required the HDD to be rather empty and the writes to be sequential. If the HDD already has some data on it, then this slowdown does indeed occur. Having said that, I have never seen writes to the Archives go fast, then slow, then fast again (as in > 40 MB/s), so something else may be going on as well.
  2. Umfriend

    Reinstall WTF!!

    As I said, AFAIK the settings are stored in the pool and applied at connect time. The OP has not reacted, but if there is a thread or experience that refutes that, then I would be very interested.
  3. Umfriend

    Reinstall WTF!!

    It must be me, but AFAIK the settings are stored in the pool itself and will apply as soon as you connect the pool. Do you have a thread where it says otherwise?
  4. The thing is, the question was not how to keep files together on a disk; it was how to keep them off disk D to the extent possible. File Placement Rules may do that for you, but I am not sure what happens once disks B and C are full. Also, keeping files on the same HDD using FPR will not work with a *.* rule. You would actually need to work out which folders to place where, and it could cause folder X, targeted for disk C, to be split between disks B and D when C is full.
  5. Yeah, but read the notes. Depending on the setting, as I read it, it will either work only at the time of file creation or actually move existing files. The latter would be nice for you, I think, because if and when you delete files from HDDs E and/or F, it can then empty G.
  6. Yes, that is the idea. I don't think File Placement Rules are the best way to deal with this for you. Rather, I would try the Ordered File Placement balancer. It is not installed by default; you'll need to download and install a plug-in (https://stablebit.com/DrivePool/Plugins). Read the notes carefully, as the default behaviour is not what you want, but I think it has the options to get it suited to you. Caveat: I have not used this plug-in.

    Oh, and on the order of things: I would configure duplication first, so set up, set duplication, stop the service, move files, start the service, wait (there is a small sketch of that order after this list of posts).

    I have to say, I am rather curious about the exact HDDs you are using. And, to be frank, if a disk is suspect I would at the least ensure I have enough disks in any Pool so that, should it fail, DP has the space available to reduplicate to the other disks (and as such I would have a second large fast HDD in the Pool with the one big disk). Sure, you would already have duplication, but in case of a disk failure, say of the big fast one, you would still suffer downtime as the Pool will be in read-only mode, and you would not actually have duplication until the matter is resolved. Also, got Scanner? I would advise it.
  7. None of the drives are in a Pool yet? Then it is simple: no. In fact, DP will not delete any data on any HDD when you add it to the Pool, and such data will not show up in the Pool(s). Imagine a disk D:\ with data in the root and a folder D:\Data. When you add that disk to a Pool, DP will create a hidden PoolPart.* folder, so you would have D:\, D:\Data\ and D:\PoolPart.*\. Only what is in the PoolPart.* folder is part of the Pool, and everything that has to do with duplication and balancing only applies to the contents of that folder. The root and D:\Data will not be affected in any way.

    So let's assume you have disks D:\ to G:\, with D:\ being the big fast disk. What I would do is:
    1. Create Pool Q, add only disk D:\.
    2. Through Explorer, _move_ the data on D:\ that you want to be in the Pool into the hidden D:\PoolPart.* folder (see the seeding sketch after this list of posts).
    3. Create Pool R, add disks E:\, F:\ and G:\.

    And here it becomes tricky. If you now create a Pool P by adding "disks" Q and R, with x2 duplication, DP will think that R does not have a duplicate (because its PoolPart.* folders are empty) and copy everything from Q to R. You *could*, I guess, first move the data on E:\ to G:\ (disk by disk) into their PoolPart.* folders. However, I am not sure how DP will react to potential differences in time stamps on the files (date created/modified). So, personally, I would copy all data on E:\ to G:\ to some other disk (external perhaps) and format them first. Then do step 3, and then:
    4. Create Pool P using Q and R and set duplication to x2.
    5. Let DP do its magic (which may run for quite a bit of time).

    If there is no possibility to back up the data but the data on Q is smaller than the free space on R, then you could still do steps 4 and 5 and then, after you are satisfied that everything is in order, delete the data on E:\ to G:\ that is not in the PoolPart.* folder (but was present on D:\ or Q:\; don't delete data that you do not have elsewhere). Failing that space, you could consider simply formatting and doing steps 4 and 5, with the idea that you still have a copy on D:\ (= Q:\) and will have duplicates again soon. Alternatively, you could move the data on those disks, but again, I am not sure what issues you might run into. As an example, say there is a file called "MyFile.doc". If on Q:\ it is in folder "MyFiles" and on R:\ it is in folder "MyBackupFiles", then DP will not know that these are the same file. Rather, it will create duplicates of both, causing four copies to exist.

    One last thing: if you ever use the move-files trick to "seed" a Pool, remember to stop the DrivePool service before you do that and restart it when finished. Hope this helps.
  8. You could create a Pool from disks B, C and D, no duplication; let's call this Pool Q. Then you can create a Pool P that consists of disk A and Pool Q, duplication x2. That ensures duplication with one copy on disk A. Personally, I would create a Pool R with disk A so that it is easy to add another speedy disk later. I would want each of Pools Q and R to have enough free space so that the failure of one disk can easily be corrected by DP (a small space check sketch follows after this list of posts). I think StableBit has a plug-in that prioritises certain disks in a Pool; that might help with allocation between disks B, C and D.
  9. Umfriend

    Disk not seen

    Screenshots might help. 31% overhead seems a bit much, as in impossible. Perhaps the counter does not include certain hidden directories; I have had the System Volume Information folder use loads of space (a small folder-size scan sketch follows after this list of posts).
  10. Chris is no fan of Highpoint; you can search for Highpoint on this forum and find some of his posts on this. But perhaps the 750 is different. The LSI card won't increase the performance of the Archive (or any other) HDD. Perhaps you were benefiting from the cache for a bit, or maybe it was writing to more than one HDD at the same time or some such. How did you measure this?
  11. Then I do not know. If duplication is applicable then you need as many SSDs as the duplication factor for the caching to work.
  12. There is a difference between using VSS and supporting it. If you use VSS, then VSS will do all sorts of stuff for you and in turn use, for instance, NTFS. NTFS must therefore be able to support the functions that VSS uses. It is not easy to know which functions VSS uses, and that is exactly what you need to know in order to support it. As a, probably faulty, analogy: say you want to program something for your video card to do. You need to make calls to, say, the nVidia driver. These are documented, as nVidia wants people to program against their drivers. Now suppose that driver makes calls to the OS and you want to write your own OS on which that nVidia driver runs. Then you need to know what that driver needs from the OS and facilitate that. However, what that driver needs from the OS may not be documented.

    And there is a really good reason not to want to back up the entire volume. Let's say you have a 4 x 4TB Pool, duplicated. You may have 6TB of (unduplicated) data in that Pool. If you then try a BMR, you would need one HDD of at least 6TB to be able to restore that _or_ the recovery software must be able to load/support DrivePool as well (a quick sizing sketch follows after this list of posts). I don't know what you use, but the Windows Server Backup recovery software won't do that. So yes, I back up the underlying disks.
  13. I did briefly look but found no such trigger, unfortunately. The closest I got was https://sumtips.com/how-to/run-program-windows-wakes-up-sleep-hibernate/ where you could define a task that first stops and then starts the service, but that may be too late/slow.
  14. So in the dashboard where you see the shared folders, IIRC, you select one and then in the options on the right of the dashboard ("Music Tasks"?) there is an option to move the folder. I have been on WSE2016 since the end of December; I used to rock WHS2011, but I can't precisely remember or make pics. Edit: you can do this from the Server desktop or from a client through the dashboard. Personally, I always go through the desktop via RDP.
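
The order given in post 6 (set up the Pool, set duplication, stop the service, move the files, start the service) can be scripted. A minimal sketch, assuming the Windows service is called "DrivePool.Service" (that name is an assumption; verify it in services.msc or with `sc query` on your own machine) and that the actual file moves are supplied by the caller:

```python
# Minimal sketch of the "stop service -> move files -> start service" order from post 6.
import subprocess

SERVICE_NAME = "DrivePool.Service"  # assumed service name; verify on your system

def set_service_state(action: str) -> None:
    """Run `net stop` / `net start` for the DrivePool service (requires an elevated prompt)."""
    subprocess.run(["net", action, SERVICE_NAME], check=True)

def seed_pool(move_files) -> None:
    """Stop the service, let the caller move files into the PoolPart.* folders, then restart it."""
    set_service_state("stop")
    try:
        move_files()  # supplied by the caller, e.g. a series of shutil.move calls
    finally:
        set_service_state("start")
```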
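
For step 2 of post 7, moving existing data into the hidden PoolPart.* folder to seed a Pool, here is a rough sketch. The drive letter and folder names are placeholders, it assumes exactly one PoolPart.* folder exists on the drive, and it does not stop or start the DrivePool service for you (see the previous sketch):

```python
# Rough sketch of seeding: move chosen top-level folders on a drive into its hidden PoolPart.* folder.
import shutil
from pathlib import Path

def find_poolpart(drive: str) -> Path:
    """Return the single hidden PoolPart.* folder on a drive such as 'D:\\'."""
    matches = [p for p in Path(drive).glob("PoolPart.*") if p.is_dir()]
    if len(matches) != 1:
        raise RuntimeError(f"Expected one PoolPart.* folder on {drive}, found {len(matches)}")
    return matches[0]

def seed(drive: str, folder_names: list[str]) -> None:
    """Move the named folders from the drive root into its PoolPart.* folder."""
    target = find_poolpart(drive)
    for name in folder_names:
        src = Path(drive) / name
        print(f"Moving {src} -> {target / name}")
        shutil.move(str(src), str(target / name))

# Example with placeholder names: seed("D:\\", ["Data"])
```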
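
On the remark in post 8 about wanting each Pool to have enough space that DP can recover from a disk failure, this is the back-of-the-envelope check I mean, sketched with made-up capacity and usage figures you would replace with your own:

```python
# Back-of-the-envelope check: after losing any one disk, can the remaining disks
# absorb the data that was on it so DP can re-duplicate? Figures are illustrative.
def survives_single_disk_failure(disks: dict[str, tuple[float, float]]) -> bool:
    """disks maps a disk name to (capacity_tb, used_tb). Returns True if, for
    every disk, the free space on the remaining disks can absorb its used data."""
    for name, (_capacity, used) in disks.items():
        free_elsewhere = sum(c - u for n, (c, u) in disks.items() if n != name)
        if used > free_elsewhere:
            print(f"Losing {name} ({used} TB used) could not be re-duplicated elsewhere")
            return False
    return True

# Made-up figures: (capacity TB, used TB) per disk
print(survives_single_disk_failure({"B": (4, 3), "C": (4, 3), "D": (8, 2)}))
```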
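
For the 31% overhead question in post 9, one way to see where the space actually goes is to total up every top-level folder on the drive, hidden ones such as System Volume Information included. A quick sketch (reading System Volume Information usually requires an elevated prompt):

```python
# Quick sketch: size of each top-level folder on a drive, hidden folders included.
import os
from pathlib import Path

def folder_size(folder: Path) -> int:
    """Total size in bytes of all files under folder, skipping unreadable entries."""
    total = 0
    for root, _dirs, files in os.walk(folder, onerror=lambda err: None):
        for name in files:
            try:
                total += (Path(root) / name).stat().st_size
            except OSError:
                pass
    return total

def report(drive: str) -> None:
    """Print the size of every top-level folder on the drive, hidden or not."""
    for entry in Path(drive).iterdir():
        if entry.is_dir():
            print(f"{entry.name}: {folder_size(entry) / 1024**3:.1f} GiB")

# Example: report("D:\\")
```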
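
To make the sizing point in post 12 concrete: 4 x 4TB with x2 duplication gives roughly 8TB of usable space, so 6TB of logical data fits in the Pool, but a bare-metal restore that does not understand DrivePool still needs a single volume of at least 6TB. The trivial arithmetic, with the figures from the post and a purely illustrative helper:

```python
# Trivial sizing sketch: usable duplicated-Pool space vs. the single-volume size
# a DrivePool-unaware restore would need.
def pool_sizing(disks_tb: list[float], duplication: int, data_tb: float) -> None:
    usable = sum(disks_tb) / duplication
    print(f"Raw capacity          : {sum(disks_tb):.0f} TB")
    print(f"Usable at x{duplication}           : {usable:.0f} TB")
    print(f"Data fits in the Pool : {data_tb <= usable}")
    print(f"Single restore target : >= {data_tb:.0f} TB")

pool_sizing([4, 4, 4, 4], duplication=2, data_tb=6)
```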