
Posts posted by Spider99

  1. Was the PC turned off correctly the last time it was used? Is the PSU OK? Are these and other components old, i.e. end of life?

     

     If you use the disk and it corrupts data - how would DP "know" which copy was correct?

     

    You could end up with a lot of headaches and heartache

     

     Does the SMART data indicate any issues?
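     If you don't have a SMART tool to hand, Windows can give a rough view from PowerShell - a minimal sketch, assuming the Storage module cmdlets are available (Win8/2012 or later; not every drive reports every counter):

        # Rough drive-health check - run from an elevated PowerShell prompt
        Get-PhysicalDisk | Get-StorageReliabilityCounter |
            Select-Object DeviceId, Temperature, PowerOnHours,
                          ReadErrorsUncorrected, WriteErrorsUncorrected, Wear |
            Format-Table -AutoSize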

     

    I would be very careful what you trust this disk to do.

     

     If it's under warranty (and you have a defined issue) I would see if you can get a replacement - if not, then possibly borrow a replacement from a friend while you wait for a new drive to arrive.

  2. The other data is in the SVI folder, or you have "lost" files - your C drive and D drive figures show a similar discrepancy, 35GB and 91GB respectively.

     

     For example, my C drive - the OS drive:

     

     [Screenshot post-2627-0-48905300-1503132442_thumb.jpg: showing all data used, including SVI]

     

     [Screenshot post-2627-0-96943200-1503132443_thumb.jpg: with system folders hidden]

     

     [Screenshot post-2627-0-72526200-1503132444_thumb.jpg: with system folders visible, including SVI - showing an approx 4GB difference, which is in the SVI folder and the NTFS master file table]

     

     You have much higher differences, so you might have other issues, or applications (say backup) and system restore points etc. using space.

     

    Have you run chkdsk on your drive to make sure you do not have issues?
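     If not, it's quick to do online on Win8/2012 and later - from an elevated prompt:

        # Online consistency scan - does not take the volume offline (Win8/2012+)
        chkdsk C: /scan

        # If it reports problems, run a fixing pass (it will offer to schedule
        # the fix at the next reboot if the volume is in use)
        chkdsk C: /f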

     

     

  3. If you look at the screenshot I attached, I do have show protected folders and show hidden files/folders.

    Also, you can see that the PoolPart.GUID folder contains all but 3GB (closer to 2.5GB really) of the data that's stored on the disk.

    Yet, DrivePool reports 100GB worth of other data.

     

    That would mean either DrivePool is wrong, or there is 97GB worth of data in the pool that it is considering "Other."

     Unfortunately, SVI always shows as zero size, and unless you take ownership of the folder from "system" you can't see how big it is - so it's not included in your calcs. As Chris said, it's probably VSS snapshots or similar.
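     You don't have to take ownership just to get a rough number, though - VSS will report its own storage use per volume. A sketch, from an elevated prompt (this only covers shadow copies, not anything else sitting in SVI):

        # How much space shadow copies (stored inside System Volume Information) are using
        vssadmin list shadowstorage /for=C:

        # And to cap it - the 5GB figure here is just an example
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=5GB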

  4. Turn off "Hide protected operating system files (recommended)" - you will probably see a Recycle Bin folder and an SVI folder

     

     and also empty your Recycle Bin - as this would count towards your "other", as it's outside the pool

     

     [Screenshot: post-2627-0-48510800-1503059396_thumb.jpg]
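     The same check can be done from PowerShell if Explorer is being awkward - a sketch, assuming D: is the drive in question (Clear-RecycleBin needs PowerShell 5 or later):

        # List hidden/system items at the drive root - SVI and $RECYCLE.BIN live here
        Get-ChildItem D:\ -Force | Where-Object { $_.Attributes -match 'Hidden' }

        # Empty the recycle bin for that drive
        Clear-RecycleBin -DriveLetter D -Force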

  5. In a non-VM environment those errors mean something is up with the cable/controller or the hard disk controller/SATA connector.

     

     As it's all your disks, it could be the controller,

     

     as your other drive is not reporting errors because it's on a different controller.

     

     I've not seen it with VMs before.

     

     If you connect one of the affected disks to the motherboard SATA controller, do the errors stop?

  6. No, there are no ACL problems on the pool, and all disks have been checked by Scanner in the last two weeks (it takes that long to check with 20 disks). I did that on my 2012r2 machine when I submitted the troubleshooter logs, and you mentioned it before.

     

     If there were ACL problems then I would see those errors accessing the pool - and there is nothing in the event log either.

     

     Remember this happens on two different machines, so ACL problems are very unlikely.

     

     I have randomly checked whatever files get listed as unduplicated and can read the files fine - so again, not an ACL problem. (That spot check is easy to script - see the sketch below.)
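     A minimal sketch of it, assuming the unduplicated paths have been saved to a text file, one full path per line (unduplicated-files.txt is a hypothetical name):

        # Pick 20 random paths from the list and try to read each file end to end
        Get-Content .\unduplicated-files.txt | Get-Random -Count 20 | ForEach-Object {
            $path = $_
            try {
                # ReadAllBytes throws on ACL or I/O problems; fine for a spot check,
                # though it does load each whole file into memory
                $null = [System.IO.File]::ReadAllBytes($path)
                Write-Host "OK    $path"
            } catch {
                Write-Host "FAIL  $path - $($_.Exception.Message)"
            }
        }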

  7. Hi Chris

     

     Well, there are three things it does:

     

     1. Remove the extrafanart folder - if it exists - the first time it may not.

     2. Copy 10 random images from Photos into the extrafanart folder - creating the extrafanart directory if it does not exist.

     3. Create the .ignore file in the Photos directory - I doubt it's this, as it's a simple file copy and rename.

     

     My guesses as to what might be causing it are:

     1. It might be the folder deletion and creation happening in quick succession that is "too" fast for DP/the system to pick up.

     2. However, the last time I did it (post #41) the unduplicated files listed were the 10 random files copied - so it could be that as well.

     3. Also, it could be PowerShell doing something different to what is expected - MS being MS and all :)

     

     I would set up a folder on a pool with just a Photos directory containing some random photos (mine vary in size, but say approx 1MB each or bigger), then run the PS script;

     wait a second after it finishes, then run it again, wait a second and run it again, etc. - something like the sketch below:
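     (A minimal runner - test.ps1 and P:\PhotoTest are hypothetical stand-ins for Script 2 and the test folder.)

        # Run the script repeatedly with a short pause, watching the DP GUI between runs
        for ($i = 1; $i -le 5; $i++) {
            & .\test.ps1 -Path 'P:\PhotoTest'   # hypothetical script name and parameter
            Start-Sleep -Seconds 1
        }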

     

     If you see what I see on two different machines, DP will "complain" after a couple of runs - I had the GUI open on screen to see the change.

     

    T

  8. Chris

     

     Upgraded to the 798 beta - Win10 machine.

     

     Ran dpcmd check-pool-fileparts - no errors.

     

     Ran script 2 twice and got unduplicated files.

     

     Ran dpcmd again - 10 files listed as unduplicated - this time, though, they are the files that script 2 amended - not sure if that's luck or 798 reporting better? :)

     

    Anyway over to you and Alex to test

     

    Tim

  9. Chris

     

     I have duplicated the problem on a separate machine (Win10), rather than the 2012r2 one we have been talking about until now in this thread.

     

     I tried to duplicate the problem with the 742 beta on Win10 and it did not appear to be affected, so I upgraded DP to 773 (same as the 2012r2 machine).

     

     To see the problem quickly, I ran Script 2 (the PS script) multiple times in the same directory - so it deletes the extrafanart directory and the files within each time. If you do this three or four times in a row, the pool will show unduplicated files.

     

     I initially thought that the Win10 pool, being only 2 SATA SSDs, was going to be too quick to show the problem, but that did not turn out to be the case.

     

     Checking the files that were unduplicated - as I noticed before, the files that are listed are nothing to do with the files modified by the script. A completely different directory tree off a different root folder on the pool??

     

     So I reset the duplication cache via dpcmd - but it still lists the same "unrelated" files, plus a handful of extras which are related to work done by script 1 (which was running at the time) - 4 "new" unduplicated files in this case.

     

     I then used dpcmd to remeasure the pool and re-ran check-pool-fileparts - still listing the same files. Not sure if this is a concern, in that it's losing files / overwriting them / misreading them?
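     To check whether the report is at least stable from run to run, you can capture and diff two listings - a sketch, with P:\ standing in for the pool drive:

        # Capture two runs and compare - no output from Compare-Object means identical reports
        dpcmd check-pool-fileparts P:\ > run1.txt
        dpcmd check-pool-fileparts P:\ > run2.txt
        Compare-Object (Get-Content run1.txt) (Get-Content run2.txt)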

     

    This is what the GUI looks like at the moment

     

     [Screenshot: post-2627-0-47144900-1502261630_thumb.jpg]

     

     I will get the pool balanced again, upgrade to the latest beta, and test again.

     

    Interesting times :)

     

     

  10. x10/x2 means the folder containing the file is across 10 disks (poolparts), and the file is duplicated twice - on 2 of the 10 disks.

     

    x2 is the expected number of parts

     

     If you look at the example in the dpcmd thread, it shows an x3/x2 - with 3 poolparts and two copies.

     

     If you have 10 disks, the root folder will be on all disks - subdirectories may or may not be on all disks. As you go down the tree of directories, the number of disks containing directories generally decreases, down to x2 (or more) at the bottom of the tree.

     

     In the GUI, if you put your mouse over each disk in turn it will, amongst other things, show you how much "other" is on each disk - you might find one disk has a lot more than the others.

     

     On a pool of 13 disks I have 16.3GB of "other"; on another pool of 20 disks I have 11.9GB. It goes up and down while the pool balances, as data moves from my SSDs to the pool: the "in transit" data gets added to "other" until it's fully copied - hence why you see it going up and down. If large files are being moved, you can see the "other" data as it's added to a disk (the grey bit), which then turns blue when fully copied.

     Each disk also has a few hundred MB of other - mostly NTFS data that you cannot see or access. It's only because DP shows it to you that it's an "issue" - in normal Windows you can't see this data, so you don't know it's there, but it is. It's usually the master file records for the disk, a database managed by Windows - all normal, as long as it does not get into the multi-tens-of-GB range, which it looks like yours is not.

     Also, as Chris said, if you have data outside the PoolPart directory, this will be counted as other as well. And any System Volume Information directories are another culprit, with volume snapshots etc. (a pain mostly).

     

    Have fun

  11. However, my guess is that the script is triggering a "smart move", where the data is not actually touched, but the file location is updated. This ... would definitely cause the data to not be duplicated properly. - Yes, this is very likely to be what is happening - but is this no different to a normal move of data?

     

     That said, if you are scripting this, run a read pass of the data afterwards. Accessing it on the pool *should* trigger a duplication pass, IIRC. - Nope, that does not happen.

     

     Real time duplication happens in the kernel, and writes the files to both copies in parallel. - Hmm, only for a new file/copy but not a move, it would seem - and it's catching some but not all.

     

     The other thing I have noticed is that the files it reports as not duplicated are nothing to do with the files being moved or copied etc. - they are completely unrelated, in most cases on another physical disk. I wonder if the index is getting out of sync, so it knows something's wrong but reports the wrong files???

  12. I have a suspicion that when I run a PowerShell script across multiple directories that deletes/copies and renames files/directories, DP can't keep up - as the "edits" are across multiple physical disks.

     Also the script runs multiple processes in parallel - so lots of amendments going on at the "same" time.

     I don't do this all the time, but when I get the chance I will test it to see if I can get it to be reproducible.

     Confirmed - running a script across the pool will create un-duplicated files. I think this is because the real-time duplication cannot keep up, and/or the real-time duplication does not happen on the pool drives but only on the SSDs.

     

     I have just run a script that creates N subdirectories, moves matching files to the subdirectory, creates a sub-subdirectory, and creates (via ffmpeg) screenshots (jpg) of the movie at 1-minute intervals - I am reorganising my movie libraries for Emby, so each movie is now in its own subdirectory and has a subdirectory (extrafanart) for screenshots etc. The core of it looks roughly like the sketch below.
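     (A heavily cut-down sketch - the real script runs several of these in parallel; P:\Movies, the .mkv filter, and the fanart names are stand-ins, and ffmpeg is assumed to be on the PATH. fps=1/60 pulls one frame per minute.)

        # For each movie file: give it its own folder, move it in, then grab screenshots
        Get-ChildItem 'P:\Movies' -File -Filter *.mkv | ForEach-Object {
            $movieDir = New-Item -ItemType Directory -Force -Path (Join-Path $_.DirectoryName $_.BaseName)
            Move-Item $_.FullName -Destination $movieDir.FullName
            $fanart = New-Item -ItemType Directory -Force -Path (Join-Path $movieDir.FullName 'extrafanart')
            # One jpg per minute of video
            ffmpeg -i (Join-Path $movieDir.FullName $_.Name) -vf fps=1/60 (Join-Path $fanart.FullName 'fanart%03d.jpg')
        }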

     

     Nothing complicated - and I now have 5.79GB of un-duplicated files.

     

    If you want a copy of the script for testing let me know

  13. Using .773 beta

     

     I am seeing an "Ensure file duplication consistency on Pool X" task when I copy some updated files to the pool - i.e. overwrite. It's not every time, but this task has run 5 times in the last 24hrs????

     

     e.g. an old video file replaced with a new file of the same name - but a much smaller file.

     

    So it might go from 15GB down to 5GB

     

     Also, it appears that the new file is copied to the pool directly, rather than via the SSD cache - is that expected behaviour?

     

     I just copied a 5GB file to the pool and watched the performance info, and you can see it being copied to the HDD rather than the SSDs.

     

    I have no file placement rules set

  14. You could use "dpcmd" to run against your pool - it will list all files and folders (depending on the options used), which might help you spot which folders and files have gone missing.

     

     It sounds like you had no duplication on your pool, so unless you have a "dpcmd" run from before, it's going to be difficult to spot all the missing data - which is why it's worth keeping a listing around (see the sketch below the link).

     

    http://community.covecube.com/index.php?/topic/1587-check-pool-fileparts/
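     i.e. capture a baseline listing now and keep it somewhere off the pool - a sketch, with P:\ standing in for your pool drive and the output path being arbitrary:

        # Full file listing of the pool, saved off-pool for later comparison
        dpcmd check-pool-fileparts P:\ > C:\PoolAudit\pool-fileparts.txt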

  15. Hi Chris

     

     1. Good - it needs a lot of work, as there was nothing in the logs re access denied - it did not even flag that it had an issue. Alex found it in the troubleshooter data it uploaded - it needs to be visible to the user.

     2. I used the option at the bottom of the Balancing page - if that does not reset everything, then it's a chocolate teapot :P

     3. Fine, if that's what they contain - just odd to encrypt them.

     

     It is the SSD Optimiser that is/was the issue - a reset (I've had to do this a few times over the last few months) clears the lockup where it will stop working and a re-balance will leave data on the SSDs no matter how many times you run it.

     I only have the Scanner plugin enabled - all others are disabled - and I have no file placement rules on the pool.

     

     I have a suspicion that when I run a PowerShell script across multiple directories that deletes/copies and renames files/directories, DP can't keep up - as the "edits" are across multiple physical disks.

     Also the script runs multiple processes in parallel - so lots of amendments going on at the "same" time.

     I don't do this all the time, but when I get the chance I will test it to see if I can get it to be reproducible.

  16. Was just checking my event log - as you do :)

     

     and spotted that on server reboot I get two errors from DP and two from Scanner - see below.

    Error report file saved to:
    
    C:\ProgramData\StableBit DrivePool\Service\ErrorReports\ErrorReport_2017_07_18-03_50_46.8.saencryptedreport
    
    Exception:
    
    System.Net.Sockets.SocketException (0x80004005): Only one usage of each socket address (protocol/network address/port) is normally permitted
       at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
       at System.Net.Sockets.Socket.Bind(EndPoint localEP)
       at #lwd.#nwd..ctor()
       at #lwd.#owd.#v9e()
       at CoveUtil.ReportingAction.Run(Action TheDangerousAction, Func`2 ErrorReportExceptionFilter)
    
    

    and

    Exception:
    
    CoveTroubleshooting.Reporter+ReporterLogException: {reporter_exception}
       at CoveTroubleshooting.Reporter.ThrowLogReportN(Exception TheException, Object[] TheParams)
       at CoveUtil.ErrorReporting..(Exception )
    
    

    for scanner

    Error report file saved to:
    
    C:\ProgramData\StableBit Scanner\Service\ErrorReports\ErrorReport_2017_07_18-03_51_32.1.saencryptedreport
    
    Exception:
    
    CoveTroubleshooting.Reporter+ReporterLogException: {reporter_exception}
       at CoveTroubleshooting.Reporter.ThrowLogReportN(Exception TheException, Object[] TheParams)
       at CoveUtil.ErrorReporting..(Exception )
    
    

    and

    Error report file saved to:
    
    C:\ProgramData\StableBit Scanner\Service\ErrorReports\ErrorReport_2017_07_18-03_51_31.9.saencryptedreport
    
    Exception:
    
    System.Net.Sockets.SocketException (0x80004005): Only one usage of each socket address (protocol/network address/port) is normally permitted
       at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
       at System.Net.Sockets.Socket.Bind(EndPoint localEP)
       at #J5d.#nwd..ctor()
       at #J5d.#owd.#Bpf()
       at CoveUtil.ReportingAction.Run(Action TheDangerousAction, Func`2 ErrorReportExceptionFilter)
    

     Anything to worry about? Would this stop any error reporting?

     

     Win 2012r2E, and as of today DP 773 beta.
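     For what it's worth, the exception is a socket bind conflict - two things trying to listen on the same address/port at boot. If it helps to see who owns which listening port once the services are up, a sketch (Get-NetTCPConnection is available on 2012r2):

        # Map listening TCP ports to their owning processes
        Get-NetTCPConnection -State Listen |
            Select-Object LocalAddress, LocalPort,
                @{ n = 'Process'; e = { (Get-Process -Id $_.OwningProcess).ProcessName } } |
            Sort-Object LocalPort | Format-Table -AutoSize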
