Posts posted by Jaga

  1. 7 hours ago, electo said:

    I understand that this is a niche usage and therefore not a priority. Will keep an eye on the ticket and hope for an update some time in the future.

    And CloudDrive is relatively new, so you never know which requested features might make it in.

  2. I couldn't tell if all the files made it to your server, and I don't want to overload it with duplicates.  Let me know if anything didn't make it up and I'll re-test/re-send the lot of 'em with BD off.  :)

  3. Just noticed this alert on another piece of software when it updated.  Sounds like it could be related to the issues people are seeing in this topic.

    Quote

    - MSW: Microsoft broke the ICopyHook interface in Windows 1803. Until Microsoft fixes the bug in Windows, drag & drop from FileZilla into Explorer will not work on Windows 1803.


  4. You can use a product like Primocache on the underlying physical disks as a read cache.  It's a great program; I've been using it for many years now.  It just needs sufficient RAM (or a good SSD as a Level 2 cache) to help the system.

  5. Got it, not terribly worried about it either way.  The error message hasn't reappeared in the event log since.  And stopping the CloudDrive service before running a sync with SnapRAID means its data files are closed for writes, eliminating that possible problem.

    Thanks!


  6. Update:  Disabling the VPN and going with a wired Gigabit Ethernet connection only had no effect on the error message, which continued to count up even after a cold boot.

    Also verified that BitDefender is allowing clouddrive.service.exe through the firewall.

  7. Done and done (Contact request #3784232).  I set the Troubleshooter to All data; it was a rather large upload.

    BitDefender complained at the end that several log files, the Troubleshooter's .exe, and select other files were infected with "Atc4.Detection", and quarantined them.  If it turns out the upload didn't make it to your server, let me know and I'll re-run with A/V disabled.  Not sure if BD is throwing false positives, but I won't blindly run exes without BD on unless I'm sure.

    In the meantime, I'm going to disable the VPN temporarily and do a cold boot, to see how CloudDrive responds.

  8. Saw CloudDrive throwing this error a few days after adding an FTP drive.  There's no actual data on it currently, but it's pinning 76.4 MB and caching 75.4 MB.

    I/O error
    Cloud drive My_FTP (E:\) is having trouble uploading data to FTP.
    
    Error: Not enough data.
    
    This error has occurred 1,937 times.
    
    Make sure that you are connected to the Internet and have sufficient bandwidth available
    
    Your data will be cached locally if this persists.

    The machine CloudDrive is on is connected to a 250/10 Mbit internet connection over a VPN, which is working fine.  The same configuration was used to set up the FTP cloud drive.

    The only part of this I can see that could be responsible is a "To upload" amount of 168 KB, which doesn't seem to want to go up to the FTP server.  I have yet to put any files on the drive, but I can if you'd like to see whether that clears the error.  I'm unsure if this is a normal result of having a perpetually empty cloud drive, or something else.  I'm not worried about it either way, unless it's actually connecting to the FTP server 1,937 times.

    Edit:  In the process of writing this post, the error count has risen by another 4, to 1,941 total.  It seems to be retrying vigorously but failing due to some limitation.

  9. I went and installed CloudDrive yesterday, just to get more of a feel for its inner workings, though I don't expect to use it much (most of my data is local, FTP'd down).

    I wouldn't have noticed it myself, since I don't spend a lot of time in the logs, but when my SnapRAID sync helper script kicked off this morning, it flagged two events from the Windows System Event Log (Windows 7 Pro configured as a server) in an email to me, and then terminated the sync.  Those Event Log entries were:

    • System/NTFS, ID: 57, "The system failed to flush data to the transaction log. Corruption may occur."
    • System/Disk, ID: 32, "The driver detected that the device \Device\Harddisk6\DR6 has its write cache enabled. Data corruption may occur."

    They were timestamped within 20 minutes of each other yesterday, right around when I first installed CloudDrive, set up a simple 10 GB FTP drive over SFTP, and assigned a drive letter to it.  CloudDrive had a chance to sync up/down and finish cleanly; no problems were witnessed.  No actual data was put on the volume.

    I allowed SnapRAID to do its sync today after investigating, but I wanted to let the team out here know what I saw and the effect it had on the Event Logs and SnapRAID's helper script.  For now, I have the helper script stop/start the CloudDrive service before/after running a sync, in an attempt to flush any cached or open files (a rough sketch of that wrapper is below).
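    A minimal sketch of what that wrapper does, written as a cmd batch file.  The service name "CloudDriveService" and the path "C:\SnapRAID\snapraid.exe" are assumptions for illustration; verify the real service name with "sc query" before using anything like this:

    @echo off
    rem Stop CloudDrive so its cache/data files are closed for writes,
    rem run the SnapRAID sync, then bring the service back up.
    rem "CloudDriveService" is an assumed service name - verify with:
    rem   sc query state= all | findstr /i clouddrive
    net stop "CloudDriveService"
    "C:\SnapRAID\snapraid.exe" sync
    if errorlevel 1 echo SnapRAID sync reported errors - check its log.
    net start "CloudDriveService"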

    If you want me to gather any more info or test anything in this config, I'd be glad to.

  10. 49 minutes ago, NickM said:

    The first thing you mentioned: the DP machine was already installed, and I updated the machine to 1803. I later uninstalled and reinstalled DP on this same machine after the update to 1803, and the BSODs kept occurring.

    I have two other Win 10 machines, one updated to 1803, the other not. The crash occurs when copying TO the DP machine from the Win 10 1803 machine, but not from the Win 10 ...1709 update??? machine.

    I can copy from MacOS to a shared pool folder without error.

    I can copy from a Windows Home Server 2011 machine to the shared pool folder without error.

    That's excellent info, thanks for clarifying.  I'm sure Christopher and Alex can use it to good effect.  W10 1803 as a client is doing something DP doesn't like on a W10 1803 DP server.

  11. 18 minutes ago, darkly said:

    Not sure what balancing you're talking about. I'm only using DrivePool with the partitions of my cloud drive (the configuration I described 2 posts above). The SSD would only be used as a cache for the letter-mounted drive (top-level pool)

    That's how the SSD cache works: DP "balances" the files off of that drive onto your pool drives, on a schedule according to what you set in the balancing area of DP.  It's a slight misnomer, but that's how the cache plugin works with DP.

    Anything 'caught' on the cache SSD when it fails would be lost.  So a more aggressive balancing schedule in DP (to move files from the cache SSD to your cloud partitions) would better protect against drive issues.

  12. 8 minutes ago, darkly said:

    Yeah, I probably need to do this. The only problem for me is that I'm regularly working with around a terabyte at a time, so I was hoping to use a 1 TB SSD. Two of those is getting $$$$$

    That sure would get expensive quickly, especially for SSDs of that size.  Depending on the type of data you're placing on the pool, NTFS volume compression might help a little, though I wouldn't count on it.  For now, probably just stick with 1 SSD and set balancing to happen immediately.  One minute, when you have that much space and other local volumes to copy off to, isn't too long.  You couldn't saturate network bandwidth enough to overflow the cache with immediate balancing.  (A quick compression example follows below.)
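    If you want to experiment with compression, it can be enabled per-folder from an elevated Command Prompt.  "D:\Data" below is a hypothetical folder on one of your pool drives; substitute your own:

    rem Compress an existing folder and everything under it.
    rem /c = compress, /s = recurse into subfolders, /i = ignore errors.
    compact /c /s:"D:\Data" /i
    rem Run compact /s:"D:\Data" again later (no /c) to see the compression ratio.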

    I will still do some write-cache testing with Primocache if I can get around to it.  The drawback there is that I can only manage about a 26 GB write-only cache in the software, so things like large backups would spill over and defeat the purpose without a highly aggressive write policy.  Smaller files and media files would still work fine.

  13. So I went and did some testing, and answered my own question.  With the right RAMdrive software (I used SoftPerfect RAMDisk 3.4.8, which is free) and Hard Drive Emulation turned on, it's very easy to add a RAMdrive into the pool and set it up as the SSD drive for the SSD Optimizer plugin.

    The only drawback is that the pool balancer doesn't immediately (on the fly) move files off that SSD to the larger archive drives, so that newer files coming into the pool have a place to land.  Basically, the SSD Optimizer allows any drive used as the SSD drive in the pool to count as a totally valid pool drive and hold on to files as long as it cares to.  That may be by design, but I didn't find that DP treats files on that SSD cache drive with any sense of true urgency; even with 100% immediate balancing turned on, it took an entire minute for it to check and start moving files off the RAMdrive I had in place for it.

    So from that perspective, I don't think I'd recommend a RAMdrive in place of the SSD cache drive, due to volatility and DP's non-urgency.  You might be better off just buying a larger SSD and using it as designed.  Or try something like Primocache with a write-only cache set on the underlying DP volumes, which I might test next for write performance.

    If you get around to reading this, Christopher: have you and the rest of the team done any testing with Primocache, to be able to say whether it's compatible with the rest of DP?

  14. Sounds like a good reason to make the cache SSD a RAID 1 mirror: two smaller but fast SSDs with some redundancy to protect the pool data.  It would also help with write speeds to the pool on concurrent transfers, given enough pool drives.

    That makes me wonder - could you also use a RAMdrive in place of the SSD cache drive, to act as the write buffer for DrivePool?  If it's just picking a logical volume as the cache volume..

  15. I've never heard of a SMART query crashing the OS or a drive somehow... you'd think it would just time out or return a bad dataset.

    What does Scanner show you for SMART data on all the drives it monitors?  If everything looks okay in its main panel, then SMART queries aren't the issue.  You can, however, tell Scanner to throttle SMART updates (just like throttling scans), and I admit I take advantage of this so as not to constantly spin up the disks if they are sleeping.  My current interval for queries is set to 60 minutes.

    Based on the size of the drives you have and their quality, the type of motherboard, and a guess at its age... the first thing I'd recommend is an uninstall/reboot/reinstall of Scanner.  Perhaps Christopher has a better version for you to try?  :unsure:

  16. Sounds like a Windows 10 build issue for DP at this point, based on everything posted.


    6 hours ago, NickM said:

    3. Copying a file from a Windows 10 machine that has NOT been updated to 1803 does NOT cause a BSOD, at least in my setup :/

    Nick - was this a test using DP installed on a machine running 1803 and sharing the pool folder, while doing copy/write tests from another W10 machine that was not running 1803?  Or did you install DP onto a non-1803 W10 machine to test?

  17. Have you tried "chkdsk /r /b"?  It will scan for unreadable sectors and re-evaluate sectors already marked as bad.  The report will at least tell you if some are marked bad, and at best it may recover questionable ones currently marked bad.  (A quick example follows below.)
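    For reference, a minimal run from an elevated Command Prompt, assuming the suspect drive is E: (chkdsk will ask to dismount the volume, or offer to schedule the scan at next boot for a system drive):

    rem /r locates bad sectors and recovers readable information.
    rem /b (NTFS only) re-evaluates clusters already marked bad; it implies /r.
    chkdsk E: /r /b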

    You can also click the + to the left of the drive in Scanner, and check out what Scanner thinks of the block map.  Anything unreadable to it should show up there.

  18. Just as a sidenote:  I increased duplication on a folder the other day and noticed, while watching it do the copies, that the Pool Performance stats didn't register any of the duplication copying.  It's probably that way because it's an internal routine, but it'd be nice to see just how fast duplication is being processed in MB/s.

  19. Sounds like the editor might be triggering an A/V alert, MadSquabbles.  Many AVs do that with trainers, editors, serial key generators, etc., like you mentioned.  They trigger the Generic.Trojan flag, which then kicks off the AV's quarantine routine, and that could easily be interfering with DP's routine (as a filter) and causing the BSOD.

    In your case, it could also be contention between Avira and Windows Defender, if they are both running at the same time.  W10 is normally smart enough to deactivate Defender when another AV is installed, but if not, deactivate Defender yourself.

    I think NickM is still having issues too.

  20. You might be encountering "thrashing", which is when Scanner is actively testing drive(s) at the same time data is being read/written on them.  There's a setting in Scanner (under the Throttling tab) to get around that, but it needs to be set to "High" sensitivity so that Scanner backs off and there isn't any fighting over drive resources.  It will drastically improve system responsiveness.

    I use SnapRAID too, and I have it set to sync on a totally different schedule, so that it never works while I have network or server activity going.  It kicks off at 3 AM and usually only runs for 10-15 minutes tops.  I then tell Scanner that it has a window from 4 AM to 8 AM (for surface testing) so that it can do its work without interruption, and the two never fight.  (A scheduled-task sketch follows below.)
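    For anyone building the same kind of schedule, here's a rough sketch using the built-in Task Scheduler from an elevated prompt.  The task name and the "C:\SnapRAID\snapraid.exe" path are placeholders:

    rem Create a daily 3:00 AM SnapRAID sync task that runs as SYSTEM.
    schtasks /Create /TN "SnapRAID Sync" /TR "\"C:\SnapRAID\snapraid.exe\" sync" /SC DAILY /ST 03:00 /RU SYSTEM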

  21. PIA isn't an issue, I use that as well.

    Sounds like you might need help with DrivePool beyond what I can supply.  It's clearly something with the build you have installed that's creating the bluescreens, since a normal network share worked fine for you.  The only thing I can think of offhand is the NIC drivers, and whether they're doing something funky when accessing network shares.  I think (as an example) the Killer Network NICs do some funky low-level stuff that might cause issues.

    While you wait for DP support to reply (usually Christopher), perhaps look into which NIC(s) you have and what drivers they use?

  22. 9 hours ago, MangoSalsa said:

    x, are we talking about the actual installation directory? Otherwise, I haven't configured anything in regards to a database location. 

    Yep - the default Plex data directory (for its database, thumbnails, etc.) is "%LOCALAPPDATA%\Plex Media Server" on Windows, which usually resides on the C: drive.  If you haven't moved it, you're good to go.  (A quick way to double-check is below.)
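    If you want to verify, Plex records a relocated data directory in the registry; if this query returns nothing, the default %LOCALAPPDATA% location is in use:

    rem Check whether the Plex data directory has been moved (value is absent by default).
    reg query "HKCU\Software\Plex, Inc.\Plex Media Server" /v LocalAppDataPath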

    I don't know about moving the Server 2012 Essentials folder(s), though, so I won't risk steering you wrong there.

  23. I haven't bluescreened using DP yet, but I have mine running on a Win7 server doing simple file sharing on a good-sized pool.  Bad Pool Header is usually either a RAM issue or, in my experience, something caused by anti-virus apps.  Run a memory test using the Windows 10 tool, and try disabling your Malwarebytes.

    A rule of thumb I have when enabling AV on a file/media server is to add an exception for the data storage volume(s) so they are never scanned.  Any files I add to the volumes in the pool were already checked when they were downloaded/saved to a local drive first.  I never download directly to the pool, so AV can be disabled for it.  (A Defender example follows below.)
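    As an example, if the server relies on Windows Defender, an exclusion for the pool volume can be added from an elevated prompt.  "P:\" is a hypothetical pool drive letter, and other AV products have their own exclusion settings:

    rem Exclude the pool volume from Windows Defender scanning.
    powershell -NoProfile -Command "Add-MpPreference -ExclusionPath 'P:\'"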

    Christopher might have more input if you can tell him which version of Malwarebytes you are running.  If I remember what I read correctly, they used to have issues with a couple of other AV products, one of which was BitDefender, and that one they worked around.

  24. Were you using Pool duplication (which duplicates everything), or folder duplication?  If it was folder, you might want to double-check to ensure none of the folders has anything except "x1" on it.

    And when you've checked those two things, click the up arrow (^) next to "Manage Pool" and choose Re-measure..., so that the pool can update itself.  It should kick off a duplication update at that point, if it hasn't already, to remove the duplicate files.
