Everything posted by JasonC

  1. Recently I had an issue that caused me to move my array over to another disk. I think the issue caused a problem, as 5 disks show up with damage warnings. In each case the damaged block is shown to be the very last one on the disk. I assume this isn't some weird visual artifact where blocks are re-ordered to be grouped? Anyway, clicking on the message about the bad disk of course gives a menu option to do a file scan. I decided to go ahead with this, and of the 5 disks, only one actually appears to do anything: you can see it start trying to read all the files on the disk. For the others, the option just returns almost instantly, which makes no sense to me. They don't error or fail, or otherwise indicate a problem; they all just act like there are no files on those disks to scan (there are). The disks vary in size: 2x4TB, and the others are 10TB. One of the 4TB disks scanned fine and indicated no problems, but it's not clear why it would work and the others wouldn't. Ideas? Thanks!
  2. I have just under 10TB free on my array, but any time I try to copy something to it, I get a "not enough space free" error, even on very small files. I'm not sure how long it's been doing this; just accessing files for read (which is what I'm doing most of the time) is fine. I haven't rebooted yet because the machine is in use, so I thought I'd check for other ideas in the meantime. I forgot to mention: I seem able to create folders fine, so it's not locked into a total read-only state. I just can't seem to write if a free space check is performed, I guess? Thanks!
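One thing worth ruling out in a situation like this: the pool's free space is an aggregate, but any single write still has to fit on some underlying disk (subject to balancer settings). A quick sketch of checking each pool member directly, assuming hypothetical drive letters D: and E: for the underlying disks:

```shell
# Check free space on each underlying pool disk (D: and E: are
# placeholders for your actual pool members). Run from an elevated prompt.
fsutil volume diskfree D:
fsutil volume diskfree E:
```

If one member is nearly full and balancing can't move data off it, writes can fail even though the pool as a whole shows plenty of room.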
  3. Right, except the problem is that chkdsk wants to run on an underlying disk (2 disks, actually), not the pool. And it wants to unmount the disk to be able to perform fixes, i.e. I need to remove the disk from the pool, since it would be unmounted, or I need to have it run at reboot before the disk is mounted and has files in use by the pool. Not all chkdsk operations can be performed on a mounted file system, basically. So while it's not directly a DrivePool operation, because it's affecting underlying NTFS structures in a repair scenario, it impacts DrivePool.
  4. I need to run chkdsk on a few drives in my pool, and I'll need to have them offline to do so. Rather than take the entire machine down, can I remove a disk, check all the check boxes, and then, after I run the repairs, re-add the disk to the pool? Will the contents just get merged like it had never left? Thanks!
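For reference, the chkdsk modes being discussed differ in whether the volume must be dismounted; a rough sketch from an elevated prompt (the drive letter D: is a placeholder):

```shell
# Online scan of an NTFS volume; no dismount needed, but it cannot
# make every repair (NTFS only, Windows 8 / Server 2012 and later):
chkdsk D: /scan

# Full repair; needs exclusive access to the volume. /x forces a
# dismount first, invalidating any open handles:
chkdsk D: /f /x

# If the volume cannot be dismounted (e.g. it is in use by the system),
# chkdsk offers to schedule the check for the next reboot instead.
```

This is why the disk either has to leave the pool or be checked at boot: the repair modes need the file system offline.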
  5. As much as I'd like to have a 256PB disk, it's a little disconcerting. It's also doing some odd things to the metrics... for instance, it shows unduplicated file system storage as 256PB, but it still shows the free space of the pool correctly. The rest of Windows is seeing the disk correctly (Disk Management sees the size correctly, etc.). I am on the beta, so if it's that, that's fine, as long as it's just a visual error.
  6. Well, found the problem. Didn't really put together what having an I/O default in the app settings implied...and just found that the I/O settings for the drive are at 1.5Mbps. I must have set it to that when I first installed and configured, and didn't realize what I had done at the time. So...I've also just reverted to the generic key, so I'll see how that goes. Thanks!
  7. The main reason it seemed like a good idea to use my own key is that the rclone documentation specifically mentions using your own key, as the rclone internal key is low performance. Seeing the capped performance I was getting, I had just figured that it was something similar happening with CloudDrive. I'll check the logs and see if I'm still having the slow uploads tomorrow. Thanks!
  8. I've configured Google Drive to use a custom API key, I believe, based on Q7941147. It connects fine, so I think that's ok, but I'm still only seeing 1.54Mbps upload. While I don't have symmetric fiber, I do have a lot more bandwidth than that (I've capped the upload at 20 Mbps, but so far that really doesn't matter!). It also seems to be nailed to exactly that number, which makes me feel like it's an enforced cap. What kind of performance should I be expecting? I can get much more (in line with what I'm expecting) when I use rclone, which I'm also using OAuth for (different credentials). Do I need to unmount and remount the drive for the new key to take effect? I deleted the original connection already, which the drive was built on, so the only connection I can see is the one I made since I modified the json file. I'm reasonably sure the new connection is good, since I didn't quote the strings originally, so CloudDrive threw an error. Once I fixed that, it seemed fine. But I am not clear on whether the drive itself might still have cached the connection it was originally made with. It didn't seem to error when I deleted the original and made a new one, so I got a little suspicious as to whether it's using the new one or not. I don't think I'll be able to consider CloudDrive viable with Google Drive if I can't get more upload performance than that, though! Thanks!
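For comparison, on the rclone side a custom OAuth client is wired in through the remote's section of rclone.conf; a sketch with placeholder values (the remote name and credentials here are made up):

```
[gdrive]
type = drive
client_id = <your-oauth-client-id>.apps.googleusercontent.com
client_secret = <your-oauth-client-secret>
scope = drive
```

rclone's documentation recommends a personal client ID precisely because its built-in one is shared and heavily rate-limited, which is the behavior being compared against here.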
  9. Ok, so I just noticed that I wasn't seeing the whole UI, and once it was expanded I could manage the cache. I've flushed the cache and told Plex to stream something loaded on the cloud drive. It still doesn't look like anything is pulling from the cloud (no download indicated). It could be system-level caching, or maybe I'm expecting it to behave differently than it will?
  10. I'm trying out CloudDrive with Google Drive, and have created a small drive for testing. If I go this route, I'll have a lot of stuff there, including some things I'd like to have served by Plex. I'd like to test the performance when the content has to be pulled from the cloud. I haven't put enough into the cloud drive to overflow the cache yet. Will things just sit in the cache forever, or do they ever stale out? Or failing that, can I flush the cache, so I can force CloudDrive to pull from GDrive to fulfill a request? Thanks!
  11. I'm thinking about temporarily moving all the data off of a local disk onto my pool disk, reformatting the disk, and pulling it all back down. Doing so could easily fill the SSD I am using as a cache via the SSD Optimizer. I know it's supposed to offload files as it approaches whatever the threshold is, but if I'm writing continuously, I could see it filling the disk. What happens as I approach that state (the SSD gets close to full, but write requests keep coming in)? Thanks!
  12. So, trying to fix some issues I have related to mounting a pool shared from Windows to a Linux client, it occurred to me that it might work better if I used NFS. Which leads me to: - Any issues doing this? - I have this on a Windows 10 machine, so no native NFS server. Any recommended NFS server software I can run on Win10 that's known to work well for this sort of thing? Thanks!
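If an NFS server does get set up on the Windows side, the Linux half is just a mount; a sketch of an /etc/fstab entry, with placeholder server name, export path, and mount point:

```
# "winserver" and the paths are placeholders; vers=3 and hard are
# conservative choices that tend to work with third-party NFS servers.
winserver:/pool  /mnt/pool  nfs  vers=3,hard,rsize=1048576,wsize=1048576  0 0
```

Whether NFS actually behaves better than Samba here depends mostly on the server implementation chosen on the Windows side.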
  13. I have an automated process that I moved from Windows to Linux, and since that move I get occasional file move errors, which appear to be from new files not becoming visible quickly enough. I can't quite tell where the source of this latency is, though it never seemed to happen when the process ran on another Windows machine. I've adjusted the caching settings in the mount point to what I thought should be optimal, but I'm not sure I've got it right. If someone has a set of mount options that works really well for them, I'd love to see them. I'm asking here because I think it's an interaction with the pool underlying the share. Thanks!
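A sketch of the kind of cifs mount options in question, with placeholder share and paths. cache=none disables the client's page cache for the share and actimeo=0 disables attribute/directory caching, which trades throughput for seeing newly created files sooner:

```
# /etc/fstab entry; server, share, mount point, and credentials file
# are placeholders.
//winserver/pool  /mnt/pool  cifs  credentials=/etc/smb-cred,vers=3.0,cache=none,actimeo=0  0  0
```

With caching disabled, every stat goes to the server, so this is worth confining to workloads where freshness matters more than speed.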
  14. JasonC

    Activity logging?

    So I just went to enable that. It was already enabled, with a valid email address that passed the test message. I searched my mail and didn't find a notification. Best guess: does it only notify if a disk disappears _during_ operations? So if it starts up with a disk missing from the pool, would it notify? That was the scenario I was in, and I thought that might be why it didn't notify me. Nope, not that. It just didn't send one, as far as I can tell, when the problem first occurred. I did find one when I searched my email, but it was sent right after I had rebooted again, before I realized on my own that the disk was gone, because 2 minutes after I rebooted (and it sent that first email) it sent the re-connect message (because I had re-attached in Hyper-V). The problem had been persisting for at least several days, so there was plenty of time to send a message, but I don't appear to have gotten one. *shrug*
  15. Lots of modern drives have protections built in to automatically move drive heads to safe zones in the event of power loss. That said, power fluctuations, which a UPS would also normally handle (power conditioning features), could cause something weird to happen that leads to a head crash. Frankly, it sounds like you're pretty lucky your electronics didn't fry. Even if your power is reliable, you should have computer equipment you really care about not losing on UPSes. A UPS is more than just providing power when the power goes out. On the protected outlets, they often (always?) disconnect your equipment from direct line power: line power charges the batteries, and the batteries power the gear. Think of them as an electrical buffer. But as Drashna says, the only way to be completely safe in questionable circumstances is to disconnect from line power.
You can put everything on 2 disks; there are numerous ways to do this. In software, you could just have programs that make sure your disk is duplicated to another disk all the time; DrivePool itself supports doing this. You can create a RAID array (RAID 1 specifically: mirrored disks), where literally everything that happens to one disk happens to the other. You can do this right inside Windows, or use specialized hardware (NAS gear usually supports this configuration if you so desire). But of course it's not cheap: you are buying twice the disks for the same capacity. Businesses do this when uptime is the most important thing and the budget is there.
But ultimately, you want backups that are not connected to the hardware all the time, and ideally are stored in a physically different location. The easiest way to do this for most people is cloud backups, but it may take a long time to get backed up if you don't have a fast upload on your Internet connection. Historically, IT did this with tape backups rotated offsite. Most people aren't going to go to that trouble; it's much simpler and cheaper, though usually slower, to pay for cloud backups. Good tape backup gear runs from pretty expensive to stupidly expensive.
  16. JasonC

    Activity logging?

    Oh, it hadn't occurred to me this was a bug. I thought it must have been something I did. I'll have to get that installed. Thanks!
  17. JasonC

    At-Rest encryption?

    Is there any way to do any sort of at-rest encryption with DrivePool? I'm trying to visualize how this would even work; so far everything I can think of would involve manual intervention, which isn't totally out of the question, just a little annoying with Win10 forced reboots. Thanks!
  18. Is this over a network? What is DrivePool showing in the activity screen when you are copying to the pool? If it's over a network, have you tried copying directly to a disk over the network to see if the problem happens that way (trying to isolate whether the problem is indeed DrivePool, or something else in your setup)?
  19. JasonC

    Activity logging?

    So, I still like the log thing if I can get it, but I did find the issue. I realized that I was having a pretty widespread issue with a lot of files missing; I just didn't notice until I went into a high-level library. This made me check for a disk issue. It turned out to be a combination of a VM problem (for some reason one disk didn't get mounted to the VM at startup) and the fact that somewhere along the line the notification settings I had in StableBit Scanner were lost, so I wasn't getting notifications about a disk issue. Once I re-mounted the disk into the VM, my stuff all came back.
  20. JasonC

    Activity logging?

    I realize this is probably asking a bit much, but is there any kind of rolling activity log option for DrivePool? I ask because sometimes I question my sanity, like when files go missing. But they aren't things I've touched often, so I have no idea when they went away, or if I did it, or what. Case in point: I told Plex to rescan a folder that contains unusual variants of movies. I generally don't ever touch anything in here once it's in there. The re-scan suddenly tells me a file is gone. I go look... yep, it's gone (but the folder is still there). But it's not something I would have deleted. I don't know where it went, or how long it's been gone. I'll scan my FS and hope it's an accidental drag and drop, but I'm not hopeful. Since I have Plex scan my folders regularly, it had to be a recent removal. So... it'd be nice if I could just have a high-level log tracking operations like that (log delete operations). Thanks!
  21. Has anyone ever seen anything odd with a file named like this: "C:\ProgramData\Microsoft\Windows Defender\Support\MpWppTracing-20190202-183619-00000003-ffffffff.bin"? Odd defined as continual writing for extended periods at fairly high speeds (relative to any other Windows install). Obviously it's some sort of Defender-related file, but on most of my machines not much happens with that file (1-2KB/sec of writes). On my machine running DrivePool, I'll go through phases where it's writing to that file continually at 300-400KB/sec, sometimes more. It's only a 4K file, so that seems a little excessive, and it's particularly annoying because I have all Defender-related things turned off on this machine. Obviously something Defender-related is still doing something, though. Anyway, since this machine doesn't do anything but run DrivePool for me, I thought I'd check here. Thanks!
  22. Well, so a new wrench in the works, but maybe a new avenue to explore. I haven't yet changed the TCP auto-tune, partly because I've gotten sick of Windows 10 reboots, so I've been migrating a lot of my things over to Linux. I'm still seeing the pauses in file operations, and I've had the index turned off for a long time, so I know it's not that now. On the Linux side, I think I've got my mount points correctly set to not do any caching, presumably similar to how I've got the Windows client set, but I'm still seeing some odd behavior with identical tasks running via Linux/Samba mounts, but otherwise the same software on the Linux side. I did just notice something, though... I think whenever I get those weird pauses, I get this in the app log on the machine running DrivePool: I've just started investigating it, and haven't proven a correlation to myself beyond a couple of data points, but the early things I've found make me feel like this could be related to the pause behaviors I'm seeing on the clients. So two things: - If you have run into that EseDiskFlushConsistency and can tell me whether it's something I should look into addressing, or it's not anything that impacts DrivePool... - If you have any suggestions with regard to configuring a Linux/Samba mount of a share backed by DrivePool (I have cache=none set; I'm not 100% sure that's the correct equivalent to the Windows SMB caching settings, but it looks like it from the docs), that would be great too. Thanks!
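On the Windows client side, the rough equivalent of cache=none is turning down the SMB client's metadata caches; a sketch of the documented LanmanWorkstation values as a .reg fragment (zeroing the lifetimes disables those caches, at the cost of more round trips to the server):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"FileInfoCacheLifetime"=dword:00000000
"FileNotFoundCacheLifetime"=dword:00000000
"DirectoryCacheLifetime"=dword:00000000
```

These settings affect every SMB share the client touches, so they are best applied as a test and reverted if they do not help.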
  23. Thanks, I'll take a look at the other options you've listed; I'll want to read up on the potential impacts of changing the TCP auto-tuning before I put that in place. I've been running without the indexing service enabled for some time now, and I still see the weird pause. In particular, I see it at the end of file operations, usually when I'm moving folders and files to other places. I get this long pause at 99%, like there is some cleanup or ack that the client is waiting for before it marks the operation complete. Long being defined here as 5 seconds to, say... 30? I don't time it, but it's typically just a file move between folders on the same target, so I don't know what it could be doing. I'll try and cap a video of it sometime just to show what I see.
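The TCP auto-tuning setting mentioned above is inspected and changed with netsh from an elevated prompt; a sketch (it affects all TCP connections on the machine, so it is worth testing both ways):

```shell
# Show the current receive-window auto-tuning level:
netsh interface tcp show global

# Disable auto-tuning (fixed receive window) as an experiment:
netsh interface tcp set global autotuninglevel=disabled

# Restore the default afterwards:
netsh interface tcp set global autotuninglevel=normal
```

Disabling it can mask driver or middlebox problems but usually hurts throughput on fast links, which is why reading up first is sensible.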
  24. Well, I wouldn't say exactly that; I'm not getting any errors. It's just slower than I expect, or has a pause. It does succeed, though. Which is why I hadn't disabled the indexing service: I actually use the remote search capability. So it'll be a little unfortunate to lose indexed searches, if that is it. But, for science, I'll turn off the indexer and see if it addresses those strange hangs.
  25. I've always assumed this was DrivePool, but I'm double checking/asking if there are any mitigations. I've noticed that with a couple of fairly common operations, I'll run into pauses on the DrivePool disk that I don't typically see elsewhere. Primarily I notice this when I create a new folder, rename a folder, or delete a lot of things. When creating or deleting, there is a several-second pause; same with a rename. It's somewhat annoying because I'll often get an error because of the pause: I'll create a new folder, rename it, and try to enter it, but the pause between the create and the rename means my system tried to enter "New Folder", not the name I just renamed it to. With deleting, it'll stick at 99%, sometimes for quite some time. I'm guessing it's something to do with DrivePool updating its indexes before telling the OS the operations are complete, but if there's a way to improve performance of this, I'd be interested in knowing what it is. Or especially if it's not supposed to do that. Thanks!