Covecube Inc.

JasonC

Members
  • Content Count: 86
  • Joined
  • Last visited
  • Days Won: 6

JasonC last won the day on December 27 2017

JasonC had the most liked content!

About JasonC
  • Rank: Advanced Member

Recent Profile Visitors

598 profile views
  1. Recently I had an issue that caused me to move my array over to another disk. I think the issue caused a problem, as 5 disks show up with damage warnings. In each case the damaged block is shown to be the very last one on the disk. I assume this isn't some weird visual artifact where blocks are re-ordered to be grouped? Anyway, clicking on the message about the bad disk of course gives a menu option to do a file scan. I decided to go ahead and do this, and of the 5 disks, I think only one actually appears to do anything: you can start to see it trying to read all the files on the disk. The oth…
  2. I have just under 10TB free on my array, but any time I try to copy something to it, I get a "not enough space free" error. This happens even on very small files. I'm not sure how long it's been doing this; just accessing files for read (which is what I'm doing most of the time) is fine. I haven't rebooted yet because the machine is in use, so I thought I'd check for other ideas in the meantime. I forgot to mention, I seem able to create folders fine, so it's not locked into a total read-only state. I just can't seem to write if a free space check is performed, I guess?
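
     One quick sanity check is to compare what Windows itself reports for the pool's free space against what the copy dialog claims. A minimal sketch, assuming Python is available and using "P:\" as a placeholder for the pool's actual drive letter:

     ```python
     # Minimal sketch: print what the OS reports for the pool's capacity.
     # "P:\\" is a placeholder for the pool's actual drive letter.
     import shutil

     total, used, free = shutil.disk_usage("P:\\")
     print(f"total: {total / 2**40:.2f} TiB")
     print(f"used:  {used / 2**40:.2f} TiB")
     print(f"free:  {free / 2**40:.2f} TiB")
     ```

     If this reports ~10TB free while copies still fail, the problem is likely in how free space is checked at write time rather than in the pool's actual capacity.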
  3. Right, except the problem is chkdsk wants to run on an underlying disk (2 disks, actually), not the pool. And it wants to unmount the disk to be able to perform fixes, i.e. I need to remove the disk from the pool, since it would be unmounted, or I need to have it run at reboot before the disk is mounted and has files in use by the pool. Not all chkdsk operations can be performed on a mounted file system, basically. So while it's not directly a DrivePool operation, because it's affecting underlying NTFS structures in a repair scenario, it impacts DrivePool.
  4. I need to run chkdsk on a few drives in my pool, and I'll need to have them be offline to do so. Rather than take the entire machine down, can I remove a disk, check all the check boxes, and then, after I run the repairs, re-add the disk to the pool? Will the contents just get merged as if it had never left? Thanks!
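
     For reference, once a disk is out of the pool the repair itself can be scripted. A sketch, assuming an elevated prompt and using "E:" as a placeholder for the underlying disk's letter (this is plain chkdsk, not anything DrivePool-specific):

     ```python
     # Sketch: run an offline chkdsk repair on an underlying disk that has
     # been removed from the pool. /f fixes errors; /x forces a dismount
     # first. Must run from an elevated (administrator) process, and the
     # volume must not be the system drive. "E:" is a placeholder.
     import subprocess

     result = subprocess.run(
         ["chkdsk", "E:", "/f", "/x"],
         capture_output=True,
         text=True,
     )
     print(result.stdout)
     print("chkdsk exit code:", result.returncode)
     ```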
  5. As much as I'd like to have a 256PB disk, it's a little disconcerting. It's also doing some odd things to the metrics... for instance, it shows unduplicated file system storage as 256PB, but it still shows free space of the pool correctly. The rest of Windows is seeing the disk correctly (Disk Management sees the size correctly, etc.). I am on the 2.3.0.1244 beta, so if it's that, that's fine, as long as it's just a visual error.
  6. Well, found the problem. I didn't really put together what having an I/O default in the app settings implied... and I just found that the I/O settings for the drive were set to 1.5 Mbps. I must have set it to that when I first installed and configured, and didn't realize what I had done at the time. So... I've also just reverted to the generic key, so I'll see how that goes. Thanks!
  7. The main reason it seemed like a good idea to use my own key is that the rclone documentation specifically mentions using your own key, as the rclone internal key is low-performance. Seeing the capped performance I was getting, I had just figured something similar was happening with CloudDrive. I'll check the logs and see if I'm still having the slow uploads tomorrow. Thanks!
  8. I've configured Google Drive to use a custom API key, I believe, based on Q7941147. It connects fine, so I think that's ok, but I'm still only seeing 1.54 Mbps upload. While I don't have symmetric fiber, I do have a lot more bandwidth than that (I've capped the upload at 20 Mbps, but really it doesn't matter so far!). It also seems to be nailed to exactly that number, which makes me feel like it's an enforced cap. What kind of performance should I be expecting? I can get much more (in line with what I'm expecting) when I use rclone, which I'm also using OAuth for (different credentials). Do…
  9. Ok, so I just noticed that I wasn't seeing the whole UI, and once expanded I could manage the cache. I've flushed the cache and told Plex to stream something loaded on the cloud drive. It still doesn't look like anything is pulling from the cloud (no download indicated). It could be system-level caching, or maybe I'm expecting it to behave differently than it will?
  10. I'm trying out CloudDrive with Google Drive, and have created a small drive for testing. If I go this route, I'll have a lot of stuff there, including some things I'd like to have served by Plex. I'd like to test the performance when the content has to be pulled from the cloud. I haven't put enough into the cloud drive to overflow the cache yet. Will it just sit in the cache forever, or do things eventually age out? Or failing that, can I flush the cache, so I can force CloudDrive to have to pull from GDrive to fulfill a request? Thanks!
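
     One rough way to tell whether a read actually came from the provider is to time a large sequential read and compare the throughput against the internet downlink; anything far above link speed had to be served locally. A generic sketch, with the path as a placeholder for a big file on the cloud drive (the OS page cache can skew this too, as noted above):

     ```python
     # Rough sketch: time a sequential read of a file on the cloud drive.
     # Throughput far above the internet downlink implies the data was
     # served locally (CloudDrive cache or OS cache) rather than pulled
     # from Google Drive. The path is a placeholder.
     import time

     path = r"D:\test\large_file.bin"  # placeholder
     chunk = 4 * 1024 * 1024  # 4 MiB reads

     start = time.monotonic()
     total = 0
     with open(path, "rb") as f:
         while True:
             data = f.read(chunk)
             if not data:
                 break
             total += len(data)
     elapsed = time.monotonic() - start
     print(f"{total / 2**20:.0f} MiB in {elapsed:.1f}s = "
           f"{total * 8 / elapsed / 1e6:.1f} Mbps")
     ```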
  11. I'm thinking about temporarily moving all the data off of a local disk onto my pool disk, reformatting the disk, and pulling it all back down. Doing so could easily fill the SSD I am using as a cache via the SSD Optimizer. I know it's supposed to offload files as it approaches whatever the threshold is, but if I'm writing continuously, I could see it managing to fill the disk. What happens as I approach that state (the SSD starts getting close to full, but write requests are still coming in)? Thanks!
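
     A crude client-side workaround, if the balancer can't keep up: pause the bulk copy whenever the cache SSD's free space drops below some floor, giving the SSD Optimizer time to drain files to the archive disks. A sketch, where "S:" and the 50 GiB floor are placeholders rather than DrivePool settings:

     ```python
     # Sketch: throttle a bulk copy so the SSD cache drive never quite
     # fills. "S:" and the floor value are arbitrary placeholders.
     import shutil
     import time

     CACHE_DRIVE = "S:\\"
     FLOOR = 50 * 2**30  # pause when less than ~50 GiB is free

     def wait_for_cache_headroom() -> None:
         """Block until the cache drive has free space above the floor."""
         while shutil.disk_usage(CACHE_DRIVE).free < FLOOR:
             print("SSD cache nearly full; waiting for it to drain...")
             time.sleep(30)

     # Call wait_for_cache_headroom() between file copies in the bulk move.
     ```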
  12. So, trying to fix some issues I have related to mounting a pool with Windows sharing to a Linux client, it occurred to me that perhaps it would work better if I used NFS. Which leads me to:
      - Any issues doing this?
      - I have this on a Windows 10 machine, so no native NFS server. Any recommended NFS server software I can run on Win10 that's known to work well for this sort of thing?
      Thanks!
  13. I have an automated process that I moved from Windows to Linux, and since that move I get occasional file-move errors, which appear to come from new files not becoming visible quickly enough. I can't quite tell where the source of this latency is, though it never seemed to happen when the process ran on another Windows machine. I've adjusted the caching settings in the mount point to what I thought would be optimal, but I'm not sure whether I've got them set right. If someone has a set of mount options that works really well for them, I'd love to see them. I'm asking here, beca…
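
     Whatever mount options end up being used, one way to paper over the visibility lag on the Linux side is to poll until the file actually appears before moving it, instead of failing immediately. A generic sketch; the timeout and poll interval are arbitrary choices:

     ```python
     # Sketch: retry-based workaround for files that take a moment to
     # become visible on a mounted share. Timeout/interval are arbitrary.
     import os
     import shutil
     import time

     def move_when_visible(src: str, dst: str,
                           timeout: float = 30.0,
                           interval: float = 0.5) -> None:
         deadline = time.monotonic() + timeout
         while not os.path.exists(src):
             if time.monotonic() >= deadline:
                 raise FileNotFoundError(f"{src} never became visible")
             time.sleep(interval)
         shutil.move(src, dst)
     ```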
  14. Activity logging?
      So I just went to enable that. It was already enabled, with a valid email address that passed the test message. I searched my mail and didn't get a notification. Best guess: does it only notify if a disk disappears _during_ operations? So, if it starts up with a disk missing from the pool, would it notify? That was the scenario I was in, and I think that might be why it didn't notify me? Nope, not that. It just didn't send one, as far as I can tell, when the problem first occurred. I did find one when I searched my email, but it was sent right after I had rebooted again, but before I reali…
  15. Lots of modern drives have protections built in to automatically move the drive heads to a safe zone in the event of power loss. That said, power fluctuations, which a UPS would also normally handle (via its power-conditioning features), could cause something weird to happen that leads to a head crash. Frankly, it sounds like you're pretty lucky your electronics didn't fry. Even if your power is reliable, you should have any computer equipment you really care about not losing on a UPS. UPSes do more than just provide power when the power goes out. They often/always(?) on the protected outlets…