Everything posted by JasonC

  1. Hello, I've been encountering an error when attempting to have multiple simultaneous downloads from a single Firefox client go to a network share of a DrivePool disk. When I attempt to start the second download, I usually (more than 90% of the time, but not always) get a "No More Files" error. This persists until the initial download is complete, and then it works fine. It doesn't happen if I save to a local disk. I'm just looking for possible causes, as DrivePool is the "unusual" bit of the setup here; I wouldn't normally expect to have a problem opening multiple files for writing at once on a Windows share. So I'm basically asking if anyone has seen this before and can attach it to being a DrivePool-related thing. I do need to do more testing with DrivePool out of the mix, it's just a bit of a chore to set up all of the various scenarios I can think of to narrow the cause down, so I haven't gotten that far yet (a repro sketch is below). I saw the thread "Beware of DrivePool corruption / data leakage / file deletion / performance degradation scenarios Windows 10/11" and it does make me wonder if I could be running into something related to what's described there. Thanks!
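    In case it helps anyone reproduce this, here's roughly the test I have in mind as a Python sketch; the server and share names are placeholders, not my real setup:

    ```python
    # Open several files for writing on the share at once, the way parallel
    # Firefox downloads would. Paths below are hypothetical placeholders.
    import threading

    SHARE = r"\\server\pool\incoming"  # assumption: a folder on the pool share

    def write_stream(n):
        # Hold the handle open and keep writing, like an in-progress download.
        try:
            with open(rf"{SHARE}\stream_{n}.part", "wb") as f:
                for _ in range(100):
                    f.write(b"\0" * 1024 * 1024)  # 1 MiB chunks
        except OSError as e:
            print(f"stream {n} failed: {e}")

    threads = [threading.Thread(target=write_stream, args=(n,)) for n in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```

    If the second and later streams fail against the pool share but all succeed against a plain local share, that would point the finger at DrivePool.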
  2. Is anyone else out there using Backblaze to do their backups? I've been experimenting with it, trying to maximize my backup performance, and it seems to be quite bad when using the pool as the backup source. Individual files back up at a decent speed, but there seems to be some kind of latency between small and moderate-sized files which really adds up over time. Backblaze thinks it can do 2-3TB a day from the raw speed, but in practice it's not approaching that at all. I'm wondering what other people's experience is with this.

    I'm currently testing mounting a drive to a letter and backing it up directly, and so far this seems to be doing significantly better. It's a little unfortunate, because I currently have 14 disks in my pool. Mounting to directories doesn't help, because from what I've seen Backblaze doesn't follow reparse points (which is how disks mounted to folders work; see the sketch below), so I will have to mount to drive letters if that's the only way to get optimal performance out of Backblaze. I've of course seen the arguments that this is better anyway, since you're likely to only have to replace a single disk, and identifying what's missing in a backup for a restore is more difficult if you're choosing across the whole pool rather than an individual disk. I think I worked that problem out by using Everything to index at the mount-point level, but it's a moot point if I need to use drive letters to get reasonable performance.

    Anyway, I wondered what other people's experiences with Backblaze might be, and if anyone knows how to optimize the configuration to get good performance out of Backblaze+DrivePool. While I'm on the subject, I'm not clear whether I should have DrivePool bypass file system filters while using Backblaze... I've read the examples of where you'd want to or not want to do so, but I still haven't wrapped my head around it well enough to decide whether it could be problematic for DrivePool. I suspect it probably doesn't matter right now, though, since I'm leaning towards just hitting the individual disks directly and excluding the pool disk. Thanks!
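    For anyone curious, this is a quick way to see which mount folders are reparse points; the mount-root path is a placeholder for wherever your disks are folder-mounted (Windows-only):

    ```python
    # Flag which of the pool's disk mount folders are reparse points, i.e.
    # folder-mounted volumes that Backblaze reportedly won't follow.
    import os
    import stat

    MOUNT_ROOT = r"C:\DrivePool"  # hypothetical folder holding the disk mounts

    for name in os.listdir(MOUNT_ROOT):
        path = os.path.join(MOUNT_ROOT, name)
        # follow_symlinks=False so we stat the mount point itself,
        # not the volume behind it. st_file_attributes is Windows-only.
        attrs = os.stat(path, follow_symlinks=False).st_file_attributes
        if attrs & stat.FILE_ATTRIBUTE_REPARSE_POINT:
            print(f"{path} is a reparse point (mounted volume)")
    ```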
  3. Hmmm, interesting idea. I have PrimoCache; I thought it only worked with memory though, so I'll have to investigate that possibility. The disk itself is actually sorta dedicated to caching operations; it's also where my rclone mounts do their caching.
  4. Title is the question, pretty much. I have a disk I've added to act as a temporary SSD cache (related to my other recent questions) until I can get another disk added as a more permanent cache. That disk has other files on it which are (intentionally) not part of the pool. If/when I eject the disk from the pool, I know it copies pool data off; does it leave any other data on the disk intact, i.e. things not in the hidden pool folder? I assume it does, I'm just looking for confirmation. Thanks!
  5. With the duplication feature, is it possible to designate the target disks? I ended up having a Windows upgrade garble all the disks in an enclosure, so duplication wouldn't really help me if the duplicates were all on disks in that enclosure/connected to that controller. Thanks!
  6. I'd like to get the SSD Optimizer as close to a true write cache as I can, at least as far as keeping the amount of space used on the SSD as minimal as possible over the long term. From the comments in the "SSD Optimizer problem" thread, I've configured my Balancing settings to:
    - Activate with at least 2GB of data
    - Balance ratio of 100%
    - Run not more than every 30 minutes
    The SSD balancer itself I have configured as:
    - Fill SSD to 40% (around 400GB)
    - Fill archive drives to 90%, or until free space is at 300GB
    Does this seem like it should do roughly what I want? I was a little concerned the balancing ratio was going to cause all my archive disks to re-balance, though none of them hit either of the fill settings I have, so I think I'm ok there? I'm a little fuzzy on that ratio. With it set to 100%, does that mean it should trigger the move off the SSD as long as it's above 2GB of data, or is it still waiting for the 40% mark? The interaction of the ratio with the %-full triggers is where I'm unclear (a toy model of my reading is below). These are semi-short-term settings; I'm performing a data recovery at the moment, so I'm doing a lot of writes, which is why I want to cache to fast storage but offload ASAP. Thanks!
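    To make the question concrete, here's a toy model of the reading I'm hoping is right. This is purely my assumption about how the triggers interact, not DrivePool's actual logic, and the 1TB size is just my SSD:

    ```python
    # Toy model (my assumption, not DrivePool's documented behavior).
    # Sizes in GB.
    SSD_SIZE = 1000      # ~1TB SSD, so the 40% fill mark is ~400GB
    MIN_DATA = 2         # "activate with at least 2GB of data"
    FILL_TRIGGER = 0.40  # SSD Optimizer "fill SSD to 40%"

    def timed_pass_moves(gb_on_ssd):
        # Every 30 minutes: with the ratio at 100%, I read this as
        # "empty the SSD completely, as long as the 2GB floor is met".
        return gb_on_ssd if gb_on_ssd >= MIN_DATA else 0

    def immediate_pass_needed(gb_on_ssd):
        # Between timed passes: the 40% fill mark forces a balance right away.
        return gb_on_ssd / SSD_SIZE >= FILL_TRIGGER

    print(timed_pass_moves(5))         # 5 -> the whole 5GB moves off
    print(immediate_pass_needed(350))  # False: waits for the next timed pass
    ```

    If that reading is wrong, i.e. the timed pass also waits for the 40% mark, I'd like to know.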
  7. Recently I had an issue that caused me to move my array over to another disk. I think the issue caused a problem, as 5 disks show up with damage warnings. In each case the damaged block is shown to be the very last one on the disk. I assume this isn't some weird visual artifact where blocks are re-ordered to be grouped? Anyway, clicking on the message about the bad disk gives a menu option to do a file scan. I decided to go ahead and do this, and of the 5 disks, only one actually appears to do anything: you can see it start trying to read all the files on the disk. For the others, the option just seems to return pretty much instantly, which makes no sense to me. They don't error or fail, or otherwise indicate a problem; they all just act like maybe there are no files on those disks to scan (there are). The disks vary in size: 2x4TB, and the others are 10TB. One of the 4TB disks scanned fine and indicated no problems, but it's not clear why it would work and the others wouldn't. Ideas? Thanks!
  8. I have just under 10TB free on my array, but any time I try to copy something to it, I get a "not enough space free" error, even on very small files. I'm not sure how long it's been doing this; just accessing files for read (which is what I'm doing most of the time) is fine. I haven't rebooted yet because the machine is in use, so I thought I'd check for other ideas in the meantime. I forgot to mention, I seem able to create folders fine, so it's not locked into a total read-only state. I just can't seem to write if a free space check is performed, I guess? (A quick diagnostic sketch is below.) Thanks!
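    Here's the quick check I have in mind to confirm it's the free-space check failing rather than an actually full disk; the pool drive letter is an assumption:

    ```python
    # Compare what Windows reports as free space with an actual small write.
    import shutil

    POOL = "P:\\"  # hypothetical pool drive letter

    total, used, free = shutil.disk_usage(POOL)
    print(f"reported free: {free / 2**40:.2f} TB")

    try:
        with open(POOL + "space_test.tmp", "wb") as f:
            f.write(b"hello")
        print("small write succeeded")
    except OSError as e:
        print(f"small write failed: {e}")  # the 'not enough space' case
    ```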
  9. Right, except the problem is that chkdsk needs to run on an underlying disk (2 disks, actually), not the pool. And it wants to unmount the disk to be able to perform fixes, i.e. I need to remove the disk from the pool, since it would be unmounted, or I need to have it run at reboot before the disk is mounted and has files in use by the pool. Not all chkdsk operations can be performed on a mounted file system, basically. So while it's not directly a DrivePool operation, because it's repairing underlying NTFS structures, it impacts DrivePool.
  10. I need to run chkdsk on a few drives in my pool, and I'll need them to be offline to do so. Rather than take the entire machine down, can I remove a disk (checking all the check boxes), run the repairs, and then re-add the disk to the pool; will the contents just get merged like it had never left? Thanks!
  11. As much as I'd like to have a 256PB disk, it's a little disconcerting. It's also doing some odd things to the metrics... for instance, it shows unduplicated file system storage as 256PB, but it still shows the pool's free space correctly. The rest of Windows sees the disk correctly (Disk Management reports the size correctly, etc.). I am on the 2.3.0.1244 beta, so if it's that, that's fine, as long as it's just a visual error.
  12. Well, found the problem. I didn't really put together what having an I/O default in the app settings implied... and just found that the I/O settings for the drive were at 1.5Mbps. I must have set that when I first installed and configured, and didn't realize what I had done at the time. So... I've also just reverted to the generic key, so I'll see how that goes. Thanks!
  13. The main reason it seemed like a good idea to use my own key is that the rclone documentation specifically recommends it, as rclone's internal key is low performance. Seeing the capped performance I was getting, I had just figured something similar was happening with CloudDrive. I'll check the logs and see if I'm still having the slow uploads tomorrow. Thanks!
  14. I've configured Google Drive to use a custom API key, I believe, based on Q7941147. It connects fine, so I think that's ok, but I'm still only seeing 1.54Mbps upload. While I don't have symmetric fiber, I do have a lot more bandwidth than that (I've capped the upload at 20 Mbps, but so far it doesn't matter!). It also seems to be nailed to exactly that number, which makes me feel like it's an enforced cap. What kind of performance should I be expecting? I can get much more (in line with what I'm expecting) when I use rclone, which I'm also using OAuth for (different credentials). Do I need to unmount and remount the drive for the change to take effect? I've already deleted the original connection, which the drive was built on, so the only connection I can see is the one I made since modifying the json file. I'm reasonably sure the new connection is good, since I didn't quote the strings originally and CloudDrive threw an error; once I fixed that, it seemed fine (a quick validation sketch is below). But I'm not clear whether the drive itself has cached the connection it was originally made with. It didn't seem to error when I deleted the original and made a new one, so I got a little suspicious as to whether it's actually using the new one. I don't think I'll be able to consider CloudDrive viable with Google Drive if I can't get more upload performance than that, though! Thanks!
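    For what it's worth, this is roughly the sanity check I mean; the filename is a placeholder, not CloudDrive's actual config path:

    ```python
    # Minimal validity check for a hand-edited credentials JSON file.
    import json

    CONFIG = "provider_settings.json"  # hypothetical path

    try:
        with open(CONFIG, encoding="utf-8") as f:
            data = json.load(f)
        print("valid JSON, keys:", sorted(data))
    except json.JSONDecodeError as e:
        # Unquoted string values land here, like the error I hit at first.
        print(f"bad JSON at line {e.lineno}, column {e.colno}: {e.msg}")
    ```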
  15. Ok, so I just noticed that I wasn't seeing the whole UI, and once it was expanded I could manage the cache. I've flushed the cache and told Plex to stream something stored on the cloud drive. It still doesn't look like anything is pulling from the cloud (no download indicated). It could be system-level caching, or maybe I'm expecting it to behave differently than it will?
  16. I'm trying out CloudDrive with Google Drive, and have created a small drive for testing. If I go this route, I'll have a lot of stuff there, including some things I'd like to have served by Plex. I'd like to test the performance when the content has to be pulled from the cloud. I haven't put enough into the cloud drive to overflow the cache yet. Will it just sit in the cache forever, or do things ever age out? Or failing that, can I flush the cache, so I can force CloudDrive to pull from GDrive to fulfill a request? Thanks!
  17. I'm thinking about temporarily moving all the data off of a local disk onto my pool, reformatting the disk, and pulling it all back down. Doing so could easily fill the SSD I'm using as a cache via the SSD Optimizer. I know it's supposed to offload files as it approaches whatever the threshold is, but if I'm writing continuously, I could see it filling the disk. What happens as I approach that state (the SSD gets close to full, but write requests keep coming in)? Thanks!
  18. So, trying to fix some issues I have related to mounting a pool shared from Windows to a Linux client, it occurred to me that perhaps it would work better if I used NFS. Which leads me to:
    - Any issues doing this?
    - This is on a Windows 10 machine, so no native NFS server. Any recommended NFS server software I can run on Win10 that's known to work well for this sort of thing?
    Thanks!
  19. I have an automated process that I moved from Windows to Linux, and since that move I get occasional file move errors, which appear to come from new files not being visible quickly enough. I can't quite tell where the source of this latency is, though it never seemed to happen when the process ran on another Windows machine. I've adjusted the caching settings on the mount point to what I thought would be optimal, but I'm not sure I've got it set right. If someone has a set of mount options that works really well for them, I'd love to see them (a retry workaround I'm considering is sketched below). I'm asking here because I think it's an interaction with the pool underlying the share. Thanks!
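    In the meantime I'm considering papering over it with a retry, something like this sketch (paths are made up):

    ```python
    # Retry the move until the new file shows up on the mount, to ride out
    # the visibility lag. Paths below are hypothetical.
    import os
    import time

    def move_when_visible(src, dst, attempts=10, delay=0.5):
        """Wait for src to become visible on the mount, then move it."""
        for _ in range(attempts):
            if os.path.exists(src):
                os.replace(src, dst)  # atomic within a single filesystem
                return True
            time.sleep(delay)
        return False

    # move_when_visible("/mnt/pool/incoming/job.dat", "/mnt/pool/done/job.dat")
    ```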
  20. Activity logging?

    So I just went to enable that. It was already enabled, with a valid email address that passed the test message. I searched my mail and didn't find a notification. Best guess: does it only notify if a disk disappears _during_ operations? So, if it starts up with a disk missing from the pool, would it notify? That was the scenario I was in, and I thought that might be why it didn't notify me... Nope, not that. It just didn't send one, as far as I can tell, when the problem first occurred. I did find one when I searched my email, but it was sent right after I had rebooted again, before I realized on my own that the disk was gone; 2 minutes after I rebooted and it sent that first email, it sent the re-connect message (because I had re-attached the disk in Hyper-V). The problem had been persisting for at least several days, so there was plenty of time to send a message, but I don't appear to have gotten one. *shrug*
  21. Lots of modern drives have protections built in to automatically move the drive heads to a safe zone in the event of power loss. That said, power fluctuations, which a UPS would also normally handle (power conditioning features), probably could cause something weird to happen that leads to a head crash. Frankly, it sounds like you're pretty lucky your electronics didn't fry. Even if your power is reliable, you should have computer equipment you really care about not losing on UPSes. UPSes do more than just provide power when the power goes out; on the protected outlets they often (always?) disconnect your equipment from direct line power: line power charges the batteries, and the batteries power the gear. Think of them as an electrical buffer. But as Drashna says, the only way to be completely safe in questionable circumstances is to disconnect from line power.

    You can put everything on 2 disks; there are numerous ways to do this. In software, you could just have a program that makes sure your disk is duplicated to another disk all the time; DrivePool itself supports doing this. You can create a RAID array (RAID 1 specifically, mirrored disks), where literally everything that happens to one disk happens to the other. You can do this right inside Windows, or use specialized hardware (NAS gear usually supports this configuration if you so desire). But of course it's not cheap; you are buying twice the disks for the same capacity. Businesses do this when uptime is the most important thing and the budget is there.

    But ultimately, you want backups that are not connected to the hardware all the time and, ideally, are stored in a physically different location. The easiest way to do this for most people is cloud backups, though it may take a long time to get backed up if you don't have fast upload on your Internet connection. Historically, IT did this with tape backups rotated offsite. Most people aren't going to go to that trouble; it's much simpler and cheaper, but usually slower, to pay for cloud backups. Good tape backup gear ranges from pretty expensive to stupidly expensive.
  22. Activity logging?

    Oh, it hadn't occurred to me this was a bug. I thought it must have been something I did. I'll have to get that installed. Thanks!
  23. At-Rest encryption?

    Is there any way to do any sort of at-rest encryption with DrivePool? I'm trying to visualize how this would even work; so far everything I can think of would involve manual intervention, which isn't totally out of the question, just a little annoying with Win10's forced reboots. Thanks!
  24. Is this over a network? What is DrivePool showing in the activity screen when you are copying to the pool? If it's over a network, have you tried copying directly to a disk over the network to see if the problem happens that way (trying to isolate whether the problem is indeed DrivePool, or something else in your setup)? A simple timing sketch is below.
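    Something like this would time the two cases side by side; the UNC paths and test file are placeholders for your setup:

    ```python
    # A/B timing for the isolation test described above: copy the same file
    # to the pool share and to one underlying disk's share, and compare.
    import shutil
    import time

    SRC = r"C:\temp\testfile.bin"  # hypothetical local test file
    TARGETS = {
        "pool": r"\\server\pool\testfile.bin",
        "disk": r"\\server\disk1\testfile.bin",
    }

    for label, dst in TARGETS.items():
        start = time.perf_counter()
        shutil.copyfile(SRC, dst)
        print(f"{label}: {time.perf_counter() - start:.1f}s")
    ```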
  25. Activity logging?

    So, I'd still like the logging if I can get it, but I did find the issue. I realized I was having a pretty widespread issue with a lot of files missing; I just didn't notice until I went into a high-level library. This made me check for a disk issue. It turned out to be a combination of a VM problem (for some reason one disk didn't get mounted to the VM at startup) and the fact that somewhere along the line the notification settings I had in StableBit Scanner were lost, so I wasn't getting notifications about the disk issue. Once I re-mounted the disk in the VM, my stuff all came back.