KingMotley

Members
  • Posts: 13
  • Reputation: 2
  • Rank: Member (2/3)
  • Profile views: 640
  1. The errors all look like this:

     Error report file saved to: C:\ProgramData\StableBit DrivePool\Service\ErrorReports\ErrorReport_2022_09_13-06_48_37.0.saencryptedreport
     Exception: System.ComponentModel.Win32Exception (0x80004005): Access is denied
        at DrivePoolService.Pool.CoveFsDriver.FsControl.#rlc(String #kkc)
        at DrivePoolService.Balancing.FileMover.#tyc(Int32 #GBc, String #nKk, String #aKk, Boolean #oKk, Boolean #pKk)
        at DrivePoolService.Balancing.FileMover.#tyc(Int32 #GBc, String #nKk, String #aKk, Boolean #oKk, Boolean #pKk)
        at DrivePoolService.Balancing.FileMover.#tyc(Int32 #GBc, String #nKk, String #aKk, Boolean #oKk, Boolean #pKk)
        at DrivePoolService.Balancing.FileMover.#syc(PoolPartInfo #UJk, Boolean #lKk, Boolean #mKk)
        at DrivePoolService.Balancing.FileMover.#ryc(ControlPoolPartInfo #UJk)
        at DrivePoolService.Balancing.FileMover.#ohc(Boolean& #Ksf)
        at DrivePoolService.Pool.Tasks.Rebalance.#F7p.#F6c()
        at CoveUtil.ReportingAction.Run(Action TheDangerousAction, Func`2 ErrorReportExceptionFilter)

     Is there a way to determine what is wrong?
  2. My Server:

     Case: Chenbro 48-bay NR-40700
     OS: Windows 10 Professional
     CPU: Intel Xeon X5675 x2
     MoBo: Intel S5520HC
     Fans: 6x Noctua NF-A12x25
     RAM: 3x Micron 2GB PC3-10700, 3x Micron 8GB PC3-12800 (36GB total)
     GFX: NVidia 1050
     PSU: 3x redundant power supplies
     OS Drive: 2x Samsung 830 128GB in RAID-1
     Storage Pool: 270TB: 11x Seagate NAS 3TB (ST3000VN), 1x Seagate Barracuda 3TB (ST3000DM)*, 20x Seagate IronWolf 10TB (ST10000VN), 2x Seagate Exos X16 14TB (ST14000NM), 1x WD Red 6TB (WDCWD60EFR)
     HDD Controller card: LSI MegaRAID SAS 9280-16i4e

     *The single Seagate ST3000DM is the only survivor of the original 12 ST3000DMs I had. As they died, I replaced them with ST3000VNs. The ST3000DM is the worst drive ever: each died within two years, many in the first year. One died in its first year and its replacement died within twelve months of the original purchase; after that, I didn't even bother RMAing them.
  3. Yeah, torrents are about the worst-case scenario for CloudDrive. I wouldn't torrent directly to/from it if at all possible. You COULD try upping the read-ahead to something larger, like 40MB, and a much larger cache (100GB+); that might help.
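     To see why, here is a rough back-of-the-envelope sketch of read-ahead versus a torrent client's small random reads (illustrative numbers only; this is not CloudDrive's actual prefetch logic):

        using System;

        // Sketch: large read-ahead pays off on sequential streams but can
        // waste almost all of its bandwidth on small random torrent reads.
        // The 40MB figure matches the suggestion above; the rest is illustrative.
        const long ReadAheadBytes = 40L * 1024 * 1024; // suggested read-ahead
        const long TorrentBlock   = 16 * 1024;         // typical torrent request

        // Sequential playback: one 40MB prefetch serves ~2,560 consecutive blocks.
        long blocksPerFetch = ReadAheadBytes / TorrentBlock;

        // Random seeks: each 16KB request can trigger a fresh 40MB fetch,
        // so nearly all of the downloaded data may be thrown away.
        double wastedFraction = 1.0 - (double)TorrentBlock / ReadAheadBytes;

        Console.WriteLine($"Blocks served per prefetch (sequential): {blocksPerFetch}");
        Console.WriteLine($"Worst-case wasted bandwidth (random):    {wastedFraction:P2}");

     The big cache helps for the same reason: it keeps re-requested pieces local instead of re-downloading them.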
  4. KingMotley

    Typo

    The following error should probably not have "hs" in it:
  5. Ah, well yeah, if you are using a VPN for everything then your router will see everything as just VPN traffic. In that case you might want to look for a client-side software solution like NetLimiter. I'd probably prefer that to whatever StableBit comes up with, because then it works with all my applications, not just StableBit's.
  6. Doesn't your router have an option to handle that? Most call this QoS (Quality of Service) or something similar. You can usually define hard limits per port/destination, let it consume only a percentage, or just lower the priority. That seems more like what you are really after.
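     For the curious, the "hard limit" option on most routers is some variant of a token bucket. A minimal sketch of the general algorithm (not any particular router's firmware):

        using System;
        using System.Diagnostics;

        // Minimal token-bucket rate limiter: the mechanism behind the
        // "hard limit per port/destination" knob most QoS pages expose.
        class TokenBucket
        {
            private readonly double _bytesPerSecond; // the configured hard limit
            private readonly double _capacity;       // allowed burst, in bytes
            private double _tokens;
            private double _lastRefill;
            private readonly Stopwatch _clock = Stopwatch.StartNew();

            public TokenBucket(double bytesPerSecond, double burstBytes)
            {
                _bytesPerSecond = bytesPerSecond;
                _capacity = burstBytes;
                _tokens = burstBytes;
            }

            // True if a packet of this size may pass now; a router would
            // queue (or drop) the packet until enough tokens accumulate.
            public bool TryConsume(int packetBytes)
            {
                double now = _clock.Elapsed.TotalSeconds;
                _tokens = Math.Min(_capacity, _tokens + (now - _lastRefill) * _bytesPerSecond);
                _lastRefill = now;
                if (_tokens < packetBytes) return false;
                _tokens -= packetBytes;
                return true;
            }
        }

     Priority-based QoS works differently (multiple queues drained in priority order), but a per-flow cap like the one above is usually what the "limit" field configures.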
  7. KingMotley

    I give up

    Well, as a side note: as I'm evacuating my CloudDrive from my DrivePool, I cranked everything up as follows: and I got some decent speeds from Google Drive (but with ~1 second response times):
  8. KingMotley

    I give up

    Ok, I really wanted to like CloudDrive. The concept is pretty awesome, but unfortunately it's just too unstable for my liking to consider using any longer. In the two months I've been using it, I've had numerous "failures". My drive (using Google Drive) required a complete check and re-upload of all the cached data (which took the drive offline for ~17 hours), and since that drive is part of my DrivePool, it made the entire DrivePool read-only. It has also disconnected from Google Drive three times, and it doesn't auto-reconnect, so my server was offline until I could get back to it.

    Once it went into a never-ending loop of trying to move files off the drive that hosted the CloudDrive's write cache and onto the CloudDrive to make space -- which just puts them back into the write cache on the same physical disk, so no free space was ever gained. So then it'd try another file. And another. Until I finally caught it with ~4TB of cached data to upload and disabled all the balancers.

    Tonight, I went to rearrange one of my RAID drives that hosted the cache portion of the CloudDrive, and I couldn't use any of the normal utilities to change the cluster size -- I'm guessing they didn't like 63TB worth of sparse files on the drive. It only dawned on me that this was why the utilities were failing when one of them reported not enough free space on a drive with only a few TB used on a 9TB partition. After I gave up on those, I went to move the files temporarily to another drive so I could reformat it and put them back, and... Explorer came back and said my drive with 6TB of free space couldn't hold them. I'll admit that the utilities and File Explorer should be able to handle sparse files, but programs that actually use them are so rare that most things don't account for them, and want to write the entire 63TB instead of the 2-3TB that is actually used.

    So I gave up and figured I'd just detach the drive and reattach it on a different drive temporarily instead of copying over the cache files... And when I reattached it, one of the folders was "corrupt". DrivePool told me it was corrupt, chkdsk told me it was corrupt, and now half the 10TB of data I had in the cloud is borked because the folder metadata is semi-scrambled and it can't read the folder or any of its subfolders. CHKDSK can't fix it, and none of the NTFS utilities I tried can fix it; a few can actually see the correct files, but they all want me to copy them to another drive... UGH. It took me 3 months to get that data up into the cloud, and now half of it has to come back down. Even when I get that half down, I'm still sort of borked because no utility I have can actually DELETE the folder, so it's going to drive DrivePool crazy. The only way to "fix" it (other than sitting down with a disk hex editor and trying to repair it by hand) is to download the ENTIRE 10TB, wipe the drive, and then re-upload it AGAIN.

    So far I've tried Paragon's utilities, Acronis's, EaseUS's DRW, Recovery4all, Auslogics File Recovery, Active Partition Recovery, NTFS Recovery Toolkit, and CGSecurity's TestDisk. Some can see the files but will only restore to a different drive (meaning everything has to be downloaded and then re-uploaded), and some try to find them by doing a "deep scan" (which requires reading the entire 63TB drive).
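    (For what it's worth, you can at least see a sparse file's real on-disk size from code. A small sketch using the Win32 GetCompressedFileSize API, which reports allocated rather than logical bytes for sparse and compressed files:)

       using System;
       using System.IO;
       using System.Runtime.InteropServices;

       // Compares a sparse file's logical size (what Explorer adds up when
       // deciding if a copy "fits") with the bytes actually allocated on disk.
       class SparseCheck
       {
           [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
           static extern uint GetCompressedFileSizeW(string lpFileName, out uint fileSizeHigh);

           static void Main(string[] args)
           {
               string path = args[0];
               long logical = new FileInfo(path).Length;

               uint low = GetCompressedFileSizeW(path, out uint high);
               if (low == 0xFFFFFFFF && Marshal.GetLastWin32Error() != 0)
                   throw new IOException("GetCompressedFileSize failed",
                                         Marshal.GetLastWin32Error());
               long allocated = ((long)high << 32) | low;

               Console.WriteLine($"Logical size:   {logical:N0} bytes");
               Console.WriteLine($"Allocated size: {allocated:N0} bytes");
           }
       }

    Explorer's free-space check uses the logical size, which is why a 63TB sparse cache "doesn't fit" on a drive with 6TB free even when only 2-3TB is actually allocated.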
    Considering that I only got ~10TB up over a few months, once it's all downloaded I've just decided it isn't worth the headaches; I'll just buy another 10-16TB drive and be done with it, without any of the extra headaches that come with CloudDrive's quirks. Oh, and Plex doesn't like it when half its library disappears or turns read-only, and NzbDrone/Sonarr/Radarr really don't like it -- they see the files as gone and then dutifully try to go get them again, queueing up a zillion downloads I already have, until I catch it and shut everything down until CloudDrive comes back, and then I spend half a day cleaning up the mess.

    It's possible I would have had better luck with an array of CloudDrives, all with redundant copies, and a symmetrical connection. But even then it would have to contend with Google throttling (I could only ever get ~150Mbps download from the drive on a 330Mbps/30Mbps connection), and with that the drive was very laggy and caused other system stability issues -- opening File Explorer took a whole minute, copies to OTHER drives were occasionally delayed, there were random performance issues copying to the write cache, etc. It's CLOSE, but just not close enough for my use case.

    Again, it might work better as a real-time backup, using DrivePool to put multiple copies up on many different CloudDrives to lower the impact of losing a drive, with smaller chunk sizes to increase performance, rather than as the tiered storage solution I was going for: SSD -> RAID -> Cloud, automatically moving seldom-used files (minus metadata) to slower and slower tiers and moving them back up as necessary. I had hoped that with DrivePool's balancing API I could throw together a quick balancer to do just that and handle some personal edge cases: never put 4K video in the cloud, because it can't be downloaded fast enough to serve it; try to keep unwatched stuff local; and only push watched stuff to the cloud.

    I hope this saves someone the pain I went through, and gets them to really think about how they want their system to work, and how CloudDrive might (or might not) fit that design the right way rather than all the wrong ways I did it. --Robert
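    For anyone tempted to try the tiering idea anyway, this is the shape of the rule set I had in mind. To be clear, every type below (Tier, FileCandidate, TierRules) is invented for illustration; DrivePool's real balancing plugin interfaces differ:

       using System;

       // Hypothetical sketch of the tiering rules described above.
       // None of these types are DrivePool's actual plugin API.
       enum Tier { Ssd, Raid, Cloud }

       class FileCandidate
       {
           public string Path = "";
           public bool Is4KVideo;         // e.g. parsed from media info
           public bool Watched;           // e.g. fed from Plex's watch state
           public DateTime LastAccessUtc;
       }

       static class TierRules
       {
           // Decide where a file *should* live; a balancer would then move
           // anything whose current tier disagrees with this answer.
           public static Tier Place(FileCandidate f)
           {
               // Hard rule: 4K video never goes to the cloud -- the
               // connection can't download it fast enough to serve playback.
               if (f.Is4KVideo) return Tier.Raid;

               // Watched content is cold; push it to the cloud tier.
               if (f.Watched) return Tier.Cloud;

               // Unwatched but stale content drops to the RAID tier.
               if (DateTime.UtcNow - f.LastAccessUtc > TimeSpan.FromDays(30))
                   return Tier.Raid;

               // Hot, unwatched content stays on the fast SSD tier.
               return Tier.Ssd;
           }
       }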
  9. I don't shut down the machine that is running CloudDrive often, but I had to last night when I upgraded DrivePool from 2.1 to 2.2. It didn't shut down cleanly, and the event logs say that CloudDrive didn't respond to the shutdown attempt. I guess Windows shut down anyway (after 20 minutes?). Now I'm sitting here going on ~15 hours of "Recovery". The drive is inaccessible, and I've got a bunch of failing processes because they want to copy stuff to that drive and are (obviously) failing. Hopefully the recovery won't take much longer, but 15 hours seems a bit excessive.
  10. Any ideas on an ETA yet? It's been over 10 months...
  11. KingMotley

    unusual activity

    Do you have indexing turned off on the CloudDrive?
  12. Any way of doing it while the CloudDrive has a month's worth of write cache data in it?
  13. Set drives in the DrivePool as being already duplicated. Part of my DrivePool is a RAID-6 array (another part is RAID-1), so anything that gets put there wouldn't need to be duplicated to another disk. -- Possibly fixed if you could create pool-in-pool.

      I'd like to be able to set individual max-fill settings per drive. Some drives I don't want completely filled, but others I do.

      I'd like DrivePool to understand that one of my disks is actually a cache for a CloudDrive, and not attempt to move files off of it to the CloudDrive to make space (because it won't... the files will just sit in the CloudDrive write cache for a LONG time). That sends the balancing mechanism into an endless loop of trying to empty the drive, essentially turning the entire disk into a CloudDrive write cache. (A sketch of the per-drive limit idea follows below.)

      My current DrivePool: 14x 3TB drives in a RAID-6; 2x 10TB drives in a RAID-1 (CloudDrive cache here); 1x 63TB CloudDrive.
      Not in the DrivePool: 1x 6TB drive (mostly empty); 1x 3TB drive (half empty); 2x 256GB SSDs in a RAID-0 (mostly full).
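      A sketch of that per-drive limit idea, just to make the request concrete (hypothetical types again, not DrivePool's API):

         // Hypothetical per-drive fill limits: the balancer would refuse to
         // place files on any drive past its own threshold, and would skip
         // the CloudDrive cache disk as a move target entirely.
         class DrivePolicy
         {
             public double MaxFill = 0.90;  // fraction of capacity allowed
             public bool IsCloudDriveCache; // never a balancing target
         }

         static class Placement
         {
             public static bool CanPlace(DrivePolicy policy, long capacityBytes,
                                         long usedBytes, long fileBytes)
             {
                 if (policy.IsCloudDriveCache)
                     return false; // avoids the endless cache-emptying loop
                 return usedBytes + fileBytes <= (long)(capacityBytes * policy.MaxFill);
             }
         }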