
All Activity


  1. Today
  2. Yesterday
  3. I also started getting the checksum errors: "Checksum mismatch. Data read from the provider has been corrupted." Literally thousands of these errors each day.
  4. Last week
  5. Well, no replies to this one, so I thought I would post my own results. I used SFTP. The connection seemed stable, but Hetzner is damn slow. I was only able to get a speed of 3mbps, which meant that it would take over 180 days to copy the 10TB I wanted. So, back to looking for another provider with decent speeds.
  6. Hi. Please do that. That way I can access my files without the software, from any place.
  7. So since switching back to using Windows with DrivePool on my fileserver a few weeks ago, we have been running into issues with macOS clients accessing file shares on it. Browsing is flaky, and especially creating folders/files: they get created, but then disappear until the share is remounted. (One weird thing too: if you type in a path for a folder and try to browse to it in Finder or Terminal, you can't, but if you use Terminal to copy a file whose path you know, that works.) I've been doing a lot of testing, and it seems like macOS most reliably sees the content of the volume with the lowest "\Device\HarddiskVolumeXX\" as seen in Resource Monitor. The file creation problem can be helped by using the SSD Optimizer plugin and making sure the volumes marked as SSD have the lowest HarddiskVolumeXX, which worked decently but didn't entirely fix the issues. I ended up creating a PowerShell script that puts a FileSystemWatcher on the entire DrivePool, so that every time a folder is created, a matching folder is created on all DrivePool drives; that way, no matter which drive macOS gets the directory listing from, the folder exists. This seems to have fixed both the folder and, somehow, the file creation problems. https://gist.github.com/eeveelution1313/e25984d7f7ef582357bc8899ca59d25a (As mentioned at the bottom of the script, the majority of it is from: https://powershell.one/tricks/filesystem/filesystemwatcher)
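    A minimal sketch of the mirroring idea described above (not the linked gist itself), assuming the pool is mounted at P:\ and using two hypothetical hidden PoolPart folders; the real paths would come from your own pool members:

```powershell
# Mirror folder creation from the pool drive to every member disk, so whichever
# disk macOS enumerates already contains the directory entry.
# P:\ and the PoolPart paths are placeholders, not the poster's actual layout.
$poolRoot  = 'P:\'
$poolParts = @('D:\PoolPart.example-1', 'E:\PoolPart.example-2')

$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = $poolRoot
$watcher.IncludeSubdirectories = $true
$watcher.NotifyFilter = [System.IO.NotifyFilters]::DirectoryName   # directories only

Register-ObjectEvent -InputObject $watcher -EventName Created -MessageData @{
    Root = $poolRoot; Parts = $poolParts
} -Action {
    # Recreate the new folder, relative to the pool root, inside each PoolPart.
    $root = $Event.MessageData.Root
    $rel  = $Event.SourceEventArgs.FullPath.Substring($root.Length)
    foreach ($part in $Event.MessageData.Parts) {
        $target = Join-Path $part $rel
        if (-not (Test-Path -LiteralPath $target)) {
            New-Item -ItemType Directory -Path $target -Force | Out-Null
        }
    }
} | Out-Null

$watcher.EnableRaisingEvents = $true

# Keep the session alive so the handler keeps firing (Ctrl+C to stop).
while ($true) { Wait-Event -Timeout 1 | Out-Null }
```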
  8. Ok, thanks for responding. I’ll experiment a bit more before I give up on drivepool. I think it worked flawlessly until I added the SSDs a couple years ago.
  9. I ended up just moving to SnapRAID and this seems to have resolved the issue. The downside is there's no real-time parity, but I do like that I can configure it to scrub files routinely to detect corruption, which FlexRAID did not do.
  10. Did you ever figure this out? I’m having the same problem. Was it the SSD optimizer, read striping, the cables, bypass system filters, real time duplication, or something else? I’ve set pretty much everything to default, and I’m trying one by one to see what’s doing it, but it is sporadic so it’s tough.
  11. Hi Doug, Thank you for your answer. I was using the Adaptec 72405 with Server 2016 Essentials and it worked flawlessly. Now I have tried the "Adaptec RAID Driver v7.5.0.52013 for Windows, Microsoft Certified" dated Aug 2017 from the Microsemi site, but that is also the driver the operating system already provided, and it didn't work either. I think it has to do with Server 2022 or something. My Scanner settings are all defaults, aside from the Unsafe and NoWMI options, which I checked manually to see if they would do anything for my issue, but nothing is working. In the end, I think it's a Server 2022 issue, because under Server 2016 the same controller and disks worked without issues. I will try further with the same StableBit DrivePool/Scanner version that was installed on Server 2016 (with the same Adaptec RAID Driver v7.5.0.52013), but I don't think it will change anything.
  12. Did you try expanding the drive?
  13. Should be with the OS; the site shows Server 2022 as a supported OS as well. I am not running that OS, but I have this card on Windows 10 with those settings and I do get that data. Hmm, I just noticed mine had not updated to that version, though. Still working after manually updating. What drivers did you use? Mind showing the other advanced settings?
  14. I also tried checking "Advanced Settings > Direct I/O > Unsafe", together with the "NoWmi" option in the SMART section, but it didn't help! Anyone with further ideas?
  15. Earlier
  16. Hi everyone, Using Server 2022 Standard with StableBit Scanner v2.6.2.392 and an Adaptec 72405 controller card (HBA mode), I'm missing the Temperature, Performance, DiskQueue and DiskActivity info. Is there an incompatibility between Server 2022 or the Adaptec card and the StableBit software? Best regards
  17. Tomsk

    Business Licensing

    As a personal user of DrivePool, I've recommended it to my company for their new server. The server will be running Essentials 2019, and all it does is hold files for about a dozen PCs and 20 users. DrivePool will be installed on the server only. Which license does my company need?
  18. I did the same thing as you... I deleted my drive and started a new one. I uploaded about 7TB to that drive. It's been working fine for about a month, but in the last few days it has started giving checksum errors again, and not uploading. What is going on? How have I been able to use CloudDrive for years and not have this issue, and now twice in a month? Is there anything else to try?
  19. Greetings, I would appreciate education and guidance on how I can duplicate data in one pool to another similarly sized pool. Setup: WHS 2011 server (HP ProLiant G7 N54L MicroServer) with four storage drives (2x4TB WD Red Plus and 2x8TB WD Red Plus) and a 250 GB boot drive that came with the HP server. With DrivePool, two pools of 12 TB each are set up, each a combination of one 4TB and one 8TB drive. Presently, all data is stored in the first pool. The data is family pictures, family videos, music, personal documents such as Word, Excel, etc., and WHS client backups from multiple laptops of different family members. The second pool is completely empty. I want to replicate all the data in the first pool (all of the past data and anything added going forward) into the second pool. Can this be achieved using DrivePool? I noticed in the DrivePool GUI there is a 'Pool File Duplication' feature. Does this do what I want to get done? Will it duplicate only files, or will it duplicate the laptop backups also? There is a checkbox for 'Protect my files from more than one drive failure at a time'. Checking this, I assume, will make three additional copies of the original and store them on each of the other three drives. Since the drives are not exactly the same size and there is not enough room for three copies, I assume it is not good to check this box. If DrivePool cannot do what I need, I would appreciate any suggestions on how this can be achieved using a different product. Thank you.
  20. Think I answered my own question based on this; it's expected behavior, I guess, to my chagrin. Anything over 16TB and you're getting 8K clusters, and as such losing the compression option. https://support.microsoft.com/en-us/topic/default-cluster-size-for-ntfs-fat-and-exfat-9772e6f1-e31a-00d7-e18f-73169155af95
  21. Okay, so with some digging it turns out ALL of my new 18TB Exos drives were formatted the same way as the rest of my drives: through the simple GUI via Disk Management in Windows, with the allocation unit size set to default. That always resulted in 4096/4K clusters before. Apparently Windows sets 8K for drives over a certain size... or something? These were all set to 8K as the default, and when I tried a dry-run format, the dialogue doesn't even present 4K as an option. Now, assuming I wanted to back up 20TB of data and then move it back again... how would one (if one could) get a 4K cluster size if the GUI doesn't allow it? Would/could you do it in diskpart? Why is it not accepted outside of the command line? My second-largest drives are 12TB, and there 4K is possible and was the default, so somewhere between 12 and 18TB, for whatever reason, it's not a thing? I'm trying to understand whether I have an issue and something went awry, or whether this is expected/normal behavior. If it's the latter I can leave it; if it's not, I'll likely have to spend days shuffling data around and reformat the drives via whatever avenue lets me get a 4K cluster size, if that's possible. Even outside of the T: pool shown (two 18TB drives), I also have two more 18TB disks which are parity drives: no pooling and nothing on them except a parity file each. Those too are set to 8K clusters by default with no option for 4K. What does it all mean? (Screenshots attached: all drives but the 18TB ones, and all the 18TB disks.)
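    A hedged sketch of how one would ask for a 4K allocation unit from the command line instead of the GUI (X is a placeholder drive letter, and formatting erases the volume, so the data would have to be moved off first). On a full-size 18TB partition this is expected to be refused: NTFS allows at most 2^32 clusters, so 4K clusters cap a volume at roughly 16TB, which lines up with the default-cluster-size table linked in the post above and with the GUI hiding the 4K option.

```powershell
# Attempt a 4 KB allocation unit from PowerShell; X is a hypothetical drive letter
# and this wipes the volume. Expected to fail on a partition larger than ~16 TB,
# because NTFS tops out at 2^32 clusters (4 KB x 2^32 is about 16 TB).
Format-Volume -DriveLetter X -FileSystem NTFS -AllocationUnitSize 4096
```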
  22. I use DrivePool and have had no major issues until this oddity cropped up. All my non-pooled drives seem to support file-based compression fine, and all my pools have supported it fine as well (until now), but I recently purchased two Seagate Exos 18TB drives and made another pool. For some reason the new pool with the 18TB Exos drives does not support file-based compression. I checked with fsutil and via right-click > Properties on folders therein. I'm unsure if this is something expected from enterprise drives (all my others are non-enterprise). The file system is NTFS, so no idea what is going on. Anyone have anything for me? (Screenshots attached: a non-pooled drive, two pools with compression working, and the pool with compression not working; the fsutil results for the drives and pools working properly (aka all but my T: pool), and for the problem child.)
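    For reference, a small sketch of the fsutil checks mentioned above, assuming T: is the affected pool letter (run from an elevated prompt). On a volume that supports compression, volumeinfo includes a "Supports file-based Compression" line; if it's absent, the volume doesn't support it, and the cluster size is the likely reason, since NTFS compression is only available with clusters of 4 KB or smaller.

```powershell
# T: is the pool drive letter from the post; fsutil needs an elevated prompt.
# Look for the "Supports file-based Compression" line (absent on non-supporting volumes).
fsutil fsinfo volumeinfo T: | Select-String -SimpleMatch 'Compression'

# Check the cluster size; NTFS compression requires 4 KB clusters or smaller.
fsutil fsinfo ntfsinfo T: | Select-String -SimpleMatch 'Bytes Per Cluster'
```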
  23. Hi all, While Drive Scanner has worked well for me, it's always bugged me that we can't read SMART information when the controller is set to RAID mode - setting up a RAID-0 stripe on my NVMe drives puts the entire controller into RAID mode. CrystalDiskInfo manages this through a plug-in that works well; any chance we can get Drive Scanner to support that for AMD X570, B550 and Intel chipsets? Or... is there a way to do this that I'm completely missing?
  24. Hi all, I am setting up CloudDrive on my Windows server to access a Hetzner Storage Box, which I am trying out with a view to using it as an offsite copy of my media files. The plan is to set up the Hetzner box as a CloudDrive pool, then use DrivePool to duplicate the data. There is about 10 to 12 TB in total. So, I was wondering what the best way to connect might be? Hetzner supports a number of protocols, but which is generally best? I'm interested more in robustness than speed: this is for an offsite copy that won't be accessed that often, but it needs to be kept in sync with the local copy. Any thoughts?
  25. Shu

    Files being corrupted

    It's a bit hard to test since my pool has changed a bit since posting (I removed two drives, and the list of corrupted files and which drive they are on is no longer accurate). I don't have any anti-virus software (other than Windows Defender); it's a fairly barebones Windows install on a dedicated media machine. Unfortunately, I've had this issue with corrupted files for a while now, and the pain has pushed me towards trying a different solution (Unraid...). I do still appreciate DrivePool and I use it on my main PC (a 4-SSD pool). If I encounter corrupted files on my main PC, I'll come back to the thread and do some more testing. If it helps, I attached a graph of my 13-drive pool showing which disks had corrupted vs. okay ("clear") files. The bars are labeled with RPM/cache-size-in-MB (though for one drive I couldn't find info on cache size). There are 57 data points (57 corrupted files that I looked through; there were others, but it was a pain to do more). In order, the sizes of the drives are: 8TB, 4TB, 4TB, 4TB, 4TB, 3TB, 3TB, 3TB, 2TB, 1TB, 1TB (there are only 11 here as the other 2 are my SSDs). Also, I said 12-drive pool in the first post; I forgot a drive - it's 13.
  26. Jonibhoni

    Files being corrupted

    Just out of curiosity: Does the problem still appear if you disable Read Striping (Manage Pool -> Performance -> Read striping)? And do you have anti-virus software active that checks files on access? I once had an issue with the combination of both.
  27. Hello, I have 8 disks and often the pool drive (D:\) doesn't appear immediately. It seems Windows checks something before giving access to the drive. Does anyone else have this problem? Thank you
  28. I believe I figured it out... it seems Scanner marked a single block as bad, and since I had the option set for DrivePool to allow Scanner to evacuate on any issue, it did so. I'm going to add a 3rd drive to the pool, but for now I have disabled the Scanner option to evacuate. The drive is healthy except for that one block, and the new drive will be ready for when it craps out entirely (and I have backups).
  29. As you can see in the attached image, Data2-DAS1 (Q:) is showing some strange info, like it isn't part of the pool, and yet it is? I wasn't getting any errors until I checked the scrub status of SnapRAID, and it was having fatal errors trying to access the PoolPart ID info... Any ideas on what is going on, or what I should do to ensure the pool is working properly?