Covecube Inc.

Atarres

Members
  • Content Count: 17
  • Joined
  • Last visited
  • Days Won: 2
  1. Ok, I'll create a VM and migrate the data off, then recreate the drive as NTFS. Much appreciated.
  2. I did format it on Windows 10 Creators Update, but the drive is now being used on Windows Server 2016.
  3. I am unable to format it because it has existing data on it; I had to move it from another machine. When I attempt to attach it, it attaches successfully but shows as RAW. It was formatted as ReFS. I have two other drives formatted as NTFS that work fine. Is there a way to recover from this?
  4. I'm in the middle of setting up G Suite. Seeing that they don't cap space (yet), I want to go that route (I'll upgrade to the required number of users if it becomes an issue later on). For a CloudDrive setup (aiming at 65TB to mirror my local setup), are there any recommended storage methods? I saw another post here recommending various prefetch settings, but it was not clear whether new drives should be sized a certain way: several 2TB drives, or max out at 10TB each and then combine them into one pool using DrivePool? Ultimately I'm hoping for an optimal setup out of the gate, and it probably would not hurt to document it here for others to follow as well. Ideally I'd use it with DrivePool so that it can be treated as one massive drive.
  5. Just curious if there are plans, or even consideration, of going to Linux? CloudDrive on Linux would be an absolute killer app due to its flexibility and ease of use, although having a CLI version as well might be preferable for many. Hopefully it's in the cards.
  6. Thanks, this is good info. I have multiple backup tools (Backblaze for cold storage), so I'm not terribly concerned if/when I lose data on ACD. I'll stick to NTFS then. My main use case with ACD is simply to store an online mirror of existing data on local storage; if I were to lose it, I would still have the constantly updating cold-storage backup to rely on. I additionally use it as a home surveillance drive. I work with AWS as a DevOps engineer, so I get the aggravation of dealing with Amazon and ACD on your end (with all the weird issues they have and undocumented limitations) and why ACD is no good for important info.
  7. Is this recommended? Or even necessary? I guess my main concern is bitrot. I lost an (empty) drive to this and am currently attempting recovery operations as a test to see if I can recover from this state. I got the dreaded ReFS "The volume repair was not successful" error, which tells me that ReFS was corrupted in some form. While I lost no data, I want to see if I can recover from this state (I'm running a days-long operation to recover the ACD drive). In the meantime I just want to know whether it is recommended to run ReFS on an Amazon Cloud Drive.
  8. FYI, the ONLY way you can fix this (as someone who has both those cards and their 8i variants can attest) is to flash them into IT mode. Get the correct firmware and adapt this guide to your particular firmware: https://www.servethehome.com/lsi-sas-2008-raid-controller-hba-information/ What you are doing is converting the controller into a JBOD setup, so your disks are detected by SMART and therefore by Scanner. This is what I did when I encountered the same problem. EDIT: Further note that you can crossflash your Dell PERC card into an LSI version; there are plenty of good guides on that, and even some YouTube videos.
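     For reference, the IT-mode flash described in that guide boils down to a few `sas2flash` steps run from an EFI shell or DOS boot disk. A rough sketch only — the firmware filenames (`2118it.bin`, `mptsas2.rom`) are examples for a 9211-8i and WILL differ for your card, so pull the exact files and erase options from the guide before running anything:

     ```shell
     # List detected LSI controllers and note the SAS address printed
     # for your card -- you must restore it after erasing the flash.
     sas2flash -listall

     # Erase the existing (IR/RAID) firmware from controller 0.
     # -e 6 clears the flash region; the card is blank until reflashed.
     sas2flash -o -e 6

     # Flash the IT-mode firmware and (optionally) the boot ROM.
     sas2flash -o -f 2118it.bin -b mptsas2.rom

     # Restore the SAS address you recorded earlier (example value).
     sas2flash -o -sasadd 500605bxxxxxxxxx
     ```

     After a reboot the controller presents the disks as plain JBOD, so SMART data passes through to Scanner.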
  9. Yea, it was a nasty one for sure. I usually only contact support as a last resort, but you got me up and running and I appreciate that greatly. I will take your advice and go back to the CB build of Windows; it seems a number of issues with the build did not get exposed until I did this in-place install. Rolling back seems the safest bet at this point.
  10. No worries, I can see how that could happen. Still, I've been unable to fix it, so I'll put in a ticket to address it. If it gets solved there, I'll post the details if support allows it.
  11. Yea, actually one of the first things I did; my last comment was regarding that. The driver is not showing up anywhere, and attempts to manually install it are not working. I'm pretty sure the covefs driver not being installed upon reinstall is part of the problem.
  12. I had some issues with Windows 10 Enterprise 64-bit and needed to do an in-place install to fix them. I kept apps, files and such; however, my drives are not showing in DrivePool (the data is intact and visible in Disk Management). While the data is there, the pool is not showing up in the app or in My Computer, though the drives are fully accessible. Also notable: the covefs driver is not showing up (after many reinstalls following the KB articles on a full uninstall/reinstall). Any other things I can try?
  13. I am using build 14986 and noticed that the GUIs for DrivePool and Scanner don't work anymore. The pool is still accessible and writable, so it is still functional; however, the user interface no longer works (I use it to monitor performance). Likewise, Scanner is unusable. I have since rolled back to 14931 (which still works, but Windows has other issues on that build). I don't mind upgrading to a newer beta; I have the 735 beta of DrivePool installed currently.
  14. Nested Drivepools?
    I currently have 68TB worth of drives set up: 55TB on one system directly hooked up with a Mellanox ConnectX-2 via IPoIB. The IO controllers are an LSI 9211-8i for the internal drives and 2x RocketRAID 642L port multipliers (which I'm replacing next week with this: https://www.amazon.com/gp/product/B019R09N10/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1 ). I have a 9200-8e, but even after flashing it to IT firmware it did not pick up my eSATA boxes.

    At any rate, my real question here is about nested drivepools. On server (A) I have 55TB of drives, including two SSDs I'll be using for staging data with the SSD Optimizer. On server (B), where I do most of my day-to-day work and initiate most of my downloads, I have 12.7TB worth of drives (both setups excluding the boot drives). My goal is to use CloudDrive and DrivePool to create one massive pool of data, where the underlying 55TB and 12.7TB stores are drivepools themselves: I want to carve out 10TB drivepools and rope those together into one large drivepool. Would this be a recommended (and supported) setup? All of it is formatted as ReFS for the bitrot factor. The underlying pools would have duplication (so I get some semblance of parity) and the main pool would have no duplication.

    Is there any flaw anyone can see here, or a better approach? I want to make sure I do this right, as my goal is a long-term storage setup for media streaming and general data storage. I was considering an alternative of setting up an iSCSI target/initiator, seeing as I have a dedicated connection between the machines. However, I am trying to get the optimal suggestions to make sure this setup is a good balance of performance, redundancy and capacity.
  15. I have this setup. I had to boot into a WinPE environment and set up seven individually formatted ReFS drives at 2TB each, with no Storage Spaces. I installed DrivePool on top of them, copied 270GB of data over, and the performance was about 2x the speed I saw with the same hardware on Storage Spaces. It also exceeded a ZFS setup I had on the same hardware a while back. This combination has proven to be very fast, and with ReFS as the underlying FS, my biggest concern — bitrot — is gone.
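     For anyone repeating this from WinPE, the per-drive formatting can be scripted with diskpart rather than clicked through. A minimal sketch, assuming the target is disk 1 and the volume label "Pool01" (both are placeholders — change the disk number for each of the drives, and note that ReFS formatting requires an OS/PE image that includes ReFS support):

     ```text
     rem format-refs.txt -- run as: diskpart /s format-refs.txt
     rem WARNING: "clean" wipes the selected disk. Verify the disk
     rem number with "list disk" before running.
     select disk 1
     clean
     convert gpt
     create partition primary
     format fs=refs quick label="Pool01"
     assign
     ```

     Once each drive has its own ReFS volume, DrivePool can pool them directly with no Storage Spaces layer involved.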