Covecube Inc.

All Activity


  1. Yesterday
  2. denywinarto

    Scanner reporting almost all drives system damaged

    LSI SAS 9300-8e. I haven't even started any scans; this just popped up after starting the Scanner. I haven't had another chance to run chkdsk again, but last time there were 2 drives that reported the same error under CHKDSK, and those were fixed already. The rest of the drives give "task failed, shutting down" when I try to fix them with the Scanner. Edit: Hold on, I'm running the drive repair one by one on the damaged drives again, and this is what I got. Out of 14 damaged drives: 1 shows "will be repaired on next shutdown"; 6 failed to repair, throwing "task failed, shutting down"; 6 could be repaired successfully; and 1 shows "the volume cannot be repaired, there was no additional information provided as to why, you may run chkdsk from the command line to try to repair this volume". Another thing I noticed, Chris: the PoolPart folder on each drive is no longer hidden. IIRC, weren't they supposed to be hidden? I will try running chkdsk tomorrow morning.
  3. Hello, I currently have 5x 8TB drives (3 of which are nearly full, using the Ordered File Placement plugin), and I'm hoping to back up the data on them and be able to keep backing up data moving forward. I have bought 3x Seagate Backup Plus Hub 8TB. My plan was to use a USB 3.0 hub to connect all of them, have them assigned to a pool, and then use the Ordered File Placement plugin to manage that pool. Then I'd use something like robocopy or some other utility to start the backup. The data I'm backing up is movies, TV shows, and music. Sometimes the files get replaced with higher-quality versions (same file name), and sometimes just the metadata changes (like fixing MP3 tags). I'd like to make sure these changes are synchronized weekly, and to have the ability to connect additional drives as time goes on. Am I doing the right thing? Are there any caveats I should be aware of? Or is it pretty much exactly as I describe? I honestly don't even need to use Ordered File Placement; I just figured it would make the transfer faster/easier on the OS if it was only reading/writing to one drive on the hub at a time. I have a secondary question about leaving all those drives plugged in/powered on. Is that safe? How can I ensure the drives will park and be suspended if there is no disk activity required of them?
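A weekly robocopy mirror pass like the one described above can be composed and scheduled from a small script. This is a minimal sketch; the drive letters (P: for the source pool, B: for the backup pool), the log path, and the helper name are illustrative assumptions, not anything from the thread. /MIR handles both replaced files and retagged files, since it copies anything whose size or timestamp changed and purges deletions.

```python
# Sketch of a weekly mirror job from the source pool to the backup pool.
# Drive letters P: and B: are assumptions for illustration only.
import subprocess

def build_mirror_cmd(src, dst, log_path):
    """Compose a robocopy mirror command: /MIR copies new/changed files
    and deletes files from dst that no longer exist in src, which also
    picks up replaced videos (same name, new content) and retagged MP3s."""
    return [
        "robocopy", src, dst,
        "/MIR",        # mirror the tree (copy + purge deletions)
        "/R:2",        # retry twice on a locked or failing file
        "/W:5",        # wait 5 seconds between retries
        "/XJ",         # skip junction points (avoids traversal loops)
        "/MT:8",       # 8 copy threads
        f"/LOG:{log_path}",
    ]

cmd = build_mirror_cmd("P:\\", "B:\\", r"C:\backup\robocopy.log")
# subprocess.run(cmd)  # run weekly, e.g. from Windows Task Scheduler
```

Running this once a week (via Task Scheduler) would keep the backup pool in sync the way the post describes.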
  4. FirstAidPoetry

    Stablebit Scanner loses all settings on unexpected shutdowns.

    Thank you so much Christopher!
  5. Sonicmojo

    Drive removal seems stalled out?

    Well - the drive is gone now, so enabling logging will not help. And as far as unduplicated data - I used the same process for 4 consecutive drive removals. The first three went like clockwork... the "remove" drive process (whatever that entails) went to 100%, and then DP did a consistency check and duplication AFTER the drive was removed completely. This last drive "removal" did not go to 100% - it simply sat at 94% for something like 18 hours. For me there is long, and then there is REALLY long. So I eventually got fed up - cancelled the removal and pulled the drive. My concern is that this "remove" process did not go to 100%. There was zero file activity on the pools for hours and hours - so if DP was doing something, it should have been communicated. Oddly, the only files left on this drive (after I killed it at 94% after 18 hours) were just the duplicates. So I do not understand what the right conclusion to this process should be. I am assuming that if I choose to "process duplicates later", the removal process should be successful and go to 100%. Yes? No? In this case it seems like it was set up to sit at 94% forever. Something was not right with this removal - the seemingly non-existent communication from the software (telling me exactly nothing for 18 straight hours) should be looked at. S
  6. Christopher (Drashna)

    Source for Disk Info

    We pull the info from the performance counters in Windows, actually. To that point, in some cases, we've had to reset them to get performance to show: http://wiki.covecube.com/StableBit_DrivePool_Q2150495 Also, StableBit Scanner does expose some info to WMI. You can look at phpsysinfo for details on what it exposes, actually.
  7. Christopher (Drashna)

    DrivePool and Windows Containers

    Are you trying to pass the pool drive through? If so, that may be the problem. It contains no actual data and is an emulated drive, meaning it requires that the underlying drives be connected to the system.
  8. Christopher (Drashna)

    Drivepool good with torrent activities?

    Thanks for the troubleshooting. That may definitely help! https://stablebit.com/Admin/IssueAnalysis/28059
  9. Christopher (Drashna)

    Drivepool on new Windows install

    Yes.
  10. Christopher (Drashna)

    Remove duplicated data

    This. If needed, remeasure the pool, and it will kick off a duplication pass.
  11. Christopher (Drashna)

    Make a Network Share act as a Hard Drive?

    Not really. The two best ways to do this are StableBit CloudDrive and iSCSI. But neither shows existing content. There are other ways to do this (I think NetDrive does), but YMMV.
  12. Christopher (Drashna)

    How to set it to duplicate to certain drives

    The best way would be to add the CloudDrive disks to a separate pool, then add both pools to a new pool, seed that top-level pool, and enable duplication on the top-level pool. This way, one copy is hosted on the CloudDrive disks, and one is on the local disks.
  13. Christopher (Drashna)

    Scanner reporting almost all drives system damaged

    What controller card are you using now? Also, is this for the surface scan? If so, try running the burst test on the drives in question. If that comes back with errors, there are communication issues with the drive rather than actually damaged sectors (though it ends up with the same result). And do these errors happen when manually running CHKDSK?
  14. Christopher (Drashna)

    Drive removal seems stalled out?

    Could you enable logging, and then remove the drive? http://wiki.covecube.com/StableBit_DrivePool_2.x_Log_Collection Also, as for "stalled out", any unduplicated data will cause it to take much longer. And it still has to check the data.
  15. Christopher (Drashna)

    Stablebit Scanner loses all settings on unexpected shutdowns.

    Make sure that you're on the latest beta. There are some fixes that should hopefully prevent this from happening in the future. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3220_BETA.exe If it's still happening on that version, let us know.
  16. Christopher (Drashna)

    Possible to pool two machines together?

    Not really. I mean, you can use the other system to host a CloudDrive, but it would need to use UNC paths to do so, and the data on that system would not be accessible locally.
  17. Christopher (Drashna)

    SDD optimizer can't be installed

    That's.... really odd then. Well, glad you got it resolved.
  18. bzowk

    Source for Disk Info

    Good morning - I'm a long-time DrivePool & Scanner user and love the pair of apps, which I use in my home lab. Just wanted to ask a quick question, please. I'm wrapping up designing a Grafana dashboard for my lab PCs, which gets its data from InnoDB, which in turn is populated by both telegraf and Open Hardware Monitor. Unfortunately, I've been unable to find a good source for performance data on individual disks - just basics for logical ones, which obviously doesn't help with a pool. Just wanted to post and ask where exactly Scanner pulls its disk stats from - especially "Performance", "Drive Activity %", & "Temp". If you can't share, I understand, but I assume it's specific WMI paths. Thanks
  19. Umfriend

    Mirroring pool

    One Pool Scenario: If a 10TB HDD fails, then you can only store 6TB duplicated. This is because duplication requires each duplicate to be stored on a different physical HDD, and after 6TB (duplicates on both 3TB HDDs and the 10TB HDD) you have no duplicate space left.

    Wrt backup, yes. All data is stored in plain NTFS format, and I indeed back up one set of underlying HDDs. I run Windows Server 2016 Essentials and use Windows Server Backup. Not everyone is a fan, but it has as yet never failed me (and I have had to recover).

    On your scenario, item #4: No. What you need to understand is where/how pooled files are stored. Let's talk about the first Pool on HDDs, say, F: and G:. You can still store files on F: outside of the Pool. On F:, you will find a (hidden) PoolPart.* folder. Anything stored in this folder is part of Pool 1. Same on G:. If you now add Pool 1 to the MOAP, then within that PoolPart.* folder you will find another PoolPart.* folder. Anything stored in there is part of the MOAP. Although you can copy/delete, this actually takes a lot of time. The easier way to go about this is:

    4. Enable duplication on the MOAP.
    5. Stop the DrivePool Service.
    6. On F:\, move all files from the upper PoolPart.* folder to the underlying PoolPart.* folder.
    7. Same on G:\.
    8. Start the DrivePool Service.

    The advantage of steps 6 and 7 is that moving files on the same HDD is very fast, as it does not actually entail a read/write/delete of the files but simply rewrites the folder/location information. After step 8, DP will find that files in the MOAP are not duplicated and will then perform a balancing/duplication pass. It will take some time depending on the size and number of files, but it will work well.
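The 6TB figure above follows from a general rule: with 2x duplication, each copy of a file must land on a different physical disk, so the usable duplicated capacity is capped both by half the total space and by the space outside the largest disk. A small sketch (simplified model; it ignores balancer overhead and filesystem reserve, and the function name is ours):

```python
# Simplified model of 2x duplication capacity in a pool: a file and its
# duplicate must live on two different physical disks, so the pool holds
# at most min(total / 2, total - largest_disk) of duplicated data.
def duplicated_capacity_tb(disk_sizes_tb):
    total = sum(disk_sizes_tb)
    largest = max(disk_sizes_tb)
    return min(total / 2, total - largest)

# The scenario above: one 10TB drive fails, leaving 10TB + 3TB + 3TB.
print(duplicated_capacity_tb([10, 3, 3]))      # 6.0 TB, not 16/2 = 8
print(duplicated_capacity_tb([10, 10, 3, 3]))  # 13.0 TB with all four drives
```

This also shows why the naive "remaining space divided by two" arithmetic overestimates: once the largest remaining disk holds more than all the others combined, the smaller disks become the bottleneck for placing second copies.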
  20. Last week
  21. Michael Carson

    Mirroring pool

    I think I understand the backup reasons now for doing the two 13TB pools. There don't seem to be a lot of good tools for doing backups of the pool, so if you can back up the individual drives without any duplication, then that's the way around backing up the pool. The best backup scheme that I've come up with so far is to use FreeFileSync on the pool. For non-pooled drives, I've been using TeraByte's IFW.
  22. srcrist

    Previous OneDrive data missing

    I actually think CloudDrive's strongest unique use case is the opposite of that. Because CloudDrive can uniquely support partial transfers and binary diffs, CloudDrive can create working drives that are hosted in the cloud and can be used to actively edit even large files without re-uploading the entire thing. The chunk-based nature of CloudDrive also makes it ideal for large amounts of smaller files, since it will upload uniform, obfuscated chunks regardless of the file sizes on the disk--thus reducing API load on the provider. If you're just doing archival storage, uploading an entire file and letting it sit with something like rClone or NetDrive works just fine. But if you need to store files that you'll actively edit on a regular basis, or if you need to store a large volume of smaller files that get modified regularly, CloudDrive works best. I think another good way to think of it is that rClone and NetDrive and ExpanDrive and all of their cousins basically *complement* what the cloud provider already does, while CloudDrive aims to make your cloud provider operate more similarly to a local drive--and with a fast enough connection, can basically replace one.
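The partial-transfer point above is easier to see with a toy model of chunked storage. This is purely illustrative (the chunk size, hashing scheme, and function names are ours, not CloudDrive's actual on-provider format): when the drive's data is stored as fixed-size chunks, editing one byte inside a large file only dirties the chunk containing it, so only that chunk needs re-uploading.

```python
# Toy illustration of why chunk-based storage allows partial re-uploads:
# only chunks whose hashes changed need to be sent to the provider.
import hashlib

CHUNK_SIZE = 4  # bytes; tiny for illustration, real chunks are far larger

def chunk_hashes(data: bytes):
    """Split data into fixed-size chunks and hash each one."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def changed_chunks(old: bytes, new: bytes):
    """Indices of chunks that differ, i.e. the only ones to re-upload."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    return [i for i, h in enumerate(new_h) if i >= len(old_h) or old_h[i] != h]

original = b"AAAABBBBCCCCDDDD"
edited   = b"AAAABBXBCCCCDDDD"  # one byte modified inside chunk 1
print(changed_chunks(original, edited))  # [1]: only chunk 1 is re-uploaded
```

A whole-file tool like rClone would re-upload all 16 bytes here; a chunked drive re-uploads 4.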
  23. ebalders

    DrivePool and Windows Containers

    I am trying to get DrivePool/CloudDrive working with Windows Containers. I've run into an issue that may be specific to DrivePool. I am working on setting up a number of Windows Containers. These are native Windows images running in Hyper-V using the Docker engine. I have several drives created in CD/DP. I have 1 CloudDrive that is connected to a container with no problems. The other drive is 2 CloudDrives combined into a single DrivePool instance. This is the drive I am unable to mount to a container. The error I receive is this: Cannot start service radarr: CreateComputeSystem d88999382fbcd36ea97c92fdbcb5935dab2a1e6e52aaffecb3bc4779c46005d9: Do not attach the filter to the volume at this time. I was able to mount the drive using an SMB global mapping on the host, but that seems to have problems of its own. Is anyone else running a DrivePool instance attached to Windows Containers?
  24. TerryMundy

    Previous OneDrive data missing

    Wow, I really appreciate the super fast response! Thanks for explaining it in such a way that I now completely understand. I guess I can't drop ExpanDrive after all. But it sounds like CloudDrive definitely has a use for long-term archival storage.
  25. srcrist

    Previous OneDrive data missing

    Yeah, this is a fundamental confusion about how CloudDrive operates. CloudDrive is a block-based virtual drive solution, and not a convenient way to sync content with your cloud provider. ExpanDrive, NetDrive, rClone, and Google File Stream are all 1:1 file-based solutions, and all operate similarly to one another. CloudDrive is something else entirely, and will neither access nor make available native content on your provider. CloudDrive simply uses the cloud provider to store encrypted, obfuscated chunks of disk data.
  26. TerryMundy

    Previous OneDrive data missing

    Hi All, Maybe I misunderstood how CloudDrive works. I already have ExpanDrive and mounted the OneDrive for Business as drive Y:. Everyone in the company can see the contents. After installing CloudDrive, it wanted me to create what seems to be a whole new drive, requested the size, then formatted it. After doing so, none of the previous OneDrive data is there, nor can anything copied there be seen by other employees using their OneDrive in File Explorer. Can anyone shed light on what I did wrong? I would like to trash ExpanDrive and configure CloudDrive to be drive Y: and allow everyone to see the contents regardless of what computer they access it from. Am I asking for too much? Thanks, Terry Mundy Independence, Missouri
  27. Michael Carson

    Mirroring pool

    So, how does this work logistically? Maybe the following:

    1. Create a primary pool of 10TB + 3TB.
    2. Create a backup pool of 10TB + 3TB.
    3. Create a mother of all pools containing just the backup pool.
    4. Copy all information from the primary pool to the mother of all pools.
    5. Delete information from the primary pool and add it to the mother of all pools.
    6. Enable duplication on the mother of all pools.

    How does this help with backup solutions that don't like pools? Do you go down to the underlying disks themselves in the backup pool and just back them up? Right now, I have a single pool consisting of all the HDDs with pool duplication turned on. If a 10TB HDD dies, I should still have all of my files on the remaining drives, but duplication stops if I have more than 8TB of information in the pool. 10TB drive dies: 26TB - 10TB = 16TB / 2 = 8TB max of duplication. 3TB drive dies: 26TB - 3TB = 23TB / 2 = 11.5TB max of duplication. Unless I'm missing something, it doesn't really matter with DrivePool where the data resides. If duplication is turned on, I can lose a drive and still have all of my data. The bad drive just has to be removed from the pool, and another of equal or larger size is needed to replace it (or multiple smaller drives adding up to the size of the drive lost). I guess I just need to stop being hung up on the RAID paradigm. What's a good backup program that works with a DrivePool volume rather than having to go to the base disks?
