Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 07/03/19 in all areas

  1. 4 points
    malse

    WSL2 Support for drive mounting

Hi, I'm using Windows 10 2004 with WSL2. I have 3x drives: C:\ (SSD), E:\ (NVMe), D:\ (DrivePool of 2x 4TB HDD). When the drives are mounted in Ubuntu, I can run ls -al and it shows all the files and folders on the C and E drives. This is not possible on D. When I run ls -al on D, it returns 0 results, but strangely enough I can still cd into the directories on D. Is this an issue with DrivePool being mounted? It seems like the only logical difference (aside from it being mechanical) between the other drives. They are all NTFS.
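    As a quick sanity check, here is a minimal sketch (assuming the pool is still drive D: in Windows and that /mnt/d is the WSL automount point) of unmounting the automounted pool and remounting it explicitly with drvfs from inside WSL2:

      sudo umount /mnt/d                 # detach the automounted pool drive
      sudo mkdir -p /mnt/d               # make sure the mount point still exists
      sudo mount -t drvfs D: /mnt/d      # remount the DrivePool drive through the drvfs bridge
      ls -al /mnt/d                      # re-test directory listing on the pool

    If the explicit drvfs mount behaves the same way, the problem is more likely in how the pool's file system driver answers directory enumeration calls than in the WSL configuration itself.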
  2. 3 points
    hammerit

    WSL 2 support

I tried to access my drivepool drive via WSL 2 and got this. Any solution? I'm using 2.3.0.1124 BETA.
    ➜ fludi cd /mnt/g
    ➜ g ls
    ls: reading directory '.': Input/output error
    Related thread: https://community.covecube.com/index.php?/topic/5207-wsl2-support-for-drive-mounting/#comment-31212
  3. 3 points
    srcrist

    Optimal settings for Plex

If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
    20MB chunk size
    50+ GB expandable cache
    10 download threads
    5 upload threads, turn off background i/o
    upload threshold 1MB or 5 minutes
    minimum download size 20MB
    20MB prefetch trigger
    175MB prefetch forward
    10 second prefetch time window
  4. 2 points
Unintentional Guinea Pig Diaries. Day 8 - Entry 1 I spent the rest of yesterday licking my wounds and contemplating a future without my data. I could probably write a horror movie script on those thoughts, but it would be too dark for the people in this world. I must learn from my transgressions. In an act of self-punishment, and an effort to see the world from a different angle, I slept in the dog's bed last night. He never sleeps there anyway, but he would never have done the things I did. For that I have decided he holds more wisdom than his human. This must have pleased the Data Gods, because this morning I awoke with back pain and two of my drives mounted and functioning. The original drive which had completed the upgrade, had been working, and then went into "drive initializing"... is now working again. The drive that I had tried to mount and said it was upgrading with no % given has mounted (FYI: 15TB drive with 500GB on it). However, my largest drive is still sitting at "Drive queued to perform recovery". Maybe one more night in the dog's bed will complete the offering required to the Data Gods. End diary entry. (P.S. Just in case you wondered... that spoiled dog has "The Big One" Lovesac as a dog bed. In a pretty ironic fashion, their website is down. #Offering)
  5. 2 points
Unintentional Guinea Pig Diaries. Day 7 - Entry 1 OUR SUPREME LEADER HAS SPOKEN! I received a response to my support ticket, though it did not provide the wisdom I seek. You may call it an "Aladeen" response. My second drive is still stuck in "Drive Queued to perform recovery". I'm told this is a local process and does not involve the cloud, yet I don't know how to push it to do anything. The "Error saving data. Cloud drive not found" at this time appears to be a UI glitch and is not correct, as any changes that I make do take regardless of the error window. This morning I discovered that playing with fire hurts. Our Supreme Leader has provided a new beta (1.2.0.1306). Since I have the issues listed above, I decided to go ahead and upgrade. The Supreme Leader's change log says that it lowers the concurrent I/O requests in an effort to stop Google from closing the connections. Upon restart of my computer, the drive that was previously actually working is now stuck in "Drive Initializing - StableBit CloudDrive is getting ready to mount your drive". My second drive is at the same "Queued" status as before. Also of note: I had a third drive, created from a different machine, that I tried to mount yesterday and it refused to mount. Now it says "drive is upgrading" but there is no progress percentage shown. That seems like progress, but the drive that was working is now not working. Seeking burn treatment. I hope help comes soon. While my data is replaceable, it will take a LONG TIME to do. Until then my Plex server is unusable and I have many angry, entitled friends and family. End diary entry.
  6. 2 points
Unintentional Guinea Pig Diaries. Day 5 - Entry 2 *The Sound of Crickets* So I'm in the same spot as I last posted. My second drive is still at "Queued to perform Recovery". If I knew how to force a reauth right now I would, so I could get it onto my API key, or at the very least get it out of "queued". Perhaps our leaders will come back to see us soon. Maybe this is a test of our ability to suffer more during COVID. We will soon find out. End diary entry.
  7. 2 points
    gtaus

    2nd request for help

I have only been using DrivePool for a short period, but if I understand your situation, you should be able to open the DrivePool UI and click on "Remove" for the drives you no longer want in the pool. I have done this in DrivePool and it did a good job of transferring the files from the removed drive to the other pool drives. However, given that nowadays we have large HDDs in our pools, the process takes a long time. Patience is a virtue.
    Another option is to simply view the hidden files on those HDDs you no longer want to keep in DrivePool, and then copy them all over to the one drive where you want to consolidate all your information. Once you verify all your files have been successfully reassembled on that one drive, you could go back and format those other drives.
    The main advantage I see with using DrivePool is that the files are written to the HDD as standard NTFS files, and if you decided to leave the DrivePool environment, all those files are still accessible by simply viewing the hidden directory. I am coming from the Windows Storage Spaces system, where bits and pieces of files are written to the HDDs in the pool. When things go bad with Storage Spaces, there is no way to reassemble the broken files spread across a number of HDDs. At least with DrivePool, the entire file is written to a HDD as a standard file, so in theory you should be able to copy those files from the pool HDDs over to one HDD and have a complete directory. I used the Duplication feature of DrivePool for important directories.
    Again, I am still learning the benefits of DrivePool over Storage Spaces, but so far, I think DrivePool has the advantage of recovering data from a catastrophic failure, whereas I lost all my data in Storage Spaces. If there is a better way to transfer your DrivePool files to one HDD, I would like to know for my benefit as well.
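    If you go the manual consolidation route, a copy tool that preserves attributes and skips junctions makes this less error-prone. A minimal sketch, assuming E: is a pool member you want to empty, F: is the consolidation drive, and the PoolPart GUID here is just a placeholder you would replace with the real hidden folder name on that disk:

      robocopy "E:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" "F:\Consolidated" /E /COPY:DAT /DCOPY:T /XJ /R:1 /W:1 /LOG:C:\poolcopy.log

    /E copies subfolders (including empty ones), /XJ skips junction points so nothing gets pulled in twice, and the log file gives you something to verify against before you format the old drive.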
  8. 2 points
They are not comparable products. Both applications are more similar to the popular rClone solution for Linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data; you must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data, because they do not use a genuine drive image; they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.
    None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, CloudDrive is more vulnerable because some of that data on your provider is the file system data for the drive. Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another; we don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks, and they corrupted the file system.
    This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive; it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of their users' storage is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever.
    Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like TestDisk and Recuva. But it simply wasn't worth the effort to rebuild the volumes rather than just discard the data and rebuild, because it was expendable data. Genuinely irreplaceable data could be recovered, so it isn't even really accurate to call it data loss.
    This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point?
It is, frankly, not a problem that we will hopefully ever have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality, and there isn't anything to be done about that. So your options are to either a) only store data that you can afford to lose or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will.
    The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Conversely, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption, then you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider, and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow, or shut down Drive completely like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
  9. 2 points
I'm always impressed with the extent you go to help people with their questions, no matter how simple or complex. Thanks Chris.
  10. 1 point
    Umfriend

    Mixed speed disks

So I think it is a matter of use case and personal taste. IMHO, just use one Pool, especially if you're going to replace the 5900rpm drives over time anyway. I assume you run things over a network, and as most people are still running 1Gbit networks (or slower), even the 5900rpm drives won't slow you down. I've never used the SSD Optimizer plugin, but yes, it is helpful for writing, not reading (except for the off-chance that the file you read is still on the SSD), and even then it would need a data path that is faster than 1Gbit all the way. What you could do is test a little by writing to the disks directly, outside the Pool, in a scenario that resembles your use case. If you experience no difference, just use one Pool; it makes management a lot easier. If anything, I would wonder more about duplication (do you use that?) and real backups.
  11. 1 point
    OK. My mistake, then. I haven't started this process, I just thought I remembered this fact from some rClone work I've done previously. I'll remove that comment.
  12. 1 point
    I'm enjoying your diary entries as we are in the same boat... not laughing at you... but laughing/crying with you! I know that this must be done to protect our data long term... it's just a painful process.
  13. 1 point
    Hey guys, So, to follow up after a day or two: the only person who says that they have completed the migration is saying that their drive is now non-functional. Is this accurate? Has nobody completed the process with a functional drive--to be clear? I can't really tell if anyone trying to help Chase has actually completed a successful migration, or if everyone is just offering feedback based on hypothetical situations. I don't even want to think about starting this unless a few people can confirm that they have completed the process successfully.
  14. 1 point
I had the same issue. Contacted support and was told that they (Covecube) need to update the DrivePool virtual driver and have it re-signed by Microsoft before it can be used with the latest Insider Build. I have no idea how long it will take them. Hopefully soon.
  15. 1 point
Just to clarify for everyone here, since there seems to be a lot of uncertainty:
    The issue (thus far) is only apparent when using your own API key. However, we have confirmed that the CloudDrive keys are the exception, rather than the other way around; for instance, the web client does have the same limitation.
    Previous versions (also) do not conform with this limit (and go WELL over the 500k limit).
    Yes, there has definitely been a change on Google's side that implemented this new limitation.
    Although there may be issues with the current beta (it is a beta after all), it is still important to convert your drives sooner rather than later. Here's why:
    Currently all access (API or web/Google's own app) respects the new limits, except for the CloudDrive keys (probably because they are verified).
    Since there has been no announcement from Google that this change was happening, we can expect no such announcement if (when) the CloudDrive key also stops working.
    It may or may not be possible to (easily) convert existing drives if writing is completely impossible (if no API keys work).
    If you don't have issues now, you don't have to upgrade; instead, wait for a proper release, but do be aware there is a certain risk associated.
    I hope this helps clear up some of the confusion!
  16. 1 point
In the DP GUI, see the two arrows to the right of the balancing status bar? If you press that, it will increase the I/O priority of DP. May help some. Other than that, ouch! Those are more like SMR speeds.
  17. 1 point
Hi all, I have been using CloudDrive for a while now and I am very happy with it. Nevertheless, I have a question: is it also possible to script it? At the moment I have configured DrivePool and CloudDrive on my personal server. CloudDrive is my solution for a backup to the cloud, but as I don't want to have the drive always mounted (viruses, ransomware, etc.), I'd like to script it. I already have a script which syncs my data to the CloudDrive, but I'd like to enhance it so it mounts the drive. And it would be very nice if it could be dismounted when cloud synchronization is completed. Regards, Wout
  18. 1 point
    You will never see the 1 for 1 files that you upload via the browser interface. CloudDrive does not provide a frontend for the provider APIs, and it does not store data in a format that can be accessed from outside of the CloudDrive service. If you are looking for a tool to simply upload files to your cloud provider, something like rClone or Netdrive might be a better fit. Both of those tools use the standard upload APIs and will store 1 for 1 files on your provider. See the following thread for a more in-depth explanation of what is going on here:
  19. 1 point
    srcrist

    Optimal settings for Plex

    You're just moving data at the file system level to the poolpart folder on that volume. Do not touch anything in the cloudpart folders on your cloud storage. Everything you need to move can be moved with windows explorer or any other file manager. Once you create a pool, it will create a poolpart folder on that volume, and you just move the data from that volume to that folder.
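    Since the move stays on the same volume, it is just a rename at the file system level, so it should be nearly instant. A minimal sketch, assuming G: is the volume you added to the pool, "MyMedia" stands in for whatever folder you are seeding, and the PoolPart GUID is a placeholder for the real hidden folder name (stop the DrivePool service first, as noted elsewhere in this thread, so nothing rebalances mid-move):

      move "G:\MyMedia" "G:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\MyMedia"

    Afterwards, remeasure the pool from the DrivePool UI so it picks up the seeded data.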
  20. 1 point
Pools behave as if they are regular NTFS-formatted volumes. However, any software that uses VSS (which many backup solutions do) is not supported. I don't know CrashPlan, so I couldn't say. Having said that, you could back up the underlying drives. If you use duplication, then Hierarchical Pools can ensure that you only back up one instance of the duplicates.
  21. 1 point
    srcrist

    GSuite Drive. Migration\Extension

A 4k (4096 byte) cluster size supports a maximum volume size of 16TB. Thus, adding an additional 10TB to your existing 10TB with that cluster size exceeds the maximum limit for the file system, so that resize simply won't be possible. Volume size limits are as follows:
    Cluster Size    Maximum Partition Size
    4 KB            16 TB
    8 KB            32 TB
    16 KB           64 TB
    32 KB           128 TB
    64 KB           256 TB
    This is unfortunately not possible, because of how CloudDrive works. However, a simple option available to you is to simply partition your drive into multiple volumes (of a maximum 16TB apiece) and recombine them using DrivePool into one large aggregate volume of whatever size you require (CloudDrive's actual and technical maximum is 1PB per drive).
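    For reference, these limits fall out of NTFS addressing at most 2^32 clusters per volume, so roughly:

      max volume size ≈ cluster size × 2^32
      e.g. 4 KB × 2^32 ≈ 16 TB, and 64 KB × 2^32 ≈ 256 TB

    which is why doubling the cluster size doubles the ceiling in the table above.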
  22. 1 point
    Have you tried remeasuring?
  23. 1 point
    I guess that would really depend on the client/server/service, and how it handles uploaded files. However, it shouldn't be an issue, in most cases.
  24. 1 point
    Umfriend

    2nd request for help

Use Remove. You can move files through Explorer, but if you do that you need to stop the DrivePool service first. Moreover, once you start the DP service again, it may try to rebalance files back to other drives, so you need to turn off balancing to prevent that from happening. Also, if you have duplication, then you want to disable that first. Yes, it will all take some time, but it has, AFAIK, never failed. Quick and dirty, though... not that failsafe sometimes. And even cutting/pasting will take quite some time.
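    If you do go the Explorer route, stopping and restarting the service from an elevated command prompt looks something like the sketch below; the service name DrivePoolService is an assumption, so confirm the exact name in services.msc first.

      REM stop DrivePool so it does not rebalance while files are moved by hand (service name assumed; check services.msc)
      net stop DrivePoolService
      REM ... move the files in Explorer ...
      REM bring the pool back online, then remeasure from the UI
      net start DrivePoolService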
  25. 1 point
No issues. I've been using SMR drives personally since Seagate released the Archive drives, and I have a pool of mixed drives: WD Reds, Seagate NAS, Seagate Archive. The only caveat is that the SMR drives have a "write cache" of non-SMR space. If you fill that (fairly easy to do), the write speeds get atrociously slow. Because of this, I would recommend using the SSD Optimizer to effectively bypass this issue. It doesn't matter if balancing is slow, since it is anyway. In fact, that's the setup that I have (for years now), and it works wonderfully.
  26. 1 point
Thanks for the response. Turns out, I was clicking on the down arrow, and that did not give the option to enable auto scanning. So after reading your response, I clicked on the "button" itself and it toggled to enabled. Problem solved. Auto scanning immediately started, so I know that it is working. Thanks.
  27. 1 point
It is. However, you should be able to set the "override" value for "CloudFsDisk_IsDriveRemovable" to "false", and it may fix the issue. But the drive will no longer be treated as a removable drive. https://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings
  28. 1 point
I believe that this is related to your Windows settings for how to handle "removable media." CloudDrive shows up as a removable drive, so if you selected to have Windows open Explorer when removable media is inserted, it will open when CloudDrive mounts. Check that Windows setting.
  29. 1 point
    fattipants2016

    Manualy delete duplicates?

Inside each physical disk that's part of the pool exists a hidden folder named with a unique identification ID. Inside these folders is the same folder structure as the pool itself. Your duplicated files/folders would simply be on the appropriate number of disks; they're not actually denoted as duplicates in any way. If files are now duplicated that shouldn't be, it may be enough to simply re-check duplication.
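    To see those hidden folders for yourself, a quick sketch from a command prompt (E: standing in for any pool member):

      dir /a E:\

    should list a folder along the lines of PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx on each disk; browsing into it shows the pool's folder tree, and a duplicated file simply appears inside the PoolPart folder on two (or more) disks.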
  30. 1 point
    RG9400

    Google Drive API Key switch

    You should be able to Re-Authorize the drive, which would start using the new API key
  31. 1 point
    Now I have an image of you having some sort of steampunk mechanical networking setup...
  32. 1 point
    xazz

    Unusable for duplication 1.56TB

From my reading elsewhere on the forum, I see that I should have enabled the "Duplication Space Optimizer" balancer. I will do that now and let you know how it goes.
  33. 1 point
In the SMART details for that drive, look for "Reallocated Sector Count", "Reallocation Event Count", and "Uncorrectable Sector Count". The raw values should be zero; if not, that means there are some bad sectors. It's not always the end of the world if there are only a few, but it may be an indication of why Scanner is showing an issue. If those are all zero, then I'm not sure what else to look for. Does Scanner show anywhere what that "1 warning" is? You'd think it would show you somewhere. I'm fairly new to StableBit Scanner myself, but hopefully you can figure this out somehow (and a StableBit rep stops by too, hopefully). Otherwise I'd put in a ticket with them. I did ask them a question about DrivePool and they responded within 24 hours.
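    If you'd rather read the raw values outside of Scanner, smartmontools (if you have it installed) prints the same attributes; a minimal sketch, with the device path being whatever smartctl --scan reports for that disk:

      smartctl -A /dev/sda

    Look at the RAW_VALUE column for attributes 5 (Reallocated_Sector_Ct), 196 (Reallocation_Event_Count) and 198 (Offline_Uncorrectable); non-zero values there line up with the warnings described above.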
  34. 1 point
    mondog

    QNAP Hardware?

Hello - I have been using WHS2011 and StableBit DrivePool on an HP ProLiant N54L for several years. I have been happy with it and don't really want to change; however, it is 2020 now and, as I understand it, support for WHS2011 ended in 2016... So, I got my hands on a QNAP TVS-671 (Intel Core i5 based) NAS and was wondering if there is any way to still use Windows/StableBit DrivePool with this hardware? The NAS does support VMs, but it has to be set up in some way (either JBOD or RAID) before I can create/use VMs, so I don't know whether a VM running Windows/DrivePool would work... Would it? Or if there's a way to run Windows/StableBit DrivePool on a separate physical machine and then use the NAS as a 6-bay enclosure connected by a single cable, with DrivePool providing the fault tolerance, etc., I would be interested in that. Or any other suggestions for a way that I could use this QNAP NAS hardware with Windows/StableBit DrivePool... Any ideas/suggestions will be appreciated! Thanks!!
  35. 1 point
No. If, and only if, the entire Pool had a fixed duplication factor then it *could* be done. E.g., 1TB of free space means you can save 0.5TB of net data with x2 duplication, or 0.33TB with x3 duplication, etc. However, as soon as you mix duplication factors, well, it really depends on where the data lands, doesn't it? So I guess they chose to only show actual free space without taking duplication into account. Makes sense to me. Personally, I over-provision all my Pools (a whopping two in total ;D) such that I can always evacuate the largest HDD. Peace of mind and continuity rule in my book.
  36. 1 point
Great, thanks. Fixed it for me.
  37. 1 point
    gx240

    Black Friday sales?

    Do you typically have Black Friday sales with reduced prices on your software (Stablebit Drivepool in particular)?
  38. 1 point
Same here. Any update from the developers? This issue was opened a month ago and nothing... Not very good, considering this is paid-for software.
  39. 1 point
Hello! I'm fairly new to StableBit, but liking the setup so far! I've got a few pools for various resources in my server. In case it's important, here's a quick idea of what I've got running: a Dell R730 running Windows Server 2019 Datacenter, connected to a 24-disk shelf via SAS. Each drive shows up individually, so I've used DrivePool to create larger buckets for my data. I'd like to have them redundant against a single drive failure, but I know that means duplicating everything. I will eventually have a pool dedicated to my VMs, and VMs deduplicate very well since each one requires essentially a copy of the base data, and while I will have backups of this data, I'd still like to have a quicker recovery from a drive failure in case that does happen, so they'd also be duplicated. (Scanner currently tells me one of my disks is throwing a SMART error, but I don't know how bad it is... I'd like to run it into the ground before replacing it, to save money on purchasing new hardware before it's actually dead.)
    So, I know deduplication isn't supported against the pool itself, but I was curious whether people have deduplicated the physical disks, and whether Windows dedupe sees the pool data and tries to deduplicate it? I noticed this thread (unfortunately it's locked for further comments as it's fairly old) was talking about deduplicating the drives that a pool uses, but I don't know if they meant the files that weren't part of a pool, or if they were talking about the files from the pool. If possible, I'd love to hear an official answer, since I'd rather not run this in an unsupported way, but I'm really hoping there's a way to deduplicate some of these files before I run myself out of space... Thanks for any info that you can provide!
  40. 1 point
This is correct. It isn't so much that you should not, it's that you can not. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70 Mbps. As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. It'll upload at the pace it can, and DrivePool will duplicate data as it's able.
    Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously.
    This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so.
    DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, it will be deleted from both locations if your balancer settings and file placement settings are configured to do so. It will pass the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate on the next duplication pass. Obviously, files mysteriously disappearing is a worrying sign worthy of further investigation and attention.
    It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional mode cache set to something like 75% write, 25% read. Note that DrivePool will also read stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed. So you'll want some read cache available.
    This isn't really relevant for your use case. The size of the files you are considering for storage will not be meaningfully influenced by a larger cluster size. Use the size you need for the volume size you require. Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, chkdsk, so you'll want to keep it below that. Relatedly, note that you can partition a single CloudDrive into multiple sub-60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary.
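    For reference, the ~70 Mbps figure is just the daily cap spread across 24 hours; a back-of-envelope check:

      750 GB/day ÷ 86,400 s/day ≈ 8.7 MB/s ≈ 69 Mbps

    so throttling the upload to roughly 70 Mbps (or a little under) keeps a continuously uploading drive at or below Google's 750GB/day limit.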
  41. 1 point
    I believe you need to seed the pool. See Pool Seeding
  42. 1 point
    exterrestris

    Drivepool With Snapraid

My snapraid.conf is pretty standard - I haven't really changed any of the defaults (so I haven't included them). I choose to keep a copy of the content file on every disk, but that's not strictly necessary.

    # Defines the file to use as parity storage
    # It must NOT be in a data disk
    # Format: "parity FILE [,FILE] ..."
    parity C:\Snapraid\Parity\1\snapraid.parity

    # Defines the files to use as content list
    # You can use multiple specification to store more copies
    # You must have least one copy for each parity file plus one. Some more don't hurt
    # They can be in the disks used for data, parity or boot,
    # but each file must be in a different disk
    # Format: "content FILE"
    content C:\Snapraid\Parity\1\snapraid.content
    content C:\Snapraid\Data\1\snapraid.content
    content C:\Snapraid\Data\2\snapraid.content
    content C:\Snapraid\Data\3\snapraid.content
    content C:\Snapraid\Data\4\snapraid.content

    # Defines the data disks to use
    # The name and mount point association is relevant for parity, do not change it
    # WARNING: Adding here your boot C:\ disk is NOT a good idea!
    # SnapRAID is better suited for files that rarely changes!
    # Format: "data DISK_NAME DISK_MOUNT_POINT"
    data d1 C:\Snapraid\Data\1\PoolPart.a5f57749-53fb-4595-9bad-5912c1cfb277
    data d2 C:\Snapraid\Data\2\PoolPart.7d66fe3d-5e5b-4aaf-a261-306e864c34fa
    data d3 C:\Snapraid\Data\3\PoolPart.a081b030-04dc-4eb5-87ba-9fd5f38deb7b
    data d4 C:\Snapraid\Data\4\PoolPart.65ea70d5-2de5-4b78-bd02-f09f32ed4426

    # Excludes hidden files and directories (uncomment to enable).
    #nohidden

    # Defines files and directories to exclude
    # Remember that all the paths are relative at the mount points
    # Format: "exclude FILE"
    # Format: "exclude DIR\"
    # Format: "exclude \PATH\FILE"
    # Format: "exclude \PATH\DIR\"
    exclude *.unrecoverable
    exclude Thumbs.db
    exclude \$RECYCLE.BIN
    exclude \System Volume Information
    exclude \Program Files\
    exclude \Program Files (x86)\
    exclude \Windows\
    exclude \.covefs

    As for DrivePool balancers, yes, turn them all off. The Scanner is useful to keep if you want automatic evacuation of a failing drive, but not essential, and the SSD Optimiser is only necessary if you have a cache drive to use as a landing zone. If you don't use a landing zone, then you can disable automatic balancing, but if you do then you need it to balance periodically - once a day rather than immediately is best, as you ideally want the SnapRAID sync to happen shortly after the balance completes. I'm not sure what the default behaviour of DrivePool is supposed to be when all balancers are disabled, but I think it does split evenly across the disks.
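    One way to line the sync up with a daily balance window is a scheduled task; a rough sketch (the task name, snapraid.exe path, config path, and time are placeholders - point it at wherever your snapraid.exe and snapraid.conf actually live and pick a time shortly after your balancing run):

      schtasks /Create /TN "SnapRAID Sync" /TR "C:\Snapraid\snapraid.exe -c C:\Snapraid\snapraid.conf sync" /SC DAILY /ST 03:00 /RU SYSTEM

    Running it as SYSTEM means the sync runs whether or not anyone is logged in, and the same pattern works for a second task that does a periodic scrub.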
  43. 1 point
So when you add a 6TB HDD to that setup, and assuming you have not tinkered with the balancing settings, any _new_ files would indeed be stored on that 6TB HDD. A rebalancing pass, which you can start manually, will fill it up as well. With default settings, DP will try to ensure that each disk has the same amount of free space. It would therefore write to the 6TB first until 4TB is free, then equally to the 6TB and 4TB until both have 3TB free, etc. The 500GB HDD will see action only when the others have 500GB or less available. This is at default settings and without duplication.
  44. 1 point
    Umfriend

    moving drives around

    Yes. I have never tried it but DP should not need drive letters. You can also map drives to folders somehow so that you can still easily explore them. Not sure how that works but there are threads on this forum.
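    The "map drives to folders" part presumably refers to NTFS mount points; a minimal sketch (the folder path is a placeholder, and the volume GUID comes from mountvol's own listing):

      mountvol
      mkdir C:\Mounts\PoolDisk1
      mountvol C:\Mounts\PoolDisk1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

    The first command lists volume GUIDs and their current mount points, and the last one mounts the chosen volume into the empty folder. The same thing can be done from Disk Management ("Change Drive Letter and Paths..." > "Mount in the following empty NTFS folder"), and the disk stays usable in the pool without a drive letter either way.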
  45. 1 point
    Umfriend

    moving drives around

    TL;DR but yes, DP will recognise the Pool. You could disconnect them all and plug them in on another machine and DP would see the Pool again. One small caveat is that if you use plug-ins that are not installed on the new machine then you may have some unwanted behaviour. Other than that, it should work.
  46. 1 point
    Thank you everyone who has commented on this thread - with your help I was able to get everything working again! Thanks for being patient !
  47. 1 point
    Yes, that's definitely a false positive. It's just some of the troubleshooting stuff for the UI. It's nothing harmful. And if you check, the file should be digitally signed. A good indicator that it's legit.
  48. 1 point
    Sure. I purchased and installed PartedMagic onto a USB. I then booted using this USB to run a Secure Erase, but it was not able to complete successfully. So I ran DD (through PartedMagic as well) on the drive around 5 times. I then converted the disk to GPT using diskpart and installed a fresh copy of Windows. I used CHKDSK, StableBit Scanner, and Intel SSD Toolbox (Full Diagnostic) to confirm that read/writes were functioning as intended. Based on what I could understand from Intel, it seems like the Optane drives are fairly unique due to their usage of 3D XPoint technology which caused the specific/strange behavior I was facing.
  49. 1 point
I definitely suggest configuring SnapRAID so it points to the DrivePool folder with the GUID inside the config file, so that it's much easier to restore. SnapRAID doesn't have to point at the root of the drive; it can be anywhere you like (as long as they are on different physical drives). So instead of doing:
    data d1 z:\
    data d2 y:\
    data d3 x:\
    You do:
    data d1 z:\drivepool.{guid}\
    data d2 y:\drivepool.{guid}\
    data d3 x:\drivepool.{guid}\
    That way, after a failure, e.g. d2 dies, you drop your new drive in, add it to the pool, get the new GUID from the new drive, and edit your snapraid conf to comment out the old drive and add the new one by changing d2 y:\drivepool.{guid}\ to d2 y:\drivepool.{newguid}\ like so:
    data d1 z:\drivepool.{guid}\
    #data d2 y:\drivepool.{guid}\
    data d2 y:\drivepool.{newguid}\
    data d3 x:\drivepool.{guid}\
    Then run your fix and it all just works - and you don't have to move your files around.
  50. 1 point
    Christopher (Drashna)

    Read Striping

Full stop: no. The Read Striping may improve performance, but we still have the added overhead of reading from NTFS volumes, and the performance profile for doing so. RAID 1 works at a block level, and it's able to split IO requests between the disks. So one disk could read the partition table information, while the second disk starts reading the actual file data. This is a vast oversimplification, but a good illustration of what happens. So, while we may read from both disks in parallel, there are a number of additional steps that we have to perform to do so. To be blunt, the speed is never going to rival hardware RAID, between the disk switching and the reading of blocks of the file and caching into memory. That said, you should still see at least the native disk speeds, or a bit better. But this depends on what the disks are doing, specifically. CrystalDiskInfo probably isn't going to get great stats because of how it tests the data. At best, enable pool file duplication, so that EVERYTHING is duplicated, make sure Read Striping is enabled, and then test.
