
Leaderboard


Popular Content

Showing content with the highest reputation since 06/06/19 in Posts

  1. 3 points
    malse

    WSL2 Support for drive mounting

    Hi, I'm using Windows 10 2004 with WSL2. I have 3x drives: C:\ (SSD), E:\ (NVMe), D:\ (DrivePool of 2x 4TB HDD). When the drives are mounted on Ubuntu, I can run ls -al and it shows all the files and folders on the C and E drives. This is not possible on D. When I run ls -al on D, it returns 0 results, but strangely enough I can still cd into the directories on D. Is this an issue with DrivePool being mounted? It seems like the only logical difference (aside from it being mechanical) between the other drives. They are all NTFS.
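    As a diagnostic sketch (not a confirmed fix), it can be worth mounting the pool drive a second time by hand with drvfs and comparing the listing; the mount point name below is just an example:

    # Run from PowerShell on the Windows host; D: is the DrivePool drive letter.
    wsl -u root -- mkdir -p /mnt/pooltest
    wsl -u root -- mount -t drvfs D: /mnt/pooltest
    wsl -- ls -al /mnt/pooltest    # compare against the automatic /mnt/d mount
    wsl -u root -- umount /mnt/pooltest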
  2. 3 points
    srcrist

    Optimal settings for Plex

    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
    20MB chunk size
    50+ GB expandable cache
    10 download threads
    5 upload threads, turn off background I/O
    Upload threshold: 1MB or 5 minutes
    Minimum download size: 20MB
    20MB prefetch trigger
    175MB prefetch forward
    10 second prefetch time window
  3. 2 points
    gtaus

    2nd request for help

    I have only been using DrivePool for a short period, but if I understand your situation, you should be able to open the DrivePool UI and click "Remove" for the drives you no longer want in the pool. I have done this in DrivePool and it did a good job of transferring the files from the removed drive to the other pool drives. However, given that nowadays we have large HDDs in our pools, the process takes a long time. Patience is a virtue.

    Another option is to simply view the hidden files on those HDDs you no longer want to keep in DrivePool, and then copy them all over to the one drive on which you want to consolidate your information. Once you verify all your files have been successfully reassembled on that one drive, you could go back and format those other drives.

    The main advantage I see with DrivePool is that the files are written to the HDD as standard NTFS files, and if you decide to leave the DrivePool environment, all those files are still accessible by simply viewing the hidden directory. I am coming from Windows Storage Spaces, where bits and pieces of files are written across the HDDs in the pool. When things go bad with Storage Spaces, there is no way to reassemble the broken files spread across a number of HDDs. At least with DrivePool, the entire file is written to an HDD as a standard file, so in theory you should be able to copy those files from the pool HDDs over to one HDD and have a complete directory. I used the Duplication feature of DrivePool for important directories.

    Again, I am still learning the benefits of DrivePool over Storage Spaces, but so far I think DrivePool has the advantage of recovering data from a catastrophic failure, whereas I lost all my data in Storage Spaces. If there is a better way to transfer your DrivePool files to one HDD, I would like to know for my benefit as well.
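    If you go the copy-it-yourself route, here is a minimal sketch of that copy step using robocopy; the PoolPart folder name, destination, and log path are placeholders for your own drives:

    # Copy everything out of a hidden PoolPart folder onto the consolidation drive.
    robocopy "D:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" "F:\Consolidated" /E /COPY:DAT /DCOPY:T /R:1 /W:1 /XJ /LOG:"C:\Temp\consolidate.log"
    # Check the file and byte counts in the log before formatting the old drive.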
  4. 2 points
    They are not comparable products. Both applications are more similar to the popular rClone solution for Linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data; you must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data, because they do not use a genuine drive image - they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.

    None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, it is more vulnerable, because some of that data on your provider is the file system data for the drive.

    Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another; we don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks, and they corrupted the file system. This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive; it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of their users' storage is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever.

    Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like TestDisk and Recuva. It simply wasn't worth the effort to rebuild the volumes rather than just discard the data and rebuild, because it was expendable data. Genuinely irreplaceable data could be recovered, though, so it isn't even really accurate to call it data loss.

    This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point?
    It is, frankly, a problem that we will hopefully never have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality, and there isn't anything to be done about that. So your options are to either a) only store data that you can afford to lose, or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and to track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will.

    The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Conversely, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption, you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. Either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow, or shut down Drive completely like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
  5. 2 points
    I'm always impressed with how far you go to help people with their questions, no matter how easy or complex. Thanks Chris.
  6. 1 point
    hammerit

    WSL 2 support

    I tried to access my DrivePool drive via WSL 2 and got this. Any solution? I'm using 2.3.0.1124 BETA.
    ➜ fludi cd /mnt/g
    ➜ g ls
    ls: reading directory '.': Input/output error
    Related thread: https://community.covecube.com/index.php?/topic/5207-wsl2-support-for-drive-mounting/#comment-31212
  7. 1 point
    srcrist

    Setting Cache Drive destination

    There are some inherent flaws with USB storage protocols that would preclude a USB drive from being used as the cache for CloudDrive. You can see some discussion of the issue here: I don't believe they ever added the ability to use one. At least not yet.
  8. 1 point
    srcrist

    GSuite Drive. Migration\Extension

    A 4k (4096) cluster size supports a maximum volume size of 16TB. Thus, adding an additional 10TB to your existing 10TB with that cluster size exceeds the maximum limit for the file system, so that resize simply won't be possible. Volume size limits are as follows:

    Cluster Size    Maximum Partition Size
    4 KB            16 TB
    8 KB            32 TB
    16 KB           64 TB
    32 KB           128 TB
    64 KB           256 TB

    This is unfortunately not possible, because of how CloudDrive works. However, a simple option available to you is to simply partition your drive into multiple volumes (of a maximum 16TB apiece) and recombine them using DrivePool into one large aggregate volume of whatever size you require (CloudDrive's actual technical maximum is 1PB per drive).
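    Those limits follow from the classic NTFS ceiling of roughly 2^32 clusters per volume, so the maximum volume size is just the cluster size times 2^32. A quick PowerShell check of the arithmetic behind the table:

    # Maximum NTFS volume size ~= cluster size * 2^32 clusters
    foreach ($kb in 4, 8, 16, 32, 64) {
        $maxTB = ($kb * 1KB) * [math]::Pow(2, 32) / 1TB
        "{0,2} KB cluster -> {1,3} TB maximum volume" -f $kb, $maxTB
    }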
  9. 1 point
    Have you tried remeasuring?
  10. 1 point
    Christopher (Drashna)

    Hiding Drives

    Welcome! And yeah, it's a really nice way to set up the system. It hides the drives and keeps them accessible at the same time.
  11. 1 point
    I guess that would really depend on the client/server/service, and how it handles uploaded files. However, it shouldn't be an issue, in most cases.
  12. 1 point
    Umfriend

    2nd request for help

    Use Remove. You can move files through Explorer, but if you do that you need to stop the DrivePool service first. Moreover, once you start the DrivePool service again, it may try to rebalance files back to the other drives, so you need to turn off balancing to prevent that from happening. Also, if you have duplication, then you want to disable that first. Yes, it will all take some time, but it has, AFAIK, never failed. The quick and dirty route is not that failsafe sometimes, and even cutting/pasting will take quite some time.
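    If you do take the manual route, a rough sketch of the service stop/start from PowerShell is below; the wildcard lookup is there because the exact service name is not guaranteed to match any particular guess:

    # Confirm the match before stopping anything.
    $svc = Get-Service *DrivePool* | Select-Object -First 1
    $svc | Stop-Service -Force
    # ... move files between the hidden PoolPart folders by hand ...
    $svc | Start-Service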
  13. 1 point
    Christopher (Drashna)

    Hiding Drives

    You can remove the drive letters and map to folder paths. We actually have a guide on how to do this: https://wiki.covecube.com/StableBit_DrivePool_Q4822624 It would be, but the only problem is that it would be too easy to break existing configurations. Which is why we don't have the option to do so.
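    For reference, a hedged sketch of the same letter-to-folder mapping using the built-in Storage cmdlets instead of Disk Management; the drive letter and mount folder are examples only:

    # Add the folder path first so the volume is never left without an access path.
    New-Item -ItemType Directory -Path "C:\DrivePool\Disk1" -Force | Out-Null
    Get-Partition -DriveLetter E | Add-PartitionAccessPath -AccessPath "C:\DrivePool\Disk1"
    Get-Partition -DriveLetter E | Remove-PartitionAccessPath -AccessPath "E:\"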
  14. 1 point
    Thanks for the response. Turns out, I was clicking on the down arrow and that did not give the option of enable auto scanning. So after reading your response, I clicked on the "button" itself and it toggled to enabled. Problem solved. Auto scanning immediately started so I know that it is working. Thanks.
  15. 1 point
    It is. However, you should be able to set the "override" value for "CloudFsDisk_IsDriveRemovable" to "false", and that may fix the issue. But the drive will no longer be treated as a removable drive. https://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings
  16. 1 point
    Nope, definitely not dead! But if it's urgent, the best way is to direct any support queries to https://stablebit.com/Contact
  17. 1 point
    I believe that this is related to your Windows settings for how to handle "removable media." CloudDrive shows up as a removable drive, so if you selected to have Windows open Explorer when removable media is inserted, it will open when CloudDrive mounts. Check that Windows setting.
  18. 1 point
    Umfriend

    Using Drives inside Pool?

    No, that is just fine. There is no issue with adding a disk to a pool and then placing data on that disk beside it (i.e. outside the hidden PoolPart.* folder on that drive).
  19. 1 point
    Christopher (Drashna)

    Manually delete duplicates?

    StableBit DrivePool doesn't have a master copy and a subordinate/duplicate copy. Both are equally viable, and treated as such. This is very different from how Drive Bender handles things. As for being documented, not really, but sort of. E.g.: https://wiki.covecube.com/StableBit_DrivePool_Knowledge_base#How_To.27s

    That said, if the data is in the same relative path under the PoolPart folders, the files are considered duplicates. Changing the duplication settings, or even remeasuring, can kick off a duplication pass that will automatically prune the duplicates as needed. Also, the "dpcmd" utility has an option to disable duplication for the entire pool, recursively. However, that kicks off a duplication pass that actually manages the files.

    Just have both products installed. That's it. You can fine tune settings in StableBit DrivePool, in the balancing settings, as the "StableBit Scanner" balancer is one of the 5 preinstalled balancer plugins.

    That should be fixed now, though the file system scan won't trigger the drive evacuation. And yeah, that fix shipped in the 2.5.5 version, and the latest stable release is 2.5.6, so this definitely shouldn't be an issue anymore (we haven't seen it in a while).
  20. 1 point
    fattipants2016

    Manually delete duplicates?

    Inside each physical disk that's part of the pool exists a hidden folder named with a unique identification ID. Inside these folders is the same folder structure as the pool itself. Your duplicated files/folders are simply present on the appropriate number of disks; they're not actually denoted as duplicates in any way. If files are now duplicated that shouldn't be, it may be enough to simply re-check duplication.
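    If you want to see that for yourself, a rough PowerShell sketch that lists relative paths present in more than one PoolPart folder (the drive letters are placeholders for your pooled disks):

    # Find the hidden PoolPart folders on the pooled disks.
    $poolParts = Get-ChildItem -Path "E:\", "F:\", "G:\" -Directory -Hidden -Filter "PoolPart.*"
    $poolParts | ForEach-Object {
        $root = $_.FullName
        Get-ChildItem -Path $root -Recurse -File -Force |
            ForEach-Object { $_.FullName.Substring($root.Length) }
    } | Group-Object | Where-Object Count -ge 2 |
        Select-Object Count, Name    # Name = relative path, Count = number of copies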
  21. 1 point
    Christopher (Drashna)

    Drive evacuation priority

    Honestly, I'm not sure. I suspect that the processing is per drive, rather than duplicated/unduplicated, first. Since I'm not sure, I've opened a request, to ask Alex (the Developer). https://stablebit.com/Admin/IssueAnalysis/28301 Not currently. It would require a change to both StableBit DrivePool and StableBit Scanner to facilitate this. My recommendation would be to just evacuate unduplicated data on SMART warnings.
  22. 1 point
    In principle, yes. Not sure how to guarantee that they will stay there due to rebalancing, unless you use file placement rules.
  23. 1 point
    In the SMART details for that drive, look for "Reallocated Sector Count", "Reallocation Event Count", and "Uncorrectable Sector Count". The raw values should be zero; if not, that means there are some bad sectors. It's not always the end of the world if there are only a few, but it may be an indication of why Scanner is showing an issue. If those are all zero, then I'm not sure what else to look for. Does Scanner show what that "1 warning" is anywhere? You'd think that it would show you somewhere what that "1 warning" is. I'm fairly new to StableBit Scanner myself, but hopefully you can figure this out (and hopefully a StableBit rep stops by too). Otherwise I'd put in a ticket with them. I asked them a question about DrivePool and they responded within 24 hours.
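    As a rough cross-check outside of Scanner, Windows exposes some (not all) drive reliability counters to PowerShell; which values are populated varies by drive and driver, so treat this as supplementary to the SMART raw values:

    # Quick look at per-disk error counters (may be blank on some drives/controllers).
    Get-PhysicalDisk |
        Get-StorageReliabilityCounter |
        Select-Object DeviceId, Temperature, ReadErrorsUncorrected, ReadErrorsTotal, Wear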
  24. 1 point
    mondog

    QNAP Hardware?

    Hello - I have been using WHS2011 and StableBit DrivePool on an HP ProLiant N54L for several years. I have been happy with it and don't really want to change; however, it is 2020 now and, as I understand it, support for WHS2011 ended in 2016... So, I got my hands on a QNAP TVS-671 (Intel Core i5 based) NAS and was wondering if there is any way to still use Windows/StableBit DrivePool with this hardware? The NAS does support VMs, but it has to be set up in some way (either JBOD or RAID) before I can create/use VMs, so I don't know whether a VM running Windows/DrivePool would work... Would it? Or, if there's a way to run Windows/StableBit DrivePool on a separate physical machine and then use the NAS as a 6-bay enclosure connected by a single cable, with DrivePool providing the fault tolerance, etc., I would be interested in that... Or any other suggestions for a way that I could use this QNAP NAS hardware with Windows/StableBit DrivePool... Any ideas/suggestions will be appreciated! Thanks!!
  25. 1 point
    No. If, and only if, the entire pool had a fixed duplication factor then it *could* be done. E.g., 1TB of free space means you can save 0.5TB of net data with x2 duplication, or 0.33TB with x3 duplication, etc. However, as soon as you mix duplication factors, it really depends on where the data lands, doesn't it? So I guess they chose to only show actual free space without taking duplication into account. Makes sense to me. Personally, I over-provision all my pools (a whopping two in total ;D) such that I can always evacuate the largest HDD. Peace of mind and continuity rule in my book.
  26. 1 point
    Great, thanks. Fixed it for me.
  27. 1 point
    To reauthorize, you shouldn't have to look at the providers list. (Although it's not in the list because you have not enabled 'experimental providers'. See below.) To reauthorize, you need to click on 'Manage Drive', as shown below.
  28. 1 point
    So an API key is for an application. It's what the app uses to contact Google and make requests of their service. So your new API key will be used by the application to request access to all of your drives--regardless of the Google account that they are on. The API key isn't the authorization to use a given account; it's just the key that the application uses to request access to the data on whatever account you sign in with. As an obvious example, StableBit's default key for CloudDrive was obviously created on their Google account, but you were still using it to access your drives before changing it to your own key just now. When you set it up, you'll see that you still have to sign in and approve your app. It'll even give you a warning, since, unlike with the actual CloudDrive key, Google can't vouch for the app requesting access with your key.

    This just isn't how an API key works. Are you sure you're logging in with the correct account for each drive once you've added the new key? You don't log in with the account you used to create the key. You still log in with whatever credentials you used to create each drive.
  29. 1 point
    I've been puzzled why unduplicated content on a drive with SMART warnings and damage was not being removed, despite setting various balancers to do so. I discovered today part of the reason - possibly all of it. I believe all the remaining unduplicated content was backup images from Macrium Reflect. Reflect has an anti-ransomware function that can prevent any changes to backup files, and this was preventing DrivePool from removing them. I realized this when I shut down the DrivePool service and tried to manually move those files to another drive. I'd have expected DrivePool to report that the files were inaccessible, but apparently it did not know other software was actually blocking it - which brings me to the next issue.

    StableBit Scanner reported 20 unreadable sectors on this drive and 3 damaged files. SMART had indicated 5 pending sectors. I decided to re-run the scan after disabling the Macrium Image Guard - so far, it appears the unreadable sectors may have been caused by Image Guard and may not be bad. It remains to be seen whether the 5 pending sectors will end up being reallocated or become readable. The damaged files, however, were NOT image backups, so it's unclear if there was any connection.

    Bottom line: don't use Macrium Image Guard (or any other similar software) with pooled files. I may just move my image files out of the pool to avoid the issue.
  30. 1 point
    gx240

    Black Friday sales?

    Do you typically have Black Friday sales with reduced prices on your software (Stablebit Drivepool in particular)?
  31. 1 point
    Same here. Any update from the developers? This issue was opened a month ago and nothing... Not very good, considering this is paid-for software.
  32. 1 point
    Hello! I'm fairly new to StableBit, but liking the setup so far! I've got a few pools for various resources in my server. In case it's important, here's a quick idea of what I've got running: a Dell R730 running Windows Server 2019 Datacenter, connected to a 24-disk shelf via SAS. Each drive shows up individually, so I've used DrivePool to create larger buckets for my data. I'd like to have them redundant against a single drive failure, but I know that means duplicating everything. I will eventually have a pool dedicated to my VMs, and VMs deduplicate very well since each one requires essentially a copy of the base data; and while I will have backups of this data, I'd still like a quicker recovery from a drive failure in case that does happen, so they'd also be duplicated... (Scanner currently tells me one of my disks is throwing a SMART error, but I don't know how bad it is... I'd like to run it into the ground before replacing it to save money on purchasing new hardware before it's actually dead...)

    So, I know deduplication isn't supported against the pool itself, but I was curious whether people have deduplicated the physical disks, and whether Windows dedupe sees the pool data and tries to deduplicate it? I noticed this thread (unfortunately it's locked for further comments, as it's fairly old), which was talking about deduplicating the drives that a pool uses, but I don't know if they meant the files that weren't part of a pool, or if they were talking about the files from the pool. If possible, I'd love to hear an official answer, since I'd rather not run this in an unsupported way, but I'm really hoping there's a way to deduplicate some of these files before I run myself out of space... Thanks for any info that you can provide!
  33. 1 point
    This is correct. It isn't so much that you should not, it's that you can not. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70mbps. As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. It'll upload at the pace it can, and DrivePool will duplicate data as it's able.

    Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously.

    This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so.

    DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, it will be deleted from both locations if your balancer settings and file placement settings are configured to do so. It will pass the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate it on the next duplication pass. Obviously, files mysteriously disappearing is a worrying sign worthy of further investigation and attention.

    It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional mode cache set to something like 75% write, 25% read. Note that DrivePool will also read stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed. So you'll want some read cache available.

    This isn't really relevant for your use case. The size of the files you are considering for storage will not be meaningfully influenced by a larger cluster size. Use the size you need for the volume size you require. Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, Chkdsk, so you'll want to keep it below that. Relatedly, note that you can partition a single CloudDrive into multiple sub-60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary.
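    For what it's worth, the ~70mbps throttle mentioned above is just the 750GB/day cap expressed as a sustained rate (decimal units, roughly as Google counts them):

    # 750 GB/day expressed as a continuous upload rate
    $capGB = 750
    $mbps  = $capGB * 1000 / 86400 * 8    # GB/day -> MB/s -> megabits/s
    "{0} GB/day ~= {1:N1} mbps sustained" -f $capGB, $mbps    # ~69.4 mbps, hence the ~70mbps advice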
  34. 1 point
    There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider. It is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain.

    As far as reinstalling Windows or changing to a different computer, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force mount the drive, and CloudDrive will read the data, but you may lose any data that was sitting in your cache waiting to be uploaded during the failure. Note that CloudDrive does not upload user-accessible data to your provider by design; other tools like rClone would be required to accomplish that.

    My general advice, in any case, would be to enable encryption. There is effectively no added overhead from using it, and the peace of mind is well worth it.
  35. 1 point
    I believe you need to seed the pool. See Pool Seeding
  36. 1 point
    exterrestris

    Drivepool With Snapraid

    My snapraid.conf is pretty standard - I haven't really changed any of the defaults (so I haven't included them). I choose to keep a copy of the content file on every disk, but that's not strictly necessary.

    # Defines the file to use as parity storage
    # It must NOT be in a data disk
    # Format: "parity FILE [,FILE] ..."
    parity C:\Snapraid\Parity\1\snapraid.parity

    # Defines the files to use as content list
    # You can use multiple specification to store more copies
    # You must have least one copy for each parity file plus one. Some more don't hurt
    # They can be in the disks used for data, parity or boot,
    # but each file must be in a different disk
    # Format: "content FILE"
    content C:\Snapraid\Parity\1\snapraid.content
    content C:\Snapraid\Data\1\snapraid.content
    content C:\Snapraid\Data\2\snapraid.content
    content C:\Snapraid\Data\3\snapraid.content
    content C:\Snapraid\Data\4\snapraid.content

    # Defines the data disks to use
    # The name and mount point association is relevant for parity, do not change it
    # WARNING: Adding here your boot C:\ disk is NOT a good idea!
    # SnapRAID is better suited for files that rarely changes!
    # Format: "data DISK_NAME DISK_MOUNT_POINT"
    data d1 C:\Snapraid\Data\1\PoolPart.a5f57749-53fb-4595-9bad-5912c1cfb277
    data d2 C:\Snapraid\Data\2\PoolPart.7d66fe3d-5e5b-4aaf-a261-306e864c34fa
    data d3 C:\Snapraid\Data\3\PoolPart.a081b030-04dc-4eb5-87ba-9fd5f38deb7b
    data d4 C:\Snapraid\Data\4\PoolPart.65ea70d5-2de5-4b78-bd02-f09f32ed4426

    # Excludes hidden files and directories (uncomment to enable).
    #nohidden

    # Defines files and directories to exclude
    # Remember that all the paths are relative at the mount points
    # Format: "exclude FILE"
    # Format: "exclude DIR\"
    # Format: "exclude \PATH\FILE"
    # Format: "exclude \PATH\DIR\"
    exclude *.unrecoverable
    exclude Thumbs.db
    exclude \$RECYCLE.BIN
    exclude \System Volume Information
    exclude \Program Files\
    exclude \Program Files (x86)\
    exclude \Windows\
    exclude \.covefs

    As for DrivePool balancers, yes, turn them all off. The Scanner is useful to keep if you want automatic evacuation of a failing drive, but not essential, and the SSD Optimiser is only necessary if you have a cache drive to use as a landing zone. If you don't use a landing zone, then you can disable automatic balancing, but if you do then you need it to balance periodically - once a day rather than immediately is best, as you ideally want the SnapRAID sync to happen shortly after the balance completes. I'm not sure what the default behaviour of DrivePool is supposed to be when all balancers are disabled, but I think it does split evenly across the disks.
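    One way to get the sync to land shortly after a nightly balance is a scheduled task; the snapraid.exe path and the 04:00 start time below are assumptions to adjust to your own layout and balance window:

    # Assumes the nightly balance finishes before ~04:00; adjust paths and times.
    $action  = New-ScheduledTaskAction -Execute "C:\Snapraid\snapraid.exe" -Argument "sync" -WorkingDirectory "C:\Snapraid"
    $trigger = New-ScheduledTaskTrigger -Daily -At "04:00"
    Register-ScheduledTask -TaskName "SnapRAID Sync" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest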
  37. 1 point
    So when you add a 6TB HDD to that setup, and assuming you have not tinkered with the balancing settings, any _new_ files would indeed be stored on that 6TB HDD. A rebalancing pass, which you can start manually, will fill it up as well. With default settings, DP will try to ensure that each disk has the same amount of free space. It would therefore write to the 6TB first until 4TB is free, then equally to the 6TB and 4TB until both have 3TB free, etc. The 500GB HDD will see action only when the others have 500GB or less available. This is at default settings and without duplication.
  38. 1 point
    Umfriend

    moving drives around

    Yes. I have never tried it but DP should not need drive letters. You can also map drives to folders somehow so that you can still easily explore them. Not sure how that works but there are threads on this forum.
  39. 1 point
    Spider99

    Samsung 9xx NVMe support

    It depends on the OS. Win10 will work, but say 2012r2 will not - or that's how it works for me with my 950 Pros - unless 960s work differently.
  40. 1 point
    Umfriend

    moving drives around

    Do you have Scanner? And yeah, even though I have a far smaller Pool (6 HDD in a 9 HDD setup), I label them with a sticker.
  41. 1 point
    Christopher (Drashna)

    moving drives around

    Yeah, as long as the drives show up correctly, then it will maintain the pool. As for the power savings, that entirely depends on the power consumption, both at peak and idle. And to be honest, you probably won't see a lot of power savings. The best way to get that savings is to use large capacity SSDs, which is what data centers do. However, they can be a LOT more expensive.
  42. 1 point
    Umfriend

    moving drives around

    TL;DR but yes, DP will recognise the Pool. You could disconnect them all and plug them in on another machine and DP would see the Pool again. One small caveat is that if you use plug-ins that are not installed on the new machine then you may have some unwanted behaviour. Other than that, it should work.
  43. 1 point
    Thank you everyone who has commented on this thread - with your help I was able to get everything working again! Thanks for being patient !
  44. 1 point
    Yes, that's definitely a false positive. It's just some of the troubleshooting stuff for the UI. It's nothing harmful. And if you check, the file should be digitally signed. A good indicator that it's legit.
  45. 1 point
    Sure. I purchased and installed PartedMagic onto a USB. I then booted using this USB to run a Secure Erase, but it was not able to complete successfully. So I ran DD (through PartedMagic as well) on the drive around 5 times. I then converted the disk to GPT using diskpart and installed a fresh copy of Windows. I used CHKDSK, StableBit Scanner, and Intel SSD Toolbox (Full Diagnostic) to confirm that read/writes were functioning as intended. Based on what I could understand from Intel, it seems like the Optane drives are fairly unique due to their usage of 3D XPoint technology which caused the specific/strange behavior I was facing.
  46. 1 point
    If you'd like to see the genesis of this script, check out my original thread here. Since I finally got my PowerShell script running, I thought I'd post it here in case anyone else might find it helpful.

    SYNOPSIS: Script will move files from one DrivePool to another according to FIFO policy
    REQUIRED INFRASTRUCTURE: The expected layout is a DrivePool consisting of two DrivePools, one magnetic and one solid state.

    The main variables are pretty obviously documented. I added the file archive limit for people like me who also run SnapRAID Helper. That way the script doesn't trip the 'deleted' file limit (I'm assuming moves would trip it, but I didn't actually test it). Warning: I've obviously only tested this on my system. Please test this extensively on your system after you have ensured good backups. I certainly don't expect anything to go wrong, but that doesn't mean that it can't. The code is full of on-screen debugging output. I'm not a great coder, so if I've done anything wrong, please let me know. I've posted the code here so that you can C&P it into a script of your own, since Windows can be annoying about downloaded scripts. Please let me know if you have any questions.

    Set-StrictMode -Version 1

    # Script drivePoolMoves.ps1
    <#
    .SYNOPSIS
    Script will move files from one DrivePool to another according to FIFO policy
    .DESCRIPTION
    The script can be set to run as often as desired. The expected layout is a DrivePool consisting of two DrivePools, one magnetic and one solid state.
    .NOTES
    Author : fly (Zac)
    #>

    # Number of files to move before rechecking SSD space
    $moveCount = 1

    # Path to PoolPart folder on magnetic DrivePool drive
    $archiveDrive = "E:\PoolPart.xxxxx\Shares\"

    # Path to PoolPart folder on SSD DrivePool drive
    $ssdSearchPath = "F:\PoolPart.xxxxx\Shares\"

    # Minimum SSD drive use percent. Below this amount, stop archiving files.
    $ssdMinUsedPercent = 50

    # Maximum SSD drive use percent. Above this amount, start archiving files.
    $ssdMaxUsedPercent = 80

    # Do not move more than this many files
    $fileArchiveLimit = 200

    # Exclude these file/folder names
    [System.Collections.ArrayList]$excludeList = @('*.covefs*', '*ANYTHING.YOU.WANT*')

    # Other stuff
    $ssdDriveLetter = ""
    $global:ssdCurrentUsedPercent = 0
    $fileNames = @()
    $global:fileCount = 0
    $errors = @()

    Write-Output "Starting script..."

    function CheckSSDAbove($percent) {
        $ssdDriveLetter = $ssdSearchPath.Substring(0, 2)

        Get-WmiObject Win32_Volume | Where-Object {$ssdDriveLetter -contains $_.DriveLetter} | ForEach {
            $global:ssdUsedPercent = (($_.Capacity - $_.FreeSpace) * 100) / $_.Capacity
            $global:ssdUsedPercent = [math]::Round($ssdUsedPercent, 2)
        }

        If ($ssdUsedPercent -ge $percent) {
            Return $true
        }
        Else {
            Return $false
        }
    }

    function MoveOldestFiles {
        $fileNames = Get-ChildItem -Path $ssdSearchPath -Recurse -File -Exclude $excludeList | Sort-Object CreationTime | Select-Object -First $moveCount

        If (!$fileNames) {
            Write-Output "No files found to archive!"
            Exit
        }

        ForEach ($fileName in $fileNames) {
            Write-Output "Moving from: "
            Write-Output $fileName.FullName

            $destFilePath = $fileName.FullName.Replace($ssdSearchPath, $archiveDrive)

            Write-Output "Moving to: "
            Write-Output $destFilePath

            New-Item -ItemType File -Path $destFilePath -Force
            Move-Item -Path $fileName.FullName -Destination $destFilePath -Force -ErrorAction SilentlyContinue -ErrorVariable errors

            If ($errors) {
                ForEach ($error in $errors) {
                    if ($error.Exception -ne $null) {
                        Write-Host -ForegroundColor Red "Exception: $($error.Exception)"
                    }
                    Write-Host -ForegroundColor Red "Error: An error occurred during move operation."
                    Remove-Item -Path $destFilePath -Force
                    $excludeList.Add("*$($fileName.Name)")
                }
            }
            Else {
                Write-Output "Move complete."
                $global:fileCount++

                # Increment file count, then check if max is hit
                If ($global:fileCount -ge $fileArchiveLimit) {
                    Write-Output "Archive max file moves limit reached."
                    Write-Output "Done."
                    Exit
                }
                Else {
                    Write-Output "That was file number: $global:fileCount"
                }
            }

            Write-Output "`n"
        }
    }

    If (CheckSSDAbove($ssdMaxUsedPercent)) {
        While (CheckSSDAbove($ssdMinUsedPercent)) {
            Write-Output "---------------------------------------"
            Write-Output "SSD is at $global:ssdUsedPercent%."
            Write-Output "Max is $ssdMaxUsedPercent%."
            Write-Output "Archiving files."
            MoveOldestFiles
            Write-Output "---------------------------------------"
        }
    }
    Else {
        Write-Output "Drive not above max used."
    }

    Write-Output "Done."
    Exit
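    If you save the script to a file to run it, unblocking it first gets around Windows being picky about downloaded scripts; the folder below is just an example:

    # Clear the 'downloaded from the internet' flag, then run the script.
    Unblock-File -Path "C:\Scripts\drivePoolMoves.ps1"
    powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\drivePoolMoves.ps1"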
  47. 1 point
    GetDataBack Simple is working for me - I could get a directory listing at least and see the files. It's gonna take days till I'm done the deep scan, but I hope I can recover most things.
  48. 1 point
    It's not a bad idea, at all. Added a request for "all drives" in the reporting, as well as more robust reporting in general. https://stablebit.com/Admin/IssueAnalysis/27368 That said, this won't make it into the 1.0 release, most likely. Alex doesn't want to add any more features right now, as we want to get the software "out the door". But after that, we will be adding a bunch of providers, and addressing feature requests like this.
  49. 1 point
    Christopher (Drashna)

    Read Striping

    Full stop. No. Read Striping may improve performance, but we still have the added overhead of reading from NTFS volumes, and the performance profile for doing so. RAID 1 works at a block level, and it's able to split IO requests between the disks. So one disk could read the partition table information while the second disk starts reading the actual file data. This is a vast oversimplification of what happens, but a good illustration of it.

    So, while we may read from both disks in parallel, there are a number of additional steps that we have to perform to do so. To be blunt, the speed is never going to rival hardware RAID, even with disk switching and reading blocks of the file and caching them into memory. That said, you should still see at least the native disk speeds, or a bit better, but this depends on what the disks are doing, specifically. CrystalDiskInfo probably isn't going to get great stats because of how it tests the data.

    At best, enable pool file duplication, so that EVERYTHING is duplicated. Make sure Read Striping is enabled, and then test.
  50. 1 point
    To clarify (and make it simple to find), here is Alex's official definition of that "Other" space:
