Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 09/27/19 in all areas

  1. 5 points
    malse

    WSL2 Support for drive mounting

Hi, I'm using Windows 10 2004 with WSL2. I have 3 drives: C:\ (SSD), E:\ (NVMe), and D:\ (a DrivePool of 2x 4TB HDDs). When the drives are mounted in Ubuntu, I can run ls -al and it shows all the files and folders on the C and E drives. This is not possible on D. When I run ls -al on D, it returns 0 results, but strangely enough I can still cd into the directories on D. Is this an issue with the DrivePool drive being mounted? It seems like that is the only logical difference (aside from it being mechanical) between it and the other drives. They are all NTFS.
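    One thing that may be worth trying (a hedged sketch, not a confirmed fix; the mount point and options below are assumptions) is unmounting the pool drive inside WSL2 and remounting it manually through drvfs with metadata enabled, then checking whether the listing comes back:

        # run inside the WSL2 Ubuntu shell; /mnt/d is assumed to be the existing mount point for D:
        sudo umount /mnt/d
        sudo mkdir -p /mnt/d
        sudo mount -t drvfs D: /mnt/d -o metadata,uid=1000,gid=1000
        ls -al /mnt/d

    If the manual drvfs mount behaves the same way, that would point at how WSL2's file sharing layer talks to the DrivePool virtual drive rather than at the mount itself.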
  2. 4 points
    hammerit

    WSL 2 support

I tried to access my DrivePool drive via WSL 2 and got this. Any solution? I'm using 2.3.0.1124 BETA.

    ➜ fludi cd /mnt/g
    ➜ g ls
    ls: reading directory '.': Input/output error

Related thread: https://community.covecube.com/index.php?/topic/5207-wsl2-support-for-drive-mounting/#comment-31212
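    For anyone debugging this, it may also be worth ruling out WSL's automount options before blaming the pool driver. As a sketch (the option values here are assumptions, not a confirmed fix), the way Windows drives get mounted can be controlled from /etc/wsl.conf inside the distro, followed by a wsl --shutdown from Windows and a fresh start of the distro:

        # /etc/wsl.conf (inside the Ubuntu distro)
        [automount]
        enabled = true
        root = /mnt/
        options = "metadata,case=off"

    If the Input/output error persists with different automount options, that suggests the problem is in how the pool's virtual disk is exposed to WSL2 rather than in the mount settings.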
  3. 3 points
My advice: contact support and send them Troubleshooter data. Christopher is very keen on resolving problems around the "new" Google way of handling folders and files.
  4. 2 points
I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this.

The overview page on the web site actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to test any provider you'd like. So you can just install it and look at the list that way, if you'd like. CloudDrive does not support Teamdrives/shared drives because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite drive accounts are supported.

The primary tradeoff against a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is something you're looking for, this probably just isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different from CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends--for specific use cases. rClone's entire goal/philosophy is to make it easier to access your data from a variety of locations and contexts--but that's not CloudDrive's goal, which is to make your cloud storage function as much like a physical drive as possible.

I don't work for Covecube/Stablebit, so I can't speak to any pricing they may offer you if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings in buying the bundle, if you want/need it.

There is no file-based limitation. The limitation on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations.

Again, I don't work for the company, but, in previous conversations about the subject, it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe that any ports are on the short- or even medium-term agenda. Hope some of that helps.
  5. 2 points
Unintentional Guinea Pig Diaries. Day 8 - Entry 1

I spent the rest of yesterday licking my wounds and contemplating a future without my data. I could probably write a horror movie script on those thoughts, but it would be too dark for the people in this world. I must learn from my transgressions. In an act of self-punishment, and in an effort to see the world from a different angle, I slept in the dog's bed last night. He never sleeps there anyway, but he would never have done the things I did. For that I have decided he holds more wisdom than his human.

This must have pleased the Data Gods, because this morning I awoke with back pain and two of my drives mounted and functioning. The original drive--the one that had completed the upgrade, had been working, and then went into "drive initializing"--is now working again. The drive that I had tried to mount and said it was upgrading with no % given has mounted (FYI: a 15TB drive with 500GB on it). However, my largest drive is still sitting at "Drive queued to perform recovery". Maybe one more night in the dog's bed will complete the offering required by the Data Gods.

End Diary entry.

(P.S. Just in case you wondered...that spoiled dog has "The Big One" Lovesac as a dog bed. In a pretty ironic fashion, their website is down. #Offering)
  6. 2 points
Unintentional Guinea Pig Diaries. Day 7 - Entry 1

OUR SUPREME LEADER HAS SPOKEN! I received a response to my support ticket, though it did not provide the wisdom I seek. You may call it an "Aladeen" response. My second drive is still stuck in "Drive Queued to perform recovery". I'm told this is a local process and does not involve the cloud, yet I don't know how to push it to do anything. The "Error saving data. Cloud drive not found" message at this time appears to be a UI glitch and is not correct, as any changes that I make do take effect regardless of the error window.

This morning I discovered that playing with fire hurts. Our Supreme Leader has provided a new Beta (1.2.0.1306). Since I have the issues listed above, I decided to go ahead and upgrade. The Supreme Leader's change log says that it lowers the concurrent I/O requests in an effort to stop Google from closing the connections. Upon restart of my computer, the drive that was previously actually working is now stuck in "Drive Initializing - Stablebit CloudDrive is getting ready to mount your drive". My second drive is at the same "Queued" status as before. Also of note: I had a third drive, created from a different machine, that I tried to mount yesterday and it refused to mount. Now it says "drive is upgrading" but there is no progress percentage shown. That seems like progress, but the drive that was working is now not working. Seeking burn treatment.

I hope help comes soon. While my data is replaceable, it will take a LONG TIME to do. Until then my Plex server is unusable and I have many angry, entitled friends and family.

End Diary entry.
  7. 2 points
Unintentional Guinea Pig Diaries. Day 5 - Entry 2

*The Sound of Crickets*

So I'm in the same spot as I last posted. My second drive is still at "Queued to perform Recovery". If I knew how to force a reauth right now I would, so I could get it onto my own API key, or at the very least get it out of "queued". Perhaps our leaders will come back to see us soon. Maybe this is a test of our ability to suffer more during COVID. We will soon find out.

End Diary entry.
  8. 2 points
    gtaus

    2nd request for help

I have only been using DrivePool for a short period, but if I understand your situation, you should be able to open the DrivePool UI and click "Remove" for the drives you no longer want in the pool. I have done this in DrivePool and it did a good job of transferring the files from the removed drive to the other pool drives. However, given that nowadays we have large HDDs in our pools, the process takes a long time. Patience is a virtue.

Another option is to simply view the hidden files on those HDDs you no longer want to keep in DrivePool, and then copy them all over to the one drive where you want to consolidate all your information. Once you verify all your files have been successfully reassembled on that one drive, you can go back and format the other drives.

The main advantage I see with using DrivePool is that the files are written to the HDD as standard NTFS files, so if you decide to leave the DrivePool environment, all those files are still accessible by simply viewing the hidden directory. I am coming from the Windows Storage Spaces system, where bits and pieces of files are written to the HDDs in the pool. When things go bad with Storage Spaces, there is no way to reassemble the broken files spread across a number of HDDs. At least with DrivePool, the entire file is written to a HDD as a standard file, so in theory you should be able to copy those files from the pool HDDs over to one HDD and have a complete directory. I used the Duplication feature of DrivePool for important directories.

Again, I am still learning the benefits of DrivePool over Storage Spaces, but so far I think DrivePool has the advantage of recovering data from a catastrophic failure, whereas I lost all my data in Storage Spaces. If there is a better way to transfer your DrivePool files to 1 HDD, I would like to know for my benefit as well.
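    To illustrate the manual consolidation approach described above, here is a minimal sketch (the drive letters, the PoolPart folder name, and the target folder are placeholders; check the actual hidden PoolPart.* folder names on your own disks, and verify the copy before formatting anything):

        # from an elevated PowerShell prompt; E: is the old pool member, F: is the consolidation target
        Get-ChildItem E:\ -Directory -Hidden -Filter "PoolPart.*"      # find the hidden pool folder
        robocopy "E:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" "F:\Consolidated" /E /COPY:DAT /R:1 /W:1 /LOG:C:\robocopy-pool.log

    robocopy's log makes it easy to confirm that everything was copied before the old drive is removed or reformatted.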
  9. 2 points
They are not comparable products. Both applications are more similar to the popular rClone solution for Linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data: you must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data, because they do not use a genuine drive image; they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.

None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, it is more vulnerable because some of that data on your provider is the file system data for the drive.

Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: the data had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another. We don't really know; this is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks, and they corrupted the file system.

This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive; it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of their users' storage is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever.

Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like testdisk and recuva. It simply wasn't worth the effort to rebuild the volumes rather than just discard the data and rebuild, because it was expendable data. Genuinely irreplaceable data could have been recovered, so it isn't even really accurate to call it data loss.

This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point?

It is, frankly, not a problem that we will hopefully ever have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality, and there isn't anything to be done about that. So your options are to either a) only store data that you can afford to lose, or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and to track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will.

The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Contrarily, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data.

If you want to mitigate volume corruption, you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow, or shut down Drive completely like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
  10. 2 points
    srcrist

    Optimal settings for Plex

If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:

- 20MB chunk size
- 50+ GB expandable cache
- 10 download threads
- 5 upload threads, with background I/O turned off
- Upload threshold of 1MB or 5 minutes
- 20MB minimum download size
- 20MB prefetch trigger
- 175MB prefetch forward
- 10 second prefetch time window
  11. 1 point
    I'd suggest a tool called Everything, by Voidtools. It'll scan the disks (defaults to all NTFS volumes) then just type in a string (e.g. "exam 2020" or ".od3") and it shows all files (you can also set it to search folder names as well) that have that string in the name, with the complete path. Also useful for "I can't remember what I called that file or where I saved it, but I know I saved it on the 15th..." problems.
  12. 1 point
OK - I found the issue (for me). MS Virus & Threat Protection has prevented Scanner from seeing the drives attached to my HBA. At this point I don't even know if any of the Scanner settings changes, including the one documented above, made any difference.

I have seen the impact real-time protection has on even my new, pretty robust PC, and have made a habit of disabling real-time protection. Windows, inexplicably, occasionally re-enables the feature. The result has been that when I made changes that may have resolved the issue with Scanner seeing SMART data on my drives, Windows prevented me from seeing the correct result because it had re-enabled real-time protection without my knowledge.

Anyway, in the image below you can see that Windows has blocked the Scanner service executable. I have added it, as well as the .native Scanner service, to the list of excluded programs, and can confirm it has solved the issue following a reboot of my machine (upon which Windows automatically re-enables real-time virus protection). I am now seeing all information on the drives on the HBA as I should.

For the record, my system is currently: OS Name: Microsoft Windows 10 Pro, Version 10.0.19041, Build 19041.

Thanks to Spider99 for hanging in with me!
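    If anyone prefers to script the exclusions rather than clicking through the Windows Security UI, a rough PowerShell sketch is below. The executable name and install path are assumptions based on the description above, so confirm the actual file names in your StableBit Scanner installation folder first:

        # from an elevated PowerShell prompt
        Add-MpPreference -ExclusionProcess "Scanner.Service.exe"                      # assumed service executable name
        Add-MpPreference -ExclusionPath "C:\Program Files (x86)\StableBit\Scanner"    # assumed install path
        Get-MpPreference | Select-Object ExclusionProcess, ExclusionPath              # verify the exclusions took

    Keep in mind that, as noted above, Windows can re-enable real-time protection on its own, so it's worth re-checking the exclusions after major Windows updates.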
  13. 1 point
    Shane

    eXtreme bottlenecks

    In my experience Resource Monitor's reporting of read and write rates can lag behind what's actually happening, making it look like it's transferring more files at any given point than it really is - but that transfer graph is definitely a sign of hitting some kind of bottleneck. It's the sort of thing I'd expect to see from a large number of small files, a network drive over wireless, or a USB flash drive. Can you tell us more about what version of DrivePool you're using (the latest "stable" release is 2.2.3.1019), what drives are involved (HDD, SSD, other), how they're hooked up (SATA, USB, other) and if you've made any changes to the Manage Pool -> Performance options (default is to have only Read striping and Real-time duplication ticked)? Examining the Performance indicators of DrivePool (to see it, maximise the DrivePool UI and click the right-pointing triangle to the left of the word Performance) and the Performance tab of Task Manager when the bottlenecking is happening might also be useful. Hmm. You might also want to try something other than the built-in Windows copier to see if that helps, e.g. FastCopy ?
  14. 1 point
    Shane

    how to replace failing HDD

Hi Querl28. There are a few different ways. The simplest is to install the replacement drive, tell DrivePool to add it to the pool, and then tell DrivePool to remove the old one from the pool. DP will tell you whether it successfully moved all the files on the old drive across (in which case you can then physically remove the old drive) or not (in which case you have to decide what to do about it). If you don't have spare ports to add the new drive before removing the old one, but you have enough free space on your other drives in the pool, then you can tell DP to remove the old drive from the pool before you install the new one. See also this support page on removing drives.
  15. 1 point
Is it OK to fill non-primary HDDs to full capacity if they would be accessed only to read files?
  16. 1 point
    I finally bit the bullet last night and converted my drives. I'd like to report that even in excess of 250TB, the new conversion process finished basically instantly and my drive is fully functional. If anyone else has been waiting, it would appear to be fine to upgrade to the new format now.
  17. 1 point
[1] Yes, it will only move the data from disk 2 if a balancing rule causes it to be moved (if you have disk space equalisation turned on, for example). Otherwise, it will stay put.

[2] You could just set the drive-overfill plugin to 75-80%? Then if any disk reaches that capacity, it'll move files out. Personally, I assign a pair of landing disks for my pools: two cheap SSDs where incoming files get dumped. DP then moves them out later, or when they fill up. Note that the landing disks should be larger than the largest single file you would put on the pool.

If cost is an issue, you could try the following setup with existing hardware instead:
1. Install the SSD Optimizer plugin
2. Tell DP/SSD Opt. that Disk 2 is an SSD and un-tick the "archive" setting
3. Make Disks 1 and 3 "Archive" drives
4. Change your file placement rules so that only unduplicated files go on Disk 2, and only duplicated files go on 1 and 3

That way, all new incoming files get put on Disk 2, then later when your duplication/balancing rules engage, DP will move the data off of Disk 2 entirely and duplicate it to 1 and 3. This assumes you do not have "real-time duplication" enabled. If you still need the total capacity of the 3 disks, then perhaps a small investment in a 120/240GB SSD to use as a landing drive might be a good idea; substitute "Disk 2" with "SSD/Disk 4" in the above setup.
  18. 1 point
    I pass through the HBA for Drivepool. I use Dell Perc H310 cards and the SMART data is all visible, as it should be because my Windows VM has direct access to the HBA. edit: Wrong Chris I know, but hopefully helpful?
  19. 1 point
    This is the wrong section of the forums for this really, but if you want to force duplication to the cloud, your pool structure is wrong. The issue you're running into is that your CloudDrive volume exists within (and at the same level of priority as) the rest of your pool. A nested pool setup that is used to balance to the CloudDrive and the rest of your pool will allow you more granular control over balancing rules specifically for the CloudDrive volume. You need to create a higher level pool with the CloudDrive volume and your entire existing pool. Then you can control duplication to the CloudDrive volume and your local duplication independently of one another.
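    As a rough sketch of the nested layout being described (the pool letters are placeholders, not a prescription):

        Pool G: (new top-level pool; enable x2 duplication here)
          - Pool D: (your existing local pool of physical disks)
          - CloudDrive volume (receives the cloud copy)

    With that structure, duplication on the top-level pool puts one copy somewhere inside the local pool and one copy on the CloudDrive volume, while the balancing and duplication rules inside the local pool continue to apply only to the local disks.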
  20. 1 point
    srcrist

    Move data to a partition

    There is no appreciable performance impact by using multiple volumes in a pool.
  21. 1 point
    srcrist

    Move data to a partition

    Volumes each have their own file system. Moving data between volumes will require the data to be reuploaded. Only moves within the same file system can be made without reuploading the data, because only the file system data needs to be modified to make such a change.
  22. 1 point
    I'm enjoying your diary entries as we are in the same boat... not laughing at you... but laughing/crying with you! I know that this must be done to protect our data long term... it's just a painful process.
  23. 1 point
    Hey guys, So, to follow up after a day or two: the only person who says that they have completed the migration is saying that their drive is now non-functional. Is this accurate? Has nobody completed the process with a functional drive--to be clear? I can't really tell if anyone trying to help Chase has actually completed a successful migration, or if everyone is just offering feedback based on hypothetical situations. I don't even want to think about starting this unless a few people can confirm that they have completed the process successfully.
  24. 1 point
I had the same issue. Contacted support and was told that they (Covecube) need to update the DrivePool virtual driver and have it re-signed by Microsoft before it can be used with the latest Insider Build. I have no idea how long it will take them. Hopefully soon.
  25. 1 point
Just to clarify for everyone here, since there seems to be a lot of uncertainty:

- The issue (thus far) is only apparent when using your own API key
- However, we have confirmed that the CloudDrive keys are the exception, rather than the other way around; for instance, the web client does have the same limitation
- Previous versions (also) do not conform with this limit (and go WELL over the 500k limit)
- Yes, there has definitely been a change on Google's side that implemented this new limitation

Although there may be issues with the current beta (it is a beta after all), it is still important to convert your drives sooner rather than later. Here's why:

- Currently all access (API or web/Google's own app) respects the new limits, except for the CloudDrive keys (probably because they are verified)
- Since there has been no announcement from Google that this change was happening, we can expect no such announcement if (when) the CloudDrive key also stops working
- It may or may not be possible to (easily) convert existing drives if writing is completely impossible (if no API keys work)

If you don't have issues now, you don't have to upgrade; instead, wait for a proper release, but do be aware there is a certain risk associated. I hope this helps clear up some of the confusion!
  26. 1 point
In the DP GUI, see the two arrows to the right of the balancing status bar? If you press that, it will increase the I/O priority of DP, which may help some. Other than that, ouch! Those are more like SMR speeds.
  27. 1 point
Hi all, I have been using CloudDrive for a while now and I am very happy with it; nevertheless, I have a question: is it also possible to script it? At the moment I have configured DrivePool and CloudDrive on my personal server. CloudDrive is my solution for a backup to the cloud, but as I don't want to have the drive always mounted (viruses, ransomware, etc.), I'd like to script it. I already have a script which syncs my data to the CloudDrive, but I'd like to enhance it so it mounts the drive. And it would be very nice if it could be dismounted when cloud synchronization is completed. Regards, Wout
  28. 1 point
    You will never see the 1 for 1 files that you upload via the browser interface. CloudDrive does not provide a frontend for the provider APIs, and it does not store data in a format that can be accessed from outside of the CloudDrive service. If you are looking for a tool to simply upload files to your cloud provider, something like rClone or Netdrive might be a better fit. Both of those tools use the standard upload APIs and will store 1 for 1 files on your provider. See the following thread for a more in-depth explanation of what is going on here:
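    For the 1:1 use case described above, a minimal rClone sketch might look like this (the remote name "gdrive" and the paths are assumptions; the remote is created interactively with rclone config first):

        rclone config                              # one-time: create a remote named "gdrive" for your Google account
        rclone copy "D:\Backups" gdrive:Backups --progress
        rclone ls gdrive:Backups                   # these files appear 1:1 in the Drive web interface

    That is the fundamental difference: rClone uploads your files as-is, while CloudDrive uploads chunks of a drive image that only CloudDrive can reassemble.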
  29. 1 point
    srcrist

    Optimal settings for Plex

    You're just moving data at the file system level to the poolpart folder on that volume. Do not touch anything in the cloudpart folders on your cloud storage. Everything you need to move can be moved with windows explorer or any other file manager. Once you create a pool, it will create a poolpart folder on that volume, and you just move the data from that volume to that folder.
  30. 1 point
    Christopher (Drashna)

    Hiding Drives

Welcome! And yeah, it's a really nice way to set up the system. It hides the drives and keeps them accessible at the same time.
  31. 1 point
    Thanks for the response. Turns out, I was clicking on the down arrow and that did not give the option of enable auto scanning. So after reading your response, I clicked on the "button" itself and it toggled to enabled. Problem solved. Auto scanning immediately started so I know that it is working. Thanks.
  32. 1 point
It is. However, you should be able to set the "override" value for "CloudFsDisk_IsDriveRemovable" to "false", and it may fix the issue. But the drive will no longer be treated as a removable drive. https://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings
  33. 1 point
Nope, definitely not dead! But if it's urgent (and really, in general, it's the best way), direct any support queries to https://stablebit.com/Contact
  34. 1 point
I believe that this is related to your Windows settings for how to handle "removable media." CloudDrive shows up as a removable drive, so if you have selected to have Windows open Explorer when removable media is inserted, it will open when CloudDrive mounts. Check that Windows setting.
  35. 1 point
    Umfriend

    Using Drives inside Pool?

No, that is just fine. There is no issue with adding a disk to a Pool and then placing data on that disk beside it (i.e. outside the hidden PoolPart.* folder on that drive).
  36. 1 point
    fattipants2016

    Manualy delete duplicates?

Inside each physical disk that's part of the pool exists a hidden folder named with a unique identification ID. Inside these folders is the same folder structure as the pool itself. Your duplicated files/folders would simply be on the appropriate number of disks; they're not actually denoted as duplicates in any way. If files are now duplicated that shouldn't be, it may be enough to simply re-check duplication.
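    As a rough illustration of that layout (the drive letters and the relative path are placeholders), you can see which pool members hold a copy of a given folder by looking inside each disk's hidden PoolPart.* folder:

        # PowerShell sketch: list every pool member that contains a copy of "Movies\SomeFilm"
        $relativePath = "Movies\SomeFilm"
        foreach ($drive in @("D:\", "E:\", "F:\")) {
            Get-ChildItem -Path $drive -Directory -Hidden -Filter "PoolPart.*" -ErrorAction SilentlyContinue |
                Where-Object { Test-Path (Join-Path $_.FullName $relativePath) } |
                ForEach-Object { "$relativePath found in $($_.FullName)" }
        }

    A folder set to x2 duplication should turn up on exactly two of the pool members; deleting extra copies by hand works, but letting DrivePool re-check duplication is safer.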
  37. 1 point
    RG9400

    Google Drive API Key switch

    You should be able to Re-Authorize the drive, which would start using the new API key
  38. 1 point
    Christopher (Drashna)

    Drive evacuation priority

    Honestly, I'm not sure. I suspect that the processing is per drive, rather than duplicated/unduplicated, first. Since I'm not sure, I've opened a request, to ask Alex (the Developer). https://stablebit.com/Admin/IssueAnalysis/28301 Not currently. It would require a change to both StableBit DrivePool and StableBit Scanner to facilitate this. My recommendation would be to just evacuate unduplicated data on SMART warnings.
  39. 1 point
    In principle, yes. Not sure how to guarantee that they will stay there due to rebalancing, unless you use file placement rules.
  40. 1 point
In the SMART details for that drive, look for "Reallocated Sector Count", "Reallocation Event Count", and "Uncorrectable Sector Count". The raw values should be zero; if not, that means there are some bad sectors. It's not always the end of the world if there are only a few, but it may be an indication of why Scanner is showing an issue. If those are all zero, then I'm not sure what else to look for. Does Scanner show what that "1 warning" is anywhere? You'd think that it would show you somewhere what that "1 warning" is. I'm fairly new to StableBit Scanner myself, but hopefully you can figure this out somehow (and a StableBit rep stops by too, hopefully). Otherwise I'd put in a ticket with them. I did ask them a question about DrivePool and they responded within 24 hours.
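    If you want a second opinion on those raw values outside of Scanner, one option is smartmontools (a sketch, assuming it is installed and that the drive in question really is the first physical disk, which you should verify first):

        # from an elevated PowerShell prompt; /dev/sda maps to the first physical disk on Windows builds of smartmontools
        smartctl -A /dev/sda      # full SMART attribute table, including the reallocated/uncorrectable sector counts
        smartctl -H /dev/sda      # the drive's overall SMART health self-assessment

    If smartctl shows the same non-zero raw values, the warning is coming from the drive itself rather than from anything Scanner is doing.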
  41. 1 point
Great, thanks. Fixed it for me.
  42. 1 point
So an API key is for an application. It's what the app uses to contact Google and make requests of their service. So your new API key will be used by the application to request access to all of your drives--regardless of the Google account that they are on. The API key isn't the authorization to use a given account. It's just the key that the application uses to request access to the data on whatever account you sign in with.

As an obvious example, Stablebit's default key for CloudDrive was obviously created on their Google account, but you were still using it to access your drives before changing it to your own key just now. When you set it up, you'll see that you still have to sign in and approve your app. It'll even give you a warning, since, unlike with the actual CloudDrive key, Google can't vouch for the app requesting access with your key.

This just isn't how an API key works. Are you sure you're logging in with the correct account for each drive once you've added the new key? You don't log in with the account you used to create the key. You still log in with whatever credentials you used to create each drive.
  43. 1 point
    gx240

    Black Friday sales?

    Do you typically have Black Friday sales with reduced prices on your software (Stablebit Drivepool in particular)?
  44. 1 point
    RFOneWatt

    Extension of a Plex Drive?

Did you get this sorted? Seems to me you did everything correctly. So, to be clear:

You had a standalone 8TB drive that was getting full. You bought a new 12TB drive. You downloaded and installed DrivePool. You created a brand new pool consisting of your old 8TB drive and the new 12TB drive, giving you a new virtual drive, G:.

Because G: is considered a new drive (since this is a new pool, it's empty), you are going to want to MOVE all of your files from E: to G:. That's all you should have to do. In the future when you add drives to the pool you won't have to do anything; you should simply see the new free space from the new drive. ~RF
  45. 1 point
Same here. Any update from the developers? This issue was opened a month ago and nothing... Not very good, considering this is paid-for software.
  46. 1 point
Hello! I'm fairly new to StableBit, but liking the setup so far! I've got a few pools for various resources in my server. In case it's important, here's a quick idea of what I've got running: a Dell R730 running Windows Server 2019 Datacenter, connected to a 24-disk shelf via SAS. Each drive shows up individually, so I've used DrivePool to create larger buckets for my data.

I'd like to have them redundant against a single drive failure, but I know that means duplicating everything. I will eventually have a pool dedicated to my VMs, and VMs deduplicate very well since each one requires essentially a copy of the base data. While I will have backups of this data, I'd still like a quicker recovery from a drive failure in case that does happen, so they'd also be duplicated. (Scanner currently tells me one of my disks is throwing a SMART error, but I don't know how bad it is... I'd like to run it into the ground before replacing it, to save money on purchasing new hardware before it's actually dead.)

So, I know deduplication isn't supported against the pool itself, but I was curious whether people have deduplicated the physical disks, and whether Windows dedupe sees the pool data and tries to deduplicate it? I noticed this thread (unfortunately it's locked for further comments as it's fairly old) that was talking about deduplicating the drives that a pool uses, but I don't know if they meant the files that weren't part of a pool, or if they were talking about the files from the pool. If possible, I'd love to hear an official answer, since I'd rather not run this in an unsupported way, but I'm really hoping there's a way to deduplicate some of these files before I run myself out of space... Thanks for any info that you can provide!
  47. 1 point
This is correct. It isn't so much that you should not, it's that you can not. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70mbps. As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. It'll upload at the pace it can, and DrivePool will duplicate data as it's able.

Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously.

This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so.

DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, it will be deleted from both locations if your balancer settings and file placement settings are configured to do so. It will pass the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate on the next duplication pass. Obviously files mysteriously disappearing, though, is a worrying sign worthy of further investigation and attention.

It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional mode cache set to something like 75% write, 25% read. Note that DrivePool will also read stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed. So you'll want some read cache available.

This isn't really relevant for your use case. The size of the files you are considering for storage will not be meaningfully influenced by a larger cluster size. Use the size you need for the volume size you require. Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, chkdsk, so you'll want to keep it below that. Relatedly, note that you can partition a single CloudDrive into multiple sub-60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary.
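    For anyone wondering where the ~70 Mbps figure comes from, the arithmetic is roughly:

        750 GB/day ÷ 86,400 s/day ≈ 8.7 MB/s
        8.7 MB/s × 8 bits/byte ≈ 69 Mbps

    so capping CloudDrive's upload at about 70 Mbps keeps a continuously uploading drive just under Google's 750 GB daily limit.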
  48. 1 point
There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider. It is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain.

As far as reinstalling Windows or changing to a different computer, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force mount the drive, and CloudDrive will read the data, but you may lose any data that was sitting in your cache waiting to be uploaded during the failure. Note that CloudDrive does not upload user-accessible data to your provider by design. Other tools like rClone would be required to accomplish that.

My general advice, in any case, would be to enable encryption. There is effectively no added overhead from using it, and the peace of mind is well worth it.
  49. 1 point
So when you add a 6TB HDD to that setup, and assuming you have not tinkered with the balancing settings, any _new_ files would indeed be stored on that 6TB HDD. A rebalancing pass, which you can start manually, will fill it up as well. With default settings, DP will try to ensure that each disk has the same amount of free space. It would therefore write to the 6TB first until 4TB is free, then equally to the 6TB and 4TB until both have 3TB free, etc. The 500GB HDD will see action only when the others have 500GB or less available. This is at default settings and without duplication.
  50. 1 point
    Christopher (Drashna)

    SSD Optimizer problem

This is part of the problem with the way that the SSD Optimizer balancer works. Specifically, it creates "real time placement limiters" to limit which disks new files can be placed on. I'm guessing that the SSD is below the threshold set for it (75% by default, so ~45-50GBs). Increasing the limit on the SSD may help this (but lowering it may as well, as that would force the pool to place files on the other drives rather than on the SSD).

Additionally, there are some configuration changes that may help make the software more aggressively move data off of the drive: http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing%20Settings On the main balancing settings page, set it to "Balance immediately", and uncheck the "No more often than every X hours" option, or set it to a low number like 1-2 hours. For the balancing ratio slider, set this to "100%", and check the "or if at least this much data needs to be moved" option and set it to a very low number (like 5GBs). This should cause the balancing engine to rather aggressively move data out of the SSD drive and onto the archive drives, reducing the likelihood that this will happen.

Also, it may not be a bad idea to use a larger sized SSD, as the free space on the drive is what gets reported when adding new files.
