Covecube Inc.

All Activity


  1. Yesterday
  2. So I had just managed to re-upload the 10 or so TB of data to my Google cloud drive, and once again I'm getting the file system damaged warning from StableBit. Files are accessible, but chkdsk errors out, and the unmount, unpin, clear cache, remount process has no effect.
  3. Well, while it's not officially supported, a lot of users are using SnapRAID in conjunction with StableBit DrivePool to get parity and pooling (a minimal configuration sketch is included at the end of this activity list). However, we don't have any plans to add parity support to StableBit DrivePool currently, though it has been requested in the past.
  4. If this happens again, run the StableBit Troubleshooter on the system in question. http://wiki.covecube.com/StableBit_Troubleshooter
  5. Figured it out. For whatever reason, when using mount points, the PoolPart folders don't get created until something is written to them. That doesn't seem to be the case for drive letters. Is that possible, or am I missing something?
  6. We have a list of considered providers: Why isn't XYZ Provider available? As for a timeline, we don't really have one. It's just Alex that does the coding, so it's whenever he has the time for it. And right now, we're working on StableBit Cloud, so there's not a lot of time for other new stuff. Sorry. Once that's done, we'll see about pushing him for more providers.
  7. You should end up with something like:
     |____ C:\mount\1\PoolPart.xxx\PoolPart.yyy
     |____ C:\mount\2\PoolPart.xxy\PoolPart.yyz
  8. More than likely, you're being throttled by Google, and this is causing issues. Try reducing the number of download threads for each drive; set it to 3-5 and see if that helps. If you do, you may also need to reduce the prefetch forward setting.
  9. I don't think that we actually support compression on the pool, because of how it works. Also, what OS are you using, specifically? The reason that I ask is that the server OSes actually have a "data deduplication" feature that may be helpful here.
  10. Chkdsk can never do more harm than good. If it recovered your files to a Lost.dir or the data is corrupted, then the data was unrecoverable. Chkdsk restores your file system to stability, even at the cost of discarding corrupt data. Nothing can magically restore corrupt data to an uncorrupt state. The alternative, not using chkdsk, just means you would continue to corrupt additional data by working with an unhealthy volume. Chkdsk may restore files depending on what is wrong with the volume, but it's never guaranteed. No tool can work miracles. If the volume is damaged enough, nothing can repair it from RAW to NTFS. You'll have to use file recovery or start over.
  11. This might actually be on the provider's side. I once deleted about 100TB from Google Drive and it took months for the data to actually be removed from Google's servers. It didn't matter, since I didn't have a limit, but you might be running into a similar situation here. Contact your administrator and see if there is something they can do.
  12. Last week
  13. I agree with @browned. Recently, I was extremely close to moving my server to an alternate Linux-based OS that supports drive pooling/merging and parity natively. The only reason I ultimately decided against it (and stuck with a Windows OS and Covecube products) was completely unrelated to my current storage needs. I would buy a new product or an extension to my existing DrivePool license to have this functionality! IMO, at the enthusiast level, without lots of money for extra drives or homelab-level hardware, being a datahoarder gets truly complicated as you fill up the machine's chassis. At 8 spinners in my full-size tower, I only really have room for 1 more without modding the chassis or buying somewhat expensive "patches" to the root problem. I view the root problem as inefficient fault-tolerant data archival. There is no way I would ever run storage solutions without a level of fault tolerance. The only things on my pool that aren't duplicated are items that are easily accessible online for re-download (i.e. OS ISO images, digitally distributed games, etc.). Everything else is at least duplicated at a 1:1 level ... and the truly important stuff at 1:2+. I'm already almost at the end of my rope in terms of physical capacity, and something, possibly drastic, has to change in my data retention strategy. What @browned spoke to is where I find myself going ... and currently, this means cobbling together a less-than-ideal solution of software. Currently, some of this can be done using the SSD Optimizer plug-in (though I have never used it) even if the drives aren't SSDs. Though, without parity as fault-tolerant protection, doing this with HDDs and duplication as the fault-tolerance plan is pretty silly. DrivePool's balancer is pretty great! Kicking in when it needs to ... minimizing intrusion into other, possibly more important tasks ... I've never had any issues! Similarly, Scanner's influence on what to do with data when a drive is being wonky has saved me at least 3 times I can remember! I'm not really wanting to give those up, especially as Scanner's functionality seems to have no equivalent in the Linux world! Incorporating a parity strategy as an alternative to a duplicating strategy seems like it would fit right in! I could go into details on my personal setup and the options I was toying with, but I don't want to dilute the message here and take the conversation off on a tangent. TL;DR: Parity please, and I will pay for it! -JesterEE
  14. Nope. Chkdsk totally scrambled all my files into names like video142.mkv, so recovering 12TB is as good as useless. Chkdsk did more harm than good.
  15. Yeah, I am using the beta from your link. So you want me to contact you at that email? And do what, link this thread? Thanks in advance! Great program, besides this glitch.
  16. I run Windows Server 2016 with 10 HDDs and 4 small SSDs - a 15TB total usable DrivePool pool, with 2x 3TB HDDs for parity using SnapRAID. The really stupid thing is, I'm only sitting on 2TB of data. I plan on repurposing the drives into an old Synology NAS and keeping it as an off-site backup. So, I'm just wondering if anyone has done the following and can give me advice: create 2x RAID 0 between 2x 1TB SSDs = 4TB in total between 4 drives using StableBit DrivePool (will only be mirroring a couple of directories), and then have 2x 3TB parity drives with SnapRAID. Goals: 1. Reduce idle watts from spinning HDDs. 2. Saturate a 10GbE link (hopefully RAID 0 SSDs can do this) - with a goal of being able to edit 4K off it. 3. Protect against bitrot with daily SnapRAID scrubs. Data redundancy is not my main issue, as I have a number of backups + cloud + offsite. One advantage I can see is that I can grow this NAS by adding 2 SSDs in RAID 0 at a time.
  17. I got it working. I killed all processes that looked like they might be related to the Scanner, restarted the GUI, and then had to manually start the service (a small PowerShell snippet for restarting the service is included at the end of this activity list). Eventually everything fired right up and started working! I didn't realize this was supposed to scan all drives in parallel; that's pretty neat. I'll likely never know what the problem was, but this doesn't seem like a good way to make a good impression on new users and get sales.
  18. Yes, I thought that as well; that's why I resized the drive to a smaller size, thinking that would cap it at 900GB. But it didn't help, and now I can't upload anything more, because I fear I will reach the limit.
  19. I'm a brand-new user so perhaps I'm doing something wrong. I've had this running for almost 14 hours now, and the scanner automatically started with my small 233 GB SSD (which has no problems at all). It's my system drive and there's a 100MB system reserved partition. As far as I can tell, the scanner is making no progress at all, and if I'm interpreting the icons properly, it's still scanning that 100MB partition. I can see no way to manually skip only that partition. I've seen other posts mentioning progress percentage, but I don't know where they are seeing that. I'm just going by the lack of colored blocks. It seems like this scan should have been done a LONG time ago, but it's just not doing anything. Also, I've tried hitting the "Stop check" button so I can manually try starting another check, but the stop button doesn't do anything. I don't see any way to shut down the whole application and I'm not sure I want to start killing processes at this point. Not interested in rebooting this machine.
  20. I'm also interested in knowing this. Though I do believe the reason it acts like this is because it somewhat works like a normal HDD and never actually deletes anything until the space is needed; but for OneDrive/Google Drive, it still looks like the space is used. Thus, you need some program to clear/zero all the "deleted" files... but whether that works when it comes to StableBit, I have no idea.
  21. Are you able to provide a view on whether you're even considering it, and what's holding it up (technical, political, or other reasons, etc.)?
  22. Just to make sure: you're on the beta that I linked? If not, use that beta specifically, not the one linked on the stablebit.com site. As for an email, it wouldn't work well; it would basically have to send a notification for each new drive, etc. There are two issues that we've seen: the settings files getting corrupted, and the Disk ID changing (it SHOULD NOT, but it has). If this is still happening, then please head to https://stablebit.com/Contact
  23. It's possible. It's not in the feature set at this point, so... However, we're currently working on StableBit Cloud. Once that's "shipped", then we'll see. (Just FYI, I am fully behind FileVault and want it as much as, or more than, you do, so "eventually".)
  24. The best way? Add the CloudDrive disks to a pool. Add the physical disks to another pool. Add both pools to a new pool. Enable duplication on the top-level pool. That will make sure one copy is held locally and one is in the cloud. (A small check for the resulting nested PoolPart folders is included at the end of this activity list.)
  25. Well, it's not supported right now, but we'd like to display historical data at some point. For now, you can enable some logging, and it will dump SMART info. But that resets.
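
For the SnapRAID-with-DrivePool setups discussed in a couple of the posts above, here is a minimal snapraid.conf sketch. This is not an official recommendation: the drive letters, file names, and disk labels are illustrative assumptions, and SnapRAID is typically pointed at the individual underlying drives rather than at the pool's own drive letter.

    # Illustrative snapraid.conf - all paths and drive letters are placeholders.
    # Parity lives on a dedicated drive that is NOT part of the DrivePool pool.
    parity P:\snapraid.parity

    # Keep copies of the content file on at least two different drives.
    content C:\snapraid\snapraid.content
    content E:\snapraid.content

    # Data entries point at the underlying pooled drives (the ones holding the
    # PoolPart folders), not at the virtual pool drive letter.
    # Older SnapRAID versions use the keyword "disk" instead of "data".
    data d1 E:\
    data d2 F:\

    # Skip Windows housekeeping folders.
    exclude \$RECYCLE.BIN\
    exclude \System Volume Information\

With the drives listed, a scheduled "snapraid sync" plus a periodic "snapraid scrub" covers the bitrot-scrubbing goal mentioned above.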
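
Related to the Scanner post above where the service had to be started by hand: a small PowerShell sketch for restarting the service cleanly instead of killing processes. It assumes the service's display name contains "Scanner", and it needs an elevated (administrator) prompt.

    # Find the StableBit Scanner service by display name (assumption: it contains "Scanner")
    # and restart it. Run from an elevated PowerShell prompt.
    Get-Service -DisplayName '*Scanner*' | Restart-Service -Verbose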
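
And for the nested pool layouts described above (the mount-point tree and the pool-of-pools reply), a small PowerShell sketch for confirming that the PoolPart folders are nested the way you expect. The C:\mount\1 and C:\mount\2 paths are just the example layout from that post; the PoolPart names will differ on every pool, and -Force is included in case the folders are marked hidden on your system.

    # List PoolPart.* folders under each mount point, plus any PoolPart.* nested
    # inside them (the sign of a pool that is itself a member of another pool).
    Get-ChildItem -Path 'C:\mount\*' -Directory -Filter 'PoolPart.*' -Force |
        ForEach-Object {
            $_.FullName
            Get-ChildItem -Path $_.FullName -Directory -Filter 'PoolPart.*' -Force |
                ForEach-Object { '    ' + $_.FullName }
        }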