Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 03/30/19 in all areas

  1. 2 points
    They are not comparable products. Both applications are more similar to the popular rClone solution for Linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data: you must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data because they do not use a genuine drive image; they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.

    None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, it is more vulnerable, because some of the data on your provider is the file system data for the drive.

    Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another; we don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks, and they corrupted the file system. This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to.

    Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive; it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of their users' storage is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever.

    Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like TestDisk and Recuva. It simply wasn't worth the effort to rebuild the volumes rather than just discard the data and start fresh, because it was expendable data. Genuinely irreplaceable data could have been recovered, so it isn't even really accurate to call it data loss.

    This is not a problem with a solution that can be implemented on the software side--at least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point?
    It is, frankly, not a problem that we will hopefully ever have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality, and there isn't anything to be done about that. So your options are to either a) only store data that you can afford to lose, or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and to track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate on the assumption that they will.

    The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Conversely, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data.

    If you want to mitigate volume corruption, you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow, or shut down Drive completely like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
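    To illustrate the kind of "authentic but stale" detection mentioned above, here is a minimal sketch (illustrative only, not CloudDrive's actual implementation) of keeping a local checksum per uploaded chunk, so that a chunk the provider has rolled back can be flagged even though it is still a validly signed chunk. All names here are hypothetical.

    import hashlib
    import sqlite3

    # Sketch only: remember the last-known checksum of every uploaded chunk in a
    # local database, so a chunk that comes back from the provider with valid
    # signing but *older* content can be detected as stale.
    db = sqlite3.connect("chunk_index.db")
    db.execute("CREATE TABLE IF NOT EXISTS chunks (chunk_id TEXT PRIMARY KEY, sha256 TEXT)")

    def record_upload(chunk_id: str, data: bytes) -> None:
        """Store the checksum of the chunk revision we just uploaded."""
        digest = hashlib.sha256(data).hexdigest()
        db.execute("INSERT OR REPLACE INTO chunks VALUES (?, ?)", (chunk_id, digest))
        db.commit()

    def verify_download(chunk_id: str, data: bytes) -> bool:
        """Return True if the downloaded chunk matches the last revision we uploaded.

        False means the provider returned an older (stale) revision, even though
        the chunk itself may still be authentic and well-formed.
        """
        row = db.execute("SELECT sha256 FROM chunks WHERE chunk_id = ?", (chunk_id,)).fetchone()
        if row is None:
            return True  # nothing recorded locally; freshness cannot be judged
        return hashlib.sha256(data).hexdigest() == row[0]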
  2. 2 points
    I'm always impressed with the extent you go to help people with their questions, no matter how easy or complex they are. Thanks Chris.
  3. 2 points
    The problem is that you were still on an affected version (3216). When upgrading to the newest version, the StableBit Scanner service is forcefully shut down, so the DiskId files can get corrupted during the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen anymore on your next upgrade/reboot/crash. I agree wholeheartedly, though, that we should get a way to back up the scan status of drives, just in case. A scheduled automatic backup would be great; the files are extremely small and don't take a lot of space, so I don't see a reason not to implement it, feature-wise.
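    In the meantime, something like the sketch below, run as a scheduled task, approximates the automatic backup being asked for. The store path is an assumption on my part, not documented behaviour - point it at wherever the DiskId files actually live on your install.

    import shutil
    import time
    from pathlib import Path

    # Hypothetical paths - adjust SOURCE to the folder that actually holds the
    # Scanner's DiskId / scan-status files on your system.
    SOURCE = Path(r"C:\ProgramData\StableBit Scanner\Service\Store")  # assumption
    BACKUP_ROOT = Path(r"D:\Backups\ScannerStore")

    def backup_scan_status() -> Path:
        """Copy the (tiny) scan-status store to a timestamped backup folder."""
        stamp = time.strftime("%Y%m%d-%H%M%S")
        target = BACKUP_ROOT / stamp
        shutil.copytree(SOURCE, target)
        return target

    if __name__ == "__main__":
        print("Backed up to", backup_scan_status())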
  4. 2 points
    srcrist

    Optimal settings for Plex

    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:

    - 20MB chunk size
    - 50+ GB expandable cache
    - 10 download threads
    - 5 upload threads, turn off background I/O
    - Upload threshold: 1MB or 5 minutes
    - Minimum download size: 20MB
    - 20MB prefetch trigger
    - 175MB prefetch forward
    - 10 second prefetch time window
  5. 1 point
    RG9400

    Google Drive API Key switch

    You should be able to Re-Authorize the drive, which would start using the new API key
  6. 1 point
    Christopher (Drashna)

    Drive evacuation priority

    Honestly, I'm not sure. I suspect that the processing is per drive, rather than duplicated/unduplicated, first. Since I'm not sure, I've opened a request, to ask Alex (the Developer). https://stablebit.com/Admin/IssueAnalysis/28301 Not currently. It would require a change to both StableBit DrivePool and StableBit Scanner to facilitate this. My recommendation would be to just evacuate unduplicated data on SMART warnings.
  7. 1 point
    xazz

    Unusable for duplication 1.56TB

    From my reading elsewhere on the forum, I see that I should have enabled the "Duplication Space Optimizer" balancer. I will do that now and let you know how it goes.
  8. 1 point
    No. If, and only if, the entire Pool had a fixed duplication factor then it *could* be done. E.g., 1TB of free space means you can save 0.5TB of net data with x2 duplication, or 0.33TB with x3 duplication, etc. However, as soon as you mix duplication factors, well, it really depends on where the data lands, doesn't it? So I guess they chose to only show actual free space without taking duplication into account. Makes sense to me. Personally, I over-provision all my Pools (a whopping two in total ;D) such that I can always evacuate the largest HDD. Peace of mind and continuity rule in my book.
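    For what it's worth, the arithmetic for a fixed duplication factor is just free space divided by the factor; a quick sketch:

    def net_capacity(free_space_tb: float, duplication_factor: int) -> float:
        """Net data you can still save when the whole pool uses one duplication factor."""
        return free_space_tb / duplication_factor

    print(net_capacity(1.0, 2))  # 0.5 TB with x2 duplication
    print(net_capacity(1.0, 3))  # ~0.33 TB with x3 duplication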
  9. 1 point
    srcrist

    CloudDrive: Pool into DP or separate?

    Once you've created the nested pool, you'll need to move all of the existing data into the poolpart hidden folder within the outer poolpart hidden folder before it will be accessible from the pool. It's the same process that you need to complete if you simply added a drive to a non-nested pool that already had data on it. If you want the data to be accessible within the pool, you'll have to move the data into the pool structure. Right now you should have drives with a hidden poolpart folder and all of the other data on the drive within your subpool. You need to take all of that other data and simply move it within the hidden folder. See this older thread for a similar situation: https://community.covecube.com/index.php?/topic/4040-data-now-showing-in-hierarchical-pool/&sortby=date
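    If you'd rather script that move than drag folders around in Explorer, a rough sketch follows. The drive letter and PoolPart folder name are placeholders - use whatever your drive actually shows - and it's safest to do this while the pool is idle.

    import shutil
    from pathlib import Path

    # Placeholders: substitute the real drive letter and the real hidden
    # PoolPart.* folder name (it contains a GUID) on your drive.
    drive = Path("E:/")
    poolpart = drive / "PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

    # Move everything on the drive into the hidden PoolPart folder, skipping the
    # PoolPart folder itself and the usual system folders.
    skip = {poolpart.name, "System Volume Information", "$RECYCLE.BIN"}
    for item in drive.iterdir():
        if item.name in skip:
            continue
        shutil.move(str(item), str(poolpart / item.name))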
  10. 1 point
    Great, thanks. Fixed it for me.
  11. 1 point
    I've been puzzled why unduplicated content on a drive with SMART warnings and damage was not being removed, despite setting various balancers to do so. I discovered today part of the reason - possibly all of it. I believe all the remaining unduplicated content were backup images from Macrium Reflect. Reflect has an anti-ransomware function that can prevent any changes to backup files. This was preventing drivepool from removing them. I realized this when I shut down drivepool service and tried to manually move those files to another drive. I'd have expected drivepool to report that files were inaccessible - but apparently it did not know other software was actually blocking it - which brings me to the next issue. Stablebit Scanner reported 20 unreadable sectors on this drive and 3 damaged files. SMART had indicated 5 pending sectors. I decided to re-run the scan after disabling the Macrium Image Guard - so far, it appears the unreadable sectors may have been caused by Image Guard and may not be bad. Remains to be seen whether the 5 pending sectors will end up being reallocated or become readable. The damaged files however were NOT image backups, so it's unclear if there was any connection. Bottom line: don't use Macrium Image Guard (or any other similar software) with pooled files. I may just move my image files out of the pool to avoid the issue.
  12. 1 point
    gx240

    Black Friday sales?

    Do you typically have Black Friday sales with reduced prices on your software (Stablebit Drivepool in particular)?
  13. 1 point
    Same here. Any update from the developers? This issue was opened a month ago and nothing... Not very good considering this is paid for software.
  14. 1 point
    Hello! I'm fairly new to StableBit, but liking the setup so far! I've got a few pools for various resources in my server. In case it's important, here's a quick idea of what I've got running: Dell R730 running Windows Server 2019 Datacenter, connected to a 24-disk shelf via SAS. Each drive shows up individually, so I've used DrivePool to create larger buckets for my data. I'd like to have them redundant against a single drive failure, but I know that means duplicating everything. I will eventually have a pool dedicated to my VMs, and VMs deduplicate very well since each one requires essentially a copy of the base data. While I will have backups of this data, I'd still like to have a quicker recovery from a drive failure in case that does happen, so they'd also be duplicated... (Scanner currently tells me one of my disks is throwing a SMART error, but I don't know how bad it is... I'd like to run it into the ground before replacing it, to save money on purchasing new hardware before it's actually dead...)

    So, I know deduplication isn't supported against the pool itself, but I was curious whether people have deduplicated the physical disks, and whether Windows dedupe sees the pool data and tries to deduplicate it? I noticed this thread (unfortunately it's locked for further comments, as it's fairly old) was talking about deduplicating the drives that a pool uses, but I don't know if they meant the files that weren't part of a pool, or if they were talking about the files from the pool. If possible, I'd love to hear an official answer, since I'd rather not run this in an unsupported way, but I'm really hoping there's a way to deduplicate some of these files before I run myself out of space... Thanks for any info that you can provide!
  15. 1 point
    This is correct. It isn't so much that you should not, it's that you can not. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70mbps. As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. It'll upload at the pace it can, and DrivePool will duplicate data as it's able.

    Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously.

    This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so.

    DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, it will be deleted from both locations if your balancer settings and file placement settings are configured to do so. It will pass the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate it on the next duplication pass. Obviously, files mysteriously disappearing is a worrying sign worthy of further investigation and attention.

    It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional mode cache set to something like 75% write, 25% read. Note that DrivePool will also read-stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed. So you'll want some read cache available.

    This isn't really relevant for your use case. The size of the files you are considering for storage will not be meaningfully influenced by a larger cluster size. Use the size you need for the volume size you require. Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, chkdsk, so you'll want to keep it below that. Relatedly, note that you can partition a single CloudDrive into multiple sub-60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary.
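    The ~70mbps figure falls straight out of the 750GB/day cap; a quick sanity check:

    # Google's server-side cap is 750 GB uploaded per day. Spread evenly over
    # 24 hours, that works out to roughly 70 megabits per second.
    cap_gb_per_day = 750
    seconds_per_day = 24 * 60 * 60
    mbps = cap_gb_per_day * 8 * 1000 / seconds_per_day  # treating 1 GB as 1000 MB
    print(round(mbps, 1))  # ~69.4 -> throttle CloudDrive a touch below 70 mbps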
  16. 1 point
    Umfriend

    My Rackmount Server

    Yeah, WS2019 missing the Essentials role sucks. I'm running WSE2016 and I have no way forward so this will be what I am running until the end of days probably.... But wow, nice setup! With the HBA card, can you get the HDDs to spin down? I tried with my Dell H310 (some 9210 variant IIRC) but no luck.
  17. 1 point
    There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider. It is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain. As far as reinstalling Windows or changing to a different computer goes, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force mount the drive, and CloudDrive will read the data, but you may lose any data that was sitting in your cache waiting to be uploaded during the failure. Note that CloudDrive does not upload user-accessible data to your provider by design. Other tools like rClone would be required to accomplish that. My general advice, in any case, would be to enable encryption. There is effectively no added overhead from using it, and the peace of mind is well worth it.
  18. 1 point
    I'm running drivepool on a server and I'm sharing the pool. When I access the share on my Mac, every folder except for one is fine. There is a single folder that says I don't have permission to open it. When I check the permissions all of the folders including the one I don't have access to, all have the same permissions. Any ideas what the problem could be?
  19. 1 point
    Spider99

    Event log warning

    You can ignore them - I asked a long time ago, and Christopher confirmed it's a side effect of it being a virtual disk and nothing to worry about.
  20. 1 point
    I believe you need to seed the pool. See Pool Seeding
  21. 1 point
    exterrestris

    Drivepool With Snapraid

    My snapraid.conf is pretty standard - I haven't really changed any of the defaults (so I haven't included them). I choose to keep a copy of the content file on every disk, but that's not strictly necessary.

    # Defines the file to use as parity storage
    # It must NOT be in a data disk
    # Format: "parity FILE [,FILE] ..."
    parity C:\Snapraid\Parity\1\snapraid.parity

    # Defines the files to use as content list
    # You can use multiple specification to store more copies
    # You must have least one copy for each parity file plus one. Some more don't hurt
    # They can be in the disks used for data, parity or boot,
    # but each file must be in a different disk
    # Format: "content FILE"
    content C:\Snapraid\Parity\1\snapraid.content
    content C:\Snapraid\Data\1\snapraid.content
    content C:\Snapraid\Data\2\snapraid.content
    content C:\Snapraid\Data\3\snapraid.content
    content C:\Snapraid\Data\4\snapraid.content

    # Defines the data disks to use
    # The name and mount point association is relevant for parity, do not change it
    # WARNING: Adding here your boot C:\ disk is NOT a good idea!
    # SnapRAID is better suited for files that rarely changes!
    # Format: "data DISK_NAME DISK_MOUNT_POINT"
    data d1 C:\Snapraid\Data\1\PoolPart.a5f57749-53fb-4595-9bad-5912c1cfb277
    data d2 C:\Snapraid\Data\2\PoolPart.7d66fe3d-5e5b-4aaf-a261-306e864c34fa
    data d3 C:\Snapraid\Data\3\PoolPart.a081b030-04dc-4eb5-87ba-9fd5f38deb7b
    data d4 C:\Snapraid\Data\4\PoolPart.65ea70d5-2de5-4b78-bd02-f09f32ed4426

    # Excludes hidden files and directories (uncomment to enable).
    #nohidden

    # Defines files and directories to exclude
    # Remember that all the paths are relative at the mount points
    # Format: "exclude FILE"
    # Format: "exclude DIR\"
    # Format: "exclude \PATH\FILE"
    # Format: "exclude \PATH\DIR\"
    exclude *.unrecoverable
    exclude Thumbs.db
    exclude \$RECYCLE.BIN
    exclude \System Volume Information
    exclude \Program Files\
    exclude \Program Files (x86)\
    exclude \Windows\
    exclude \.covefs

    As for DrivePool balancers, yes, turn them all off. The Scanner is useful to keep if you want automatic evacuation of a failing drive, but not essential, and the SSD Optimiser is only necessary if you have a cache drive to use as a landing zone. If you don't use a landing zone, then you can disable automatic balancing, but if you do then you need it to balance periodically - once a day rather than immediately is best, as you ideally want the SnapRAID sync to happen shortly after the balance completes. I'm not sure what the default behaviour of DrivePool is supposed to be when all balancers are disabled, but I think it does split evenly across the disks.
  22. 1 point
    So when you add a 6TB HDD to that setup, and assuming you have not tinkered with the balancing settings, any _new_ files would indeed be stored on that 6TB HDD. A rebalancing pass, which you can start manually, will fill it up as well. With default settings, DP will try to ensure that each disk has the same amount of free space. It would therefore write to the 6TB first until it has 4TB free, then equally to the 6TB and 4TB until both have 3TB free, etc. The 500GB HDD will see action only when the others have 500GB or less available. This is at default settings and without duplication.
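    A tiny simulation of that "equalise free space" placement (an illustration of the principle described above, not DP's actual code; the starting free-space figures are hypothetical):

    def place_files(free_tb: dict, files_tb: list) -> dict:
        """Place each new file on the disk with the most free space."""
        for size in files_tb:
            target = max(free_tb, key=free_tb.get)
            free_tb[target] -= size
        return free_tb

    # New 6TB disk added alongside a 4TB and a 500GB disk, default settings:
    disks = {"6TB": 6.0, "4TB": 4.0, "500GB": 0.5}
    print(place_files(disks, [0.5] * 10))
    # The 6TB disk absorbs the first ~2TB of writes on its own; after that the
    # 6TB and 4TB disks alternate, and the 500GB disk only sees action once the
    # others are down to ~0.5TB free.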
  23. 1 point
    Umfriend

    moving drives around

    Yes. I have never tried it but DP should not need drive letters. You can also map drives to folders somehow so that you can still easily explore them. Not sure how that works but there are threads on this forum.
  24. 1 point
    Christopher (Drashna)

    moving drives around

    Yeah, as long as the drives show up correctly, then it will maintain the pool. As for the power savings, that entirely depends on the power consumption, both at peak and idle. And to be honest, you probably won't see a lot of power savings. The best way to get that savings is to use large capacity SSDs, which is what data centers do. However, they can be a LOT more expensive.
  25. 1 point
    Umfriend

    moving drives around

    TL;DR but yes, DP will recognise the Pool. You could disconnect them all and plug them in on another machine and DP would see the Pool again. One small caveat is that if you use plug-ins that are not installed on the new machine then you may have some unwanted behaviour. Other than that, it should work.
  26. 1 point
    Yes, that's definitely a false positive. It's just some of the troubleshooting stuff for the UI. It's nothing harmful. And if you check, the file should be digitally signed. A good indicator that it's legit.
  27. 1 point
    Sure. I purchased and installed PartedMagic onto a USB. I then booted using this USB to run a Secure Erase, but it was not able to complete successfully. So I ran DD (through PartedMagic as well) on the drive around 5 times. I then converted the disk to GPT using diskpart and installed a fresh copy of Windows. I used CHKDSK, StableBit Scanner, and Intel SSD Toolbox (Full Diagnostic) to confirm that read/writes were functioning as intended. Based on what I could understand from Intel, it seems like the Optane drives are fairly unique due to their usage of 3D XPoint technology which caused the specific/strange behavior I was facing.
  28. 1 point
    I definitely suggest configuring SnapRAID so it points to the DrivePool folder with the GUID inside the config file, so that it's much easier to restore. SnapRAID doesn't have to use the root of the drive; it can point anywhere you like (as long as they are on different physical drives). So instead of doing:

    data d1 z:\
    data d2 y:\
    data d3 x:\

    You do:

    data d1 z:\drivepool.{guid}\
    data d2 y:\drivepool.{guid}\
    data d3 x:\drivepool.{guid}\

    That way, after a failure - e.g. d2 dies - you drop your new drive in, add it to the pool, get the new GUID from the new drive, and edit your snapraid conf to comment out the old drive and add the new one by changing d2 y:\drivepool.{guid}\ to d2 y:\drivepool.{newguid}\ like so:

    data d1 z:\drivepool.{guid}\
    #data d2 y:\drivepool.{guid}\
    data d2 y:\drivepool.{newguid}\
    data d3 x:\drivepool.{guid}\

    Then run your fix and it all just works - and you don't have to move your files around.
  29. 1 point
    Get a Norco box for the drives, and a LSI Host Bus Adapter with External SAS connectors. Then plug the internal SAS connector into the backplane of the Norco box, and the external connector to the external connector of the HBA.
  30. 1 point
    Well, normally, I'd agree with you. But StableBit Scanner isn't tracking the scan as one whole thing. It actually uses a sector map of the drive, so it tracks sections of the drive independently. Each section is checked based on its own status, and, over time, the scanning is spread out. This should actually make the scanning more intelligent (by ideally scanning at times and days when the drives are less likely to be active). You can check this out by double-clicking on the disk and looking at the sector map. It will show different colors for different regions.
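    Conceptually, that works something like the sketch below (an illustration of region-based tracking, not Scanner's actual implementation):

    import time

    # Illustrative only: track scan status per region of the disk rather than
    # for the disk as a whole, and always scan the region checked longest ago.
    REGION_COUNT = 128
    regions = [{"last_scanned": 0.0, "status": "unchecked"} for _ in range(REGION_COUNT)]

    def next_region_to_scan() -> int:
        """Pick the region whose last successful scan is the oldest."""
        return min(range(REGION_COUNT), key=lambda i: regions[i]["last_scanned"])

    def scan_region(index: int, disk_is_idle: bool) -> None:
        if not disk_is_idle:
            return  # defer work to a quieter time, as described above
        # ... read and verify the sectors in this region here ...
        regions[index]["status"] = "good"
        regions[index]["last_scanned"] = time.time()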
  31. 1 point
    OK, I came up with a solution to my own problem which will likely be the best of both worlds. I set up my Cloud Pool with pool duplication, and also set up my HDD Pool with pool duplication. Then I use both pools in the storage pool with no duplication, as normal.
  32. 1 point
    Christopher (Drashna)

    Disk Activity

    Unfortunately, it may be. There is a setting that we have enabled by default that may be causing this behavior. Specifically, the BitLocker setting. This setting queries the system for data, which creates a WMI query, which causes disk activity. That said, you can disable this: http://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings And the setting is "BitLocker_CloudPartUnlockDetect", which is actually used in the example. Set the "override" value to "false", save the file and reboot the system. That should fix the issue, hopefully.
  33. 1 point
    I've had it happen with normal reboots as well, just not as often as with crashes. It just depends on the timing. Imagine what happens on a reboot: Windows is forcefully shutting down services, including the StableBit Scanner service. So if this service gets shut down in the timeframe where it is writing new DiskId files, the files can end up corrupted, and after the reboot the service creates new DiskId files, meaning all previous scan status is lost. Now the DiskId files are not written literally every second anymore (which increased the risk that the service gets killed at the time of writing files significantly), but instead every 20-40 minutes (I don't know the exact interval now). That's a reduction by a factor of 1200 to 2400, so the risk that you reboot at the exact time the files are written should basically be zero now.
  34. 1 point
    Well, a number of others were having this as well, and I've posted this info in a number of those threads, so hopefully, confirmation will come soon.
  35. 1 point
    The cause of the issue is fixed with Beta 3246. Since my machine is not frequently crashing, maybe once per month, it will take a while to verify it on my side.
  36. 1 point
    I think you mean mbit :-P Yes, it all depends on the response time you have. Speed is not the issue; it's my response time to Google's servers. You're just lucky to be closer. Plus I've got upload verification on, which also cuts upload speeds. I get around 2500-2800 ms response time per thread and then instant download. So fewer calls and bigger downloads would do wonders for me.
  37. 1 point
    We've definitely talked about it. And to be honest, I'm not sure what we can do. Already, we do store the file system data, if you have pinning enabled, in theory. Though, there are circumstances that can cause it to purge that info. The other issue is that, by default, every block is checksummed, and that is checked on download. So if corrupted data is downloaded, then you would get errors, and a warning about it. However, that didn't happen here. And if that is the case, more than likely, it sent old/out-of-date data. Which ... I'm not sure how we can handle that in a way that isn't extremely complex. But again, this is something that is on our mind.
  38. 1 point
    Yes, there was something wrong in the program. They gave me a newer updated Beta that fixed this issue. http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.3.963_x64_BETA.exe
  39. 1 point
    Understood (and kind of assumed, but thought it was worth asking). Getting pretty deep into CloudDrive testing and loving it. Next is seeing how far I can get combining CD with the power of DrivePool and making pools of pools! Thanks for following up. -eric
  40. 1 point
    As noted before, I'm using a RAID controller, not an HBA, so you'd need to explore the f/w, drivers & s/w for your card. That said, a quick Google search turned up this - - however, as far as I can see, 4&83E10FE&0&00E0 is not necessarily a fixed device ID - so you'd need to look in the registry for the equivalent.
  41. 1 point
    I'm not sure? But the number of threads is set by our program. Mostly, it's just the number of open/active connections. Also, given how uploading is handled, the upload threshold may help prevent this from being an issue. But you can reduce the upload threads, if you want. Parallel connections. For stuff like prefetching, it makes a difference. Or if you have a lot of random access on the drives... But otherwise, they do have the daily upload limit, and they will throttle for other reasons (eg, DOS/DDoS protection)
  42. 1 point
    srcrist

    Warning from GDrive (Plex)

    Out of curiosity, does Google set different limits for the upload and download threads in the API? I've always assumed that since I see throttling around 12-15 threads in one direction, that the total number of threads in both directions needed to be less than that. Are you saying it should be fine with 10 in each direction even though 20 in one direction would get throttled?
  43. 1 point
    Thread count is fine. We really haven't seen issues with 10. However, the settings you have set WILL cause bottlenecking and issues.

    Download threads: 10
    Upload threads: 10
    Minimum download size: 20MB
    Prefetch trigger: 5MB
    Prefetch forward: 150 MB
    Prefetch time window: 30 seconds

    The Prefetch forward should be roughly 75% of download threads x minimum download size. If you can set a higher minimum size, then you can increase the forward.
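    Plugging the quoted numbers into that rule of thumb:

    def prefetch_forward_mb(download_threads: int, minimum_download_mb: int) -> float:
        """Roughly 75% of download threads x minimum download size."""
        return 0.75 * download_threads * minimum_download_mb

    print(prefetch_forward_mb(10, 20))  # 150 MB, matching the suggested settings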
  44. 1 point
    PocketDemon

    Different size hdd's

    Along with balancing personal budget, price/TB & warranty (if that matters to you) & whatnot... ...it's also about how many HDDs you can physically connect up vs how your data's growing - since many people get by with just a small SSD in a laptop - whilst others (like myself) are 'data-whores' with many 10s or 100s of TBs of random stuff. As to looking at NAS storage, part of the reason why people look at shucking the higher capacity WD external drives is that they all use WD/HGST helium-filled 5400rpm drives - which are effectively equivalent to the WD Reds... (some of the smaller capacity ones switched to using WD Greens/Blues - I believe only <=4TB but I don't know that for certain) ...though they 'may' alternatively be some version of a WD Gold or HGST HC500 or...??? ...all of which are designed for NAS - but buying the external drives is cheaper.
  45. 1 point
    That depends ENTIRELY on your use case. It's not a question that others can really answer. But if performance is important, then the SSD is going to be the better choice for you. But if you're accessing a lot of data (reading and writing), then a hard drive may be a better option.
  46. 1 point
    PocketDemon

    Different size hdd's

    There's no issue with different sizes - &, within the context of your drives & what you're likely to be doing, there's no issue with sizes & using the capacity. Yeah, the only time that there would be an issue is if the number of times you're duplicating wouldn't work... ...so, imagining someone were looking at duplicating the entire pool, for example -

    - with 2x duplication &, say, a 1TB & 8TB drive, they could only actually duplicate 1TB
    - & with 3x duplication &, say, a 1TB, 4TB & 8TB drive, they could still only actually duplicate 1TB

    ...however, unless you're after >=6x duplication (which is highly unlikely), there's no problem whatsoever. If you are using duplication & your current drives are pretty full already, after adding the new drive then I would suggest pushing the "Duplication Space Optimiser" to the top & forcing a rebalance run just before going to bed... As this then should prevent there being any issues moving forward.
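    To put numbers on those examples, here's a simplified model (a sketch of the principle, ignoring balancing details): each copy of a file has to land on a different disk, so a disk of capacity c can contribute at most min(c, D) towards storing D worth of duplicated data.

    def max_duplicated_tb(disks_tb, factor):
        """Largest amount D (in TB) that can be stored with `factor` copies, each
        copy on a different disk: D is feasible while sum(min(c, D)) >= factor * D.
        Searched in 10GB steps."""
        disks_gb = [int(c * 1000) for c in disks_tb]
        d = 0
        while sum(min(c, d + 10) for c in disks_gb) >= factor * (d + 10):
            d += 10
        return d / 1000

    print(max_duplicated_tb([1, 8], 2))     # 1.0 TB with x2 duplication
    print(max_duplicated_tb([1, 4, 8], 3))  # 1.0 TB with x3 duplication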
  47. 1 point
    MandalorePatriot

    CloudDrive Cache - SSD vs HDD

    Thank you, I really appreciate your quick and informative answer!
  48. 1 point
    You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule. If you like, you can even use the "running file" drivepool optionally creates while balancing to trigger it. Check my post history.
  49. 1 point
    Hi there,

    Awesome software - as always! I've been using DrivePool and Scanner since nearly the beginning. Currently I have something like 22 drives in my main drive pool. They range from iSCSI, SATA-attached, USB (depending on the pool), Ximeta-attached (they have a custom network thing they do), or even virtualized under ESXi. Anything that shows up to Windows as a physical drive can just be pooled. I love it!

    Recently purchased CloudDrive, and after messing around and some Google searching of this forum, I think I'm fairly well set up. I have 13 CloudDrives set up: 1 box.com, 2 Dropbox, 10 Google Drive (not the paid cloud drives, but the provided ones for personal use). I used all defaults for everything, except:

    - set the cache to MINIMAL+
    - encrypted it, and set it to auto-unlock the encrypted drives (as I only care that the CLOUD data is encrypted... I mean, you want to look at my pictures of my kids THAT bad and break into my PC to do it... okay, enjoy, you earned it)
    - pointed the cache at a specific 100GB hard drive partition that I could dedicate to this (I'm using DrivePool on all other drives, and one specific thing mentioned was that the cache drive could NOT be part of a DrivePool)
    - renamed it, removed the drive letter, and mounted it to a folder name (I use this with drive pooling to cut down on the displayed drive letters for a clean look)

    I am getting a slew of "throttling due to bandwidth" issues. I admit that my cache drive is probably too small for the amount of data I DUMPED, and I will continue to monitor this, as I do not feel I was getting those messages when I did not just DUMP enough data to fill the ENTIRE CloudPool in one shot.

    So, my request is to have a view in the program to look at all drive upload/download at the same time. Maybe even space? I love the existing charts. They are easy to look at, easy to read and understand. I also like the "Technical Details" page, as that shows a TON of information, such as the file - or chunk - and how much of it is uploaded/downloaded. I'm wondering if there is a way to view all drives at once? I would use this to get a sense of the overall health of the system. That is, if I have to scan through all 13 drives, I do not see where my bandwidth is being consumed, so I can't tell whether the cache drive is FULL or whether I am having upload/download issues. By the time I click through each drive, I do not see where the bandwidth is being consumed, as the bandwidth seems to shift between drives fast enough that I never get a true representation of what is going on. I'm sure part of that is redrawing the graphs. I find the Technical Details page much more useful; although it does not show what is LEFT to upload, I get a much faster idea of what is going on, and although it's annoying to click through ALL the drives, it seems to give me a better picture. I think that having an overall page would be fantastic.

    Thank you again for continuing to show what is possible!

    --Dan
  50. 1 point
    To clarify (and make it simple to find), here is Alex's official definition of that "Other" space:
