Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 05/27/13 in all areas

  1. 4 points
    VERY IMPRESSED!
    - Didn't need to create an account and password
    - Same activation code covers EVERY product on EVERY computer!
    - Payment information remembered, so additional licenses are purchased easily
    - Nice bundle and multi-license discount
    I'm in love with DrivePool and Scanner. Thanks for a great product and a great buying experience. -Scott
  2. 2 points
    They are not comparable products. Both applications are more similar to the popular rClone solution for Linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data; you must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data, because they do not use a genuine drive image, they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.

None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, it is more vulnerable, because some of that data on your provider is the file system data for the drive.

Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another. We don't really know; this is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks that corrupted the file system. This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to.

Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive; it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of their users' storage is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever.

Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like TestDisk and Recuva. But it simply wasn't worth the effort to repair the volumes rather than just discard the data and rebuild, because it was expendable data. Genuinely irreplaceable data could be recovered, so it isn't even really accurate to call it data loss.

This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point?
It is, frankly, a problem that we will hopefully never have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson, in the sense that any data stored on the provider is still at the mercy of the provider's functionality, and there isn't anything to be done about that. So, your options are to either a) only store data that you can afford to lose or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will.

The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Conversely, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data.

If you want to mitigate volume corruption, then you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow, or shut down Drive completely like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
  3. 2 points
    I'm always impressed with the extent you go to in helping people with their questions, no matter how easy or complex. Thanks Chris.
  4. 2 points
    Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives in case of needing a recovery for unduplicated files. I know the dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')

    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory (as an example). Here is the code I used (it's also attached in the .zip file):

    # This saves the full listing of files in DrivePool
    $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

    # This creates an empty table to store details of the files
    $filelist = @()

    # This goes through each file, and populates the table with the drive name, file name and directory name
    foreach ($file in $files) {
        $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
    }

    # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
    $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn

    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
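    As a rough illustration of the "run each day at 3am" part, here is a minimal scheduling sketch using the built-in ScheduledTasks cmdlets; the script path and task name are assumptions, so adjust them to wherever you saved the script (run from an elevated PowerShell prompt):

    # Sketch only -- C:\Scripts\DPFileList.ps1 and the task name are placeholders.
    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy RemoteSigned -File C:\Scripts\DPFileList.ps1'
    $trigger = New-ScheduledTaskTrigger -Daily -At 3am
    Register-ScheduledTask -TaskName 'DrivePool File Catalog' -Action $action -Trigger $trigger

    Task Scheduler's GUI works just as well if you prefer clicking through it.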
  5. 2 points
    The problem is that you were still on an affected version (3216). By upgrading to the newest version, the StableBit Scanner service is forcefully shut down, and thus the DiskId files can get corrupted in the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen anymore on your next upgrade/reboot/crash. I agree wholeheartedly, though, that we should get a way to back up the scan status of drives, just in case. A scheduled automatic backup would be great. The files are extremely small and don't take a lot of space, so I don't see a reason not to implement it, feature-wise.
  6. 2 points
    You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule. If you like, you can even use the "running file" DrivePool optionally creates while balancing to trigger it. Check my post history.
  7. 2 points
    srcrist

    Optimal settings for Plex

    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
    - 20MB chunk size
    - 50+ GB expandable cache
    - 10 download threads
    - 5 upload threads, with background I/O turned off
    - Upload threshold: 1MB or 5 minutes
    - Minimum download size: 20MB
    - Prefetch trigger: 20MB
    - Prefetch forward: 175MB
    - Prefetch time window: 10 seconds
  8. 2 points
    Issue resolved by updating DrivePool. My version was fairly out of date, and using the latest public stable build fixed everything.
  9. 2 points
    I think I found where my issue was occurring: I am being bottlenecked by the Windows OS cache because I am running the OS off a SATA SSD. I need to move that over to part of the 970 EVO. I am going to attempt that OS reinstall/move later and test again. Now the problem makes a lot more sense, and it's clear why the speeds looked great in benchmarks but did not manifest in real-world file transfers.
  10. 2 points
    I used this adapter cable for years & never had a problem. Before I bought my server case I had a regular old case. I had (3) 4-in-3 hot swap cages next to the server. I ran the SATA cables out the back of my old case. I had a power supply sitting on the shelf by the cages which powered them. The cool thing was that I ran the power cables that usually go to the motherboard inside of the case from the second power supply. I had an adapter that would plug into the motherboard and the main power supply that the computer would plug into. The adapter had a couple of wires coming from it to a female connection. You would plug your second power supply into it. What would happen is that when you turn on your main computer, the second power supply would come on. That way your computer will see all of your hard drives at once. Of course, when you turned off your server, both of the power supplies would turn off. Here is a link to that adapter. Let me know what you think. https://www.newegg.com/Product/Product.aspx?Item=9SIA85V3DG9612
  11. 2 points
    CC88

    Is DrivePool abandoned software?

    The current method of separate modules, where we can pick and choose which options to use together, gets my (very strong) vote! Jamming them all together will just create unneeded bloat for some. I would still pay a "forced" bundle price if it gave me the option to use just the modules I need... and maybe add one or more of the others later. I'm amazed at the quality of the products that one (I think?) developer has produced and is offering for a low - as Chris says, almost impulse-buy - price. Keep up the good work and bug squashing, guys!
  12. 2 points
    It's just exaggerated. The average URE rates of 10^14/10^15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true, as e.g. some HW controllers and older SW have a very small tolerance for it. Modern and updated RAID algorithms can continue a rebuild with that particular area reported as a reallocated area to the upper FS, IIRC, and you'll likely just get a pre-fail SMART attribute status, as if you had experienced the same thing on a single drive, which will act slower and hang on that area in much the same manner as a rebuild will. I'd still take striped mirrors for max performance and reliability, and parity only where max storage vs. cost is important, albeit in small arrays striped together.
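    For context on why those articles reach such dire conclusions, here is a rough back-of-the-envelope calculation, assuming the 10^14 spec is taken literally and read errors are independent (which, as noted above, real drives generally beat by a wide margin). Reading a full 8TB drive during a rebuild is about 6.4 x 10^13 bits, so:

    P(no URE) = (1 - 10^-14)^(6.4 x 10^13) ≈ e^-0.64 ≈ 0.53

    Taken at face value, the spec sheet implies nearly half of 8TB rebuilds would hit a URE, which is where the scary headlines come from; actual field error rates are far lower.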
  13. 2 points
    To clarify a couple of things here (sorry, I did skim here): StableBit DrivePool's default file placement strategy is to place new files on the disks with the most available free space. This means the 1TB drives, first, and then once they're full enough, on the 500GB drive. So, yes, this is normal. The Drive Space Equalizer doesn't change this, but just causes it to rebalance "after the fact" so that it's equal. So, once the 1TB drives get to be about 470GB free/used, it should then start using the 500GB drive as well. There are a couple of balancers that do change this behavior, but you'll see "real time placement limiters" on the disks, when this happens (red arrows, specifically). If you don't see that, then it defaults to the "normal" behavior.
  14. 2 points
    Christopher (Drashna)

    Moving from WHS V1

    Windows Server 2016 Essentials is a very good choice, actually! It's the direct successor to Windows Home Server. The caveat here is that it does want to be a domain controller (but that's 100% optional). Yeah, the Essentials Experience won't really let you delete the Users folder. There is some hard-coded functionality here, which ... is annoying. Depending on how you move the folders, "yes". E.g., it will keep the permissions from the old folder, and not use the ones from the new folder. It's quite annoying, and is why some of my automation stuff uses a temp drive and then moves stuff to the pool. If you're using the Essentials stuff, you should be good. But you should check out this: https://tinkertry.com/ws2012e-connector https://tinkertry.com/how-to-make-windows-server-2012-r2-essentials-client-connector-install-behave-just-like-windows-home-server
  15. 2 points
    Jaga

    Recommended SSD setting?

    Even 60°C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do. I wouldn't let it go over 70, however - Samsung, as an example, rates many of their SSDs between 0 and 70°C as far as environmental conditions go. As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers may not be as robust.
  16. 2 points
    Jaga

    Almost always balancing

    With the "Disk Space Equalizer" plugin turned -off-, Drivepool will still auto-balance all new files added to the Pool, even if it has to go through the SSD Cache disks first. They merely act as a temporary front-end pool that is emptied out over time. The fact that the SSD cache filled up may be why you're seeing balancing/performance oddness, coupled with the fact you had real-time re-balancing going on. Try not to let those SSDs fill up. I would recommend disabling the Disk Space Equalizer, and just leaving the SSD cache plugin on for daily use. If you need to manually re-balance the pool do a re-measure first, then temporarily turn the Disk Space Equalizer back on (it should kick off a re-balance immediately when toggled on). When the re-balance is complete, toggle the Disk Space Equalizer back off.
  17. 2 points
    With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with Drivepool for others to benefit from. There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up Drivepool threw up error messages and the GUI for it wouldn't open. I found the culprit - somehow the Drivepool service was unable to start, even though all of its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available. "Okay" I thought, and did an install repair on Stablebit Drivepool through the Control Panel. Well, that didn't seem to work either - the service just flat-out refused to start. So at that point I assumed something in the software was corrupted, and decided to 1) uninstall Drivepool, 2) bounce the server again, 3) run a cleaning utility, and 4) re-install. I did just that, and Drivepool installed to the same location without complaint. After starting the Drivepool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available the scanner and transcoders kicked off like nothing had happened. Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment. That's my positive story for the day, and why I continue to recommend Stablebit products.
  18. 2 points
    Jose M Filion

    I/O Error

    Just wanted to give an update for those who have problems with Xfinity's new 1Gb line - I basically had them come out, showed them how the line was going in and out with PingPlotter, and they rewired everything and changed out the modem. Once they did that, everything has stabilized and been working great - thank you for all your help, guys! Long live StableBit Drive! lol
  19. 2 points
    1x 128 SSD for the OS, 1x 8TB, 2x 4TB, 2x 2TB, 1x 900GB. The 8TB and 1x 4TB + 1x 2TB are in a hierarchical duplicated Pool, all with 2TB partitions so that WHS2011 Server Backup works. The other 4TB + 2TB are in case some HDD fails. The 900GB is for trash from a further-unnamed downloading client. So actually, a pretty small server given what many users here have.
  20. 2 points
    The Disk Space Equalizer plug-in comes to mind. https://stablebit.com/DrivePool/Plugins
  21. 2 points
    Mostly just ask.
  22. 2 points
    ...just to chime in here... remember that expanders have firmware too. I am running 1x M1015 + 1x RES2SV240 in my 24-bay rig for 5+ years now... I remember that there was a firmware update for my expander that improved stability with SATA drives (which is the standard use case for the majority of the semi-pro users here, I think). Upgrading the firmware could be done with the same utility as for the HBA, as far as I remember... instructions were in the firmware readme. Edit: here's a link to a how-to: https://lime-technology.com/forums/topic/24075-solved-flashing-firmware-to-an-intel-res2xxxxx-expander-with-your-9211-8i-hba/?tab=comments#comment-218471 Regards, Fred
  23. 2 points
    Also, you may want to check out the newest beta. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3204_BETA.exe
  24. 2 points
    Okay, good news everyone. Alex was able to reproduce this issue, and we may have a fix. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3198_BETA.exe
  25. 2 points
    The import/export feature would be nice. I guess right-clicking on the folder and 7zip'ing it is the definitive solution for now, until an automated process evolves. Given Christopher's answer that it seems to be an isolated incident, I'm wondering what it is about our particular systems that is causing this purge. I have it running on both W7 and W10 and it purges on both. Both OSs are clean installs. Both run the same EVO 500... alongside a WD spinner. Both are Dell. It seems to me that the purge is triggered by some integral part of the software once it's updated, like an auto-purge feature. I'll be honest, I think most people are too lazy to sign up and post the issue, which makes it appear to be an isolated incident, but I believe this is happening more often than we think. I'm on a lot of forums, and it's always the same people that help developers address bugs by reporting them. Unless it's a functional problem, it goes unreported. All of you... know how lazy people are. With that said, I like the idea of an integral backup and restore of the settings.
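    Until something built-in exists, here's a minimal sketch of automating that manual zip-the-folder approach with PowerShell's Compress-Archive; the Scanner store path and backup destination below are assumptions, so point them at wherever your Scanner settings actually live, and note that files locked by the running service may be skipped:

    # Sketch only -- both paths are placeholders; adjust to your system.
    $source = 'C:\ProgramData\StableBit Scanner\Service\Store'
    $dest   = "D:\Backups\ScannerStore_$(Get-Date -Format yyyyMMdd).zip"
    Compress-Archive -Path $source -DestinationPath $dest -Force

    Hooking that up to a daily scheduled task would give you the automatic backup being asked for.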
  26. 2 points
    As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193 StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives as well. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe, currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
  27. 2 points
    I used Z once, only to find that a printer with some media card slot wanted it for itself or would not print at all. Same for some BlackBerry devices claiming Z. So yeah, high up, but not Y and Z. I use P, Q and R.
  28. 2 points
    You could do it with a combination of a VPN, Drivepool pool(s), and Clouddrive using file share(s). Here's how I think it could work: The VPN connects all computers on the same local net. Each computer has a Pool to hold data, and the Pool drive shared so the local net can access it. Clouddrive has multiple file shares setup, one to each computer connected via VPN and sharing a Pool. Each local Pool can have duplication enabled, ensuring each local Clouddrive folder is duplicated locally X times. The file shares in Clouddrive are added to a new Drivepool Pool, essentially combining all of the remote computer storage you provisioned into one large volume. Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with Drivepool installed and a single copy of Clouddrive to pull it off. Essentially wide-area-storage.
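    As a quick sanity check for a setup like this, here is a minimal sketch (the host and share names are hypothetical) that confirms each remote pool share is reachable over the VPN before you point CloudDrive's file share provider at them:

    # Sketch only -- replace the UNC paths with your actual VPN hosts/shares.
    $shares = '\\office-pc\Pool', '\\home-pc\Pool'
    foreach ($share in $shares) {
        if (Test-Path $share) { "$share is reachable" } else { "$share is NOT reachable" }
    }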
  29. 2 points
    It's $10 off the normal price for a product you don't already own, but $15/each for products that you do.
  30. 2 points
    "dpcmd remeasure-pool x:"
  31. 2 points
    "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
  32. 2 points
    I'm using Windows Server 2016 Datacenter (GUI version - newest updates) on a dual-socket system in combination with CloudDrive (newest version). The only problem I had was connecting to the cloud service with Internet Explorer. Using a 3rd-party browser solved this. But I'm always using ReFS instead of NTFS...
  33. 2 points
    wolfhammer

    not authorized, must reauthorize

    I need to do this daily. Is there a way to auto-authorize? Otherwise I can't really use this app.
  34. 2 points
    HellDiverUK

    Build Advice Needed

    Ah yes, I meant to mention BlueIris. I run it at my mother-in-law's house on an old Dell T20 that I upgraded from its G3220 to an E3-1275v3. It's running a basic install of Windows 10 Pro. I'm using QuickSync to decode the video coming from my 3 HikVision cameras. Before I used QS, it was sitting at about 60% CPU use. With QS I'm seeing 16% CPU at the moment, and also a 10% saving on power consumption. I have 3 HikVision cameras, two are 4MP and one is 5MP, and all are running at their maximum resolution. I record 24/7 on to an 8TB WD Purple drive, with events turned on. QuickSync also seems to be used for transcoding video that's accessed by the BlueIris app (can highly recommend the app; it's basically the only way we access the system apart from some admin on the server's console). Considering QuickSync has improved greatly in recent CPUs (basically Skylake or newer), you should have no problems with an i7-8700K. I get great performance from a creaky old Haswell.
  35. 2 points
    Surface scans are disabled for CloudDrive disks by default, but file system scans are not (as they can be helpful). You can disable this per disk, in the "Disk Settings" option. As for the length, that depends on the disk. And no, there isn't really a way to speed this up.
  36. 2 points
    Umfriend

    Recommended server backup method?

    Sure. So DP supports pool hierarchies, i.e., a Pool can act like it is a HDD that is part of another Pool. This was done especially for me. Just kidding. It was done to make DP and CloudDrive (CD) work together well (but it helps me too). In the CD case, suppose you have two HDDs that are Pooled and you use x2 duplication. You also add a CD to that Pool. What you *want* is one duplicate on either HDD and the other duplicate on the CD. But there is no guarantee it will be that way; both duplicates could end up on one of the HDDs. Lose the system and you lose all, as there is no duplicate on CD.

To solve this, add both HDDs to Pool A. This Pool is not duplicated. You also have CD (or another Pool of a number of HDDs) and create unduplicated Pool B with that. If you then create a duplicated Pool C by adding Pool A and Pool B, then DP, through Pool C, will ensure that one duplicate ends up at (HDDs in) Pool A and the other duplicate will end up at Pool B. This is because DP will, for the purpose of Pool C, view Pool A and Pool B as single HDDs, and DP ensures that duplicates are not stored on the same "HDD". Next, for backup purposes, you would back up the underlying HDDs of Pool A, and you would be backing up only one duplicate and still be certain you have all files.

Edit: In my case, this allows me to back up a single 4TB HDD (that is partitioned into 2 x 2TB partitions) in WHS2011 (which only supports backups of volumes/partitions up to 2TB) and still have this duplicated with another 4TB HDD. So, I have:
Pool A: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
Pool B: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
Pool C: Pool A + Pool B, duplicated.
So, every file in Pool C is written to Pool A and Pool B. It is therefore on both 4TB HDDs that are in the respective Pools A and B. Next, I back up both partitions of either HDD, and I have only one backup with the guarantee of having one copy of each file included in the backup.
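If you want to double-check that the Pool A disks really do carry a complete duplicate before relying on that backup, here's a minimal sketch. It assumes the Pool A volumes are mounted at D: and E: (placeholders, adjust to your layout) and relies on DrivePool keeping pool content in hidden PoolPart.* folders on each disk:

# Sketch only -- D: and E: stand in for the Pool A volumes.
$parts = Get-ChildItem -Path 'D:\', 'E:\' -Directory -Hidden -Filter 'PoolPart.*'
$count = ($parts | Get-ChildItem -Recurse -Force -File).Count
"Files found on Pool A's disks: $count"

Compare that against the file count of Pool C itself; the numbers should roughly match (aside from metadata) if one full duplicate lives on Pool A.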
  37. 1 point
    gx240

    Black Friday sales?

    Do you typically have Black Friday sales with reduced prices on your software (Stablebit Drivepool in particular)?
  38. 1 point
    Thank you everyone who has commented on this thread - with your help I was able to get everything working again! Thanks for being patient !
  39. 1 point
    Now, the manual for the HBA you were talking about states "Minimum airflow: 200 linear feet per minute at 55 °C inlet temperature"... ...which is the same as my RAID card. Beyond that, all I can say is that, even with water cooling the CPU & GPU (& an external rad), so most of the heat's already taken out of the case & the case fans are primarily cooling the mobo, memory, etc., I've had issues without direct cooling with all of my previous LSI RAID cards - both in terms of drives dropping out & BSODs without there being exceptional disk usage. (It's not that I'm running huge R50 arrays or something - primarily that I simply prefer using a RAID card, vs a HBA, in terms of the cache & BBU options.) Similarly, the Chenbro expander I have - which, other than the fans, drives, cables, MOLEX-to-PCIE (to power the card) & PSU, is the only thing in the server case - came with a fan attached which failed; & again I had issues... ...so it's now got one of the Noctua fans on instead. So, whilst you 'could' try it without & see, personally I would always stick a fan on something like this. I couldn't advise you on monitoring for PWM as that's not how I do things - since I'd far rather have the system be stable irrespective of whether or not I was in a particular OS. Not that dissimilarly, whilst the rad fans are PWM, for me it's about creating a temp curve within the BIOS for the CPU (& hence, by default, the GPU), & so is entirely OS independent. So, whilst I couldn't recommend anything specific, 'if' I were looking for a fan controller then I'd want something which I could connect a thermal sensor to (& attach that to the h/s above the IOC) AND I could set the temp limit solely with the controller.
  40. 1 point
    Changed it to 20 threads and now I am getting 300 Mbps and hitting the 750GB limit, so I think I am good now.
  41. 1 point
    srcrist

    CloudDrive and GSuite and some errors

    The file errors make me wonder if there is something wrong with the installation. Go ahead and run the troubleshooter (http://wiki.covecube.com/StableBit_Troubleshooter) and include your ticket number. I think you'll have to wait for Christopher to handle this one. I'm not sure what's going on.
  42. 1 point
    So, removing StableBit Scanner from the system fixed the issue with the drives? If so.... well, uninstalling the drivers for the controller (or installing them, if they weren't) may help out here. Otherwise, it depends on your budget. I personally highly recommend the LSI SAS 9211 or 9207 cards, but they're on the higher end of consumer/prosumer.
  43. 1 point
    Christopher (Drashna)

    Few requests

    Actually, yes. That's part of what we want to do with StableBit Cloud. Good news! The latest beta has a concurrency limit that you can configure. It defaults to 2 drives right now, so this should help you out greatly! http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.1.1.1057_x64_BETA.exe
  44. 1 point
    Umfriend

    Moving from WHS V1

    Actually, all I want is what WHS2011 does, but with continued support and security updates and support for way more memory. In any case, I was planning to go WSE2016 at some stage, but that will now be WSE2019. I was just trying to warn you that WS2016 support will end, I think at the end of 2022 (2027 extended support, no clue what that means), and that going WSE2019 might save you a learning curve. Having said that, and missing knowledge & experience with WSE 2012/2016, it may be that WSE 2019 actually misses the dashboard (if that is what the Essentials Experience role is): https://cloudblogs.microsoft.com/windowsserver/2018/09/05/windows-server-2019-essentials-update/ So basically, I don't know what I am talking about...
  45. 1 point
    I think that this has been asked for before. But just in case: https://stablebit.com/Admin/IssueAnalysis/27889 And extending the "Disk Usage Limiter" balancer would be an easy option, I think. Also, are you experienced with C# programming? If not, no worries. If so, let me know, as there is source for building balancer plugins.
  46. 1 point
    Christopher (Drashna)

    Scanner activity log?

    The service log would be where that stuff is logged. But no, there aren't any "reports" generated at this time. This is something that has been requested in the past, and is something that we do plan on adding, but as part of the StableBit Cloud. http://wiki.covecube.com/Development_Status#StableBit_Cloud
  47. 1 point
    Scanner also pings the SMART status of each drive on a short timer. You will want to go into Settings, and to the S.M.A.R.T. tab to enable "Throttle queries". It's your choice how long to set it, and you also have the option to limit queries to the work window (active disk scanning) only, or to not query the drive if it has spun down (sleep mode on the drive). In your case, I would simply throttle queries to every 5-10 minutes, and enable "Only query during the work window or if scanning". Then under the General tab set the hours it's allowed to access the drives to perform scans, and check "Wake up to scan". It should limit drive activity and allow the machine to sleep normally.
  48. 1 point
    Have you considered that it may not be the flash drive causing this? It could be the USB controller or the motherboard. Especially check whether the board gets enough power. I've seen USB drives unjustifiably getting the blame for issues more often than I can count, while a mainboard was broken, or the PSU didn't have enough power left for the USB lanes. If that's ruled out, and it still fails, try running a SpinRite level 2 scan on the USB drive (or similar software). May work wonders.
  49. 1 point
    You could probably just restart the "Windows Server Essentials Storage Manager" service, I think it was. And this may fix the issue, without a full reboot. So duplicated data on the 2x4TB drives, and unduplicated data on the 8TB? If so, that's doable (via the "Drive Usage Limiter" balancer, "manage pool" -> balancing -> "balancers" tab). But keep in mind that we don't keep track of originals and duplicates, so "duplicated" is all data that is duplicated, aka both copies. Unduplicated data is any data that isn't duplicated.
  50. 1 point
    There is ABSOLUTELY NO ISSUE using either the built in defragmentation software, or using 3rd party software. Specifically, StableBit DrivePool stores the files on normal NTFS volumes. We don't do anything "weird" that would break defragmentation. At all. In fact, I personally use PerfectDisk on my server (which has 12 HDDs), and have never had an issue with it.
