Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 05/27/13 in all areas

  1. 4 points
    VERY IMPRESSED!
    - Didn't need to create an account and password
    - Same activation code covers EVERY product on EVERY computer!
    - Payment information remembered, so additional licenses are purchased easily
    - Nice bundle and multi-license discount
    I'm in love with the DrivePool and Scanner. Thanks for a great product and a great buying experience. -Scott
  2. 2 points
    They are not comparable products. Both applications are more similar to the popular rClone solution for Linux: they are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data; you must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data, because they do not use a genuine drive image; they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.
    None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, it is more vulnerable, because some of that data on your provider is the file system data for the drive. Google's service disruptions back in March caused it to return stale revisions of the chunks containing the file system data (read: chunks that had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another; we don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks, and they corrupted the file system. This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to.
    Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive; it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of their users' storage is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever.
    Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like TestDisk and Recuva. It simply wasn't worth the effort to repair the volumes rather than just discard the data and rebuild, because it was expendable data. Genuinely irreplaceable data could be recovered, so it isn't even really accurate to call it data loss.
    This is not a problem with a solution that can be implemented on the software side--at least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point?
    It is, frankly, a problem that we will hopefully never have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality, and there isn't anything to be done about that. So your options are to either a) only store data that you can afford to lose, or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will.
    The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Contrarily, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption, then you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow, or shut down Drive completely like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
  3. 2 points
    Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives in case of needing a recovery for unduplicated files. I know the dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...
    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')
    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory (as an example):
    Here is the code I used (it's also attached in the .zip file):
        # This saves the full listing of files in DrivePool
        $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}
        # This creates an empty table to store details of the files
        $filelist = @()
        # This goes through each file, and populates the table with the drive name, file name and directory name
        foreach ($file in $files) {
            $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
        }
        # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
        $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation
    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn
    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
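    A possible tweak along the same lines, in case the hardcoded Substring() offsets above don't line up with your mount paths: derive the drive name and relative directory from the path itself. This is just a sketch assuming the same C:\DrivePool\{disk name} mount layout; adjust the root and output paths for your own system.
        # Sketch: same idea as Quinn's script, but without hardcoded character offsets
        # (assumes all pool drives are mounted under C:\DrivePool\<disk name>)
        $root = 'C:\DrivePool'
        $filelist = Get-ChildItem -Path $root -Recurse -Force -File | ForEach-Object {
            # e.g. "Disk1\Movies\Action" relative to the mount root
            $relative = $_.DirectoryName.Substring($root.Length).TrimStart('\')
            $drive, $dir = $relative -split '\\', 2
            [pscustomobject]@{
                Drive         = $drive      # mount folder name = disk name
                FileName      = $_.Name
                DirectoryName = "$dir"      # empty when the file sits in the disk's root
            }
        }
        $filelist | Export-Csv -Path 'C:\DPFileList.csv' -NoTypeInformation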
  4. 2 points
    The problem is that you were still on an affected version, 3216. When upgrading to the newest version, the StableBit Scanner service is forcefully shut down, and the DiskId files can get corrupted in the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen anymore on your next upgrade/reboot/crash. I agree wholeheartedly, though, that we should get a way to back up the scan status of drives just in case. A scheduled automatic backup would be great. The files are extremely small and don't take a lot of space, so I don't see a reason not to implement it feature-wise.
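    Until something like that is built in, a scheduled archive of the Scanner settings store is a reasonable stopgap. A minimal sketch follows; the folder path below is an assumption about a default install (the DiskId/settings files living under ProgramData), so verify the actual location on your own system before relying on it.
        # Assumption: Scanner's settings/DiskId store lives under ProgramData -- verify the path on your install
        $source      = 'C:\ProgramData\StableBit Scanner\Service\Store'
        $destination = "D:\Backups\ScannerStore_$(Get-Date -Format yyyyMMdd_HHmmss).zip"
        New-Item -ItemType Directory -Path (Split-Path $destination) -Force | Out-Null
        # If the Scanner service has the files locked, stop it first and start it again afterward
        Compress-Archive -Path $source -DestinationPath $destination
        # Schedule it daily from an elevated prompt, e.g.:
        #   schtasks /Create /TN "ScannerStoreBackup" /TR "powershell -File C:\Scripts\Backup-ScannerStore.ps1" /SC DAILY /ST 03:30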
  5. 2 points
    Issue resolved by updating DrivePool. My version was fairly out of date, and using the latest public stable build fixed everything.
  6. 2 points
    I think I found where my issue was occurring: I am being bottlenecked by the Windows OS cache because I am running the OS off a SATA SSD. I need to move that over to part of the 970 EVO. I am going to attempt that OS reinstall/move later and test again. Now the problem makes a lot more sense, and it explains why the speeds looked great in benchmarks but did not manifest in real-world file transfers.
  7. 2 points
    It's just exaggerated. The average URE rates of 10^14/10^15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true, as e.g. some HW controllers and older SW have a very small tolerance for it. Modern and updated RAID algorithms can continue a rebuild with that particular area reported as a reallocated area to the upper FS, IIRC, and you'll likely just get a pre-fail SMART attribute status, as if you had experienced the same thing on a single drive, which will act slower and hang on that area in much the same manner as a rebuild will. I'd still take striped mirrors for max performance and reliability, and parity only where max storage vs cost is important, albeit in small arrays striped together.
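    For reference, the back-of-envelope math those articles take literally: at a quoted rate of one URE per 10^14 bits read, a full read of, say, 12 TB during a rebuild is roughly 9.6x10^13 bits, which works out to around a 60% chance of hitting at least one URE if the spec sheet figure were the whole story. A quick sketch of that calculation (the 12 TB figure is just an example):
        # Naive URE-during-rebuild estimate, as used in the "RAID5 is dead" style articles
        $ureRate     = 1e-14                                      # quoted: 1 error per 10^14 bits read
        $rebuildTB   = 12                                         # example: data read during the rebuild
        $bitsRead    = $rebuildTB * 1e12 * 8
        $pAtLeastOne = 1 - [math]::Exp(-$ureRate * $bitsRead)     # Poisson approximation
        "{0:P1} chance of at least one URE, taking the spec literally" -f $pAtLeastOne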
  8. 2 points
    Christopher (Drashna)

    Moving from WHS V1

    Windows Server 2016 Essentials is a very good choice, actually! It's the direct successor to Windows Home Server. The caveat here is that it does want to be a domain controller (but that's 100% optional). Yeah, the Essentials Experience won't really let you delete the Users folder. There is some hard-coded functionality here, which ... is annoying. Depending on how you move the folders, "yes". E.g., it will keep the permissions from the old folder, and not use the ones from the new folder. It's quite annoying, and is why some of my automation stuff uses a temp drive and then moves stuff to the pool. If you're using the Essentials stuff, you should be good. But you should check out this: https://tinkertry.com/ws2012e-connector https://tinkertry.com/how-to-make-windows-server-2012-r2-essentials-client-connector-install-behave-just-like-windows-home-server
  9. 2 points
    Jaga

    Recommended SSD setting?

    Even 60 C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do. I wouldn't let it go over 70, however - Samsung, as an example, rates many of their SSDs between 0 and 70 C as far as environmental conditions go. As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers may not be as robust.
  10. 2 points
    With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with DrivePool for others to benefit from.
    There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up DrivePool threw up error messages and the GUI for it wouldn't open. I found the culprit - somehow the DrivePool service was unable to start, even though all its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available.
    "Okay," I thought, and did an install repair on StableBit DrivePool through the Control Panel. Well, that didn't seem to work either - the service just flat-out refused to start. So at that point I assumed something in the software was corrupted, and decided to 1) uninstall DrivePool, 2) bounce the server again, 3) run a cleaning utility, and 4) re-install. I did just that, and DrivePool installed to the same location without complaint.
    After starting the DrivePool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available the scanner and transcoders kicked off like nothing had happened.
    Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment. That's my positive story for the day, and why I continue to recommend StableBit products.
  11. 2 points
    Jose M Filion

    I/O Error

    Just wanted to give an update for those who have problems with Xfinity's new 1 Gb line - I basically had them come out, showed them how the line was going in and out with PingPlotter, and they rewired everything and changed out the modem. Once they did that, everything has stabilized and been working great - thank you for all your help, guys! Long live StableBit Drive! lol
  12. 2 points
    The Disk Space Equalizer plug-in comes to mind. https://stablebit.com/DrivePool/Plugins
  13. 2 points
    Mostly just ask.
  14. 2 points
    Also, you may want to check out the newest beta. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3204_BETA.exe
  15. 2 points
    Okay, good news everyone. Alex was able to reproduce this issue, and we may have a fix. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3198_BETA.exe
  16. 2 points
    The import/export feature would be nice. I guess right-clicking on the folder and 7zip'ing it is the definitive solution for now, until an automated process evolves. Regarding Christopher's answer that it seems to be an isolated incident, I'm wondering what it is about our particular systems that is causing this purge. I have it running on both W7 and W10 and it purges on both. Both OSs are clean installs. Both run the same EVO500...alongside a WD spinner. Both are Dell. It seems to me that the purge is triggered by some integral part of the software once it's updated, like an auto-purge feature. I'll be honest, I think most people are too lazy to sign up and post the issue, which makes it appear to be an isolated incident, but I believe this is happening more often than we think. I'm on a lot of forums, and it's always the same people that help developers address bugs by reporting them. Unless it's a functional problem, it goes unreported. All of you...know how lazy people are. With that said, I like the idea of an integral backup and restore of the settings.
  17. 2 points
    As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193 StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives as well. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe, currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
  18. 2 points
    I used Z once, only to find that a printer with some media card slot wanted it for itself or would not print at all. Same for some BlackBerry devices claiming Z. So yeah, high up, but not Y and Z. I use P, Q and R.
  19. 2 points
    You could do it with a combination of a VPN, DrivePool pool(s), and CloudDrive using file share(s). Here's how I think it could work:
    - The VPN connects all computers on the same local net.
    - Each computer has a Pool to hold data, and the Pool drive is shared so the local net can access it.
    - CloudDrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool.
    - Each local Pool can have duplication enabled, ensuring each local CloudDrive folder is duplicated locally X times.
    - The file shares in CloudDrive are added to a new DrivePool Pool, essentially combining all of the remote computer storage you provisioned into one large volume.
    Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with DrivePool installed and a single copy of CloudDrive to pull it off. Essentially wide-area storage.
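    If anyone tries this, the per-machine share step is easy to script. A rough sketch, assuming each machine's local pool is mounted as P:\ and the account name is a placeholder for whatever your VPN'd machines use:
        # Run elevated on each machine that hosts a local pool (P:\ is an assumption -- use your pool's letter)
        New-SmbShare -Name 'Pool' -Path 'P:\' -FullAccess 'YOURDOMAIN\YourUser'
        # On the machine running CloudDrive, add a File Share provider pointing at each \\machine\Pool share,
        # then add the resulting CloudDrive disks to a new DrivePool pool.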
  20. 2 points
    "dpcmd remeasure-pool x:"
  21. 2 points
    "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
  22. 2 points
    I'm using Windows Server 2016 Datacenter (GUI version, newest updates) on a dual-socket system in combination with CloudDrive (newest version). The only problem I had was connecting to the cloud service with Internet Explorer; using a 3rd-party browser solved this. But I'm always using ReFS instead of NTFS...
  23. 2 points
    wolfhammer

    not authorized, must reauthorize

    I need to do this daily. Is there a way to auto-authorize? Otherwise I can't really use this app.
  24. 2 points
    Surface scans are disabled for CloudDrive disks by default, but file system scans are not (as they can be helpful). You can disable this per disk, in the "Disk Settings" option. As for the length, that depends on the disk. And no, there isn't really a way to speed this up.
  25. 1 point
    Great, thanks. Fixed it for me.
  26. 1 point
    RFOneWatt

    Extension of a Plex Drive?

    Did you get this sorted? Seems to me you did everything correctly. So, to be clear - you had a standalone 8TB drive that was getting full. You bought a new 12TB drive. You downloaded and installed DrivePool. You created a brand new Pool consisting of your old 8TB drive and the new 12TB drive, giving you a new virtual drive, G:. Because G: is considered a new drive (and since this is a new pool, it's empty), you are going to want to MOVE all of your files from E: to G:. That's all you should have to do. In the future when you add drives to the pool you won't have to do anything, and you should simply see the new free space from the new drive. ~RF
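    If the bulk move is the part you're dreading, robocopy handles it well and can resume if interrupted. A sketch, assuming E: is the old standalone drive and G: is the new pool drive; review the switches (and do a /L dry run) before letting it loose:
        # Move everything from the old drive into the pool, keeping data, attributes and timestamps.
        # /MOVE deletes source files after a successful copy -- add /L first to preview without copying.
        robocopy E:\ G:\ /E /MOVE /COPY:DAT /DCOPY:T /R:2 /W:5 /LOG:C:\robocopy-move.log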
  27. 1 point
    OK, I came up with a solution to my own problem which will likely be the best of both worlds: set up my Cloud Pool with pool duplication, and also set up my HDD Pool with pool duplication. Then use both of those pools in the top-level storage pool with no duplication, as normal.
  28. 1 point
    PocketDemon

    Different size hdd's

    Oh, certainly... Which is why I'd written on the 22nd of March in the thread that - "Obviously the downside to what we're suggesting though is voiding the warranty by shucking them..." So, it was about agreeing with you that going for NAS/Enterprise drives is a good thing; esp as you start to increase the drive count - BUT that this didn't contradict what had been suggested earlier about shucking the WD externals IF purchase price trumped warranty.
  29. 1 point
    TeleFragger

    My Rackmount Server

    OK, so I am redoing everything and shuffling stuff around. What has stayed is...
    Network - this is a beauty; I've got $75 into it:
    - HP ProCurve 6400CL - 6-port CX4 10Gb switch
    - 5x ConnectX-1 CX4 10Gb NICs, firmware forced to 2.9.1000
    - ConnectX-2 CX4 10Gb NIC running Mellanox custom forced 2.10.xxxx firmware!!!!! Just got it and am toying with it... I get that people say CX4 ports are old and dead, but for $75 to be fully up, for me, is just the right price...
    Then the hardware/software:
    - Case: Antec 900
    - OS: Server 2019 Standard (the Essentials role is gone... I'm sad)
    - CPU: Intel i5-6600K
    - MoBo: Gigabyte GA-Z170XP-SLI
    - RAM: 4x 8GB DDR4
    - GFX: Onboard Intel HD 530
    - PSU: Corsair HX 620W
    - OS Drive: 128GB Samsung SSD
    - Storage Controllers: 2x HP H220 SAS controllers flashed to current firmware
    - Hot Swap Cages: ICY DOCK 6x 2.5" SATA/SAS HDD/SSD hot swap
    - Storage Pool 1: SSD
    - Storage Pool 2: SATA with 500GB SSD cache
    The pics are garbage and I haven't moved it into my utility room...
  30. 1 point
    This information is pulled from Windows' Performance counters. So it may not have been working properly temporarily. Worst case, you can reset them: http://wiki.covecube.com/StableBit_DrivePool_Q2150495
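    For reference, the usual way to reset Windows' performance counters (which I believe is what that wiki article walks through) is to rebuild them with the built-in lodctr tool. A minimal sketch from an elevated prompt; note this rebuilds all counters system-wide, not just DrivePool's, so only use it if the counters really are broken:
        # Rebuild the performance counter settings from the system backup store (elevated prompt)
        lodctr /R
        # A reboot, or restarting the StableBit DrivePool service, may be needed before the UI picks the counters up again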
  31. 1 point
    srcrist

    Google Drive Existing Files

    Unfortunately, because of the way CloudDrive operates, you'll have to download the data and reupload it again to use CloudDrive. CloudDrive is a block-based solution that creates an actual drive image, chops it up into chunks, and stores those chunks on your cloud provider. CloudDrive's data is not accessible directly from the provider--by design. The reverse of this is that CloudDrive also cannot access data that you already have on your provider, because it isn't stored in the format that CloudDrive requires. There are other solutions, including Google's own Google File Stream application, that can mount your cloud storage and make it directly accessible as a drive on your PC. Other similar tools are rClone, ocaml-fuse, NetDrive, etc. There are pros and cons to both approaches. I'll list some below to help you make an informed decision.
    Block-based Pros:
    - A block-based solution creates a *real* drive (as far as Windows is concerned). It can be partitioned like a physical drive, you can use file-system tools like chkdsk to preserve the data integrity, and literally any program that can access any other drive in your PC works natively without any hiccups. You can even use tools like DrivePool or Storage Spaces to combine multiple CloudDrive drives or volumes into one larger pool.
    - A block-based solution enables end-to-end encryption. An encrypted drive is completely obfuscated both from your provider and anyone who might access your data by hacking your provider's services. Not even the number of files, let alone the file names, is visible unless the drive is mounted. CloudDrive has built-in encryption that encrypts the data before it is even written to your local disk.
    - A block-based solution also enables more sophisticated sorts of data manipulation. Consider the ability to access parts of files without first downloading the entire file. That sort of thing. The ability to cache sections of data locally also falls under this category, which can greatly reduce API calls to your provider.
    Block-based Cons:
    - Data is obfuscated even if unencrypted, and unable to be accessed directly from the provider. We already discussed this above, but it's definitely one of the negatives--depending on your use case. The only thing that you'll see on your provider is thousands of chunks of a few dozen megabytes in size. The drive is inaccessible in any way unless mounted by the drivers that decrypt the data and provide it to the operating system.
    - You'll be tethered to CloudDrive for as long as you keep the data on the cloud. Moving the data outside of that ecosystem would require it to again be downloaded and reuploaded in its native format.
    Hope that helps.
  32. 1 point
    Christopher (Drashna)

    Few requests

    Actually, yes. That's part of what we want to do with StableBit Cloud. Good news! The latest beta has a concurrency limit that you can configure. It defaults to 2 drives right now, so this should help you out greatly! http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.1.1.1057_x64_BETA.exe
  33. 1 point
    Jaga

    PCloud, Spideroak, Tesorit work around?

    Do any of the providers support FTP? CloudDrive has an FTP layer built in that should work in that case. Can you mount any of those providers' spaces as a local drive (w/ letter)? If so, you could use the Local Disk feature to place an encrypted volume on them and manage it with CloudDrive. And - Christopher/Alex are continually evaluating new providers for CloudDrive. Never know when they'll add support for more. You can use the Contact link on that page to request additional ones.
  34. 1 point
    It may have to do with other balancers. If you have the Volume Equalization and Duplication Space Optimizer balancers active, they may need to be de-activated _or_ you need to increase the priority of the Disk Space Equalizer plug-in such that it ranks higher than the other two (but if you have StableBit Scanner, that one should always be #1 IMHO). I have not actually used that plug-in myself though. Edit: Did you activate re-measure though?
  35. 1 point
    I think that this has been asked for before. But just in case: https://stablebit.com/Admin/IssueAnalysis/27889 And extending the "Disk Usage Limiter" balancer would be an easy option, I think. Also, are you experienced with C# programming? If not, no worries. If so, let me know, as there is source for building balancer plugins.
  36. 1 point
    DotJun

    New Update Problem

    So I installed the new update for scanner and now the fields for bay location and custom name have disappeared from the disk settings menu. Please tell me I won't have to pull every drive out to find out where each disk is located.
  37. 1 point
    So then, it's the StableBit DrivePool metadata that is duplicated. This ... is standard, and it's hard-coded to x3 duplication, actually. We don't store a lot of information in this folder, but reparse point info IS stored here (junctions, symlinks, etc). So if you're using a lot of them on the pool, then that would be why you're seeing this behavior. And in this case, you don't want to mess with this folder.
  38. 1 point
    This is a Windows issue, actually. It auto-assigns the drive letter, and can bump the letter like this. The best recommendation here, unfortunately, is to assign a letter that is much further along in the alphabet (such as "P:\" for Pool, or "S:\" for Storage), as this should prevent it from being reassigned.
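    If you'd rather do that from PowerShell than Disk Management, something like the following works for an ordinary volume (the letters are examples; I'm not certain this cmdlet sees DrivePool's virtual drive, so fall back to Disk Management for the pool drive if it doesn't):
        # Move the volume currently on E: to P: so Windows won't hand its old letter to something else
        Set-Partition -DriveLetter E -NewDriveLetter P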
  39. 1 point
  40. 1 point
    Thank you Chris and Jaga. Ended up just copying everything over so that everything would be in 1 pool. Forced me to do some much needed digital housekeeping.
  41. 1 point
    Jaga

    Switch from DriveBender

    A - Yes, it is - any OS that can read the file system you formatted the drive with (assuming NTFS in this case) can see all its files. The files are under a hidden "PoolPart..." folder on each drive, fully readable by Windows. B - Yes, it will work with BitLocker. This is a quote directly from Christopher on these forums: "You cannot encrypt the DrivePool drive, but you CAN encrypt the disks in the pool." (Link)
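    If you want to see that for yourself before committing to the switch, the hidden PoolPart folders are just regular NTFS folders. A quick sketch (D: is an example pooled drive):
        # Show the hidden PoolPart.* folder on a pooled drive and peek at a few of its files
        Get-ChildItem -Path 'D:\' -Directory -Hidden -Filter 'PoolPart.*'
        Get-ChildItem -Path 'D:\PoolPart.*' -Recurse -Force -File | Select-Object -First 20 FullName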
  42. 1 point
    If the sector has no data in it currently, a secure erase of all empty sectors on the drive would work. Some tools available to do that can be found here: https://www.raymond.cc/blog/make-your-recoverable-datas-unrecoverable/ If it has data in it, simply finding the file via a block map tool and deleting it (removing it from the MFT) would mark the sector empty, allowing you to then do a secure erase which would force a write to it.
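    One built-in option for the "secure erase of all empty sectors" step is Windows' own cipher tool, which overwrites a volume's free space in place. A sketch (assumes the sector in question is on D: and is already unallocated; this can take hours on a large drive):
        # Overwrite all free space on D: -- three passes (zeros, ones, random data)
        cipher /w:D: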
  43. 1 point
    As far as heat goes, I find watercooling to be an effective solution... but not like most people use it. I don't move it from the components into water, then directly into the air with radiators. Instead I have a ~30 gallon denatured water reservoir (basically an insulated camping cooler) that I use two submerged pumps with: one moves water through the CPU/GPU/chipset coolers on the workstation, the other cycles water through a hydroponics chiller which vents the heat through heating/cooling duct pipes to push it straight out the window at night.
    The practical effect of this is that very few BTUs of heat make it into the room during the day in the summer. In the winter I disconnect the ducting from the window and have it dump into the room - providing extra and practical heating to help save on power use. The only drawback is the initial purchase cost - the 1/2 HP hydroponics chiller and industrial fan to move air were around $700, and the watercooling elements around another $300. But I've been using the same setup for around 4 years now, with only water changes for maintenance, and it's delightfully cool in the summer and warm in the winter. I don't have to run A/C or heating in that room if I don't want to, and it's one of the most exposed in the house here.
    Interestingly, your GTX puts out the most heat, on average. I can run my CPU with Prime95 for days on end (24/7) and not begin to approach the amount of heat my 980 Ti or 1060 dumps into the water in even one day. The drives by comparison (unless fully loaded 100% of the time) don't use that much power compared to even your GTX at idle. A dozen spinners at ~4 watts each (average of idle vs loaded) barely take 50 watts. The specs for your video card are 71 watts idle, 288 under full load.
    To sum up: consider controlling the heat by using watercooling to keep it in a large reservoir during the day, then push it out at night using either a chiller or radiators. Simply using air cooling means that no matter what mechanism you use to remove the heat, you're just dumping it back into the room immediately.
  44. 1 point
    Good points - I think I have my answers now. I did some more reading as well, and I have enabled file integrity on each of the drives within my pool. Any drives I add from now on I will format with file integrity enabled from the start. Thanks for the walk-through!! Best wishes, Scott
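    For anyone else wanting to do the same, and assuming these are ReFS volumes (integrity streams are an ReFS feature), the relevant cmdlets live in the Storage module. A quick sketch with example drive letters and paths:
        # Check or enable integrity streams on existing files/folders of an ReFS volume
        Get-FileIntegrity -FileName 'R:\Media\movie.mkv'
        Set-FileIntegrity -FileName 'R:\Media' -Enable $true
        # Or format a newly added pool member with integrity streams on from the start
        Format-Volume -DriveLetter S -FileSystem ReFS -SetIntegrityStreams $true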
  45. 1 point
    If that's the case, then yeah, it definitely sounds like it was a timing issue.
  46. 1 point
    Ah. Mount points don't really matter to DrivePool, or to Windows really. So unless you're intending to mount the drive on the pool's path, there should be no issue.
  47. 1 point
    CloudDrive essentially cannot hit the API ban because of its implementation of exponential back-off. rClone used to hit the bans because it did not respect the back-off requests Google's servers were sending, though I think they've solved that problem at this point. In any case, don't worry about that with CloudDrive. The only "ban" you'll need to worry about is the ~750gb/day upload threshold.
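    For anyone wondering what exponential back-off actually means in practice: after each throttled or failed request you wait progressively longer before retrying, which keeps a client from hammering the API into a ban. A generic sketch of the pattern (purely illustrative -- this is not CloudDrive's actual code, and the URL is a placeholder):
        # Generic exponential back-off retry loop (illustration only; placeholder URL)
        $maxAttempts = 5
        for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
            try {
                Invoke-WebRequest -Uri 'https://example.com/api/endpoint' -ErrorAction Stop
                break                                             # success -- stop retrying
            }
            catch {
                if ($attempt -eq $maxAttempts) { throw }          # give up after the last attempt
                Start-Sleep -Seconds ([math]::Pow(2, $attempt))   # wait 2, 4, 8, 16... seconds
            }
        }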
  48. 1 point
    DP has offered hierarchical pools since recently, version 2.2.0.744 or so. If you're on an older version you'd need to update. I'm not sure if there has been a stable release with this feature already. I am running .746 BETA (an early adopter, exactly for this feature).
  49. 1 point
    jmone

    REFS in pool

    ...also, there is a good table here that shows which version of ReFS is supported by which Windows version: https://en.wikipedia.org/wiki/ReFS
  50. 1 point
    Pancakes

    I want to encrypt my drives

    I never updated this thread, but I encrypted my drives one by one, with no availability impact. I now have my entire pool encrypted and it works just as normal. Fantastic!
