Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 01/25/19 in all areas

  1. 1 point
    In principle, yes. Not sure how to guarantee that they will stay there due to rebalancing, unless you use file placement rules.
  2. 1 point
    No. If, and only if, the entire Pool had a fixed duplication factor then it *could* be done. E.g., 1TB of free space means you can save 0.5TB of net data with x2 duplication, or .33TB with x3 duplication, etc. However, as soon as you mix duplication factors, well, it really depends on where the data lands, doesn't it? So I guess they chose to only show actual free space without taking duplication into account. Makes sense to me. Personally, I over-provision all my Pools (a whopping two in total ;D) such that I can always evacuate the largest HDD. Peace of mind and continuity rules in my book.
  3. 1 point
    srcrist

    CloudDrive: Pool into DP or separate?

    Once you've created the nested pool, you'll need to move all of the existing data into the poolpart hidden folder within the outer poolpart hidden folder before it will be accessible from the pool. It's the same process that you need to complete if you simply added a drive to a non-nested pool that already had data on it. If you want the data to be accessible within the pool, you'll have to move the data into the pool structure. Right now you should have drives with a hidden poolpart folder and all of the other data on the drive within your subpool. You need to take all of that other data and simply move it within the hidden folder. See this older thread for a similar situation: https://community.covecube.com/index.php?/topic/4040-data-now-showing-in-hierarchical-pool/&sortby=date
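    If it helps to picture that move, here is a minimal sketch (hypothetical drive letter and PoolPart names - yours will differ; stop anything writing to the pool first, and since the move stays on the same volume it only rewrites metadata, so it is quick):

      # Hypothetical paths - substitute your own drive letter and PoolPart GUIDs.
      $nested = 'D:\PoolPart.xxxxx\PoolPart.yyyyy'   # the outer pool's hidden folder, nested inside the subpool's
      # Move everything at the drive root except the hidden PoolPart folder itself (and system folders).
      Get-ChildItem -Path 'D:\' -Force |
          Where-Object { $_.Name -notlike 'PoolPart.*' -and $_.Name -notin '$RECYCLE.BIN','System Volume Information' } |
          Move-Item -Destination $nested

    Afterwards, remeasure the pool so DrivePool picks up the relocated data.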
  4. 1 point
    To reauthorize, you shouldn't have to look at the providers list. (It's not in that list because you haven't enabled 'experimental providers'. See below.) To reauthorize, you need to click on 'manage drive' as shown below.
  5. 1 point
    gx240

    Black Friday sales?

    Do you typically have Black Friday sales with reduced prices on your software (Stablebit Drivepool in particular)?
  6. 1 point
    RFOneWatt

    Extension of a Plex Drive?

    Did you get this sorted? Seems to me you did everything correctly. So, to be clear: you had a standalone 8TB drive that was getting full. You bought a new 12TB drive. You downloaded and installed DrivePool. You created a brand new Pool consisting of your old 8TB drive and the new 12TB drive, giving you a new virtual drive, G:. Because G: is considered a new drive, you are going to want to MOVE all of your files from E: to G:. That's all you should have to do. In the future, when you add drives to the pool, you won't have to do anything and you should simply see the new free space from the new drive. Since this is a new pool, it's empty. ~RF
  7. 1 point
    Same here. Any update from the developers? This issue was opened a month ago and nothing... Not very good considering this is paid-for software.
  8. 1 point
    Umfriend

    My Rackmount Server

    Yeah, WS2019 missing the Essentials role sucks. I'm running WSE2016 and have no way forward, so this will be what I am running until the end of days, probably... But wow, nice setup! With the HBA card, can you get the HDDs to spin down? I tried with my Dell H310 (some 9210 variant, IIRC) but no luck.
  9. 1 point
    There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider. It is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain. As far as reinstalling Windows or changing to a different computer, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force mount the drive, and CloudDrive will read the data, but you may lose any data that was sitting in your cache waiting to be uploaded during the failure. Note that CloudDrive does not upload user-accessible data to your provider by design. Other tools like rClone would be required to accomplish that. My general advice, in any case, would be to enable encryption. There is effectively no added overhead from using it, and the peace of mind is well worth it.
  10. 1 point
    I believe you need to seed the pool. See Pool Seeding
  11. 1 point
    exterrestris

    Drivepool With Snapraid

    My snapraid.conf is pretty standard - I haven't really changed any of the defaults (so I haven't included them). I choose to keep a copy of the content file on every disk, but that's not strictly necessary.

      # Defines the file to use as parity storage
      # It must NOT be in a data disk
      # Format: "parity FILE [,FILE] ..."
      parity C:\Snapraid\Parity\1\snapraid.parity

      # Defines the files to use as content list
      # You can use multiple specification to store more copies
      # You must have at least one copy for each parity file plus one. Some more don't hurt
      # They can be in the disks used for data, parity or boot,
      # but each file must be in a different disk
      # Format: "content FILE"
      content C:\Snapraid\Parity\1\snapraid.content
      content C:\Snapraid\Data\1\snapraid.content
      content C:\Snapraid\Data\2\snapraid.content
      content C:\Snapraid\Data\3\snapraid.content
      content C:\Snapraid\Data\4\snapraid.content

      # Defines the data disks to use
      # The name and mount point association is relevant for parity, do not change it
      # WARNING: Adding here your boot C:\ disk is NOT a good idea!
      # SnapRAID is better suited for files that rarely changes!
      # Format: "data DISK_NAME DISK_MOUNT_POINT"
      data d1 C:\Snapraid\Data\1\PoolPart.a5f57749-53fb-4595-9bad-5912c1cfb277
      data d2 C:\Snapraid\Data\2\PoolPart.7d66fe3d-5e5b-4aaf-a261-306e864c34fa
      data d3 C:\Snapraid\Data\3\PoolPart.a081b030-04dc-4eb5-87ba-9fd5f38deb7b
      data d4 C:\Snapraid\Data\4\PoolPart.65ea70d5-2de5-4b78-bd02-f09f32ed4426

      # Excludes hidden files and directories (uncomment to enable).
      #nohidden

      # Defines files and directories to exclude
      # Remember that all the paths are relative at the mount points
      # Format: "exclude FILE"
      # Format: "exclude DIR\"
      # Format: "exclude \PATH\FILE"
      # Format: "exclude \PATH\DIR\"
      exclude *.unrecoverable
      exclude Thumbs.db
      exclude \$RECYCLE.BIN
      exclude \System Volume Information
      exclude \Program Files\
      exclude \Program Files (x86)\
      exclude \Windows\
      exclude \.covefs

    As for DrivePool balancers, yes, turn them all off. The Scanner balancer is useful to keep if you want automatic evacuation of a failing drive, but not essential, and the SSD Optimiser is only necessary if you have a cache drive to use as a landing zone. If you don't use a landing zone, then you can disable automatic balancing, but if you do, then you need it to balance periodically - once a day rather than immediately is best, as you ideally want the SnapRAID sync to happen shortly after the balance completes. I'm not sure what the default behaviour of DrivePool is supposed to be when all balancers are disabled, but I think it does split evenly across the disks.
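    For that nightly sync, a minimal Task Scheduler sketch looks like the following - the snapraid.exe path and the 03:00 run time are assumptions you'd adjust so it lands shortly after your balancing window:

      # Assumed install path and schedule - adjust to your own setup.
      $action  = New-ScheduledTaskAction -Execute 'C:\Snapraid\snapraid.exe' -Argument 'sync' -WorkingDirectory 'C:\Snapraid'
      $trigger = New-ScheduledTaskTrigger -Daily -At 3am
      Register-ScheduledTask -TaskName 'SnapRAID nightly sync' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest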
  12. 1 point
    So when you add a 6TB HDD to that setup, and assuming you have not tinkered with the balancing settings, any _new_ files would indeed be stored on that 6TB HDD. A rebalancing pass, which you can start manually, will fill it up as well. With default settings, DP will try to ensure that each disk has the same amount of free space. It would therefore write to the 6TB first until 4TB is free, then equally to the 6TB and 4TB until both have 3TB free, etc. The 500GB HDD will see action only when the others have 500GB or less available. This is at default settings and without duplication.
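    A toy sketch of that "equalize free space" idea (illustration only, not DrivePool's actual placement code; the free-space figures are hypothetical):

      # Illustration only - not DrivePool's real placement logic.
      $free = @{ 'HDD-6TB' = 6000; 'HDD-4TB' = 4000; 'HDD-500GB' = 500 }   # free space in GB
      # Each new file lands on whichever disk currently has the most free space,
      # so over time the free-space figures converge.
      $target = $free.GetEnumerator() | Sort-Object Value -Descending | Select-Object -First 1
      "Next new file goes to $($target.Key) ($($target.Value) GB free)"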
  13. 1 point
    Umfriend

    moving drives around

    Yes. I have never tried it but DP should not need drive letters. You can also map drives to folders somehow so that you can still easily explore them. Not sure how that works but there are threads on this forum.
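    For what it's worth, a minimal sketch of the folder-mount approach (the disk/partition numbers and mount folder here are assumptions - check Get-Partition on your own system first):

      # Check your disk/partition numbers first with: Get-Partition
      New-Item -ItemType Directory -Path 'C:\Mounts\Disk3' -Force | Out-Null
      # Expose the partition at a folder instead of a drive letter
      Add-PartitionAccessPath -DiskNumber 3 -PartitionNumber 2 -AccessPath 'C:\Mounts\Disk3'
      # Optionally drop the old drive letter afterwards:
      # Remove-PartitionAccessPath -DiskNumber 3 -PartitionNumber 2 -AccessPath 'H:\'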
  14. 1 point
    Spider99

    Samsung 9xx NVMe support

    It depends on the OS. Win10 will work, but say 2012r2 will not - or that's how it works for me with my 950 Pros - unless 960s work differently.
  15. 1 point
    Umfriend

    moving drives around

    Do you have Scanner? And yeah, even though I have a far smaller Pool (6 HDD in a 9 HDD setup), I label them with a sticker.
  16. 1 point
    Umfriend

    moving drives around

    TL;DR but yes, DP will recognise the Pool. You could disconnect them all and plug them in on another machine and DP would see the Pool again. One small caveat is that if you use plug-ins that are not installed on the new machine then you may have some unwanted behaviour. Other than that, it should work.
  17. 1 point
    Thank you everyone who has commented on this thread - with your help I was able to get everything working again! Thanks for being patient !
  18. 1 point
    I have a 15TB CloudDrive, backed by a gsuite account, and performance has gotten significantly worse than it used to be. I'm on a 300/300 fiber connection, and using a 100GB expandable cache on an SSD. CloudDrive generally reports a download speed of 60-100mbps. Even 720p content at around 4000kbps buffers constantly, which doesn't seem possible if it's downloading at 60mbps. 1080p 6000kbps content is unwatchable. When I look at the CloudDrive in perfmon, it generally shows 100% activity. The SSD activity, CPU, and RAM usage are all very low.

    I have prefetch enabled, but it doesn't seem to work consistently. It regularly shows me a "hit rate" in the 30-60% range, even when the only thing happening is a single stream being watched (which I would think would give a 100% hit rate). Does the prefetcher prefetch files, or blocks? If it is prefetching blocks, is fragmentation an issue on a CloudDrive that could be confusing the prefetcher (e.g., it's downloading the next block, which doesn't actually contain the file I'm watching, wasting bandwidth and causing a cache miss)? Defragging this would seem to be super slow, since it would effectively have to download and upload every block to do it (perhaps multiple times). I've tried all kinds of different prefetch settings, different block sizes, different minimum read sizes; nothing seems to work. Any ideas?
  19. 1 point
    Yes, that's definitely a false positive. It's just some of the troubleshooting stuff for the UI. It's nothing harmful. And if you check, the file should be digitally signed. A good indicator that it's legit.
  20. 1 point
    If you'd like to see the genesis of this script, check out my original thread here. I finally got my PowerShell script running, so I thought I'd post it here in case anyone else might find it helpful.

    SYNOPSIS: Script will move files from one DrivePool to another according to FIFO policy.
    REQUIRED INFRASTRUCTURE: The expected layout is a DrivePool consisting of two DrivePools, one magnetic and one solid state.

    The main variables are pretty obviously documented. I added the file archive limit for people like me who also run SnapRAID Helper, so that the script doesn't trip the 'deleted' file limit (I'm assuming moves would trip it, but I didn't actually test it). Warning: I've obviously only tested this on my system. Please test this extensively on your system after you have ensured good backups. I certainly don't expect anything to go wrong, but that doesn't mean that it can't. The code is full of on-screen debugging output. I'm not a great coder, so if I've done anything wrong, please let me know. I've posted the code here so that you can C&P it into a script of your own, since Windows can be annoying about downloaded scripts. Please let me know if you have any questions.

      Set-StrictMode -Version 1

      # Script drivePoolMoves.ps1
      <#
      .SYNOPSIS
          Script will move files from one DrivePool to another according to FIFO policy
      .DESCRIPTION
          The script can be set to run as often as desired. The expected layout is a DrivePool
          consisting of two DrivePools, one magnetic and one solid state.
      .NOTES
          Author : fly (Zac)
      #>

      # Number of files to move before rechecking SSD space
      $moveCount = 1
      # Path to PoolPart folder on magnetic DrivePool drive
      $archiveDrive = "E:\PoolPart.xxxxx\Shares\"
      # Path to PoolPart folder on SSD DrivePool drive
      $ssdSearchPath = "F:\PoolPart.xxxxx\Shares\"
      # Minimum SSD drive use percent. Below this amount, stop archiving files.
      $ssdMinUsedPercent = 50
      # Maximum SSD drive use percent. Above this amount, start archiving files.
      $ssdMaxUsedPercent = 80
      # Do not move more than this many files
      $fileArchiveLimit = 200
      # Exclude these file/folder names
      [System.Collections.ArrayList]$excludeList = @('*.covefs*', '*ANYTHING.YOU.WANT*')

      # Other stuff
      $ssdDriveLetter = ""
      $global:ssdCurrentUsedPercent = 0
      $fileNames = @()
      $global:fileCount = 0
      $errors = @()

      Write-Output "Starting script..."

      function CheckSSDAbove($percent) {
          $ssdDriveLetter = $ssdSearchPath.Substring(0, 2)

          Get-WmiObject Win32_Volume | Where-object {$ssdDriveLetter -contains $_.DriveLetter} | ForEach {
              $global:ssdUsedPercent = (($_.Capacity - $_.FreeSpace) * 100) / $_.Capacity
              $global:ssdUsedPercent = [math]::Round($ssdUsedPercent, 2)
          }

          If ($ssdUsedPercent -ge $percent) {
              Return $true
          } Else {
              Return $false
          }
      }

      function MoveOldestFiles {
          $fileNames = Get-ChildItem -Path $ssdSearchPath -Recurse -File -Exclude $excludeList | Sort-Object CreationTime | Select-Object -First $moveCount

          If (!$fileNames) {
              Write-Output "No files found to archive!"
              Exit
          }

          ForEach ($fileName in $fileNames) {
              Write-Output "Moving from: "
              Write-Output $fileName.FullName

              $destFilePath = $fileName.FullName.Replace($ssdSearchPath, $archiveDrive)

              Write-Output "Moving to: "
              Write-Output $destFilePath

              New-Item -ItemType File -Path $destFilePath -Force
              Move-Item -Path $fileName.FullName -Destination $destFilePath -Force -ErrorAction SilentlyContinue -ErrorVariable errors

              If ($errors) {
                  ForEach ($error in $errors) {
                      if ($error.Exception -ne $null) {
                          Write-Host -ForegroundColor Red "Exception: $($error.Exception)"
                      }
                      Write-Host -ForegroundColor Red "Error: An error occurred during move operation."
                      Remove-Item -Path $destFilePath -Force
                      $excludeList.Add("*$($fileName.Name)")
                  }
              } Else {
                  Write-Output "Move complete."

                  # Increment file count, then check if max is hit
                  $global:fileCount++
                  If ($global:fileCount -ge $fileArchiveLimit) {
                      Write-Output "Archive max file moves limit reached."
                      Write-Output "Done."
                      Exit
                  } Else {
                      Write-Output "That was file number: $global:fileCount"
                  }
              }

              Write-Output "`n"
          }
      }

      If (CheckSSDAbove($ssdMaxUsedPercent)) {
          While (CheckSSDAbove($ssdMinUsedPercent)) {
              Write-Output "---------------------------------------"
              Write-Output "SSD is at $global:ssdUsedPercent%."
              Write-Output "Max is $ssdMaxUsedPercent%."
              Write-Output "Archiving files."
              MoveOldestFiles
              Write-Output "---------------------------------------"
          }
      } Else {
          Write-Output "Drive not above max used."
      }

      Write-Output "Done."
      Exit
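    One usage note if you do save the script as a downloaded file rather than pasting it: Windows may block it, and standard PowerShell housekeeping (generic, not specific to this script) takes care of that:

      # Clear the 'downloaded from the internet' block on the saved file
      Unblock-File -Path .\drivePoolMoves.ps1
      # Allow locally created/unblocked scripts to run for your user account
      Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned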
  21. 1 point
    Get a Norco box for the drives, and a LSI Host Bus Adapter with External SAS connectors. Then plug the internal SAS connector into the backplane of the Norco box, and the external connector to the external connector of the HBA.
  22. 1 point
    Christopher (Drashna)

    Disk Activity

    Unfortunately, it may be. There is a setting that we have enabled by default that may be causing this behavior. Specifically, the BitLocker setting. This setting queries the system for data, which creates a WMI query, which causes disk activity. That said, you can disable this: http://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings And the setting is "BitLocker_CloudPartUnlockDetect", which is actually used in the example. Set the "override" value to "false", save the file and reboot the system. That should fix the issue, hopefully.
  23. 1 point
    I've had it happen with normal reboots as well, just not as often as with crashes. It just depends on the timing. Imagine what happens on a reboot: Windows forcefully shuts down services, including the Stablebit Scanner service. So if this service gets shut down in the timeframe where it is writing new DiskId files, the files can end up corrupted, and after a reboot the service creates new DiskId files, meaning all previous scan status is lost. Now the DiskId files are not written literally every second anymore (which increased the risk that the service gets killed at the time of writing significantly) but instead every 20-40 minutes (I don't know the exact interval now). That's a reduction by a factor of 1200 to 2400, so the risk that you reboot at the exact time the files are written should basically be zero now.
  24. 1 point
    I think you mean mbit :-P Yes, it all depends on the response time you have. Speed is not the issue; it's my response time to Google's servers. You're just lucky to be closer. Plus I've got upload verification on, which also cuts upload speeds. I get around 2500-2800 ms response time per thread and then instant download, so fewer calls and bigger downloads would do wonders for me.
  25. 1 point
    srcrist

    Request: Increased block size

    Again, other providers *can* still use larger chunks. Please see the changelog: This was because of issue 24914, documented here. Again, this isn't really correct. The problem, as documented above, is that larger chunks result in more retrieval calls to particular chunks, thus triggering Google's download quota limitations. That is the problem that I could not remember. It was not because of concerns about speed, and it was not a general problem with all providers. EDIT: It looks like the issue with Google Drive might be resolved with an increase in the partial read size, as you discussed in this post, but the code change request for that is still incomplete. So this prerequisite still isn't met. Maybe something to follow up with Christopher and Alex about.
  26. 1 point
    To keep everyone up-to-date: With the help of Alex, I've identified the root cause of the issue. The LastSeen variable inside the DiskId files is changed literally every second. This means that the DiskId files are constantly being rewritten, and in the event of a crash there is a high chance that it happens while a new file is being written, so the DiskId files get corrupted. The LastSmartUpdate variable inside the SmartPersistentInfo files is updated at a more reasonable one-minute interval, so I'm hoping it is a quick fix to simply adjust the write interval of the LastSeen variable. Besides changing the interval, there would have to be backup DiskId files to completely eliminate the issue: instead of creating new DiskId files when corrupt files have been detected, it should copy over an older backup of the DiskId file(s) in question. Or the LastSeen value gets removed from the DiskId file entirely and moved somewhere else, to avoid changing the DiskId files at all.
  27. 1 point
    Yes, there was something wrong in the program. They gave me a newer updated Beta that fixed this issue. http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.3.963_x64_BETA.exe
  28. 1 point
    As noted before, I'm using a RAID controller, not an HBA, so you'd need to explore the f/w, drivers & s/w for your card. That said, a quick Google search & there's this - - however, as far as I can see, 4&83E10FE&0&00E0 is not necessarily a fixed device ID, so you'd need to look in the registry for the equivalent.
  29. 1 point
    I'm not sure? But the number of threads is set by our program. Mostly, it's just the number of open/active connections. Also, given how uploading is handled, the upload threshold may help prevent this from being an issue. But you can reduce the upload threads if you want. Parallel connections make a difference for stuff like prefetching, or if you have a lot of random access on the drives... But otherwise, they do have the daily upload limit, and they will throttle for other reasons (e.g., DoS/DDoS protection).
  30. 1 point
    srcrist

    Warning from GDrive (Plex)

    To my knowledge, Google does not throttle bandwidth at all, no. But they do have the upload limit of 750GB/day, which means that a large number of upload threads is relatively pointless if you're constantly uploading large amounts of data. It's pretty easy to hit 75mbps or so with only 2 or 3 upload threads, and anything more than that will exceed Google's upload limit anyway. If you *know* that you're uploading less than 750GB that day anyway, though, you could theoretically get several hundred mbps performance out of 10 threads. So it's sort of situational. Many of us do use servers with 1gbps synchronous pipes, in any case, so there is a performance benefit to more threads...at least in the short term. But, ultimately, I'm mostly just interested in understanding the technical details from Christopher so that I can experiment and tweak. I just feel like I have a fundamental misunderstanding of how the API limits work.
  31. 1 point
    For a homelab use, I can't really see reading and writing affecting the SSDs that much. I have an SSD that is being used for firewall/IPS logging and it's been in use every day for the past few years. No SMART errors and expected life is still at 99%. I can't really see more usage in a homelab than that. In an enterprise environment, sure, lots of big databases and constant access/changes/etc. I have a spare 500GB SSD I will be using for the CloudDrive and downloader cache. Thanks for the responses again everyone! -MandalorePatriot
  32. 1 point
    srcrist

    Warning from GDrive (Plex)

    Out of curiosity, does Google set different limits for the upload and download threads in the API? I've always assumed that since I see throttling around 12-15 threads in one direction, that the total number of threads in both directions needed to be less than that. Are you saying it should be fine with 10 in each direction even though 20 in one direction would get throttled?
  33. 1 point
    PocketDemon

    Different size hdd's

    Along with balancing personal budget, price/TB & warranty (if that matters to you) & whatnot... ...it's also about how many HDDs you can physically connect up vs how your data's growing - since many people get by with just a small SSD in a laptop, whilst others (like myself) are 'data-whores' with many 10s or 100s of TBs of random stuff. As to looking at NAS storage, part of the reason why people look at shucking the higher-capacity WD external drives is that they all use WD/HGST helium-filled 5400rpm drives - which are effectively equivalent to the WD Reds... (some of the smaller capacity ones switched to using WD Greens/Blues - I believe only <=4TB, but I don't know that for certain) ...though they 'may' alternatively be some version of a WD Gold or HGST HC500 or...??? ...all of which are designed for NAS - but buying the external drives is cheaper.
  34. 1 point
    It won't really limit your ability to upload larger amounts of data, it just throttles writes to the drive when the cache drive fills up. So if you have 150GB of local disk space on the cache drive, but you copy 200GB of data to it, the first roughly 145GB of data will copy at essentially full speed, as if you're just copying from one local drive to another, and then it will throttle the drive writes so that the last 55GB of data will slowly copy to the CloudDrive drive as chunks are uploaded from your local cache to the cloud provider. Long story short: it isn't a problem unless high speeds are a concern. As long as you're fine copying data at roughly the speed of your upload, it will work fine no matter how much data you're writing to the CloudDrive drive.
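    To put rough numbers on that (illustrative arithmetic only, using the example figures above and a hypothetical 20 MB/s of sustained upload bandwidth):

      # Illustrative arithmetic only - upload speed is an assumption.
      $cacheGB   = 150    # size of the local cache drive
      $copyGB    = 200    # data being written to the CloudDrive drive
      $uploadMBs = 20     # assumed sustained upload speed, MB/s
      $fastGB      = [math]::Min($copyGB, $cacheGB - 5)   # ~145 GB lands at local-disk speed (a little headroom stays free)
      $throttledGB = $copyGB - $fastGB                    # remaining ~55 GB is throttled to upload speed
      $minutes     = [math]::Round($throttledGB * 1024 / $uploadMBs / 60)
      "$fastGB GB copies at local speed; the last $throttledGB GB takes roughly $minutes minutes to trickle out"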
  35. 1 point
    I'm truly sorry, as it clearly can be done. I won't delete the previous posts, but I will strike through everything that's incorrect so as to not to confuse anyone.
  36. 1 point
    Now, the manual for the HBA you were talking about states "Minimum airflow: 200 linear feet per minute at 55 °C inlet temperature"... ...which is the same as my RAID card. Beyond that, all I can say is that, even with water cooling the CPU & GPU (& an external rad) so most of the heat's already taken out of the case/ the case fans are primarily cooling the mobo, memory, etc, then I've had issues without direct cooling with all of my previous LSI RAID cards - both in terms of drives dropping out & BSODs without there being an exceptional disk usage. (it's not that I'm running huge R50 arrays or something - primarily that I simply prefer using a RAID card, vs a HBA, in terms of the cache & BBU options) Similarly, the Chenbro expander I have - which, other than the fans, drives, cables, MOLEX-to-PCIE (to power the card) & PSU, is the only thing in the server case - came with a fan attached which failed; & again I had issues... ...so it's now got one of the Noctua fans on instead. So, whilst you 'could' try it without & see, personally I would always stick a fan on something like this. I couldn't advise you on monitoring for PWM as that's not how I do things - since I'd far rather have the system being stable irrespective of whether or not I was in a particular OS or not. Well, not that dissimilarly, whilst the rad fans are PWM, for me it's about creating a temp curve within the bios for the CPU (& hence, by default, the GPU), & so is entirely OS independent. So, whilst I couldn't recommend anything specific, 'if' I were looking for a fan controller then I'd want something which I could connect a thermal sensor to (& attach that to the h/s above the IOC) AND I could set the temp limit solely with the controller.
  37. 1 point
    Bigsease30

    Cloud Drive + G Suite = Backup disk

    Made the recommended changes. Now the waiting game begins. Thanks again.
  38. 1 point
    srcrist

    Longevity Concerns

    I think those are fine concerns. One thing that Alex and Christopher have said before is that 1) Covecube isn't in any danger of shutting down any time soon and 2) if it were, they would release a tool to convert the chunks on your cloud storage back to native files. So as long as you had access to retrieve the individual chunks from your storage, you'd be able to convert it. But, ultimately, there aren't any guarantees in life. It's just a risk we take by relying on cloud storage solutions.
  39. 1 point
    Thanks.. yeah, I went back in and now I have it set for just I: as cache and the rest as archive... so now the drive is empty and seems to be functioning correctly, where it is a straight copy without the dwindling speeds... I added the SSD as a cache because, as you can see, I'm having file-copying issues. Now that I have set this, I'm still having an issue, but I believe it is my machine itself. Before, you'd see slowness; now it copies at a full 450MB/s, but another machine I have (Plex) copies at 750MB/s. It is totally faster from my Plex box, and funny how that works, as the computer not copying as fast is the main rig that edits videos, photos, large ISO copies, etc... so I'd want it faster there... But still, 450MB/s on 10gb is still faster than 120MB/s on my 1gb network!!! So while 4x faster.. not full speed. I've got a system issue.. because iperf shows super fast speeds across the 10gb (and I think iperf does memory to memory, omitting hardware), so the network is good. My machine has 2x NVMe on a quad PCI-e 16x card, and copying across each of them, they get 1.35GB/s.. it's just exiting this machine... so more for me to test when I get time.
  40. 1 point
    TeleFragger

    My Rackmount Server

    Wow, y'all got awesome setups! I don't have a rack, nor do I want the sound of the rack servers. What I have started using are Lenovo ThinkStation towers - dual Xeon - 16 slots for memory!!!!! and now Lenovo P700 and P710's. They are all quiet and can be pumped up on drives and RAM, with dual Xeons.

    ESXI 6.7 Machine 1 - 2x Xeon E5-2620 v4 @ 2.10GHz - 64gb ram
    ESXI 6.7 Machine 2 - 2x Xeon E5-2620 v0 @ 2.0GHZ - 128 GB Ram
    ESXI 6.7 Machine 3 - 2x Xeon E5-2620 v4 @ 2.10GHz - 64gb ram
    FreeNAS 11.1 - 1x Xeon E5-2620 V3 - 24gb ram - 6x 2tb wd black (yeah I know - Reds, not Blacks - but I've got them and they work.. hah)
    Server 2016 / StableBit DrivePool - HP Z420 - OS: 128gb SSD / pool: 3x 2tb wd black + 2x 4tb wd black + 512gb Crucial SSD for the SSD Optimizer

    Server 2016 is getting ready to gain 2 (6x 2.5" hot swap bays), filled with 12x 512gb Crucial SSDs running off 2x HP 220 SAS controllers.

    Network... this is a beauty.. I've got $75 into it..
    HP Procurve 6400CL - 6 port CX4 10gb switch
    5x ConnectX-1 CX4 10gb NICs running HP FW 2.8
    1x ConnectX-2 CX4 10gb NIC running Mellanox custom forced 2.10.xxxx fw!!!!! just got it and toying...

    I get that people say CX4 ports are old and dead, but for $75 to be fully up, for me it's just the right price...
  41. 1 point
    This information is pulled from Windows' Performance counters. So it may not have been working properly temporarily. Worst case, you can reset them: http://wiki.covecube.com/StableBit_DrivePool_Q2150495
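    For reference, the usual Windows-level way to rebuild the performance counter registry (the linked wiki has the supported steps; this is generic Windows, not DrivePool-specific) is to run, from an elevated prompt:

      # Rebuilds the Windows performance counter registry from the backup store
      lodctr /R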
  42. 1 point
    srcrist

    Download Speed problem

    EDIT: Disregard my previous post. I missed some context. I'm not sure why it's slower for you. Your settings are mostly fine, except you're probably using too many threads. Leave the download threads at 10, and drop the upload threads to 5. Turn off background i/o as well, and you can raise your minimum download to 20MB if that's your chunk size. Those will help a little bit, but I'm sure you're able to hit at least 300mbps even with the settings you're using. Here is my CloudDrive copying a 23GB file:
  43. 1 point
    btb66

    Switching OS. Win10 to Ubuntu Server

    I've also made the move from Windows to Linux and wondered how to keep my pooled folders intact; having got used to seeing the content of my folders in one pooled directory, it wasn't something I wanted to give up. There is a neat solution. Using 'mhddfs' we can mount our multiple DP folders into one virtual directory, with the folder structure remaining as it was under DP. This isn't DP for Linux - all the work has already been done; all we are doing is pooling the PoolPart folders from each HDD and adding them to a new mount point, a virtual folder where they can be read and written to, complete with free space. I have 4 HDDs with PoolPart folders on them, but I see no reason why 'mhddfs' would have a limit. There is more info here: mhddfs guide, but instead of adding each partition, just add the mount point of each PoolPart directory (be sure to have the HDDs already mounted at boot). So in my case I did this (my 4 HDDs already mounted at /media/dp1 etc)...

      mint@mint-9:~$ sudo mkdir /mnt/drivepool
      mint@mint-9:~$ sudo mhddfs /media/dp1/PoolPart.760df304-1076-4a17-a53d-1a306e0b9808,/media/dp2/PoolPart.24256a59-5751-41c7-a2f7-c63e24c3c367,/media/dp3/PoolPart.bb5666e1-d315-4c07-9814-ac017e2287a2,/media/dp4/PoolPart.33b9f57d-862e-4d0d-a087-d4c9caeeefb8 /mnt/drivepool -o allow_other
      mhddfs: directory '/media/dp1/PoolPart.760df304-1076-4a17-a53d-1a306e0b9808' added to list
      mhddfs: directory '/media/dp2/PoolPart.24256a59-5751-41c7-a2f7-c63e24c3c367' added to list
      mhddfs: directory '/media/dp3/PoolPart.bb5666e1-d315-4c07-9814-ac017e2287a2' added to list
      mhddfs: directory '/media/dp4/PoolPart.33b9f57d-862e-4d0d-a087-d4c9caeeefb8' added to list
      mhddfs: mount to: /mnt/drivepool
      mhddfs: move size limit 4294967296 bytes

    Then we have the virtual mount point, /mnt/drivepool:

      mint@mint-9:~$ df -h
      Filesystem      Size  Used Avail Use% Mounted on
      udev            7.8G     0  7.8G   0% /dev
      tmpfs           1.6G  2.2M  1.6G   1% /run
      /dev/sdg1        64G   14G   47G  23% /
      tmpfs           7.8G  942M  6.9G  12% /dev/shm
      tmpfs           5.0M  4.0K  5.0M   1% /run/lock
      tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
      /dev/sdb2       118G   85G   34G  72% /media/m4_win10
      /dev/sdh2        69G  2.8G   63G   5% /media/data2
      /dev/sda2        96M   32M   65M  33% /boot/efi
      /dev/sdc1       1.4T  997G  401G  72% /media/dp4
      /dev/sde1       2.8T  2.3T  481G  83% /media/dp2
      /dev/sdf1       3.7T  3.2T  478G  88% /media/dp1
      /dev/sdd2       3.7T  3.2T  526G  86% /media/dp3
      tmpfs           1.6G   52K  1.6G   1% /run/user/1000
      /dev/sdh3        93G   16G   78G  17% /media/mint/38106A8210B140AD
      /dev/sdh1        56G   52G  3.6G  94% /media/mint/WIN10
      /dev/sdg3        29G   18G  8.9G  67% /media/mint/data
      /media/dp1/PoolPart.760df304-1076-4a17-a53d-1a306e0b9808;/media/dp2/PoolPart.24256a59-5751-41c7-a2f7-c63e24c3c367;/media/dp3/PoolPart.bb5666e1-d315-4c07-9814-ac017e2287a2;/media/dp4/PoolPart.33b9f57d-862e-4d0d-a087-d4c9caeeefb8   12T  9.6T  1.9T  84% /mnt/drivepool

    I've kept the original PoolPart folder names in case I go back to Windows and want to make DP work properly again, but if there is no need for that they can be renamed to something shorter. If everything works as you want, add the following line to fstab (yours will be different)...

      mhddfs#/media/dp1/PoolPart.760df304-1076-4a17-a53d-1a306e0b9808,/media/dp2/PoolPart.24256a59-5751-41c7-a2f7-c63e24c3c367,/media/dp3/PoolPart.bb5666e1-d315-4c07-9814-ac017e2287a2,/media/dp4/PoolPart.33b9f57d-862e-4d0d-a087-d4c9caeeefb8 /mnt/drivepool fuse defaults,allow_other 0 0

    You may need to make sure permissions are set correctly, but assuming you've done all the prerequisites it's fairly straightforward.
  44. 1 point
    Whatever you all do, don't wait 3 years and 8,000 movies (taking up 50TB) later to decide duplication would be a good idea. When I noticed my pool was getting full, it finally dawned on me that I'd have a miserable time replacing lost movies if even one of the 15 WD40EFRX 4TB drives went south. Not only did it blast a hole in my wallet this week to fill the remainder of my RPC-4224 case with 8x new WD80EFAX 8TB and 1x new WD100EFAX 10TB drive (experimental), it appears it will take a month of Sundays to get the job done. It probably doesn't help that I'm doing this on an old WHS2011 machine with 3x AOC-SASLP2-MV8 controllers, one of which is running in a 4x slot. I just hope I don't kill something in the process. I honestly didn't think the 10TB drive would work. I had initialized, partitioned and formatted it on a newer PC for some reason, so I'm still not 100% sure how reliable it's going to be. After 4 hours, it actually looks like it's copying about 500GB per hour. So maybe it won't take a full month of Sundays...
  45. 1 point
    Hi there, Awesome software - as always! I've been using DrivePool and Scanner since nearly the beginning. Currently I have something like 22 drives in my main drive pool. They range from iSCSI, SATA-attached, USB (depending on the pool), Ximeta-attached (they have a custom network thing they do), or even virtualized under ESXi. Anything that shows up to Windows as a physical drive can just be pooled. I love it!

    Recently I purchased CloudDrive, and after messing around and some Google searching of this forum, I think I'm fairly well set up. I have 13 CloudDrives: 1 box.com, 2 Dropbox, 10 Google Drive (not the paid cloud drives, but the provided ones for personal use). I used all defaults for everything, except that I set the cache to MINIMAL+, encrypted it and set it to auto-unlock the encrypted drives (as I only care that the CLOUD data is encrypted... I mean, you want to look at my pictures of my kids THAT bad and break into my PC to do it... okay, enjoy, you earned it), pointed it to a specific 100GB hard drive partition that I could dedicate to this (I use DrivePool on all other drives, and one specific thing mentioned was that the cache drive could NOT be part of a DrivePool), renamed it, and removed the drive letter and set it to a folder name (I use this with pooling to cut down on the displayed drive letters for a clean look).

    I am getting a slew of "throttling due to bandwidth" issues. I admit that my cache drive is probably too small for the amount of data I DUMPED, and I will continue to monitor this, as I do not feel that I was getting those messages before I dumped enough data to fill the ENTIRE CloudPool in one shot.

    So, my request is to have a view in the program to look at all drive upload/download at the same time. Maybe even space? I love the existing charts. They are easy to look at, easy to read and understand. I also like the "Technical Details" page, as that shows a TON of information, such as the file - or chunk - and how much of it is uploaded/downloaded. I'm wondering if there is a way to view all drives at once? I would use this to get a sense of the overall health of the system. As it is, if I have to scan through all 13 drives, I cannot see where my bandwidth is being consumed, whether the cache drive is FULL, or whether I am having upload/download issues. By the time I click through each drive, the bandwidth seems to have shifted between drives fast enough that I never get a true representation of what is going on. I'm sure part of that is redrawing the graphs. I find the Technical Details page much more useful; I do not see what is LEFT to upload, but I get a much faster idea of what is going on, and although it's annoying to click through ALL the drives, it gives me a better picture. I think that having an overall page would be fantastic.

    Thank you again for continuing to show what is possible! --Dan
  46. 1 point
    Pancakes

    I want to encrypt my drives

    I never updated this thread, but I encrypted my drives one by one, with no availability impact. I now have my entire pool encrypted and it works just as normal. Fantastic!
  47. 1 point
    Endure, When you install DrivePool and add your existing drives to it, DrivePool will see a pool size of 8TB (your total size across all drives added to the pool), but 4TB will be listed as "Other" (your existing files on the drives themselves). As you move (not copy) your files from the drive(s) themselves into the pool drive, the files will be added to the pool itself, and the space will be reclaimed automatically to be used in the pool. You can also do something called "seeding", which will speed up the process. You don't have to "seed", it's an option: http://wiki.covecube.com/StableBit_DrivePool_Q4142489 -Quinn
  48. 1 point
    I'm not sure what you mean here. There is the read striping feature, which may boost read speeds for you. Aside from that, there are the file placement rules, which you could use to lock certain files or folders to the SSDs to get better read speeds.
  49. 1 point
    Christopher (Drashna)

    Surface scan and SSD

    Saiyan, No. The surface scan is read only. The only time we write is if we are able to recover files, after you've told it to. The same thing goes with the file system check. We don't alter any of the data on the drives without your explicit permission. And to clarify, we don't really identify if it's a SSD or HDD. We just identify the drive (using Windows APIs). How we handle the drive doesn't change between SSD or HDD. And in fact, because of what Scanner does, it doesn't matter what kind of drive it is because we are "hands off" with your drives. Grabbing the information about the drives and running the scans are all "read only" and doesn't modify anything on the drives. The only time we write to the drives is when you explicitly allow it (repair unreadable data, or fix the file system). And because we use built in tools/API when we do this, Windows should handle any "SSD" specific functionality/features. I just wanted to make this clarification, because you seem to be very hesitant about Scanner and SSDs. But basically Scanner itself doesn't care if the drive is a SSD or not, because nothing we do should ever adversely affect your SSD. Data integrity is our top priority, and we try to go out of our way to preserve your data.
  50. 1 point
    Alex

    Surface scan and SSD

    Hi Saiyan, I'm the developer. The Scanner never writes to SSDs while performing a surface scan and therefore does not in any way impact the lifespan of the SSD. However, SSDs do benefit from full disk surface scans, just like spinning hard drives, in that the surface scan will bring to the drive's attention any latent sectors that may become unreadable in the future. The Scanner's disk surface scan will force your SSD to remap the damaged sectors before the data becomes unreadable. In short, there is no negative side effect to running the Scanner on SSDs, but there is a positive one. Please let me know if you need more information.
