Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 05/27/13 in all areas

  1. 4 points
    VERY IMPRESSED!
    - Didn't need to create an account and password
    - Same activation code covers EVERY product on EVERY computer!
    - Payment information remembered so additional licenses are purchased easily
    - Nice bundle and multi-license discount
    I'm in love with the Drive Pool and Scanner. Thanks for a great product and a great buying experience. -Scott
  2. 2 points
    Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives in case of needing a recovery for unduplicated files. I know the dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')

    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .csv file it generates, showing the location of all of the files in a particular directory (as an example).

    Here is the code I used (it's also attached in the .zip file):

    # This saves the full listing of files in DrivePool
    $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

    # This creates an empty table to store details of the files
    $filelist = @()

    # This goes through each file, and populates the table with the drive name, file name and directory name
    foreach ($file in $files) {
        $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
    }

    # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
    $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn

    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
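    In case anyone wants to schedule it the way Quinn describes (daily at 3am), here's a rough sketch using the built-in ScheduledTasks cmdlets (Windows 8 / Server 2012 or newer); the .ps1 path and task name are placeholders you'd adjust to your own setup:

    # Sketch: register a daily 3:00 AM task that runs the catalog script (script path is a placeholder)
    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy RemoteSigned -File C:\Scripts\DPFileList.ps1'
    $trigger = New-ScheduledTaskTrigger -Daily -At 3am
    Register-ScheduledTask -TaskName 'DrivePool File Catalog' -Action $action -Trigger $trigger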
  3. 2 points
    The problem is that you were still on an affected version, 3216. By upgrading to the newest version, the StableBit Scanner service is forcefully shut down, so the DiskId files can get corrupted in the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen anymore on your next upgrade/reboot/crash. I agree wholeheartedly, though, that we should get a way to back up the scan status of drives, just in case. A scheduled automatic backup would be great. The files are extremely small and don't take up much space, so I don't see a reason not to implement it, feature-wise.
  4. 2 points
    Issue resolved by updating DrivePool. My version was fairly out of date, and using the latest public stable build fixed everything.
  5. 2 points
    I think I found where my issue was occurring: I am being bottlenecked by the Windows OS cache because I am running the OS off a SATA SSD. I need to move that over to part of the 970 EVO. I am going to attempt that OS reinstall/move later and test again. Now the problem makes a lot more sense, and it's why the speeds looked great in benchmarks but did not manifest in real-world file transfers.
  6. 2 points
    Christopher (Drashna)

    Moving from WHS V1

    Windows Server 2016 Essentials is a very good choice, actually! It's the direct successor to Windows Home Server. The caveat here is that it does want to be a domain controller (but that's 100% optional). Yeah, the Essentials Experience won't really let you delete the Users folder. There is some hard-coded functionality here, which ... is annoying. Depending on how you move the folders, "yes". E.g., it will keep the permissions from the old folder, and not use the ones from the new folder. It's quite annoying, and why some of my automation stuff uses a temp drive and then moves stuff to the pool. If you're using the Essentials stuff, you should be good. But you should check out these: https://tinkertry.com/ws2012e-connector https://tinkertry.com/how-to-make-windows-server-2012-r2-essentials-client-connector-install-behave-just-like-windows-home-server
  7. 2 points
    Jaga

    Recommended SSD setting?

    Even 60°C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do. I wouldn't let it go over 70, however - Samsung, as an example, rates many of their SSDs between 0 and 70°C as far as environmental conditions go. As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers may not be as robust.
  8. 2 points
    With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with DrivePool for others to benefit from. There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up DrivePool threw up error messages and its GUI wouldn't open. I found the culprit - somehow the DrivePool service was unable to start, even though all its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available. "Okay," I thought, and did an install repair on StableBit DrivePool through the Control Panel. Well, that didn't seem to work either - the service just flat-out refused to start. So at that point I assumed something in the software was corrupted, and decided to 1) uninstall DrivePool, 2) bounce the server again, 3) run a cleaning utility, and 4) re-install. I did just that, and DrivePool installed to the same location without complaint. After starting the DrivePool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available, the scanner and transcoders kicked off like nothing had happened. Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment. That's my positive story for the day, and why I continue to recommend StableBit products.
  9. 2 points
    The Disk Space Equalizer plug-in comes to mind. https://stablebit.com/DrivePool/Plugins
  10. 2 points
    Mostly just ask.
  11. 2 points
    Also, you may want to check out the newest beta. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3204_BETA.exe
  12. 2 points
    The import/export feature would be nice. I guess right-clicking on the folder and 7zip'ing it is the definitive solution for now, until an automated process evolves. Given Christopher's answer that it seems to be an isolated incident, I'm wondering what it is about our particular systems that is causing this purge. I have it running on both W7 and W10 and it purges on both. Both OSes are clean installs. Both run the same EVO500...alongside a WD spinner. Both are Dells. It seems to me that the purge is triggered by some integral part of the software once it's updated. Like an auto-purge feature. I'll be honest, I think most people are too lazy to sign up and post the issue, which makes it appear to be an isolated incident, but I believe this is happening more often than we think. I'm on a lot of forums, and it's always the same people that help developers address bugs by reporting them. Unless it's a functional problem, it goes unreported. All of you...know how lazy people are. With that said, I like the idea of an integral backup and restore of the settings.
  13. 2 points
    As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193 StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives as well. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe, currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
  14. 2 points
    I used Z once, only to find that a printer with some media card slot wanted it for itself or would not print at all. Same for some Blackberry devices claiming Z. So yeah, high up, but not Y and Z. I use P, Q and R.
  15. 2 points
    You could do it with a combination of a VPN, DrivePool pool(s), and CloudDrive using file share(s). Here's how I think it could work:
    - The VPN connects all computers on the same local net.
    - Each computer has a Pool to hold data, and the Pool drive is shared so the local net can access it (a sketch of this step follows below).
    - CloudDrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool.
    - Each local Pool can have duplication enabled, ensuring each local CloudDrive folder is duplicated locally X times.
    - The file shares in CloudDrive are added to a new DrivePool pool, essentially combining all of the remote computer storage you provisioned into one large volume.
    Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with DrivePool installed and a single copy of CloudDrive to pull it off. Essentially wide-area storage.
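    Just to illustrate the sharing step, the share on each machine could be created with the built-in SMB cmdlets; this is only a sketch, and the share name, pool path, and account are made up:

    # Sketch: share a local pool drive so CloudDrive's file share provider on another machine can reach it
    # (share name, pool path, and account are placeholders)
    New-SmbShare -Name 'Pool' -Path 'P:\' -FullAccess 'MYDOMAIN\CloudDriveUser'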
  16. 2 points
    "dpcmd remeasure-pool x:"
  17. 2 points
    "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
  18. 2 points
    I'm using Windows Server 2016 Datacenter (GUI version, newest updates) on a dual-socket system in combination with CloudDrive (newest version). The only problem I had was connecting to the cloud service with Internet Explorer. Using a 3rd-party browser solved this. But I'm always using ReFS instead of NTFS...
  19. 2 points
    wolfhammer

    not authorized, must reauthorize

    I need to do this daily. Is there a way to auto-authorize? Otherwise I can't really use this app.
  20. 2 points
    Surface scans are disabled for CloudDrive disks by default, but file system scans are not (as they can be helpful). You can disable this per disk, in the "Disk Settings" option. As for the length, that depends on the disk. And no, there isn't really a way to speed this up.
  21. 1 point
    Umfriend

    moving drives around

    Yes. I have never tried it but DP should not need drive letters. You can also map drives to folders somehow so that you can still easily explore them. Not sure how that works but there are threads on this forum.
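    For what it's worth, the folder-mount approach uses mount points instead of drive letters; here's a minimal sketch with the Storage cmdlets, where the disk/partition numbers and the folder are placeholders (check Get-Disk / Get-Partition for your own values first):

    # Sketch: mount a pool member under a folder instead of a drive letter (numbers and path are placeholders)
    New-Item -ItemType Directory -Path 'C:\DrivePool\Disk01' -Force | Out-Null
    Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 2 -AccessPath 'C:\DrivePool\Disk01'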
  22. 1 point
    Sure. I purchased and installed PartedMagic onto a USB. I then booted using this USB to run a Secure Erase, but it was not able to complete successfully. So I ran DD (through PartedMagic as well) on the drive around 5 times. I then converted the disk to GPT using diskpart and installed a fresh copy of Windows. I used CHKDSK, StableBit Scanner, and Intel SSD Toolbox (Full Diagnostic) to confirm that read/writes were functioning as intended. Based on what I could understand from Intel, it seems like the Optane drives are fairly unique due to their usage of 3D XPoint technology which caused the specific/strange behavior I was facing.
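    For anyone wanting to repeat the wipe-and-convert step without PartedMagic, roughly the same thing can be done with the Storage cmdlets; this is just a sketch, the disk number is a placeholder, and Clear-Disk destroys everything on that disk:

    # Sketch: wipe a disk and re-initialize it as GPT (disk number is a placeholder - double-check with Get-Disk)
    Clear-Disk -Number 1 -RemoveData -RemoveOEM -Confirm:$false
    Initialize-Disk -Number 1 -PartitionStyle GPT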
  23. 1 point
    GetDataBack Simple is working for me -- I could get a dir list at least and see the files. It's gonna take days till I'm done the deep scan, but I hope I can recover most things.
  24. 1 point
    To keep everyone up-to-date: With the help of Alex I've identified the root cause of the issue. The LastSeen variable inside the DiskId files is changed literally every second. This means that the DiskId files are constantly being changed and in the event of a crash there is a high chance that it crashes while the new file is being written thus the DiskId files get corrupted. The LastSmartUpdate variable inside the SmartPersistentInfo files is updated in a more reasonable one minute interval so I'm hoping it is a quick fix to simply adjust the write interval of the LastSeen variable. Besides changing the interval there would have to be backup diskid files to completely eliminate the issue. So instead of creating new DiskId files when corrupt files have been detected it should copy over an older backup file of the DiskId file(s) in question. Or the LastSeen value gets completely eliminated from the DiskId file and moved somewhere else to avoid changing the DiskId files at all.
  25. 1 point
    srcrist

    Warning from GDrive (Plex)

    Out of curiosity, does Google set different limits for the upload and download threads in the API? I've always assumed that since I see throttling around 12-15 threads in one direction, that the total number of threads in both directions needed to be less than that. Are you saying it should be fine with 10 in each direction even though 20 in one direction would get throttled?
  26. 1 point
    Now, the manual for the HBA you were talking about states "Minimum airflow: 200 linear feet per minute at 55 °C inlet temperature"... ...which is the same as my RAID card. Beyond that, all I can say is that, even with water cooling the CPU & GPU (& an external rad) so most of the heat's already taken out of the case and the case fans are primarily cooling the mobo, memory, etc., I've had issues without direct cooling on all of my previous LSI RAID cards - both in terms of drives dropping out & BSODs, without there being exceptional disk usage. (It's not that I'm running huge R50 arrays or something - primarily that I simply prefer using a RAID card, vs an HBA, in terms of the cache & BBU options.) Similarly, the Chenbro expander I have - which, other than the fans, drives, cables, MOLEX-to-PCIE (to power the card) & PSU, is the only thing in the server case - came with a fan attached which failed; & again I had issues... ...so it's now got one of the Noctua fans on instead. So, whilst you 'could' try it without & see, personally I would always stick a fan on something like this. I couldn't advise you on monitoring for PWM as that's not how I do things - I'd far rather have the system be stable irrespective of whether or not I was in a particular OS. Not that dissimilarly, whilst the rad fans are PWM, for me it's about creating a temp curve within the BIOS for the CPU (& hence, by default, the GPU), & so it's entirely OS-independent. So, whilst I couldn't recommend anything specific, 'if' I were looking for a fan controller then I'd want something which I could connect a thermal sensor to (& attach that to the heatsink above the IOC), AND where I could set the temp limit solely with the controller.
  27. 1 point
    jellis413

    10gb speeds using ssd cache?

    I just ran into this with a pair of Aquantia 10G NICs that I purchased. The amount I could copy seemed to differ depending on the SSD that I used. Their support confirmed that after the SSD write cache was filled, it would drop to below gigabit speeds. I set up a RAM drive and passed it through as an SSD to the SSD Optimizer, and speeds consistently stay where they should be and don't drop off like I was experiencing. https://www.softperfect.com/products/ramdisk/ is the product I used; I had to make sure I selected the option for Hard Disk Emulation.
  28. 1 point
    srcrist

    Download Speed problem

    EDIT: Disregard my previous post. I missed some context. I'm not sure why it's slower for you. Your settings are mostly fine, except you're probably using too many threads. Leave the download threads at 10, and drop the upload threads to 5. Turn off background i/o as well, and you can raise your minimum download to 20MB if that's your chunk size. Those will help a little bit, but I'm sure you're able to hit at least 300mbps even with the settings you're using. Here is my CloudDrive copying a 23GB file:
  29. 1 point
    @Mick Mickle Unfortunately, there isn't a way to handle multiple settings at once. But honestly, if you are seeing multiple SMART warnings at once, you should replace the drive. LCC by itself is fine, but with other errors.... it's time to replace.
  30. 1 point
    TAdams

    Moving from WHS V1

    First of all, I would like to say thank you all for taking the time to reply and/or answer. I currently have an SSD with Windows Server 2016 Standard, which has my shares as well as DrivePool installed, and I have gotten pretty far along in the setup; it is pretty close to how I want it. Again, I realized I was missing the "Server Essentials" backup system, and I installed that. I believe the issues I was seeing come from the way the two flavors deal with file permissions. I initially had them set with Standard, and when I added Essentials it added many permissions (per user and/or group) to each file, each mechanism allowing/disallowing rights based on that system's scheme. Which meant I had some doubled up, and files that I shouldn't have had access to, I did - and vice versa for others. In short, it seems like the most practical way is to perform a clean install and use whichever system to create and administer the shares - not both or a mix-and-match of both. I don't think this was a DrivePool issue at all; simply put, it was my lack of understanding of how the two systems apply user permissions. In my first install, I tried Windows Server Essentials and had backups running but did not like the strange folder structure, which is why I tried downloading and installing Standard and added the server role. I believe I could live with it, but the current look is so clean, and that of course leaves me without the previously mentioned backup solution. I have checked out Veeam, which looks VERY promising - I appreciate the suggestion, Jaga! I hadn't realized MS had planned on removing the ability to add the Essentials Role in the future, Umfriend; is there anything in particular you are looking for in WSE 2019? Thank you for the links Christopher, those are put together better than the tidbits I located in my searches. Currently I am trying to decide what I want to purchase, Windows Server 2016 Standard or WSE 2016... They are roughly the same price on Amazon, and it seems like Standard will provide greater flexibility for the future. I am most certainly getting DrivePool! Thank you all again! Regards, Tom
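    For reference, if anyone else ends up with doubled-up permissions like that, one way to get the ACLs back to a clean, inherited state without a full reinstall is an icacls reset; this is just a sketch, the path is a placeholder, and it rewrites permissions on everything under it:

    # Sketch: reset NTFS ACLs on a share folder to inherited defaults (path is a placeholder - test on a copy first)
    icacls 'D:\Shares\Documents' /reset /T /C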
  31. 1 point
    That should work, then. Just wanted to make sure. And yes, that would definitely need an SSD (and maybe even an NVMe-based SSD) for that. If you want to use file placement rules AND the SSD Optimizer, there are some settings that you MUST change. Junctions. If you have the option, always junctions. Symlinks can be handled oddly in some cases. And yeah, that software is nice. You can create multiple pools, actually. That may be the best for what you want. But keep in mind that each disk can only be part of a single pool.
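    As a quick illustration of the junction suggestion, one can be created from PowerShell 5+ like this (paths are placeholders; cmd's "mklink /J" does the same thing):

    # Sketch: create a junction that points at a folder on the pool (paths are placeholders)
    New-Item -ItemType Junction -Path 'C:\Games\Steam' -Target 'P:\Apps\Steam'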
  32. 1 point
    You'll need to disable the SSD Optimizer balancer. Do that, and see if it works (it should)
  33. 1 point
    MisterDuck

    High Interface CRC Error Count

    Thank you Christopher, P19 it is then. As above, I'm a bit surprised that the obviously buggy P20 is still available, especially given that these cards and their variants will be used in enterprise settings.
  34. 1 point
    SAS2 is EOL, actually. That's why we're seeing a lot of SAS2 hardware for super cheap. SAS3 is the current standard, and the SFF connectors are not compatible.
  35. 1 point
    DotJun

    File Duplication Consistency

    Ok I've changed my settings to match what you suggest. Thank you for your help.
  36. 1 point
    Thank you for the kind words! And we're glad to hear it! And every customer (and potential customer) should be valued! They are why we are successful!
  37. 1 point
    Jaga is correct here. And if it's not storing the settings at all, then there is another issue here, and ... resetting the settings manually may fix the issue: http://wiki.covecube.com/StableBit_Scanner_Q4200749 Eg, delete the entire "C:\ProgramData\StableBit Scanner" folder, and restart the service.
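    Roughly, that reset looks like the following; this is only a sketch, it wipes Scanner's settings and scan history, and it assumes the service's display name contains "StableBit Scanner" (verify with Get-Service first):

    # Sketch: reset StableBit Scanner settings (wipes scan history; the service name match is an assumption)
    Get-Service | Where-Object { $_.DisplayName -like '*StableBit Scanner*' } | Stop-Service
    Remove-Item -Path 'C:\ProgramData\StableBit Scanner' -Recurse -Force
    Get-Service | Where-Object { $_.DisplayName -like '*StableBit Scanner*' } | Start-Service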
  38. 1 point
    Well, I don't mean to minimize the issue at all... but this generally happens to a small group of people (or at least, that's how many report it). So it doesn't seem to be a widespread issue. Which ... means it's pretty rare, and ... that it's not as easy to track down. That said, we've looked into this issue in the past, with little success. Alex is planning a "deep dive" to look into this issue (@Mick Mickle, I believe Alex mentioned that to you in the ticket, actually), and we've talked about this. The new JSON format for the settings store (located in "C:\ProgramData\StableBit Scanner\Service\Store\Json") should be much more resilient. In fact, I tested out copying the folder elsewhere, nuked the settings, and restored the JSON folder. I did this on my production server, and can verify that it works. So, if you back up the folder in question, your settings should be safe from here on out.
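    For anyone who wants to script that backup, a minimal sketch (the destination path is a placeholder):

    # Sketch: back up the Scanner JSON settings store (destination is a placeholder)
    Copy-Item -Path 'C:\ProgramData\StableBit Scanner\Service\Store\Json' -Destination 'D:\Backups\ScannerStore\Json' -Recurse -Force
    # To restore: copy the folder back into place and restart the StableBit Scanner service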
  39. 1 point
    The root folder is called "StableBit CloudDrive". If you only renamed that folder, just give it that default name again. The entire (Google Drive) StableBit folder structure looks like this: \StableBit CloudDrive\CloudPart-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-CONTENT(-ATTACHMENT,-METADATA) The "xxxx..." string is a placeholder for the global unique id in hex code (e.g. abcdef01-abc1-def2-1234-12345abcd67f). All files inside the folders on Google Drive have that id in their name and you will find it also in the cache folder's name on your local machine (e.g. \CloudPart-abcdef01-abc1-def2-1234-12345abcd67f).
  40. 1 point
    Dave Hobson

    My first true rant with Drivepool.

    Thanks for all the awesome input everyone. I think I'm gonna stay with NTFS, especially as the SnapRAID site seemingly throws up some suggestions linking to this article: https://en.wikipedia.org/wiki/Comparison_of_file_verification_software With regards to shucking... Although, as mentioned, I have done this in the past (with 4TB drives when 4TB was the latest thing), the cost difference is negligible, especially bearing in mind the reasons Christopher mentions, and it's not an approach I want to return to. Though the cost isn't really the issue; my current aim is to get rid of/repurpose some of those 4TB drives and replace them with another couple of 8TB drives. Maybe when that's done I will look again at SnapRAID and its parity. If Google ever backtrack on unlimited storage at a stupidly low price in the same way Amazon did, then it may scale higher on my priorities, but for now... EDIT: Now I'm even more curious, as I have just read a post on /r/snapraid suggesting that it's possible to RAID 0 a couple of pairs of 4TB drives and use them as 8TB parity drives. Though the parity would possibly be less stable, it would give me parity (even though it's not a priority), would allow for data scrubbing (my main aim), and would mean that those 4TB drives wouldn't sit in a drawer gathering dust. So if any of you SnapRAID users have any thoughts on this, I would be glad for any feedback/input.
  41. 1 point
    bzowk

    Pool Activity Monitoring

    Hey Guys - I've been using DrivePool & Scanner for a few years now and overall it's been great. My home pool currently consists of 12 disks (11 SATA + 1 SSD for caching) totalling over 43.7 TB, which is assigned to my D: drive. Being a big fan of monitoring resources, I'd love to be able to monitor the overall disk performance in some sort of desktop gadget or widget. This is easy to do for the pool's individual disks if drive letters are assigned, or within Scanner, but not for the pool as a whole. Since the pool isn't a standard disk, most applications that do this simply show D:\ as having no activity, unfortunately. One of the many examples of what I'd like is an older Windows Gadget, "Drive Activity." Does anyone know of an application or workaround where I could get the pool's activity to be shown in typical monitoring applications? All I really would want is something simple which would show (or trick applications into showing) either the combined read/write totals or the highest value of the disks comprising the pool. Thanks!
  42. 1 point
    Good points, I think I have my answers now. I did some more reading as well and I have enabled file integrity on each of my drives within my pool. Any drives I add from now on out I will format with file integrity enabled from the start. Thanks for the walk through!! Best wishes, Scott
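    In case it helps anyone else, this is roughly what enabling ReFS integrity streams looks like in PowerShell; the drive letters are placeholders, and Format-Volume erases the volume:

    # Sketch: check/enable integrity streams on an existing ReFS pool member (new files inherit the setting)
    Get-FileIntegrity -FileName 'E:\'
    Set-FileIntegrity -FileName 'E:\' -Enable $true
    # Sketch: format a newly added drive with integrity streams on from the start (this erases the volume)
    Format-Volume -DriveLetter F -FileSystem ReFS -SetIntegrityStreams $true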
  43. 1 point
    Christopher (Drashna)

    drives not balancing

    Honestly, this sounds "fine". It shouldn't rebalance here, since it's not outside of any settings. However, if you want, you can download and install the "Disk Usage Equalizer" balancer, as this will cause the pool to rebalance so it's using roughly equal space on each disk. https://stablebit.com/DrivePool/Plugins
  44. 1 point
    I agree, I don't think it was DrivePool. I was able to restore from a backup. I haven't updated to the RC again, but I will tomorrow. I'm expecting it to upgrade with no issues. I'm not sure what happened, but the system is back up at the moment and all is good. If I have the same issue after I try upgrading tomorrow, I'll be sure to let everyone know.
  45. 1 point
    Was just as easy as it sounded. Thanks for the assistance and the marvelous product. Cheers!
  46. 1 point
    Kraevin

    Pinning causes api ban...

    I have not experienced this myself; the only time I get the rate limit exceeded error is when I upload and go over the 750GB daily limit.
  47. 1 point
    ncage

    Refs?

    Thanks, Drashna. I might ask in a year or so to see if the status has changed any, but for now I, of course, will go with your recommendation.
  48. 1 point
    Detaching the drives before a reboot will prevent the reindexing process, yes. Though, if you're not on the latest betas, I've noticed that the drives have been far better at recovering from an unclean shutdown than they used to be.
  49. 1 point
    Endure, When you install DrivePool and add your existing drives to it, DrivePool will see a pool size of 8TB (your total size across all drives added to the pool), but 4TB will be listed as "Other" (your existing files on the drives themselves). As you move (not copy) your files from the drive(s) themselves into the pool drive, the files will be added to the pool itself, and the space will be reclaimed automatically to be used in the pool. You can also do something called "seeding", which will speed up the process. You don't have to "seed", it's an option: http://wiki.covecube.com/StableBit_DrivePool_Q4142489 -Quinn
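    Purely as an illustration of what "seeding" amounts to (follow the wiki link above for the actual procedure, including stopping the service first), here's a sketch where the PoolPart GUID and folder names are placeholders:

    # Sketch: "seed" existing files by moving them into the drive's hidden PoolPart folder
    # (the PoolPart GUID and folder names are placeholders)
    Move-Item -Path 'E:\Media' -Destination 'E:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\Media'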
  50. 1 point
    To clarify (and make it simple to find), here is Alex's official definition of that "Other" space:
