
All Activity

  1. Today
  2. Umfriend

    Seagate Archive 8TB HDD detected but with 0 capacity

    What data recovery software have you tried, and what did it say? Can you have a look at Event Viewer and see if there are DISK/ATAPI/IDE/NTFS errors? Have you tried attaching the HDD to another PC, or to another port, using a different cable?
  3. My HDD was lagging and freezing, so I tried to restart my computer and it got stuck at restarting. After I forced my PC to restart, the HDD shows up in Windows but with 0 capacity, and I'm unable to scan it with data recovery software. Can you guys help me by letting me know what's wrong with my HDD and how to fix it?
  4. denywinarto

    Backup and restore mounted NTFS folders?

    Can't seem to find any way to do this. My drives are mounted to 20-ish NTFS mount folders, and now I want to migrate to another machine with better specs. The idea is just to swap the SAS card, and I hope I don't have to remount the drives again, because I want to minimize downtime. Is there any way to copy these mount folders to the new machine? Or do I have to recreate and re-mount them one by one? Because when I tried to copy them, it actually copied the entire drive... Edit: I found the registry location, HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\CPC\LOCALMOF\ So just export and import this key?
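    Worth noting: the MountPoints2 key holds Explorer's per-user view of mount points rather than the NTFS mounted folders themselves, so exporting it may not be enough on its own. The built-in mountvol tool can list and recreate the actual volume mount points. A minimal sketch; the folder path and volume GUID below are hypothetical examples:

        rem On the old machine: list every volume GUID and where it is mounted.
        mountvol > C:\mountpoints-backup.txt

        rem Optionally also export the Explorer key found above.
        reg export "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2" C:\mountpoints2.reg

        rem On the new machine: recreate each mount folder from its volume GUID.
        mkdir D:\Mounts\Disk01
        mountvol D:\Mounts\Disk01 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

    Since the volume GUIDs travel with the drives, the GUIDs from the backup listing should still be valid after the move.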
  5. Yesterday
  6. zeroibis

    [Performance] Scanner checking my 10TB GDrive

    I wonder what running Scanner on a cloud drive actually does... it's not as though it can access anything on the physical layer anyway...
  7. Is it even connected to the interweb? If not, then a move to WSE2016 might be a bit expensive (I got a really cheap second-hand license, still more expensive than WHS2011 was at the time). Yeah, 120GB is enough, although I am never sure how much it needs at install time (that was a thing with WHS2011); I got a 240GB. Maybe my advice was not that sensible after all, as it does depend on your use case. If it is basically cold storage/archiving _and_ you only back up the OS HDD (which I would not like, but hey, 24TB is hard to back up and rotate offline), then I would also consider keeping it as is: don't bother with 2TB partitions, and just replace one drive at a time once in a while with the largest HDD you can get.
  8. DotJun

    File placement question #2

    Here is the scenario. I have 3 drives. I am using the File Placement Balancer to fill the drives in order, from drive 1 to 2 to 3, filling each only to 95% before moving on to the next empty drive. I also have a File Placement rule to put folders a, b and c onto drive 1 only. The problem I'm having is that the folders/files I have marked to reside on drive 1 sometimes end up on the other drives. How can I configure it so that DrivePool moves folders/files not specifically flagged for drive 1 onto another drive, to make room for the folders/files that are flagged for drive 1?
  9. Last week
  10. eujanro

    [HOWTO] File Location Catalog

    Hi everyone, First, I would like to share that I am very satisfied with DP & Scanner. This IS "state of the art" software. Second, I have personally experienced 4 HDD failures, burned by the PSU (99% of the data was professionally $$$$ recovered), and having a content listing would have been a comfort, just to compare quickly and get a status overview. I also asked myself how to catalog the pooled drives' content, with logging/versioning, so that if a pooled drive dies I know whether professional recovery makes sense (again), but also to check that the duplication algorithm is working as advertised. Being a fan of "as simple as it gets", I found a simple, free, command-line-capable file lister: https://www.jam-software.com/filelist/ I built a .cmd file to export a file list per pooled drive (e.g. %Drive_letter%_%Label%_YYYYMMDDSS.txt). Then I scheduled a job to run every 3 hours; before running, it packs all the previous .txt's into an archive for versioning purposes. For each of my 10 x 2TB pooled HDDs at 60% full, I get a 15-20MB .txt file (with the content-excluding filter option) in ~20 minutes, and a zipped archive with all the files inside comes to about 20MB. For searching, I just use Notepad++'s "Find in Files" function, point it at the folder holding the .txt's, and I find what I'm looking for, per file per drive. I would love to see such an option for finding a file on each drive built into the DP interface. Hopefully good info, and not too long a post. Good luck!
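    For reference, a minimal sketch of such a catalog job as a .cmd file. The output folder, the drive letters, the schtasks line, and the use of the built-in dir command in place of FileList.exe are illustrative assumptions, not the poster's actual script:

        @echo off
        rem Catalog each pooled drive into a text file, keeping old runs.
        set OUTDIR=C:\PoolCatalog
        if not exist "%OUTDIR%" mkdir "%OUTDIR%"

        rem Version the previous listings before overwriting them.
        if not exist "%OUTDIR%\previous" mkdir "%OUTDIR%\previous"
        move /y "%OUTDIR%\*.txt" "%OUTDIR%\previous" >nul 2>&1

        rem One recursive listing per pooled drive (letters are examples);
        rem /a includes the hidden PoolPart folders.
        for %%D in (D E F) do (
            dir %%D:\ /s /b /a > "%OUTDIR%\drive_%%D.txt"
        )

    Scheduling it every 3 hours can then be done once from an elevated prompt with something like: schtasks /create /tn PoolCatalog /tr C:\Scripts\catalog.cmd /sc hourly /mo 3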
  11. Great info and recommendations, thanks!! I use this mostly for storage of photo/video; once projects are complete, the media moves here. I've never needed much speed since I rarely work off this, maybe with photos but never video. If I work with it again, I move it back to my MacBook. I guess 24TB would still be a good upgrade, or I could add my other 8TB via eSATA for 32TB, if I were to do an OS-only disk. 120GB SSDs are cheap and that's all I'd need with WSE2016, right? I'll look into WSE2016; it might be a good move, and it seems like this system could handle it. Otherwise, I use this only for long-term storage. I never use streaming or other features, so it's worked well for me. I use my desktop to back this up to an external drive, doing regular backups of the folders on the server to alternating drives.
  12. Umfriend

    Duplication time is extremely long!

    Yes, once you get past the 20GB and a bit, you'd need to leave them idle for a while to get decent write speeds again. If you keep throwing data at them, then once the non-SMR cache is full it'll be slow, with no chance to recover. An SSD with the SSD Optimizer may help, but then each write batch must still be smaller than your SSD cache plus that 20GB and a bit. I have had great sustained writes on the SMR drives, but those were cases of a truly sequential nature. You've got 4 SMR HDDs? I might be interested. 4 would already allow fast writes of 80GB in one go!
  13. Roger79

    Problems with uptime of my DrivePool on a USB enclosure

    Update: After a long period of testing, I've concluded that the USB controller on my NUC6i3 was the problem. I connected the enclosures to my HTPC and had no problems with the pool for the next 30 days. Bummer... :-( , but also a relief to finally know why I had such problems. I'd like to say thanks to Christoffer for giving me some pointers in finding the cause. I have now decided to turn my 5-year-old high-end gaming PC into a VM/file server. I bought some extra RAM and an LSI SAS card to get enough ports and speed for my drives. Now I have to familiarize myself with Hyper-V and how to set up VMs. A lot to learn, but I'm enjoying the entire process. Thanks.
  14. Hi, my Windows 10 boot drive is getting old and slow, and I'd like to swap it out for an SSD. This drive only has Windows and a couple of apps on it; it is not part of the pool. Is there anything special that DrivePool requires when migrating to a new OS drive, or can I just use a clone tool to copy the contents to the new drive and it will keep on working? Actually, I've never done this before in Windows. I think hard drives usually come with some software for copying data to the new drive; will that work for an OS drive? Thanks!
  15. Soaringswine

    CloudDrive + Google Drive + Plex Optimization

    Everything looks fine, though I set minimum download size to 50 MB.
  16. Rob Manderson

    Duplication time is extremely long!

    Heh - I have triplication turned on, for the same reason you went duplicated. Started out with ~8TB duplicated and went to triplication; it took 3 days, and that was without a single one of those damn SMR drives installed. I've given up on the SMR drives - nothing will make them behave. Yep, other people's experience may differ - I can only go by what I see, and what I see is that anywhere between 20 and 100GB of continuous writes is all they'll take before they slow to ~500KB/s write speeds. Leave them powered up for 24 hours with no further writes and they're *still* sitting at ~500KB/s. 48 hours? Still ~500KB/s. My patience ran out at that point. I won't be buying another SMR drive any time soon, and the 4 I have are sitting unused on a shelf. I think I'll end up throwing them out. The thing is that the SSD Optimizer didn't seem to help. After all, if I add them to a pool, the expectation is that I can do some balancing to spread the data around. What if one of my non-SMR drives fails and needs to be removed? The SMR drives will be the targets, and the same issue arises - it will take weeks (months?) to remove the failing drive because copying to the SMR drives takes so long. For my use case the SMR drives are totally unsuitable. I ended up spending the extra bucks (which wasn't all that much) and installing Toshiba X300 4TB drives.
  17. Dear fellow StableBit users, I've been using CloudDrive, DrivePool and Scanner for a while now and I am very happy with them. However, my mounted 10TB Google Drive once had an unsafe shutdown, and Windows kept nagging that it might be faulty/dirty. I tried chkdsk first, because I honestly didn't know Scanner also uses a form of chkdsk (if I'm not mistaken), but soon turned to Scanner. However, it's been 4 days since I started Scanner checking and repairing that drive. It says it's 50% finished, but it seems terribly slow. Does anyone have experience with checking a 10TB disk (~8TB used)? How long did it take? Presumably it will be finished in 4 more days. I have a 200/200 fiber connection. Thanks!! Regards,
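      For what it's worth, a rough back-of-envelope, assuming the check has to download every used sector: 8TB at a sustained 200 Mbps (~25 MB/s) is about 8,000,000 MB / 25 MB/s ≈ 320,000 s ≈ 3.7 days at the full line rate. With protocol and provider overhead, roughly 8 days for a full pass (so 4 more days from the 50% mark) is in the right ballpark.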
  18. I assume the 8TB is attached through USB. My advice:
      1. "Remove drive from pool" one of the 2TB non-system disks;
      2. Replace it with the actual 8TB HDD (I am always suspicious of USB-connected HDDs for longer periods);
      3. "Remove drive from pool" the data partition on the 2TB system disk;
      4. That HDD, really, replace it with an SSD. Not only does the system become way snappier, I would never serve the OS and data from the same HDD. Do a Server Recovery of the OS partition to that SSD (I always disconnect the other HDDs when I do a system recovery);
      5. "Remove drive from pool" another 2TB HDD, replace it with an 8TB HDD, and so on.
      Do you back up your data as well? Remember, WHS2011 does not support backups of _volumes_ in excess of 2TB, so if you want to back up an 8TB HDD with WHS2011 Server Backup, you MUST partition the HDD into 2TB partitions. This has a real performance cost. Consider moving to WSE2016. I just did. I fumbled a lot, but it is worth it IMO.
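      If you go the 2TB-partition route, a minimal diskpart sketch (run diskpart from an elevated prompt; the disk number and drive letters are examples, and "clean" wipes the selected disk, so pick it carefully):

          select disk 2
          clean
          convert gpt
          create partition primary size=2000000
          format fs=ntfs quick
          assign letter=E
          create partition primary size=2000000
          format fs=ntfs quick
          assign letter=F
          rem ...repeat until the space is used; size is in MB, kept just
          rem under 2TB to stay within the Server Backup volume limit.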
  19. Duplication just means two copies stored on different physical HDDs. All files are stored in plain NTFS format; that does not change with duplication. If you transferred the pooled HDDs to a system not running DP, you'd be able to read them. It might be a bit of a mess working out which files are the duplicates, though. I run x2 duplication for continuity and Server Backup for backup. So if you ever transfer to a machine not running DP (perhaps a Linux-based NAS?), I would consider first going back to x1 duplication. Hope this helps.
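      If the drives are already duplicated when you move them, one way to reconcile is to merge each drive's hidden PoolPart folder into a single target and skip anything already copied. A minimal sketch; the drive letters and PoolPart names are hypothetical placeholders:

          rem /E copies subfolders; /XC /XN /XO skip files that already
          rem exist at the destination, so the second copy of each
          rem duplicated file is ignored.
          robocopy "D:\PoolPart.1111" "E:\Merged" /E /XC /XN /XO
          robocopy "F:\PoolPart.2222" "E:\Merged" /E /XC /XN /XO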
  20. WHS upgrade time! As I squeeze the life out of my WHS home server, I need to upgrade drives and overall capacity! Everything is working great, so I thought I'd ride this system longer until I need to move the drives to a new server or NAS. Current system (HP EX490 with 4GB): 4 x 2TB internal drives, 1 x 4TB external drive, 10.9TB capacity / 1.82TB free (approx. 370GB per drive), WHS 2011 - 6.1 (Build 7601 Service Pack 1), StableBit DrivePool - 1.3.6.7585 (duplication turned off due to capacity limitations). To facilitate this process, I have attached an 8TB drive to the system to increase the pool size and allow other drives to be removed from the pool. (Once I get a couple of drives changed out, I'll remove this and use it internally.) 1) Is 'remove drive from pool' the most effective procedure for the 3 non-system drives? Or can I remove a (non-system) drive from the system, one at a time, copy the contents (pool folder) from the current 2TB drive to the new 8TB drive, and reinsert it in place of the 2TB? Or will the change in configuration affect anything? 2) How about the system drive? For this, I will use the 'remove drive from pool' option for the data partition of the drive. Then I remove the drive, drop in the new 8TB and initiate a server recovery, correct? 3) Once the OS is reinstalled and the system drive is reinserted, do I need to reinstall DrivePool? Anything else I need to do to re-establish the pool? Hopefully this is on the right track - I read through the DrivePool manual but wasn't sure if I was headed in the right direction and doing this most efficiently. Thanks for your information and advice! It's been 5-6 years since my last upgrade... gotta refresh my mind on some of this!
  21. While I've used DrivePool for quite a few years on my WHS system, I've never used duplication, mostly due to capacity limitations. Now that I am upgrading drives, it becomes a possibility. Otherwise, I back up my WHS system regularly, so any drive failure issues should be minimized if that happened. I'm wondering, though: the way it is now (unduplicated), I can take these drives out anytime (if needed) and access the files directly from my computer. Or, if I were to go a different direction, such as a NAS, these drives could be inserted and be up and running quite simply. What happens if my files are duplicated and I take them out of the WHS/DP environment? Do I have 2 of every file across the drives? How would I reconcile this? I guess I'm looking for a native file format that I can read from my computer, should the event arise, or the ability to move to another system without issues sorting out duplicate files across the drives. How does duplication work in this respect? Is it best to remain unduplicated with my regular backups?
  22. I've noticed that every single time I reboot, the pool remeasures every disk, which takes ~3 hours to complete. During this time, the pool is borderline unusable. Launching a Steam game installed on the pool may take 5 minutes or more, and then it stutters for a few seconds every 5 seconds or so (completely I/O starved). Even video playback can stutter; it seems the pool just has no leftover I/O to serve requests during the measuring. I understand that measuring has a high impact, but it doesn't seem like it should be this bad. It seems like outside I/O should have priority over the measuring. Secondly, I understand the pool shouldn't need to remeasure every reboot. Any way to troubleshoot why this is happening? It makes the machine pretty much unusable for hours after rebooting, which I need to do periodically for updates etc. The one thing I have that is a little unusual is that the pool is made entirely of ReFS disks. Is this possibly the culprit? Obviously moving back to NTFS would be quite an undertaking, though with Microsoft trying to drop ReFS support from the desktop versions of Win 10 it's certainly something I'm open to. Disks in the pool: 1 x 8TB WD Red, 2 x 4TB WD Red, 1 x 4TB WD Black. EDIT: I forgot to add that once the measuring finishes, performance settles down and becomes pretty reasonable, comparable to a normal disk. So I've been trying to avoid reboots as much as possible, so as not to render the machine unusable for hours at a time. I've also completely disabled the Windows Search service (indexing), as I have no need for it and know it can cause performance impacts.
  23. kpate77

    Data not showing in Hierarchical Pool

    It seems to be working now. Thank you both for your help!
  24. Hey everyone, I decided to bite the bullet and purchase CloudDrive and a G Suite Business account, as I am running out of local HDD space on my home PC/Plex server. I would like some help optimizing my setup to achieve: seamless uploading of media to Google Drive, uninterrupted media streaming, and the ability to have multiple users (a maximum of 10) stream from my Plex. Some questions: - Are my setup and settings below optimal to achieve the goals above? - When downloading a torrent, should it be directed straight to the GDrive, or should it be saved locally and then transferred to GDrive? - How can seeding be achieved with this setup? PC specs: Intel i7 4770K, 16GB DDR3 2400MHz RAM, 256GB Samsung Evo SSD, 6TB WD Red, download 1Gbps, upload 1Gbps. Setup: CloudDrive + Google Drive + Plex + qBittorrent. Based on some threads about similar setups, I am using the following CloudDrive and Google Drive settings: Cloud Drive Size: 256TB, Local Cache Size: 100GB, Cache Type: Expandable, Full Drive Encryption, Local Cache Drive: C:\, Storage Chunk Size: 20MB, Chunk Cache Size: 100MB, File System: NTFS, Cluster Size: 64KB, Sector Size: 4.00KB, Download Threads: 10, Upload Threads: 5, Maximum Download Size: 20MB, Prefetch Trigger: 1MB, Prefetch Forward: 300MB, Prefetch Time Window: 30 sec.
  25. Umfriend

    Data not showing in Hierarchical Pool

    It is actually easy. Let's start with your starting position: 1. You had a pool of local HDDs, say Pool A (or Local). On these HDDs you have a hidden PoolPart.* folder in which the data of Pool A is stored. 2. You created a CloudDrive (and possibly added it to a new pool, Pool B (or CloudPool); I'll assume that). 3. You created a new Pool C (or HybridPool) and added Pool A and Pool B to it. Now on the HDDs of Pool A there will be another hidden PoolPart.* folder (say level 2) _within_ the earlier PoolPart.* folder (level 1). Anything stored in level 1 is in Pool A ONLY. When you look at Pool C, the data in level 1 will show up as Other. This is in fact very similar to storing data on an HDD of Pool A in the root folder or otherwise outside of the level-1 PoolPart.* folder: it will show up in Pool A as Other. So, for each HDD in Pool A, _move_ the contents of the level-1 folder into the level-2 folder (stopping the DrivePool service first and restarting it when done), as sketched below. Then that data is in Pool C, unduplicated, and the re-measure pass will duplicate Pool C (by storing a duplicate in Pool B). There is no need for file placement rules (assuming you have x2 for everything in the pool). And you cannot have files only on Local _and_ have x2 duplication.
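    A minimal sketch of that move for one drive; the PoolPart folder names and the service name are illustrative assumptions (check the actual hidden folder names on your drives, and services.msc for the exact service name):

        rem Stop the DrivePool service before moving files between PoolParts.
        net stop "StableBit DrivePool Service"

        rem Move everything from the level-1 PoolPart into the level-2
        rem PoolPart, excluding the level-2 folder itself (names are examples).
        robocopy "D:\PoolPart.AAAA" "D:\PoolPart.AAAA\PoolPart.BBBB" /E /MOVE /XD "D:\PoolPart.AAAA\PoolPart.BBBB"

        net start "StableBit DrivePool Service"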
  26. kpate77

    Data not showing in Hierarchical Pool

    I've been reading more in the forums and I think I have a better understanding now. I think I should seed the Hybrid pool using the following method: http://wiki.covecube.com/StableBit_DrivePool_Q4142489 Then, on the Hybrid pool, set file placement to the Local pool only, and choose x2 duplication on the Hybrid pool. The Hybrid pool will be the new pool that I read from and write to going forward. I've tested this on a couple of files and it seems to accomplish what I am looking to do.
  27. kpate77

    Data not showing in Hierarchical Pool

    Thanks for your reply! I get what you are saying, and that is the behavior I am experiencing. However, I am trying to accomplish what is written in the following blog post: http://blog.covecube.com/2017/09/stablebit-drivepool-2-2-0-847-beta/ Basically: Hybrid Pool = Local Pool + Cloud Pool, with x2 duplication from Local to Cloud. This is what is explained in the above blog post. However, when I create the above, there is nothing in my Hybrid pool to duplicate. How do I accomplish what is in the blog post (assuming functionality hasn't changed from 2.2.0 to 2.2.2)?
  28. srcrist

    Optimal settings for Plex

    Are you using Windows Server by any chance? I think that's a Windows thing, if you are. I don't believe Windows Server initializes new drives automatically. You can do it in a matter of seconds from the Disk Management control panel. Once the disk is initialized it will show up as normal from that point on. I'm sure Christopher and Alex can get you sorted out via support, in any case.
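    For reference, the diskpart equivalent of that initialization (the disk number is an example; double-check it with "list disk" first, since these commands modify the selected disk):

        rem Run diskpart from an elevated prompt, then:
        list disk
        select disk 3
        online disk noerr
        attributes disk clear readonly
        convert gpt

    After that, the drive can be partitioned and formatted as usual, and it will show up normally from then on.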