Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 05/27/13 in all areas

  1. 4 points
    VERY IMPRESSED!
    - Didn't need to create an account and password
    - Same activation code covers EVERY product on EVERY computer!
    - Payment information remembered so additional licenses are purchased easily
    - Nice bundle and multi-license discount
    I'm in love with the Drive Pool and Scanner. Thanks for a great product and a great buying experience. -Scott
  2. 2 points
    Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives, in case you need to recover unduplicated files. I know dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway).
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned').

    I have this PowerShell script set to run each day at 3am (see the scheduled-task sketch at the end of this post), and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit.

    Here is a screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory (as an example):

    Here is the code I used (it's also attached in the .zip file):

        # This saves the full listing of files in DrivePool
        $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

        # This creates an empty table to store details of the files
        $filelist = @()

        # This goes through each file, and populates the table with the drive name, file name and directory name
        # (the Substring offsets are specific to my mount path lengths - adjust them to match your own C:\DrivePool\{disk name} paths)
        foreach ($file in $files) {
            $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
        }

        # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
        $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn

    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
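    If you want to schedule the script the same way, here is a minimal sketch using the built-in ScheduledTasks cmdlets. The script path and task name below are just examples, not part of Quinn's original setup:

        # Register a task that runs the catalog script every day at 3:00 AM
        # (C:\Scripts\DPFileList.ps1 is a placeholder - point it at wherever you saved the script)
        $action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy RemoteSigned -File C:\Scripts\DPFileList.ps1"
        $trigger = New-ScheduledTaskTrigger -Daily -At 3am
        Register-ScheduledTask -TaskName "DrivePool File Catalog" -Action $action -Trigger $trigger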
  3. 2 points
    You could do it with a combination of a VPN, DrivePool pool(s), and CloudDrive using file share(s). Here's how I think it could work:
    - The VPN connects all computers on the same local net.
    - Each computer has a Pool to hold data, and the Pool drive is shared so the local net can access it (a sketch of sharing a pool drive is at the end of this post).
    - CloudDrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool.
    - Each local Pool can have duplication enabled, ensuring each local CloudDrive folder is duplicated locally X times.
    - The file shares in CloudDrive are added to a new DrivePool Pool, essentially combining all of the remote computer storage you provisioned into one large volume.
    Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with DrivePool installed and a single copy of CloudDrive to pull it off. Essentially wide-area storage.
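    For the sharing step, a minimal sketch with the built-in SmbShare cmdlets. The drive letters, share name, and computer name here are placeholders, not part of the scheme above:

        # On a machine hosting a pool: share the pool drive over the VPN's local net
        # (P: and "PoolShare" are example values)
        New-SmbShare -Name "PoolShare" -Path "P:\" -FullAccess "Everyone"

        # On the machine running CloudDrive: map the remote pool so it can be used as a file share target
        # ("server1" is an example computer name)
        New-SmbMapping -LocalPath "S:" -RemotePath "\\server1\PoolShare"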
  4. 2 points
    It's $10 off the normal price for a product you don't already own, but $15/each for products that you do.
  5. 2 points
    "dpcmd remeasure-pool x:"
  6. 2 points
    "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
  7. 2 points
    I'm using Windows Server 2016 Datacenter (GUI version, newest updates) on a dual-socket system in combination with CloudDrive (newest version). The only problem I had was connecting to the cloud service with Internet Explorer. Using a 3rd-party browser solved this. But I'm always using ReFS instead of NTFS...
  8. 2 points
    wolfhammer

    not authorized, must reauthorize

    I need to do this daily. Is there a way to auto-authorize? Otherwise I can't really use this app.
  9. 2 points
    HellDiverUK

    Build Advice Needed

    Ah yes, I meant to mention BlueIris. I run it at my mother-in-law's house on an old Dell T20 that I upgraded from its G3220 to an E3-1275v3. It's running a basic install of Windows 10 Pro. I'm using QuickSync to decode the video coming from my 3 HikVision cameras. Before I used QS, it was sitting at about 60% CPU use. With QS I'm seeing 16% CPU at the moment, and also a 10% saving on power consumption. I have 3 HikVision cameras, two 4MP and one 5MP, all running at their maximum resolution. I record 24/7 on to an 8TB WD Purple drive, with events turned on. QuickSync also seems to be used for transcoding video that's accessed by the BlueIris app (I can highly recommend the app; it's basically the only way we access the system apart from some admin on the server's console). Considering QuickSync has improved greatly in recent CPUs (basically Skylake or newer), you should have no problems with an i7-8700K. I get great performance from a creaky old Haswell.
  10. 2 points
    Surface scans are disabled for CloudDrive disks by default. But file system scans are not (as they can be helpful). You can disable this per disk, in the "Disk Settings" option. As for the length, that depends on the disk. And no, there isn't really a way to speed this up.
  11. 2 points
    Umfriend

    Recommended server backup method?

    Sure. So DP supports pool hierarchies, i.e., a Pool can act like it is a HDD that is part of another Pool. This was done especially for me. Just kidding. It was done to make DP and CloudDrive (CD) work together well (but it helps me too).

    In the CD case, suppose you have two HDDs that are pooled and you use x2 duplication. You also add a CD to that Pool. What you *want* is one duplicate on either HDD and the other duplicate on the CD. But there is no guarantee it will be that way. Both duplicates could end up on one of the HDDs. Lose the system and you lose all, as there is no duplicate on CD.

    To solve this, add both HDDs to Pool A. This Pool is not duplicated. You also have CD (or another Pool of a number of HDDs) and create unduplicated Pool B with that. If you then create a duplicated Pool C by adding Pool A and Pool B, then DP, through Pool C, will ensure that one duplicate ends up at (HDDs in) Pool A and the other duplicate ends up at Pool B. This is because DP will, for the purpose of Pool C, view Pool A and Pool B as single HDDs, and DP ensures that duplicates are not stored on the same "HDD". Next, for backup purposes, you would back up the underlying HDDs of Pool A, and you would be backing up only one duplicate and still be certain you have all files.

    Edit: In my case, this allows me to back up a single 4TB HDD (partitioned into 2 x 2TB partitions) in WHS2011 (which only supports backups of volumes/partitions up to 2TB) and still have this duplicated with another 4TB HDD. So, I have:
    - Pool A: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
    - Pool B: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
    - Pool C: Pool A + Pool B, duplicated
    So, every file in Pool C is written to Pool A and Pool B. It is therefore on both 4TB HDDs that are in the respective Pools A and B. Next, I back up both partitions of either HDD, and I have only one backup with the guarantee of having one copy of each file included in the backup.
  12. 1 point
    Jaga

    DrivePool + Primocache

    I recently found out these two products were compatible, so I wanted to check performance characteristics of a pool with a cache assigned to its underlying drives. Pleasantly, I found there was a huge increase in pool drive throughput using Primocache and a good-sized Level-1 RAM cache. This pool uses a simple configuration: 3 WD 4TB Reds with 64KB block size (both volume and DrivePool). Here are the raw tests on the DrivePool volume, without any caching going on yet:

    After configuring and enabling a sizable Level-1 read/write cache in Primocache on the actual drives (Z:, Y: and X:), I re-ran the test on the DrivePool volume and got these results:

    As you can see, not only do both pieces of software work well with each other, the speed increase on all DrivePool operations (the D: in the benchmarks was my DrivePool letter) was vastly greater. For anyone looking to speed up their pool, Primocache is a viable and effective means of doing so. It would even work well with the SSD Cache feature in DrivePool - simply cache the SSD with Primocache, and boost read (and write, if you use a UPS) speeds. Network speeds are, of course, still limited by bandwidth, but any local pool operations will run much, much faster. I can also verify this setup works well with SnapRAID, especially if you also cache the Parity drive(s). I honestly wasn't certain if this was going to work when I started thinking about it, but I'm very pleased with the results. If anyone else would like to give it a spin, Primocache has a 60-day trial on their software.
  13. 1 point
    MrBlond

    3TB Drive showing as 115 PB

    Thanks for the info. I had purchased two of the Orico enclosures so have just swapped the unit and disk over and the new one is reporting correctly. I have just emailed the supplier (via Amazon) to ask for a replacement. Thanks a lot for your help on this. Much appreciated. BTW, Windows was reporting correctly and I had tried the "DoNotCorrectSize" option.
  14. 1 point
    Alex thinks he knows what is going on here, and has a planned "code change" that should help with this.
  15. 1 point
    Jaga

    My first true rant with Drivepool.

    I haven't messed with the server implementation of ReFS, though I assumed it used the same core. I ditched it ~2 years ago after having some issues working on the drives with utilities. Just wasn't worth the headache. I never had actual problems with data on the volume, but I just felt unsafe being that "out there" without utilities I normally relied on. When the utilities catch back up, I'd say it's probably safe to go with it for a home enthusiast. Just my .02 - I'm not a ReFS expert.

    Shucking has positives and negatives, to be sure. There's one 8TB drive widely available in the US that normally retails for $300, and is on sale regularly for $169. For a reduction in warranty (knowing it's the same exact hardware in the case), I'm more than happy to save 44% per drive if all I need to do is shuck it. They usually die at the beginning or end of their lifespan anyway, so you know fairly early on if it's going to have issues. That's my plan for the new array this July/Aug - shuck 6-10 drives and put them through their paces early, in case any are weak.

    No need to RAID them just for SnapRAID's parity. It fully supports split parity across smaller drives - you can have a single "parity set" on multiple drives. You just have to configure it using commas in the parity list in the config (see the snapraid.conf sketch at the end of this post); there's documentation showing how to do it. I am also doing that with my old 4TB WD Reds when I add new 8TB data drives. I'll split parity across 2 Reds, so that my 4 total Reds cover the necessary 2 parity "drives". It'll save me having to fork out for another 2 8TB's, which is great.
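    For reference, a minimal snapraid.conf sketch of what that comma-separated parity line looks like. The drive letters and paths below are placeholders, not the poster's actual layout:

        # One parity "drive" split across two smaller disks (E: and F: stand in for the two 4TB Reds)
        parity E:\snapraid\snapraid.parity,F:\snapraid\snapraid.parity
        content C:\snapraid\snapraid.content
        data d1 D:\
        data d2 G:\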
  16. 1 point
    JscoLP

    [2.2.0.906] Disk Space Equalizer

    Yep, that did it. I'm on build 920; I reset all settings, then set balancing as you said. Overnight it reached about 99% balanced (not exactly perfect), but close enough! Thank you!!
  17. 1 point
    Yep. I'm not really a command-line type of guy, but this really helped: http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html
  18. 1 point
    I just want to say: it's gone 12 hours thus far after updating to the recent build, and it hasn't yelled at me to reauthorize GDrive. I'll let you know if anything changes. Thanks Drashna!
  19. 1 point
    Nope, Terry nuked the site, and for a while configured things in a way that nuked cached copies too. But here you go: https://web.archive.org/web/20150508105738/http://forum.wegotserved.com:80/index.php/topic/8335-before-you-post-media-stuttering-playback-issues-performance-irregularities/ And here are the highlights:
  20. 1 point
    Christopher (Drashna)

    Windows 10 support etc

    Did you see yet? 2.2.0.881 was released as a public beta last night. Barring any major issues, we should have a public release version in a couple of weeks.
  21. 1 point
    Christopher (Drashna)

    Google Team Drive?

    I didn't see that limit! And yeah, even with large chunks (100MB), that would be 9.5TBs MAX.
  22. 1 point
    I was away all weekend, so sorry for not posting. I'm not sure; I checked and I'm still running .951, so I doubt it's related. The issues seem to have gone away. I can still see it struggle during the general 7-10pm window, but it's not enough to force a dismount of the drive. This mostly leads me to believe it was some sort of issue at Google, but it does disappoint me that we never managed to figure out exactly what the cause was. I will get back to this if there are any more issues, and if you or anyone else is having a similar issue, please do so also.
  23. 1 point
    Thanks Umfriend. That was the last bit I needed to know before giving this a go.
  24. 1 point
    Yeah, there will definitely be some adjustment to the new forum software. There is a lot more functionality, but a lot of it is "under the hood" stuff, or will take a bit to get used to. Also, there are probably settings on our end that will require tweaking to get "just right", as well.
  25. 1 point
    The "cloud used" amount refers to the space used on your provider, not the amount of data stored on the drive. When you delete something on your CloudDrive drive, it doesn't remove the drive architecture that has already been uploaded from from the provider. So, if it's a 500GB drive and you've already created and uploaded 500GB worth of chunks to the provider, those will remain, just like a real drive, to be used (overwritten) later. This is why you can use recovery software to recover data on your CloudDrive drive just like a physical hard drive. If you want to remove the additional space, you'll need to shrink the drive, which you can do from the "manage drive" options.
