Covecube Inc.



Everything posted by t-s

  1. Just to add something relevant... I tested the same setup on Windows Server and Win 10 1709 and the problem is already present. As I wrote in my first post, the deduplication changed with the (publicly released) version 1709, so it was easy to guess that the problem with DrivePool started there. I hadn't used those versions myself, so I couldn't confirm it in my first post. Now I have spent some time installing Windows Server 1709 on my test machine and I can confirm the problem. So there isn't any excuse anymore: DP on deduplicated disks has been broken for more than a year!!!
  2. Try to share the same disk/folder via FTP. If the network speed via FTP jumps to decent levels (as I guess it will), the virtual NIC has nothing to do with your slowness. As a further test you may want to check the SMB speed from an Android device: just use ES File Explorer, AndSMB (or something like that) to access your Windows CIFS share and see if the speed is slow there too. Given the ever-changing SMB/CIFS protocol, the problem may reside in password encryption, the lack of SMB1 compatibility, or other novelties introduced in the latest Windows releases.
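To make the comparison in the post above less subjective, here is a small sketch that times sequential reads from any mounted path; the helper name and block size are my own, not from the post. Point it at a file on the SMB share, the same file fetched via FTP, and a local copy, and compare the numbers.

```python
import os
import time

def read_throughput_mb_s(path, block_size=1024 * 1024):
    """Read a file sequentially and return throughput in MB/s.

    Point `path` at a file on the SMB share, an FTP-fetched copy,
    or a local disk to compare raw read speeds between them.
    """
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass  # discard the data; we only care about elapsed time
    elapsed = time.perf_counter() - start
    return (size / (1024 * 1024)) / elapsed if elapsed > 0 else float("inf")
```

Run it a few times and ignore the first result, since the OS cache will skew repeated reads of the same file.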
  3. Whatever; that sentence wouldn't fix the fact that you didn't read what I wrote for 4/5 messages... :D
  4. You still don't get the point. Windows Storage Server 2008 R2 Essentials is a 2011 server. Actually it was released AFTER WHS 2011, and it is a very small variation of it. When I wrote "almost unknown" I was serious; you are further evidence of that.
  5. It's pretty clear why 2008 R2 is called 2008 R2; what's amazing is releasing the 4 Colorado brothers (WHS, SBS, MultiPoint and WSSe), calling 3 of them 2011 (as they should be) and just one 2008 R2. Especially taking into account that while SBS and MultiPoint are different products, WHS and WSSE are almost exactly the same thing. You can even make WSS look exactly like WHS (and vice versa) by replacing just a single DLL.
  6. So? Are you really saying that after a user discovered a bug that makes your product unusable (for the average user), on an already released product, you refuse to test it just because a publicly available link was removed from a single MS page? O_o That's masochism. I spent hours trying to bisect the problem (and I shared my findings), I spent more time collecting the links you need to test YOUR product safely, and you just want to sit and wait for an MS webmaster to re-add that link to that page? I'm sorry, but that sounds a bit incredible to me. Taking any rule just like a
  7. Maybe this is still written somewhere in the EULA, but in practice it isn't true anymore. Once you get your digital entitlement, moving it to new HW through the MS account is possible w/o much hassle. Remember that using W10 is more a favor you're doing to MS than vice versa, so MS smooths the process in every way it can. LTSB/LTSC are practically the new Win 7 (but given the reduced spyware and the lack of the store, MS wants real money for them, just like for older OSes.)
  8. The purge issue is already fixed by the update KB4464330 (which moves the build from 17763.1 to 17763.55); obviously it is the same for Win Server 2019, Win 10 1809 and Win Server 1809. So the EVAL ISOs (which are still there) can be safely downloaded and upgraded (especially on a test machine where you have nothing to lose). Eval Standard/Datacenter (English US) https://software-download.microsoft.com/download/pr/17763.1.180914-1434.rs5_release_SERVER_EVAL_X64FRE_EN-US.ISO And the mentioned cumulative update http://download.windowsupdate.com/d/msdownload/update/s
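A quick way to check whether an installed build already includes the fix is to compare the "major.revision" build string numerically. The version numbers below come from the post above; the helper itself is just an illustrative sketch, not something the post provides.

```python
FIXED_BUILD = (17763, 55)  # build level after installing KB4464330

def has_purge_fix(build_string):
    """Return True if a 'major.revision' build string is at or past 17763.55."""
    major, revision = (int(part) for part in build_string.split("."))
    # Tuple comparison handles e.g. 17763.107 > 17763.55 correctly,
    # which a plain string comparison would get wrong.
    return (major, revision) >= FIXED_BUILD
```

On Windows the current build string can be read from `winver` or `[System.Environment]::OSVersion`; feed whatever it reports into the helper.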
  9. I mean not officially. W10 Pro is "free" in the same sense as W10 Home. You can buy it, and surely MS is happy if you do, but you can also upgrade from Win 7 and you will get a digital license for that machine. Upgrade from Win 7 Home and you will get a W10 Home license. Upgrade from Win 7 Pro and you will get a W10 Pro digital license. No matter if Win 7 was bought or not. MS silently backs this operation because it is desperately trying to switch from being a razor vendor to a blade vendor. The more W10 installs there are, the more chances that applications and multimedia
  10. Well, RAM and storage space are just like money, there is never too much, but frankly calling 64GB "a limit" is a bit exaggerated. A 64-bit machine takes on average 1.5/2GB of RAM (which is managed dynamically by Hyper-V anyway); for personal usage 8GB of RAM is more than enough to run the host system and a couple of 64-bit guests, unless you need to run SETI@home in one of those machines. I never installed W10 Home in my life, I didn't see the point of running any Windows Home when they were paid products, and even less now that Pro and Home are both "free"... (LTSB2015/LTSB2016 a
  11. Obviously the underlying disks; like I said, I have used both the deduplication and DP since the days of Win8/Server 2012, so I know how to deal with it; I'm anything but a newbie on that matter. See the previous reply. (Anyway, as you know, DP has that option disabled by default in recent releases.) 2019 is already available; there are the 3-month evaluation ISOs, but you can also install the deduplication cabs on any W10 x64 1809 (obviously same story as Server 2019 and Server 1809, tested personally). P.S. I know that the resources are always limited but the prerelease builds
  12. The Hyper-V console is present in any W8+ installation (even x86 ones); just enable it from Programs and Features, then connect to a remote one, no matter if the HV host has the GUI or not, no matter if paid or free (Hyper-V Server). If you need to manage Server 2008[R2] from Win8+ or vice versa, there's a [partly] free SW called 5nine which makes remote administration possible across different generations of Windows. Other server bits can be managed graphically via RSAT (Remote Server Administration Tools), a package available for free for any Windows client OS (except W7/
  13. Yepp, that's an option, but there are many variants limited just by your skill and imagination. My old setup was WHS 2011, or its almost identical brother Windows Storage Server 2008 R2 Essentials (the latter is better and it's still supported today), installed on a native VHD (booted directly from the bare metal); then I had Hyper-V installed on it (I made the packages for it). Inside Hyper-V I had a minimal recent machine, Server Core 2008/2012/2016, with the large storage (and DrivePool) installed on it; the Core VM accessed the real, physical (large) HDDs, which were deduplicated by their native f
  14. Sorry, but what's the point of having 3.5 TB of data stored locally when you have a Home Server? Usually you have from 20 to 60 GB dedicated to the OS and its main programs (Office, Photoshop, whatever), perfect for a small SSD or a VHD, then the essential data you need locally on a mechanical HDD, which is rarely more than a further 100GB or so; then everything else is stored safely on a Server/NAS/Cloud and accessed when you need it. But even when needed, 3.5TB of data are better managed by a sync SW than by a traditional backup. A traditional backup SW is a perfect fit to
  15. After such a nice thread it would be nice if @Alex , @Christopher (Drashna) or someone else from Covecube confirmed they read it and are aware of the problem.
  16. It's the same as a real HDD, just you don't need a screwdriver to move it around the world. Yepp, native MS VHDs can be mounted with a double-click (and dismounted through the Eject menu, just like a DVD). Copying a 20/25GB HDD over a 1Gb network can be faster than a traditional incremental backup, depending on your scenario. Besides, the two approaches aren't mutually exclusive: you can put your whole server folder on a VHD, and that can be local or remote, say on a NAS or a different PC/Server. Unlike a traditional mapped drive, a remote VHD is seen locally
  17. That's very common, but not the only scenario; you could put all your data in a single VHD, then use it as a data disk for your OS, no matter if your OS is traditional or virtualized. What matters is the path: if you install a program (say in D:\acrobat) and that path is available/reachable, the program will run no matter if D:\ is a whole real disk, a partition inside it, a whole VHD or a partition inside it (you can even nest VHDs for particular purposes). NTFS is a filesystem; it's used to format a partition, no matter if it's on a real or virtual disk. Your quest
  18. No, I don't have any specific source; there are a number of posts on Microsoft blogs you will find easily with a simple Google search. No, photos and videos are absolutely the worst candidates: they are already compressed and they lack any similarity. Maybe RAW uncompressed photos may be worth deduplicating. The best candidates are ISO, VHD, VMDK, and work folders like, say, multiple revisions of Android ROMs. Think of the Win Server 2019 ISO and the Win Server 2019 Essentials ISO: they are 4.4GB + 4.17 GB. On a standard disk they will take 8.57 GB, on a dedupli
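The ISO example above works out roughly like this. The two sizes come from the post; the 50% overlap figure is purely an assumption for illustration, since the real saving depends on how many chunks the two ISOs actually share.

```python
iso_a = 4.40  # Win Server 2019 ISO, GB (from the post)
iso_b = 4.17  # Win Server 2019 Essentials ISO, GB (from the post)

plain = iso_a + iso_b  # ~8.57 GB without deduplication

# Assumption: half of the smaller ISO's chunks also occur in the larger one.
shared = 0.5 * min(iso_a, iso_b)
deduplicated = plain - shared  # shared chunks are stored only once

print(f"plain: {plain:.2f} GB, deduplicated: ~{deduplicated:.2f} GB")
```

Two releases built from the same codebase can share far more than half their chunks, so the real figure could easily be better than this sketch.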
  19. There are a number of ways to deal with that. My own best solution is to mount a VHDX, then use VMware, telling it that it is a physical drive, so you can mix whatever VM from whatever origin, move the data between them and so on. Also you can install ESXi or Hyper-V (or both of them) inside VMware and nest their respective virtual machines (not a good task for a low-power/low-RAM machine, but definitely feasible). Then my favorite one, given almost no one is aware it's possible: you can run Hyper-V and VMware (at the same time) on the same machine if the virtual machines
  20. It's indeed a form of block deduplication, but it is built on top of NTFS, not under it, just like DP. And no, it is not strictly a form of compression. MS deduplication can also compress the files, but that's an additional and optional feature. Any file is written to the disk in chunks; with deduplication enabled, the chunks common to two or more files are stored once and referenced multiple times, so there isn't any major overhead. At least if you don't enable the compression. But even with the compression enabled, the performance is more than good enough for a storage disk.
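The chunk-sharing idea described above can be sketched in a few lines. This is a toy fixed-size chunk store to show the principle only; the real MS implementation uses variable-size chunking, KB-sized chunks and reparse points, none of which appear here.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real chunks are KB-sized and variable

class ChunkStore:
    """Store files as lists of chunk hashes; identical chunks are kept once."""

    def __init__(self):
        self.chunks = {}  # hash -> chunk bytes (each unique chunk stored once)
        self.files = {}   # file name -> ordered list of chunk hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedup: keep first copy only
            hashes.append(digest)
        self.files[name] = hashes

    def read(self, name):
        # Reassemble the file from its chunk list; no data was lost.
        return b"".join(self.chunks[h] for h in self.files[name])

    def stored_bytes(self):
        return sum(len(c) for c in self.chunks.values())
```

Writing two files that share most of their chunks makes `stored_bytes()` come out well below the sum of the file sizes, while `read()` still returns each file byte-for-byte, which is why there is no major overhead without compression.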
  21. What about testing the current version http://dl.covecube.com/DrivePoolWindows/release/download/StableBit.DrivePool_2.2.2.934_x64_Release.exe or even the latest beta? http://dl.covecube.com/DrivePoolWindows/beta/download/ That DP build comes from March, the same time frame as W10 1803...
  22. The almost unknown "Windows Storage Server 2008 R2 Essentials" is exactly the same as WHS 2011, apart from the blue shade instead of green, 25 users instead of 10, and the fact that it can (optionally) join a domain. It's still supported and, thanks to my packages (shared elsewhere), it can run the Media Center; it can act as a domain controller and can run Hyper-V virtual machines. For unknown reasons it is called 2008 R2 when it's a 2011 release, just like WHS 2011, MultiPoint Server 2011 and SBS 2011. For unknown reasons it is called (internally) "Home Server Basic" while his brothe
  23. That's pointless. Just add your original drive to the pool, then MOVE, not COPY, your data into the (hidden) GUID folder created by DrivePool; then add more drives to your pool and do the same as above (in case the additional drives are not empty); then configure the duplication/spanning options in DP and let DP duplicate/move your data in the background across your drives. No need to copy a huge amount of data on top of the same drive, no space constraints, no need for a temporary landing storage, no huge waste of time and unneeded drive wear.
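The move-into-the-pool step above can be sketched like this. The `PoolPart.*` folder name follows DrivePool's convention for its hidden per-drive folder, but treat this as an illustration of the idea (a same-disk move is a rename, so no data is copied), not as a supported migration tool; always check DrivePool's own documentation before touching the hidden folders.

```python
import os
import shutil

def move_into_poolpart(drive_root, poolpart_name):
    """Move everything at the drive root into the (hidden) PoolPart folder.

    Because source and destination are on the same disk, each move is a
    cheap rename, which is why this beats copying data between drives.
    """
    poolpart = os.path.join(drive_root, poolpart_name)
    os.makedirs(poolpart, exist_ok=True)
    for entry in os.listdir(drive_root):
        if entry == poolpart_name:
            continue  # don't move the pool folder into itself
        shutil.move(os.path.join(drive_root, entry),
                    os.path.join(poolpart, entry))
```

Repeat per drive (each drive gets its own `PoolPart.<guid>` folder), then let DP rebalance/duplicate in the background.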
  24. A map in Paint? O_o They are two disks deduplicated with the MS deduplication and made resilient by DrivePool. I understand that inexperienced people will get confused by the similarity of the terms used: duplication/deduplication, by very different SW. It sounds like a joke, but it isn't!!! MS deduplication can halve or even decimate the storage usage... Think of an office with 10 client PCs, each started from the same installation. Their VHDXs could be 40 GB each, so they are 400GB in theory. In practice 90% of the data are the same for each PC, so your 400G
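The office example works out like this. The 10 clients, 40 GB per VHDX and 90% overlap figures come from the post; the arithmetic of splitting each image into a shared base plus per-client differences is my own simplification.

```python
clients = 10
vhdx_gb = 40
overlap = 0.90  # fraction of each VHDX shared with the common base image

logical_gb = clients * vhdx_gb                 # 400 GB as the clients see it
shared_gb = vhdx_gb * overlap                  # common chunks, stored once
unique_gb = clients * vhdx_gb * (1 - overlap)  # per-client differences
physical_gb = shared_gb + unique_gb            # what actually lands on disk

print(f"logical: {logical_gb} GB, physical: ~{physical_gb:.0f} GB")
```

Under these assumptions the 400 GB of logical data shrinks to roughly 76 GB on disk, which is the "halve or even decimate" effect the post describes.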