Posts posted by t-s

  1. Just to add something relevant...

    I tested the same setup on Windows Server and Windows 10 1709, and the problem is already present there.

    As I wrote in my first post, deduplication changed starting with the (publicly released) version 1709, so it was easy to guess that the problem with DrivePool started there.

    I hadn't used those versions myself, so I couldn't confirm the problem in my first post.

    Now I have spent some time installing Windows Server 1709 on my test machine and I can confirm the problem.

    So there isn't any excuse anymore: DP on deduplicated disks has been broken for more than a year!

    I guess no one noticed because Server 1709 (and 1803) comes only in GUI-less flavours and very few SOHO users rely on them.

     

     

  2. Try sharing the same disk/folder via FTP: if the network speed over FTP jumps to decent levels (as I expect), the virtual NIC has nothing to do with your slowness.

    As a further test you may want to check the SMB speed from an Android device. Just use ES File Explorer, AndSMB (or something like that) to access your Windows CIFS share and see whether the speed is slow there as well.

    Given the ever-changing SMB/CIFS protocol, the problem may lie in password encryption, the lack of SMB1 compatibility, or other novelties introduced in the latest Windows releases.

  3. On 10/16/2018 at 6:20 PM, Umfriend said:

    2008R2 is called 2008R2 as it was an update of 2008, not an upgrade (and released in 2009). The WHS "variant" was released in 2011.

    It's pretty clear why 2008 R2 is called 2008 R2; what's amazing is releasing four Colorado siblings (WHS, SBS, MultiPoint and WSSe) and calling three of them 2011 (as they should be) and just one of them 2008 R2.

     

    Especially taking into account that, while SBS and MultiPoint are different products, WHS and WSSE are almost exactly the same thing.

    You can even make WSSE look exactly like WHS (and vice versa) by replacing just a single DLL.

  4. 45 minutes ago, Christopher (Drashna) said:

    Unfortunately, the eval center still doesn't give a link to the ISO.And while the ISO may still be at the link you've posted, it's not a publically available one. 

     

    So? You are really saying that after a user discovered a bug that makes your product unusable (for the average user), on an already released OS, you refuse to test it just because a publicly available link was removed from a single MS page? O_o

    That's masochism.

    I spent hours trying to bisect the problem (and I shared my findings), I spent more time collecting the links you need to safely test YOUR product, and you just want to sit and wait for an MS webmaster to re-add that link to that page?

    I'm sorry, but that sounds a bit incredible to me.

    Quote

    Running it on any system that it is not included with already, or LICENSED for is a violation of the licensing, to start with.  Not to mention, it's not designed nor testing on these other configurations, and may cause issues. 

    Taking any rule as literally as a Jehovah's Witness takes the Bible has never been a good idea.

    W10 and Win Server are the same OS (assuming we are talking about the same build, and we are). A few things are allowed or disallowed by kernel policies (e.g. the maximum RAM available, or the ability to launch DCpromo.exe); other things are just added or removed packages (like the dedup packages). In this specific case there is absolutely no difference between Win Server 2019 and W10 1809.
    There isn't a single hacked/changed bit, there isn't any circumvention of a software protection, there isn't any safety risk.


    From a legal point of view you are right: theoretically no one would be allowed to use even Notepad.exe taken from Win 7 on Win 8, but you really should be a bit flexible about that (just as MS has been since the DOS days).

    That's not the case here (given the server ISO is there), but assuming you were going to test your product on W10 + the "hacked" packages, what would MS's position on that be?

    Do you think they would feel you're "stealing" something? Or would they consider it something meant to improve the OS's usefulness and the user experience?

  5. 54 minutes ago, Jaga said:

    MS silently backs this because the license is a one-time, non-transferable license good for use on that hardware build only.  When the CPU/Motherboard/etc (any major component) needs to be replaced, the license becomes invalid and you are forced to purchase a new one at full price, because "the machine changed".

    Maybe this is still written somewhere in the EULA, but in practice it isn't true anymore.

    Once you get your digital entitlement, moving it to new HW through your MS account is possible without much hassle.

    Remember that using W10 is more a favor you're doing for MS than vice versa, so MS eases the process in every way it can.

     

    37 minutes ago, Christopher (Drashna) said:

    As long as your hardware isn't using Skylake or newer CPUs.  
    And AMD is doing the same thing (dropping support for Win7). 

    So .... yeah....

    LTSB/LTSC is practically the new Win 7 (but given the reduced spyware and the lack of the Store, MS wants real money for it, just as for older OSes).

  6. 7 minutes ago, Christopher (Drashna) said:

    Well, Microsoft pulled them because of the profile purge issue.  I don't see the ISO, at all.

    The purge issue is already fixed by update KB4464330 (which moves the build from 17763.1 to 17763.55); obviously it is the same for Win Server 2019, Win 10 1809 and Win Server 1809.

    So the EVAL ISOs (which are still there) can be safely downloaded and updated (especially on a test machine where you have nothing to lose).

    Eval Standard/Datacenter (English US)

    https://software-download.microsoft.com/download/pr/17763.1.180914-1434.rs5_release_SERVER_EVAL_X64FRE_EN-US.ISO

    And the mentioned cumulative update

    http://download.windowsupdate.com/d/msdownload/update/software/secu/2018/10/windows10.0-kb4464330-x64_e459c6bc38265737fe126d589993c325125dd35a.msu
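
    For completeness, once the eval is installed, the cumulative update can be applied from an elevated prompt with the stock standalone installer (the file name below is shortened; use the full name of the .msu you downloaded):

        # apply the downloaded cumulative update silently, reboot manually afterwards
        wusa.exe .\windows10.0-kb4464330-x64.msu /quiet /norestart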

     

    Quote

    And yeah, I know about the hack. 

    Well, calling it a hack may be misleading: they are the official signed packages which are missing from W10, Server Essentials and the (free) Hypercore (for the record, both DP and deduplication worked well on Hypercore 2012/2016; I used it to provide [de]duplication to WHS).
     

    Quote

    We're just waiting for Microsoft to re-release the ISOs so we can do testing.  

    No need to wait anymore. ;)
     

  7. Quote

    I was not aware W10Pro was free. Which is good because I need to get that on my lab-laptop.

    I mean, not officially.

    W10 Pro is "free" in the same sense as W10 Home. You can buy it, and surely MS is happy if you do, but you can also upgrade from Win 7 and you will get a digital license for that machine. Upgrade from Win 7 Home and you will get a W10 Home license; upgrade from Win 7 Pro and you will get a W10 Pro digital license. It doesn't matter whether Win 7 was bought or not.

    MS silently backs this operation because it is desperately trying to switch from being a razor vendor to a blade vendor.

    The more W10 installs there are, the more chances that applications and multimedia stuff will be bought.

    Where the OS license still matters for MS is on Servers and LTSB/LTSC, which come free of the Metro crap and the Store.

    Also, Pro becomes less Pro with each W10 upgrade; there is "Pro for Workstations" now, which is the new real Pro.

     

  8. Quote

    So just to get this straight: WSE 2016 has a 64GB mem limit. I thought about going VirtualBox but that runs under the OS so would be limited to 64GB as well I would think.

    With Hyper-V, I can have WSE2016 and a W10 VDI running next to each other under the Hypervisor. WSE2016 would still be limited to 64GB (and I would probaly cap it at 8 or 16GB and the W10 VDI could use a whopping 128GB.

    Well, RAM and storage space are just like money: there is never too much. But frankly, calling 64GB "a limit" is a bit exaggerated.

    A 64-bit machine takes on average 1.5-2GB of RAM (which is managed dynamically by Hyper-V anyway); for personal usage, 8GB of RAM is more than enough to run the host system and a couple of 64-bit guests, unless you need to run SETI@home in one of those machines :D

    Quote

    Edit: And the HyperVisor console (that is the GUI, right?),  that does not come with W10 Home or WSE 2016, does it?

    I have never installed W10 Home in my life; I didn't see the point of running any Windows Home edition when they were paid products, and even less now that Pro and Home are both "free"... (LTSB 2015/LTSB 2016 and LTSC are the only W10 flavours worth having.)

    WSE 2016 comes with the full Hyper-V role, so the console is included as well.

  9. Quote

    Are you trying to enable deduplication on the Pool itself, or the underlying disks?

    Obviously the underlying disks. Like I said, I have used both deduplication and DP since the days of Win 8/Server 2012, so I know how to deal with it; I'm anything but a newbie on that matter.
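
    For anyone reading along, a minimal sketch of what "enable it on the underlying disks" means in PowerShell terms (drive letters are placeholders for the pool's member volumes; the pool's own drive letter is never touched):

        # enable MS deduplication on the pool member volumes, not on the pool drive itself
        Import-Module Deduplication
        Enable-DedupVolume -Volume 'D:','E:' -UsageType Default
        # kick off the first optimization pass instead of waiting for the schedule
        Start-DedupJob -Volume 'D:' -Type Optimization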

    Quote

    Also, make sure that the "bypass file system filters" option in StableBit DrivePool is disabled.  If it's enabled, it will break deduplication

    See the previous reply. (Anyway, as you know, DP has that option disabled by default in recent releases.)

    Quote

    Also, it may be hard to test this, as the 2019 ISO's aren't back up yet, I believe. 

    2019 is already available; there are the 3-month evaluation ISOs, but you can also install the deduplication cabs on any W10 x64 1809 (obviously the same story as Server 2019 and Server 1809, tested personally).
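
    If anyone wants to reproduce it, this is roughly the idea; the package file name below is illustrative (take the actual .cab files from the matching Server build) and the feature name should be double-checked against your media:

        # add the official, signed dedup packages taken from the matching Server build to W10 1809
        dism /online /add-package /packagepath:.\Microsoft-Windows-Dedup-Package.cab
        # then enable the feature (reboot if prompted)
        dism /online /enable-feature /featurename:Dedup-Core /all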

    P.S. I know that resources are always limited, but the prerelease builds are released by MS for a reason; my suggestion is to test your products on them, so you're ready when a new "final" release is launched.

    P.S. 2: WSL has also been vastly reworked since the days of Server 2016 (where it was installable only unofficially). I know it uses some kind of FS filters as well, and I smell problems from that direction too (I had a reboot during the Linux update, but since then I haven't had much time to investigate further).
     

  10. The Hyper-V console is present in any W8+ installation (even x86 ones); just enable it from Programs and Features.

    Then connect to a remote host, whether or not the HV host has a GUI, and whether it is paid or free (HyperCore Server).

    If you need to manage Server 2008 [R2] from Win 8+ or vice versa, there's a [partly] free SW called 5nine which makes remote administration possible across different generations of Windows.

    Other server bits can be managed graphically via RSAT (Remote Server Administration Tools), a package available for free for any Windows client OS (except W7/8 Embedded).

    Or you can just use PowerShell.
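
    For example, on a W8+/W10 client the console alone can be switched on like this (feature names as they appear on current builds; check the listing first if yours differ):

        # list the Hyper-V related optional features, then enable just the management tools
        Get-WindowsOptionalFeature -Online | Where-Object FeatureName -like 'Microsoft-Hyper-V*' | Select-Object FeatureName, State
        Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-Clients, Microsoft-Hyper-V-Management-PowerShell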

     

    Quote

    Do you have a backup solution as well? 

    No, I don't use any old-school backup solution at all; I even stopped enabling the Essentials role. Once you understand how handy native VHDs are, most of the paradigms from the past look outdated.

  11. Quote

    I would then imagine that the OS/Application files are in two vhdx but the actual data on "regular" HDDs

    Yep, that's an option, but there are many variants, limited only by your skill and imagination.

    My old setup was WHS 2011, or its almost identical sibling Windows Storage Server 2008 R2 Essentials (the latter is better and is still supported today), installed on a native VHD (booted directly from the bare metal). Then I had Hyper-V installed on it (I made the packages for it). Inside Hyper-V I had a minimal recent machine, Server Core 2008/2012/2016, with the large storage (and DrivePool) installed on it; the Core VM accessed the real, physical (large) HDDs, which were deduplicated by their native feature (and made redundant by DrivePool).

    WHS or WSS was also connected to my main TV, running Media Center (I made the package for that as well) and acting as a TV server via DVBLink (and/or ServerWMC).

    Then we succeeded in porting WMC to W10/Win Server 2012/2016 as well, so I switched to a much simpler infrastructure using Win Server 2016 (now 2019).

    Hyper-V is there just to run Win Server 2008 (very small, being the last x86 server), which acts only as a backup domain controller.

    The storage is now natively managed by Win Server 2019, which deduplicates the disks; they are then duplicated by DP for resiliency (as I wrote elsewhere, it sounds like a joke but it really isn't).

    So nowadays I don't use the WHS-like backup at all. All my OSes are native VHDs (including the server), and the large storage is duplicated by DrivePool.

    All I do is copy the 4-5 VHDX files from the SSD to the pool (usually once per month, after a cumulative update is released and installed).

  12. Quote

    With 20/25GB, I can see that. But what about 3.5TB of data?

    Sorry, but what's the point of having 3.5TB of data stored locally when you have a home server?

    Usually you have 20 to 60GB dedicated to the OS and its main programs (Office, Photoshop, whatever), perfect for a small SSD or a VHD; then the essential data you need locally on a mechanical HDD, which is rarely more than a further 100GB or so; everything else is stored safely on a server/NAS/cloud and accessed when you need it.

    But even when needed, 3.5TB of data is better managed by sync SW than by a traditional backup.

    Traditional backup SW is a perfect fit for restoring an operating system to its original state (you can't use Explorer for that, unless you use a VHD, as discussed earlier). Copying/syncing a bunch of random, large files can be done in a huge number of different ways, starting with the good old PowerToys or the Offline Files feature.

    Quote

    Yes, VDI. But really, isn't that simple VM with the intention of RDP'ing into that VM?

    Well, VDI is a marketing mumbo-jumbo term used to define a simple VM accessed via some kind of remote desktop, not necessarily the MS Remote Desktop.

    But by extension it also defines what's behind it: how the machines are managed, how they are backed up, the deduplication of their storage, their live migration (you can move a RUNNING VM across different Hyper-V servers), and last but not least, how the RAM is managed (a Hyper-V server can dynamically allocate to its VMs more RAM than is actually present on the server, just like a bank does with its clients' money).

    BTW, most of those points are interesting for a company that wants to virtualize 10 or more PCs, rather than for the single user who runs one or two VMs at the same time.

  13. Quote

    So say I have a vhdx file. It is actually a sort of container of many files.

    It's the same as a real HDD, except you don't need a screwdriver to move it around the world.

     

    Quote

    Can I transfer the HDD to another machine and retrieve the contents through Windows Explorer?

    Yep, native MS VHDs can be mounted with a double-click (and dismounted through the Eject menu, just like a DVD).
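
    If you prefer the command line, the same mount/eject pair looks roughly like this (the path is just an example):

        # attach a VHD/VHDX exactly like a double-click would
        Mount-DiskImage -ImagePath 'C:\VHDs\data.vhdx'
        # detach it again, the equivalent of the Eject menu
        Dismount-DiskImage -ImagePath 'C:\VHDs\data.vhdx'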

     

    Quote

    Say I like regular incremental/differential backups (such as WHS and WSE 2012/2016 provide), would that be possible easily?

    Copying a 20-25GB VHD over a 1Gbit network can be faster than a traditional incremental backup, depending on your scenario.

    Besides, the two approaches aren't mutually exclusive: you can put your whole server folder on a VHD, which can be local or remote, say on a NAS or a different PC/server. Unlike a traditional mapped drive, a remote VHD is seen locally exactly the same as a real HDD.

    This way you can use a remote disk as a WMC cache, you can see it in Disk Manager, and you can use it with any SW allergic to network drives.

    Quote

    The one reason I may be interest in VMs is that I could have a very thin mobile device/laptop, hook it up to two monitors anywhere and RDP into the server for I/O and CPU power.

    That's called VDI, where a remote VM is made available by a single server; very trendy nowadays.

    Better than a traditional terminal server because of the security you mentioned, because of the flexibility, and because a single hung VM can be rebooted independently.

    Worse than a terminal server because you have to install/upgrade each VM just like a real PC, while with a terminal server you upgrade/install/back up everything once and all users benefit.

    As usual, you need the right hammer to crack a specific nut; as usual, there is no absolute "better" or "worse". The tool and the infrastructure you need depend on your needs, your habits, the HW you own and so on.
     

  14. 2 hours ago, Umfriend said:

    @cocksy_boy: I always thought that a virtual disk (vhd/vhdx/vmdk) would typically contain an OS

    That's very common, but it's not the only scenario: you could put all your data in a single VHD, then use it as a data disk for your OS, no matter whether your OS is traditional or virtualized.

    Quote

    and, I guess, apps that are intstalled to run under such OS and that data would not be.

    What matters is the path: if you install a program (say in D:\acrobat) and that path is available/reachable, the program will run no matter whether D:\ is a whole real disk, a partition inside it, a whole VHD or a partition inside one (you can even nest VHDs for particular purposes).

     

    Quote

    My question is, why is that better than having data on regular NTFS volumes which could be opened from any VM? Noob here in this area.


    NTFS is a filesystem; it's used to format a partition, no matter whether it's on a real or virtual disk.

    Your question should be "why is that better than having data on regular volumes/partitions/HDDs which could be opened from any VM?"

    Well, it's not simply better. There are many advantages and some disadvantages, depending on the scenario.

    Advantages:

    A VHD is very flexible: you can shrink, enlarge and convert it at will.

    You can back up a whole OS or a whole data VHD just by copying a single large file instead of thousands of small ones. No backup SW is needed, no fight with permissions, backup operations tens of times faster, and so on.

    You can move one or many OSes across virtual and/or real machines, just using a file manager.

    No need for complicated partitioning operations.

    VHD and VHDX (the older and newer MS virtual disk formats) can be made bootable on a real machine in a couple of clicks (see the sketch after this list). So you can restore a whole OS in 5-10 minutes as long as you have a VHD copy stored somewhere.

    Disadvantages:

    There is a performance price to pay, almost unnoticeable with the newer VHDX format (supported since Win 8.0).

    If you are careless/distracted enough, you can delete a whole OS (or a whole data disk) in a single click (obviously not from the OS residing inside the VHD itself).

    While a good choice of VHD size makes the use of space more efficient than a traditional partitioning scheme, a bad decision can waste space that would otherwise be available.

    A file residing on a (partly) damaged HDD may be easier to recover from the real HDD than from a VHD stored on it.

    That more or less summarizes the basics of the matter, but it is far from exhaustive.
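
    As an aside, the "couple of clicks" for native boot can also be scripted; a rough sketch of the usual recipe (the path and the drive letter the mounted VHD receives are examples, check them on your machine before running bcdboot):

        # attach the VHD so the Windows volume inside it gets a drive letter (say V:)
        Mount-DiskImage -ImagePath 'C:\VHDs\win10.vhdx'
        # add a boot entry pointing at the Windows installation inside the VHD
        bcdboot V:\Windows /addlast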

  15. 46 minutes ago, zeroibis said:

     

    Quote

    Is there any material you would recommend to read up on the deduplication service.

    No, I don't have any specific source; there are a number of posts on Microsoft blogs that you'll find easily with a simple Google search.

     

    Quote

    It appears this is a great solution if you have a lot of files that are only accessed but not often modified, such as large sets of videos and photos.

    No, photos and videos are absolutely the worst candidates: they are already compressed and they lack any similarity. Maybe uncompressed RAW photos can be worth deduplicating.

    The best candidates are ISOs, VHDs, VMDKs, and work folders containing, say, multiple revisions of Android ROMs.

    Think of the Win Server 2019 ISO and the Win Server 2019 Essentials ISO: they are 4.4GB + 4.17GB. On a standard disk they will take 8.57GB; on a deduplicated disk they will take 4.8GB or so.
    Add the 2GB Hyper-V Server 2019 ISO and you will use only 33MB (yes, megabytes) more...
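
    If you want to see the effect on your own disks, something like this will do (the drive letter is a placeholder):

        # show how much space deduplication is actually saving on a volume
        Get-DedupStatus -Volume 'D:' | Format-List Volume, InPolicyFilesCount, OptimizedFilesCount, SavedSpace
        Get-DedupVolume -Volume 'D:' | Format-List Volume, SavingsRate, SavedSpace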

     

    Quote

    I would still imagine that there is a heavy processing and memory cost to this that my present system likely can not pay but it is something to read up on and look into for future upgrades.

    The heavy part is the first run; after that only newly added/modified files are deduped (every couple of hours or so), in the background, nothing that a decent PC even 5-7 years old can't handle.

    The native ZFS deduplication is very efficient (it's done natively and in real time by the filesystem) but very memory hungry (about 8GB of RAM per TB of deduplicated storage is needed); the MS flavour isn't as sophisticated, but it's also not that demanding: less than 4GB of RAM is enough to deduplicate a couple of TB of data.
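
    For the curious, those background passes can also be inspected and triggered by hand (the volume letter is a placeholder):

        # see the schedules the dedup service runs in the background
        Get-DedupSchedule
        # or force an optimization pass right now and watch its progress
        Start-DedupJob -Volume 'D:' -Type Optimization
        Get-DedupJob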

     

  16. 9 minutes ago, cocksy_boy said:

    Unfortuantely as the old is in VHDX format and the new is in VMDK (former for Hyper-V and latter for EXSi), they can't both be on a single system at the same time. Otherwise, yes, that would be the perfect way to do it!

    There are a number of ways to deal with that.

    My own favorite solution is to mount a VHDX, then tell VMware it is a physical drive; that way you can mix VMs of any origin, move data between them and so on.
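
    In practice it goes something like this (the path is just an example; the VMware side is then done in its own "use a physical disk" dialog):

        # mount the VHDX so Windows exposes it as an ordinary disk
        Mount-DiskImage -ImagePath 'D:\VMs\old-server.vhdx'
        # note the disk number it receives; that's the disk to point VMware's physical-disk option at
        Get-Disk | Format-Table Number, FriendlyName, Size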

    You can also install ESXi or Hyper-V (or both of them) inside VMware and nest their respective virtual machines (not a task for a low-power/low-RAM machine, but definitely feasible).

    Then there's my favorite trick, since almost no one is aware it's possible: you can run Hyper-V and VMware at the same time on the same machine, as long as the VMware virtual machines are 32-bit.

    Right now I have Win98 and ThinPC running in VMware and the domain controller running inside Hyper-V.

  17. It's indeed a form of block deduplication, but it is built on top of NTFS, not under it, just like DP.

    And no, it is not strictly a form of compression.

    MS deduplication can also compress the files, but that's an additional and optional feature.

    Any file is written to the disk in chunks; with deduplication enabled, the chunks shared by two or more files are stored once and referenced multiple times, so there isn't any major overhead, at least if you don't enable compression.

    But even with compression enabled, the performance is more than good enough for a storage disk.

     

    Quote

    A work around could be to temporarily real time duplication and instead have data flow to one drive and then to the second as an archive.

    No, good suggestion, but it doesn't work.

    After a day of digging I realized that taking the pool disk(s) offline (but obviously not their member disks) via Disk Manager is enough to get the deduplication services working again.
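
    In PowerShell terms the workaround is just this (the disk number of the pool is obviously machine-specific, so check the Get-Disk output first):

        # find the pool's virtual disk, then take it offline so the dedup service stops choking on it
        Get-Disk | Format-Table Number, FriendlyName, OperationalStatus
        Set-Disk -Number 3 -IsOffline $true    # 3 is just an example number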

    Probably the problem lies in the new dedup.sys (or whatever dedup component it is), which tries to enumerate the pool disk as a real disk and fails miserably.

    Very likely, strictly speaking, the fault is MS's, not Covecube's, but I'm afraid MS couldn't care less about this problem.

    As usual, MS creates a problem and it's up to third parties to deal with it.

    The good news is that no data corruption happens; it's just a major annoyance.

  18. The almost unknown "Windows Storage Server 2008 R2 Essentials" is exactly the same as WHS 2011, apart from the blue shade instead of green, the 25 users instead of 10, and the fact that it can (optionally) join a domain.

    It's still supported and, thanks to my packages (shared elsewhere), it can run Media Center, act as a domain controller and run Hyper-V virtual machines.

    For unknown reasons it is called 2008 R2 when it's a 2011 release, just like WHS 2011, MultiPoint Server 2011 and SBS 2011.

    For unknown reasons it is called (internally) "Home Server Basic" while its sibling WHS 2011 is called (internally) "Home Server Premium".

    MS naming schemes have always been obscure, to say the least.

  19. On 10/14/2018 at 11:03 PM, cocksy_boy said:
    Quote

    a. Add 2 empty disks to a pool, turn on duplication and then copy the data to the new pool and let it duplicate as it copies the data to the pool?

    or

    b. Copy the data to a new single disk, then add another disk to the pool and turn on duplication?

     

    That's pointless.

    Just add your original drive to the pool, then MOVE, not COPY, your data into the (hidden) GUID folder created by DrivePool; then add more drives to your pool and do the same as above (in case the additional drives are not empty); then configure the duplication/spanning options in DP and let it duplicate/move your data across your drives in the background. (See the sketch below.)

    No need to copy huge amounts of data on top of the same drive, no space constraints, no need for temporary landing storage, no huge waste of time, and no unneeded drive wear.
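
    A sketch of the kind of move I mean (the PoolPart folder name is unique to each pool, so the GUID below is a placeholder):

        # the hidden folder DrivePool creates on each member drive looks like D:\PoolPart.<guid>
        $poolPart = 'D:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
        # a move within the same volume is just a rename, so it finishes in seconds regardless of size
        Move-Item -Path 'D:\Data' -Destination $poolPart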

  20. 36 minutes ago, zeroibis said:

    I am a bit confused as to what your I/O stack looks like, can you draw a little map in paint or something so we know what is going on?

     

    Map in paint? O_o

    They are two disks, deduplicated with MS deduplication and made resilient by DrivePool.

    Quote

    Also a bit confused as to why your using MS deduplication instead of the duplication from DrivePool.

    I understand that inexperienced people get confused by the similarity of the terms, duplication/deduplication, which refer to very different SW.

    It sounds like a joke, but it isn't!!!

     

    MS deduplication can halve or even decimate the storage usage...

    Think of an office with 10 client PCs, each starting from the same installation.

    Their VHDXs could be 40GB each, so 400GB in theory. In practice 90% of the data is the same for each PC, so your 400GB can be stored in just 80GB or so.

    This is what deduplication does.

    DrivePool does something very different: it takes your 400GB and duplicates it across two or more drives (for resiliency).

    Using deduplication + duplication makes the storage both efficient and resilient: you will end up (in the example above) with 80x2GB of data instead of the basic 400x1GB.

    A huge win/win. Great security, great efficiency.

     

    In short, duplication means ×2;

     

    deduplication means ÷2 to ÷10 or more. Combine both of them and you will be the happiest IT user in the world :)
     
