Caveat: I'm not a virtualisation expert, just a home user who enjoys messing with tech!
Basically, you can attach as many virtual disks to a VM as you want. I have one virtual drive for the guest OS and then multiple virtual drives for the various data stores (music, photos, films, documents, etc.), spread across several physical disks using DrivePool to give software RAID-style redundancy, plus a separate disk dedicated to backups. It is possible to pass physical disks through directly to a virtual machine and cut out the need for a virtual drive, but there are compatibility and other issues which mean this isn't always the best idea, or even possible. You can quickly swap a virtual disk from one VM to another, and depending on the operating system / hypervisor, it can be possible to connect it directly to a desktop PC and access the data on it.
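To make the disk-swapping idea concrete, here's roughly what it looks like with VirtualBox's VBoxManage tool (just a sketch; the VM names "fileserver" and "backupvm" are made up for illustration, and ESXi / Hyper-V have their own equivalents):

```shell
# create a 100 GB dynamically allocated virtual disk (size is in MB)
VBoxManage createmedium disk --filename media.vdi --size 102400 --variant Standard

# attach it to the VM "fileserver" on its SATA controller, port 1
VBoxManage storageattach fileserver --storagectl SATA --port 1 --device 0 \
    --type hdd --medium media.vdi

# later: detach it from "fileserver" and attach the same disk to "backupvm"
VBoxManage storageattach fileserver --storagectl SATA --port 1 --device 0 --medium none
VBoxManage storageattach backupvm --storagectl SATA --port 1 --device 0 \
    --type hdd --medium media.vdi
```

The data on the disk comes along for the ride, which is what makes this so handy for shuffling storage between VMs.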
There are lots of advantages to virtualisation, and one big one with a bare-metal hypervisor like ESXi or Hyper-V Server is that it takes up very little resource overhead on the PC / server set up as the "host". So a half-decent server with a reasonable amount of RAM, processing power, network capacity and disk space can run multiple operating systems on one set of hardware. Mine is not quite as advanced as some setups out there yet, but lots of people run multiple VMs on one host: for example, a virtualised Active Directory, firewall, web server, backup server, SAN, game server, home data server such as FreeNAS / unRAID, streaming / data sharing, etc. As each individual system (usually) only uses a small amount of resource at any one time, they can all run on one set of hardware simultaneously. The beauty is that it's very good for testing: if you want to try a new version of Windows, or a new firewall, you just create a new VM, fire it up and play with it to work out whether you actually want it or not.
I think it was created primarily for large server environments where traditionally each system would run on a separate physical machine. That is expensive in both set-up and running costs (think electricity for power and cooling), and it also means that a lot of the processing power of each piece of hardware sits idle for a large proportion of the time. For example, if you had 4 physical servers each using about 20% of their memory and/or processor power on average, you could convert them all into VMs running on one set of similar hardware and still have roughly 20% capacity spare. Multiply that over many servers and the savings in hardware and running costs can be big. There's also a lot that can be done with redundancy, failover, clustering and pooling of resources; it's easy to move a VM to another machine and just power it on, etc.
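The consolidation sums above are easy enough to sketch in the shell, using the numbers from the example (four servers averaging 20% utilisation each, consolidated onto one similar box):

```shell
# back-of-the-envelope consolidation maths: four servers at ~20% average
# utilisation each, merged onto one set of similar hardware
servers=4
util_per_server=20                                  # percent

total=$((servers * util_per_server))                # combined load on one host
spare=$((100 - total))                              # headroom left over

echo "consolidated load: ${total}%, spare capacity: ${spare}%"
```

Obviously real workloads don't add up quite that neatly (peaks rarely line up), but it shows why the economics work.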
Hopefully that's helpful?!