How secure are virtual machines really? False sense of security?

  • I was reading this CompTIA Security+ SY0-201 book, and the author David Prowse claims that:

    Whichever VM you select, the VM cannot cross the software boundaries set in place. For example, a virus might infect a computer when executed and spread to other files in the OS. However, a virus executed in a VM will spread through the VM but not affect the underlying actual OS.

    So if I'm running VMware Player and execute some malware in my virtual machine's OS, I don't have to worry about my host system being compromised at all?

    What if the virtual machine shares the network with the host machine, and shared folders are enabled?

    Isn't it still possible for a worm to copy itself to the host machine that way? Isn't the user still vulnerable to AutoRun if the OS is Windows and they insert a USB storage device?

    How secure are virtual machines, really? How much do they protect the host machine from malware and attacks?

    If I were the editor, I would interject a "hopefully" and "theoretically" in a few choice locations in that quote. As is, it's definitely a false statement.

    An example of a real-life attack from a guest OS to the host OS: http://venom.crowdstrike.com

    A very generic statement is that the security of the host and network depends on the security of the interfaces between said host/network and the client VM. You should install the absolute minimum of tools, configure minimal network access, and expose the minimum of virtual hardware devices in order to minimize risk. If you just run a VM in a memory sandbox then you will likely be secure; the only interfaces left to attack would be the CPU and memory subsystem of the hypervisor. You would also have a pretty useless VM. E.g., do you really need a floppy drive?

  • Marcin (correct answer)

    VMs can definitely cross over. Usually you have them networked, so any malware with a network component (e.g. worms) will propagate wherever its addressing/routing allows it to. Regular viruses tend to operate only in user mode, so while they couldn't communicate overtly, they could still set up a covert channel. If you are sharing CPUs, a busy process on one VM can effectively communicate state to another VM (that's your prototypical timing covert channel; see the sketch below). A storage covert channel would be a bit harder, as virtual disks tend to have a hard limit on them, so unless you have a system that can over-commit disk space, it should not be an issue.
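
    To make the timing covert channel above concrete, here is a minimal, hypothetical sketch (not part of the original answer): a "sender" in one VM modulates CPU load to encode bits, while a "receiver" in a co-resident VM times a fixed chunk of its own work each window and infers the bits from the slowdown. The script name, bit window, loop counts, and the assumptions that both VMs share a physical core and are started at roughly the same moment are illustrative choices only.

        # covert.py - toy CPU timing covert channel between co-resident VMs.
        # Hypothetical sketch: run "send" in one VM and "recv" in another that
        # shares the same physical core; core pinning, clock sync and noise
        # handling are deliberately omitted to keep the idea visible.
        import sys
        import time

        BIT_WINDOW = 0.5       # seconds per bit (assumed value, tune per host)
        PROBE_LOOPS = 500_000  # amount of work the receiver times each window

        def burn(loops):
            """Busy-loop to consume CPU time on the shared core."""
            x = 0
            for i in range(loops):
                x += i * i
            return x

        def send(bits):
            """Encode '1' as a busy window and '0' as an idle window."""
            for bit in bits:
                end = time.monotonic() + BIT_WINDOW
                if bit == "1":
                    while time.monotonic() < end:
                        burn(10_000)
                else:
                    time.sleep(BIT_WINDOW)

        def recv(n_bits):
            """Time a fixed probe each window; a slow probe means contention ('1')."""
            samples = []
            for _ in range(n_bits):
                start = time.monotonic()
                burn(PROBE_LOOPS)
                samples.append(time.monotonic() - start)
                leftover = BIT_WINDOW - (time.monotonic() - start)
                if leftover > 0:
                    time.sleep(leftover)
            # Midpoint threshold; assumes the message contains both 0s and 1s.
            threshold = (min(samples) + max(samples)) / 2
            return "".join("1" if s > threshold else "0" for s in samples)

        if __name__ == "__main__":
            if sys.argv[1] == "send":
                send(sys.argv[2])                  # python covert.py send 1011
            else:
                print(recv(int(sys.argv[2])))      # python covert.py recv 4

    In practice the receiver would need synchronization, error correction, and statistical filtering of scheduler noise, but even this crude version shows why "sharing CPUs" is itself an interface between VMs.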

    The most interesting approach to securing VMs is called the Separation Kernel. It comes from John Rushby's 1981 paper, which basically states that in order to have VMs isolated in a manner equivalent to physical separation, the computer must export its resources to specific VMs in such a way that no resource capable of storing state is ever shared between VMs. This has deep consequences, as it requires the underlying computer architecture to be designed so that this can be enforced in a non-bypassable manner.

    Thirty years after that paper, we finally have a few products that claim to do it. x86 isn't the greatest platform for it, as there are many instructions that cannot be virtualized, which makes it hard to fully support the 'no sharing' idea. It is also not very practical for common systems: to have four VMs, you'd need four hard drives hanging off four disk controllers, four video cards, four USB controllers with four mice, etc.

    What would be the benefit of this sort of covert communication for the virus author? It sounds like it couldn't be used until both machines were infected, but why would you need it after that point?

    @JackO'Connor: In order to *communicate* between them. Consider for instance if one of the VMs has a network card attached to the Internet but not the internal data center, and the other has a network card attached to the internal data center but not the Internet. Using this covert channel, the attacker now has a route to attack the DC from the Internet and exfiltrate data. Additionally, one VM could possibly side-channel attack another VM using this method: for instance, one compromised VM (on Azure/EC2, maybe) attacking another VM to recover that VM's SSL private keys.

    If a VM doesn't actually execute machine code directly, but instead interprets it, should there be any fundamental difficulty in making the machine 100% secure, if the only ways it can interact with the outside world are initiated from outside itself (e.g. an outside utility which copies data to the virtual machine's "hard drive")?

    "_to have four VM's, you'd need four harddrives hanging off four disk controllers, four [etc...]_" That sounds like it's missing the point of *virtual* machines.

    @Marcin - If the virtual machines are not networked (for the purposes of testing) but do have shared folders and clipboard sharing enabled between guest and host, what is the likelihood of infection from the guest to the host?

    @Motivated that would be the same likelihood of infection as a drive-by download or email attachment: the malware in the VM can put files in the shared folders, but a user has to take action for them to do something.

    There's no such thing as a totally "no sharing" system. Even if you have dedicated network cards, hard drives, etc. for each VM, you'll still be sharing the motherboard. And if your VMs actually have their own motherboards, well, you can't really call them virtual machines anymore.

    Marcin, it's been 8 years since this answer. Can you update it? The topic is very important... thanks

Licensed under CC-BY-SA with attribution

