Re: [Publicity] Blog-post RFC: Hardening Xen against VENOM-style attacks
Hi folks,

Although a lot of interesting suggestions were raised, I think it is more helpful to propose concrete text changes or additions, rather than suggestions which require that the author does additional research.

> Also, is it worth mentioning why the qemu stub domain isn't the default? Is
> it all compiled and installed in most of the hypervisor distributions on
> Ubuntu/CentOS/etc? I don't think even XenServer uses qemu stub domains,
> although that might have changed in the recent release.

Is there a conclusion or a concrete paragraph on this one that you can suggest? I did check Ubuntu and a few other distros, and they all ship stub domains; XenServer may not. But the stub domain may of course not be used, even though it is shipped. I suppose any outcome/mention should be in the "One would think this is a complex..." section.

@Tamas: I would propose to position this as a guest blog post: we normally do this for someone's first blog post on xenproject.org. Normally what we do in such cases is to add a preamble. An example is:

"This is a guest blog post by Georg Dörn, a long-time system administrator and open source enthusiast. Georg founded his company its-doern in 2008, to develop solutions for customers entirely from open source software."

If you could add a brief description, that would be great.

> The recent disclosure of the VENOM bug affecting major open-source
> hypervisors, such as KVM and Xen, has been circulating in the tech news
> lately, causing

How about replacing "lately" with a date?

> many to reevaluate their security posture when using cloud infrastructures.
> That is a very good thing indeed. Virtualization and the cloud has been for
> too long

I am not sure what you mean by "security posture": maybe a more common phrase would be better, or an example.
I also think that some of the more recent VENOM articles are calling out the VENOM coverage as a clever marketing campaign by CrowdStrike to raise their profile, and it is also now clear that far fewer vendors than originally anticipated are affected by VENOM:

* http://www.csoonline.com/article/2922066/vulnerabilities/venom-hype-and-pre-planned-marketing-campaign-panned-by-experts.html
* http://gizmodo.com/please-stop-comparing-every-security-flaw-to-heartbleed-1704259495
* http://www.forbes.com/sites/thomasbrewster/2015/05/13/venom-vulnerability-could-hit-amazon-oracle-rackspace-citrix/
* http://www.theregister.co.uk/2015/05/13/heartbleed_eat_your_heart_out_venom_vuln_poisons_countless_vms/
* http://www.zdnet.com/article/venom-the-anti-toxin-is-here/

> erroneously considered to be a silver bullet against intrusions, malware and
> APTs.

What is an APT?

> The cloud is anything but a safe place.

I am not sure we want to single out the cloud. Maybe expand the list a bit, e.g. "the internet and the cloud". After all, we are no worse than other software stacks; maybe we have a better track record than most.

> VENOM in that sense is just another one in the long list of vulnerabilities
> that seem to be plaguing hypervisors.

Again, maybe not single out hypervisors.

> However, there are differences between vulnerabilities. While VENOM is indeed
> a serious bug and can result in a VM escape, which can compromise all VMs on
> the host, it doesn't have to be. In fact, VENOM-style attacks have been known
> for a long time. And there are easy-to-deploy counter-measures to mitigate the
> risk of such exploits, natively available in Xen and KVM.

Are there ways to mitigate this for KVM? When I read this I was expecting to read about KVM also, so maybe drop KVM here, as it feels like a loose end. Actually, re-reading this again, you do mention the KVM sVirt SELinux policies: it's easy to miss because that paragraph is embedded in a longer section about stub domains.
Maybe add something immediately afterwards along the lines of: "Xen provides stub domains to sandbox VENOM-style exploits in a de-privileged domain, and KVM allows for similar jailing of the QEMU process via the native SELinux sVirt policies."

> While modern systems come ...
> Devices such as your network card, graphics card and your hard drive. While
> Linux comes with paravirtual (virtualization-aware) drivers to create such
> devices, emulation is often the only solution to run operating systems
> that do not have such kernel drivers. This has been traditionally the case
> with Windows. This emulation layer has been implemented in QEMU, which has
> caused VENOM and a handful of other VM-escape bugs in recent years.

Maybe list some examples of QEMU-related VM-escape bugs. I think it is worth pointing out that people use PVHVM (HVM) for performance reasons also. One gets the impression from this paragraph that running systems without PV drivers is the only reason.

> Back in 2011, the Blackhat talk on Virtunoid demonstrated such a VM escape
> attack against KVM, through QEMU.

Link?

> As a sidenote, KVM allows for similar jailing of the QEMU process via the
> native SELinux sVirt policies.

See above.

> One would think it is a complex process to take advantage of this protection,

Maybe link to http://wiki.xenproject.org/wiki/Device_Model_Stub_Domains

> Unfortunately, your cloud provider may not allow you to enable this option.

You should probably also call out distros.

Regards
Lars

> On 14 May 2015, at 11:59, Stefano Stabellini
> <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>
> On Thu, 14 May 2015, George Dunlap wrote:
>> On 05/14/2015 11:39 AM, Anil Madhavapeddy wrote:
>>> Yeah... it's worth noting that unikernels like MirageOS or HaLVM never use
>>> the x86 device emulation and so require a far easier to audit hypervisor
>>> TCB that doesn't involve qemu.
>>>
>>> Also, is it worth mentioning why the qemu stub domain isn't the default?
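As a concrete illustration of how simple enabling this protection is, the blog post could include a guest-config fragment like the following. This is a hedged sketch, not text from the draft: it assumes the xl toolstack with the stub domain image built, the option names follow xl.cfg, and the guest name "win7" is purely hypothetical. Note that, as discussed later in this thread, stub domains at this point require the traditional device model rather than qemu-upstream.

```
# /etc/xen/win7.cfg -- sketch of an HVM guest using a QEMU stub domain
builder = "hvm"                                # HVM guest, so device emulation is needed
device_model_version = "qemu-xen-traditional"  # stub domains currently require the traditional QEMU
device_model_stubdomain_override = 1           # run QEMU in a de-privileged stub domain, not dom0
memory = 2048
name = "win7"                                  # hypothetical guest name
```

With `device_model_stubdomain_override = 1` set, a QEMU escape lands the attacker in a minimal, unprivileged service domain instead of dom0, which is the mitigation the post is arguing for.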
>>> Is it all compiled and installed in most of the hypervisor distributions on
>>> Ubuntu/CentOS/etc? I don't think even XenServer uses qemu stub domains,
>>> although that might have changed in the recent release.
>>
>> Well the main reason is that qemu-upstream doesn't work with stub
>> domains yet. Anthony worked on it for what, a year? He got pretty far
>> but there are just a lot of thorny issues to deal with.
>
> To be fair, there are also other reasons: memory overhead, number of
> domains doubling, and the additional complexity of having 2 QEMUs for
> each domain (there is still one QEMU in Dom0 running for each guest,
> although it just provides the PV backends).

_______________________________________________
Publicity mailing list
Publicity@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/publicity