
[Xen-devel] qemu-upstream stubdom - Xen can put VENOM-type attacks to bed



Yesterday's public announcement of VENOM highlights a security concern
I raised in January about qemu-upstream not having been moved over to
stub domains.  I have repeated my email below for your convenience.  To
summarize my earlier email: I believe running QEMU device emulation in
dom0 is Xen's single most significant outstanding security issue, and
I have long been puzzled why it has not been addressed.

This week, the fuss is about a bug in QEMU's emulated FDC.  However,
in a codebase with the scope, complexity, and pace of development that
QEMU has, it is guaranteed that other hardware emulation bugs enabling
the very same kind of attack demonstrated by VENOM already exist, and
that more will continue to be introduced.  VENOM is merely a single
demonstration of a broader class of attacks that break out through the
QEMU emulation layer - a security issue that has been recognized for
years, but nevertheless left on the shelf.  It does not make sense to
keep QEMU in the trusted computing base.

I am again campaigning for the Xen team to invest the resources to
finally implement qemu-upstream stub domains.  My email from January
covers plenty of good reasons for doing this.  The Xen team can also
view it as a product differentiator over KVM and other solutions
relying on QEMU, or even over VM products that rely on different
device emulation codebases.  Xen stub domains are a powerful solution
to this problem, and one that, as far as I understand, is not even on
the table for KVM.

I have tried to "put my money where my mouth is" on this issue.  On
February 3rd, I released a patchset updating Anthony's original
efforts at implementing a Linux-based qemu-upstream stub domain.  With
what I released, QEMU device emulation runs successfully under Linux
in its own stub domain, including networking.  I could run a Linux
distribution headless and SSH into it.  The next major step that needs
to be addressed is getting the display working.  After that come all
the niceties, such as suspend and resume, that are currently enjoyed
with qemu-upstream in dom0.
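
For anyone who wants to try the patchset, a quick sanity check that
the device model really has left dom0 is to look for the stub domain
in the domain list and for the absence of a per-guest QEMU process in
dom0 (the guest name below is just an example; "-dm" is the usual
suffix for device-model stub domains):

  # in dom0, after starting an HVM guest (here "guest1") with the
  # stubdom device model enabled in its xl config
  xl list                # an extra "guest1-dm" domain should appear
  ps aux | grep qemu     # no qemu process for guest1 should be running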

However, I cannot bring this task to completion by myself, as much as
I would like to.  Although in an earlier life I was a full-time
professional developer, I am now merely a part-time individual
contributor to Xen.  I also no longer have the bandwidth I had even a
few months ago, and unfortunately that is unlikely to change for a
while.  Another reason is that taking this further requires an
understanding of QEMU internals, and interaction with the QEMU team,
that I do not have and do not have the time to acquire.  However, I
believe there are already developers in the Xen community with such
skills, or at a minimum there are professional Xen developers who
could sensibly devote their efforts to this issue.

Thank you,
Eric Shelton

On Thu, Jan 8, 2015 at 11:39 AM, Eric Shelton <eshelton@xxxxxxxxx> wrote:
>
> With the impending rollout of another Xen release lacking qemu-xen
> stubdom, I would like to campaign for a couple of things: (1) making
> qemu-xen stubdom a blocker for Xen 4.6; and (2) using a Linux-based
> stubdom, at least for the time being.
>
> (1) Making qemu-xen stubdom a blocker for 4.6
>
> The security issues presented by allowing qemu to run unrestricted
> within dom0 have been appreciated for a long time (the comments about
> qemu-dm in XSA-109 illustrate this).  Just last month, a few
> additional qemu escalation vulnerabilities were demonstrated.  Given
> the size, complexity, and pace of development of qemu, we can
> reasonably assume there will always be some escalation vulnerability.
>
> Xen has generally taken known, and often far more obscure, security
> issues very seriously.  I am puzzled that Xen continues to offer a
> non-stubdom qemu.  The current qemu stubdom solution amounts more to
> a mitigation of a well-known issue than to a solution.  The options
> offered today are: (a) have access to all of the features and
> bugfixes offered by upstream qemu (which have been significant thanks
> to KVM's use of qemu, and are appreciable for, and sometimes
> necessary for, some guest operating systems), at the cost of qemu
> running unrestricted within dom0; or (b) have qemu contained in a
> separate domain, but with what is essentially only a "good enough"
> six-year-old version of qemu (ver 0.10.2).
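>
> In xl config terms, the choice today looks roughly like this (option
> names are as in current xl.cfg; this is a sketch, not a complete
> guest configuration):
>
>   # option (a): upstream qemu, but running in dom0
>   device_model_version = "qemu-xen"
>
>   # option (b): qemu contained in a stub domain, but the old fork
>   device_model_version = "qemu-xen-traditional"
>   device_model_stubdomain_override = 1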
>
> Additionally, once qemu-xen stubdom is realized, we can move away
> from the forked qemu-traditional codebase and stop maintaining it.
> Finally, all of the Xen-specific code could then be mainstreamed into
> upstream qemu.
>
> If all that can be managed is to release qemu-xen stub domains at the
> level of a tech preview, that still represents a significant
> improvement along both the functionality and security axes.
>
> (2) Using Linux to implement qemu-xen stubdom
>
> Efforts over the last 3 years at realizing qemu-xen stub domains seem
> to illustrate the "perfect is the enemy of the good" phenomenon.  I
> gather that a rump kernel is the favored direction, and it would
> ultimately be more memory efficient than using Linux, but a lot of
> unresolved technical issues lie down that path that are already
> solved by using Linux, and resolving them has stymied, and will
> likely continue to stymie, getting something effective and reasonable
> out the door.
>
> It is understandable that the Xen team has prioritized other items
> over qemu-xen stubdom.  However, rather than leaving a lower-priority
> item undone indefinitely, this should tell us that an "ideal," but
> more development-intensive, path is not called for.  Instead, we
> should simply adopt the more easily implemented solution and move on.
>
> Some reasons for picking up where Anthony left off with using Linux:
> (a) Linux + qemu is a mature and well-tested codebase.  Although there
> are some other environments under which qemu is built and run, far and
> away the most common platform is Linux.
> (b) Linux kernel + Xen is a mature and well-tested codebase.  Although
> there was some disappointment that external patches had to be applied
> to the Linux kernel, they were pretty minor.
> (c) A lot of things need to be done for a rump kernel that already
> just work with Linux.
> (d) Developers are more familiar with Linux.  At one point along the
> way, interest was expressed in moving away from the obscure mini-os.
> With a rump kernel, mini-os remains in the picture AND we are adding
> NetBSD into the mix.  There are very few developers with the skills
> needed to make a rump kernel happen or to contribute to it.
> (e) The changes that need to be made to upstream qemu to run in a stub
> domain are a fixed item - they are somewhat independent of the
> underlying OS/execution environment.  If Xen starts with a Linux-based
> stubdom, the transition can still be made to rump kernel.
> (f) The memory overhead issue is overblown.  As time goes on, systems
> have more memory and memory gets cheaper.  How hard should we work to
> reduce the 32 MB used for Linux when we are already allocating 2+ GB
> of memory to an HVM domain, and how much of a reduction would we
> actually realize?  (See the quick arithmetic after this list.)
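>
> To put rough numbers on (f), with a 32 MB Linux stubdom alongside a
> typical HVM guest (guest sizes are illustrative):
>
>   32 MB / 2048 MB = ~1.6% overhead per guest
>   32 MB / 4096 MB = ~0.8% overhead per guest
>
> Even if a rump-kernel stubdom eliminated most of that 32 MB, the
> saving would be on the order of one percent of the guest's own
> allocation.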
>
> When Anthony released his patches, there was some disappointment
> expressed with the build process (which kernel version was being used
> and the reliance on busybox, for example).  None of the raised issues
> strike me as significant problems - just decisions that have to be
> made.  Anyway, I find it hard to believe that a MiniOS + NetBSD rump
> kernel + libc + whatever else solution is going to result in a
> cleaner build process.
>
> Given limited developer resources, tradeoffs sometimes have to be
> made.  I propose that Linux-based stub domains are the right tradeoff
> for incorporation into Xen 4.6.  Rump kernels may indeed be the
> future, but they are not an immediate solution for qemu stubdoms that
> we can use today.
>
> Thank you,
> Eric
