Re: [Xen-devel] [PATCH RFC v13 03/20] Introduce pv guest type and has_hvm_container macros

At 14:46 +0100 on 26 Sep (1380206764), George Dunlap wrote:
> There's more to not having a qemu than just not starting qemu, of 
> course; a lot of the HVM codepaths assume that there is one and will 
> dereference structures which will be empty.  But that should be fairly 
> easy to fix.

True.  And I suspect with various patchsets around that allow for
multiple ioreq-servicing backends we can allow for there to be none.

> Having the PV e820 map makes sense, but you can file that under "make 
> available to hvm guests as well".
> The main things left are the PV paths for cpuid, PIO, and forced 
> emulated ops.  I haven't taken a close look at how these differ, or what 
> benefit is gained from using the PV version over the HVM version.

I would be inclined to use the HVM paths for PIO and emulated ops; the
cpuid interface might need fudging, I guess. 

> The other big thing is being able to set up more state when bringing up 
> a vcpu.

Sure.  But again, probably OK to expose a fuller setvcpucontext to all
HVM guests.

> One reason to disable unneeded things is the security angle: there is a 
> risk, no matter how small, that there is somehow an exploitable bug in 
> our emulated APIC / PIT code; so running with the code entirely disabled 
> is more secure than running with it enabled but just not used.

That's a fair point.  Could we arrange that by having control flags
for things like the RTC and [[IO]A]PIC, the way we do for the HPET?

The same argument goes the other way -- might we want to have an HVM
param that disables the extended PV interface?  We haven't done that
before (except, I guess, for the Viridian interface), but it would be
easy enough to arrange, and it seems less intrusive than having a third
class of guests at the top level.
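Such a param could be consulted as an early guard on the PV-only paths, much as HVM_PARAM_VIRIDIAN gates the Viridian interface today.  A hedged sketch, with the param index, struct, and function names all invented for illustration:

```c
#include <stdint.h>

/* Hypothetical HVM param in the style of HVM_PARAM_VIRIDIAN: when
 * clear, the extended PV interface is refused for this domain.
 * The index 99 is chosen purely for illustration. */
#define HVM_PARAM_PV_EXTENSIONS 99

struct domain_params {
    uint64_t params[128];   /* stand-in for the per-domain HVM params */
};

static int pv_ext_allowed(const struct domain_params *d)
{
    return d->params[HVM_PARAM_PV_EXTENSIONS] != 0;
}

/* A PV-only hypercall path would then start with a guard like this,
 * returning -ENOSYS so the guest sees the interface as absent. */
static long do_pv_only_op(const struct domain_params *d)
{
    if (!pv_ext_allowed(d))
        return -38;   /* -ENOSYS */
    return 0;         /* ... real work would go here ... */
}
```

Toolstacks could then set the param per guest, keeping the guest-type taxonomy flat while still letting the PV surface be switched off.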

