
Re: [Xen-devel] HYBRID: PV in HVM container



On Fri, Jul 29, 2011 at 07:00:07PM +0100, Stefano Stabellini wrote:
> On Fri, 29 Jul 2011, Konrad Rzeszutek Wilk wrote:
> > > Besides, if we have an HVM dom0, we can enable
> > > XENFEAT_auto_translated_physmap and EPT and get the same level of
> > > performance as a PV on HVM guest. Moreover, since we wouldn't be using
> > > the mmu pvops anymore we could drop them completely: that would greatly
> > 
> > Sure. It also means you MUST have an IOMMU in the box.
> 
> Why?

For HVM dom0s. But I think that when you say HVM here, you mean
PV using the hypervisor's hardware-assisted page-table management -
EPT/NPT/HAP.

So PV+HAP = Stefano's HVM :-)

> We can still remap interrupts into event channels.
> Maybe you mean VMX?
> 
> 
> > > simplify the Xen maintenance in the Linux kernel as well as gain back
> > > some love from the x86 maintainers :)
> > > 
> > > The way I see it, normal Linux guests would be PV on HVM guests, but we
> > > still need to do something about dom0.
> > > This work would make dom0 exactly like PV on HVM guests apart from
> > > the boot sequence: dom0 would still boot from xen_start_kernel,
> > > everything else would be pretty much the same.
> > 
> > Ah, so not HVM exactly (you would only use the EPT/NPT/RVI/HAP for
> > pagetables).. and PV for startup, spinlock, timers, debug, CPU, and
> > backends. Though I thought the PV-in-an-HVM-container approach that
> > Mukesh made work would also benefit.
> 
> Yes for startup, spinlock, timers and backends. I would use HVM for cpu
> operations too (no need for pv_cpu_ops.write_gdt_entry anymore for
> example).

OK, so an SVM/VMX setup is required.
> 
> 
> > Or just come back to the idea of "real" HVM device driver domains
> > and have the PV dom0 be a light one loading the rest. But the setup of
> > it is just so complex. And the PV dom0 needs to deal with the PCI backend,
> > xenstore, and be able to comprehend ACPI _PRT... and then launch the "device
> > driver" Dom0, which in its simplest form would have all of the devices
> > passed in to it.
> > 
> > So four payloads: PV dom0, PV dom0 initrd, HVM dom0, HVM dom0 initrd :-)
> > Ok, that is too cumbersome. Maybe ingest the PV dom0+initrd in the Xen
> > hypervisor binary.. I should stop here.
> 
> The goal of splitting up dom0 into multiple management domains is surely
> a worthy one, no matter whether the domains are PV or HVM or PV on HVM, but
> yeah, the setup is hard. I hope that we'll be able to simplify it in
> the near future, maybe after the switchover to the new qemu and seabios
> is completed.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel