
Re: [Xen-devel] Current PVH/HVMlite work and planning (was: Re: Discussion about virtual iommu support for Xen guest)



> From: Stefano Stabellini
> Sent: Saturday, June 04, 2016 12:57 AM
> 
> > > >
> > > > How stable is HVMlite today? Is it already in production use?
> > > >
> > > > I wonder whether you have given detailed thought to how full PCI root
> > > > complex emulation will be done in Xen (including how it will interact
> > > > with Qemu)...
> > >
> > > I haven't looked into much detail regarding all this since, as I said,
> > > it's still a little bit far away in the PVH/HVMlite roadmap; we have
> > > more pressing issues to solve before getting to the point of
> > > implementing PCI passthrough. I expect Xen is going to intercept all
> > > PCI accesses and then forward them to the ioreq servers that have been
> > > registered for that specific config space, but this of course needs
> > > much more thought and a proper design document.
> > >
> > > > As I just wrote in another mail, if we aim for HVM first, will it
> > > > work if we implement the vIOMMU in Xen but still rely on the Qemu
> > > > root complex to report it to the guest?
> > >
> > > This seems quite inefficient IMHO (but I don't know that much about
> > > all this vIOMMU stuff). If you implement the vIOMMU inside of Xen,
> > > but the PCI root complex is inside of Qemu, aren't you going to
> > > perform quite a lot of jumps between Xen and QEMU just to access the
> > > vIOMMU?
> > >
> > > I expect something like:
> > >
> > > Xen traps PCI access -> QEMU -> Xen vIOMMU implementation
> > >
> >
> > I hope the role of Qemu is just to report vIOMMU-related information
> > (ACPI DMAR, etc.) so the guest can enumerate the presence of the
> > vIOMMU, while the actual emulation is done by the vIOMMU in the
> > hypervisor without going through Qemu.
> >
> > However, I just realized that even for the above purpose there's still
> > some interaction required between Qemu and the Xen vIOMMU: e.g. the
> > register base of the vIOMMU and the devices behind the vIOMMU are
> > reported through the ACPI DRHD structure, which means the Xen vIOMMU
> > needs to know the configuration in Qemu, and it might be dirty to
> > define such interfaces between Qemu and the hypervisor. :/
> 
> PCI accesses don't need to be particularly fast; they should not be on
> the hot path.
> 
> How bad would this interface between QEMU and the vIOMMU in Xen look?
> Can we make a short list of the basic operations that we would need to
> support, to get a clearer idea?

Below is a quick list of the basic operations between a vIOMMU and a PCI
root complex, if the two are not put together (derived from the VT-d
spec, section 8, "BIOS Considerations"); a rough sketch of such an
interface follows the list:

1) vIOMMU reports its presence and capabilities to the root complex:
        - interrupt remapping, ATS, etc.
        - types of devices supported (virtual, PV, passthrough)... (TBD)

2) root complex notifies vIOMMU about:
        - the base of the vIOMMU registers (DRHD)
        - the devices attached to this vIOMMU (DRHD device scope)
                * dynamically updated on hotplug or PCI resource rebalancing

3) Additionally, as I mentioned in another thread, Qemu needs to query the
vIOMMU about whether a virtual DMA access should be blocked, if the vIOMMU
for virtual devices is also put in Xen.
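
To make this concrete, here is a minimal sketch of such an interface,
expressed as C structures that could back hypercall-style operations.
All names here are hypothetical (nothing like this exists in Xen today);
it just maps one-to-one onto the three operations above:

/*
 * Hypothetical Qemu <-> Xen vIOMMU interface -- a sketch only,
 * mirroring operations 1)-3) above.
 */
#include <stdint.h>

/* 1) vIOMMU -> root complex: presence and capability report */
#define VIOMMU_CAP_INTR_REMAP   (1u << 0)   /* interrupt remapping */
#define VIOMMU_CAP_ATS          (1u << 1)   /* Address Translation Services */

struct viommu_query_caps {
    uint32_t caps;              /* OUT: VIOMMU_CAP_* bits */
};

/* 2) root complex -> vIOMMU: register base and attached devices */
struct viommu_set_reg_base {
    uint64_t base;              /* IN: MMIO base to be reported in the DRHD */
};

struct viommu_device {
    uint16_t seg;               /* PCI segment */
    uint8_t  bus;               /* bus number */
    uint8_t  devfn;             /* device/function */
};

struct viommu_attach_device {
    struct viommu_device dev;   /* IN: device to attach/detach */
    uint8_t  attach;            /* IN: 1 = attach (hotplug add), 0 = detach */
};

/* 3) Qemu virtual device -> vIOMMU: check/translate a DMA access */
struct viommu_xlat_dma {
    struct viommu_device dev;   /* IN: requester, i.e. the virtual device */
    uint64_t iova;              /* IN: DMA address programmed by the guest */
    uint32_t len;               /* IN: length of the access */
    uint8_t  write;             /* IN: 1 = write, 0 = read */
    uint64_t gpa;               /* OUT: translated guest-physical address */
    uint8_t  permitted;         /* OUT: 0 = blocked (vIOMMU fault) */
};

Operation 2) is where the interface gets chatty: attach/detach has to
fire on every hotplug or PCI resource-rebalancing event in Qemu, which
is exactly the interaction I called dirty above.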

Other BIOS structures (ATS, RMRR, hotplug, etc.) are optional, so I
haven't thought about them carefully yet. If used, they may require
additional interactions; for reference, the DRHD layout itself is
sketched below.
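
The DRHD structure mentioned in 2) is defined by the VT-d spec (section
8) for physical hardware; sketched as C structs (field layout per the
spec, struct names mine), it is roughly:

#include <stdint.h>

/*
 * DMA Remapping Hardware Unit Definition (DRHD). Qemu would build this
 * into the guest's ACPI DMAR table; the register base and device scope
 * entries are exactly the pieces a Xen vIOMMU would need to know about.
 */
struct acpi_drhd {
    uint16_t type;              /* 0 = DRHD */
    uint16_t length;            /* includes trailing device scope entries */
    uint8_t  flags;             /* bit 0: INCLUDE_PCI_ALL */
    uint8_t  reserved;
    uint16_t segment;           /* PCI segment this unit covers */
    uint64_t reg_base;          /* base address of the remapping registers */
    /* variable number of device scope entries follows */
};

/* Device scope entry: one device (or sub-hierarchy) behind this unit */
struct acpi_dev_scope {
    uint8_t  type;              /* 1 = PCI endpoint, 2 = PCI sub-hierarchy,
                                   3 = IOAPIC, 4 = MSI-capable HPET */
    uint8_t  length;
    uint16_t reserved;
    uint8_t  enum_id;           /* enumeration ID (IOAPIC/HPET only) */
    uint8_t  start_bus;
    /* path of (device, function) byte pairs follows */
};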

Thanks
Kevin