
Re: [Xen-devel] [question] Does HVM domain support xen-pcifront/xen-pciback?



On Wed, 8 Feb 2012, Kai Huang wrote:
> On Wed, Feb 8, 2012 at 1:14 AM, Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx> wrote:
> > On Tue, Feb 07, 2012 at 09:36:31PM +0800, cody wrote:
> >> On 02/07/2012 01:58 AM, Konrad Rzeszutek Wilk wrote:
> >> >On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
> >> >>Hi,
> >> >>
> >> >>I see that in pcifront_init, if the domain is not a PV domain, pcifront_init
> >> >>just returns an error. So it seems that an HVM domain does not support the
> >> >>xen-pcifront/xen-pciback mechanism? If that is true, why? I think
> >> >Yup. B/c the only thing that the PV PCI protocol does is enable
> >> >PCI configuration emulation. And if you boot an HVM guest - QEMU does
> >> >that already.
> >> >
> >> I heard that qemu does not support PCIe emulation, and that Xen provides
> >> only the legacy IO port mechanism, not the MMIO mechanism, for guest access
> >> to the configuration space. Is this true?
> >
> > The upstream version has an X58 north bridge implementation (ioh3420.c) to
> > support this.
> >
> > Regarding the MMIO mechanism, are you talking about MSI-X and such? If so,
> > the answer is that it does. QEMU traps when a guest tries to write MSI vectors
> > in the BAR space and translates those into the appropriate Xen calls to set up
> > vectors for the guest.
> 
> By the MMIO mechanism I mean that software can access PCI configuration space
> through memory space, using normal mov instructions, just like ordinary memory
> accesses. I believe all modern PCs provide this mechanism. Basically, a physical
> memory address range is reserved for PCI configuration space, and its base
> address is reported in the ACPI (MCFG) table. If there is no such information in
> the ACPI table, we have to fall back to the legacy IO port mechanism.
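
For reference, this is roughly how such a memory-mapped ("ECAM"/MMCONFIG)
configuration space address is formed; a minimal, untested sketch, with a
made-up base address standing in for whatever the ACPI MCFG table reports:

    /* Sketch only: the 0xE0000000 base is a placeholder, not a real value. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t ecam_addr(uint64_t mmcfg_base, uint8_t bus,
                              uint8_t dev, uint8_t fn, uint16_t offset)
    {
        /* Each function gets a 4K window: bus << 20 | dev << 15 | fn << 12 */
        return mmcfg_base + ((uint64_t)bus << 20) + ((uint64_t)dev << 15)
                          + ((uint64_t)fn << 12) + offset;
    }

    int main(void)
    {
        /* e.g. bus 0, device 3, function 0, register 0x100 (first extended reg) */
        printf("config address: 0x%llx\n",
               (unsigned long long)ecam_addr(0xE0000000ULL, 0, 3, 0, 0x100));
        return 0;
    }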
> 
> I am not specifically talking about MSI-X. Accessing PCI configuration space
> through IO ports has the limitation that we can only reach the first 256 bytes
> of configuration space, as it was designed for the PCI bus. For a PCIe device
> the configuration space is extended to 4K, which means we cannot access
> anything beyond the first 256 bytes using the legacy IO port mechanism. This is
> why a PCIe device requires the MMIO mechanism to access its configuration
> space. If a PCIe device places a capability, such as the MSI-X capability,
> beyond the first 256 bytes of its configuration space, we will never be able to
> enable MSI-X using the IO port mechanism.
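
To illustrate the 256-byte limit: the legacy mechanism encodes the register
offset in an 8-bit field of the CONFIG_ADDRESS dword written to port 0xCF8, so
PCIe extended offsets (0x100-0xFFF) simply cannot be expressed. A small sketch
(it only forms the address value; real accesses would need outl/inl on ports
0xCF8/0xCFC with IO privileges):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t cf8_address(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg)
    {
        return 0x80000000u            /* enable bit */
             | ((uint32_t)bus << 16)  /* bits 23:16 bus */
             | ((uint32_t)dev << 11)  /* bits 15:11 device */
             | ((uint32_t)fn  << 8)   /* bits 10:8  function */
             | (reg & 0xFC);          /* bits 7:2   dword-aligned offset, max 0xFF */
    }

    int main(void)
    {
        printf("CONFIG_ADDRESS for 00:03.0 reg 0x40: 0x%08x\n",
               cf8_address(0, 3, 0, 0x40));
        return 0;
    }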
> 
> I am not familiar with Xen and don't know whether Xen provides the MMIO
> mechanism to guests for PCI configuration space access. (To provide it, I
> think we would need to report the base address of the configuration space in
> the guest's ACPI table and mark the configuration space pages as
> non-accessible, since the hypervisor needs to trap the guest's configuration
> space accesses.)
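
For context, the check mentioned at the top of the thread is roughly the
following (paraphrased from drivers/pci/xen-pcifront.c; names and details vary
by kernel version):

    static int __init pcifront_init(void)
    {
        /* Bail out unless running as a Xen PV guest (and not as dom0). */
        if (!xen_pv_domain() || xen_initial_domain())
            return -ENODEV;

        pci_frontend_registrar(1 /* enable */);

        /* Register the pcifront frontend with xenbus. */
        return xenbus_register_frontend(&xenpci_driver);
    }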

We don't support PCI configuration via MMIO at the moment but it is
certainly possible to introduce it.

On the other hand, having PV PCI work with HVM guests is technically
challenging, because we would need to support both the emulated PCI domain/bus
and the new PV PCI domain/bus created by pcifront at the same time.
For example, xenbus initialization is done through the Xen platform-pci
driver.
Also, considering that we need to support the emulated passthrough code
for Windows guests anyway, I don't think that introducing any more
complexity or use cases in either the PV or emulated code paths is a
good idea.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

