Re: [Xen-devel] [PATCH v3 6/6] ioreq-server: bring the PCI hotplug controller implementation into Xen
> -----Original Message-----
> From: Ian Campbell
> Sent: 14 March 2014 15:02
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v3 6/6] ioreq-server: bring the PCI hotplug
> controller implementation into Xen
>
> On Fri, 2014-03-14 at 14:31 +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Ian Campbell
> > > Sent: 14 March 2014 14:09
> > > To: Paul Durrant
> > > Cc: xen-devel@xxxxxxxxxxxxx
> > > Subject: Re: [Xen-devel] [PATCH v3 6/6] ioreq-server: bring the PCI
> > > hotplug controller implementation into Xen
> > >
> > > On Fri, 2014-03-14 at 13:25 +0000, Paul Durrant wrote:
> > > > > -----Original Message-----
> > > > > From: Ian Campbell
> > > > > Sent: 14 March 2014 11:58
> > > > > To: Paul Durrant
> > > > > Cc: xen-devel@xxxxxxxxxxxxx
> > > > > Subject: Re: [Xen-devel] [PATCH v3 6/6] ioreq-server: bring the PCI
> > > > > hotplug controller implementation into Xen
> > > > >
> > > > > On Wed, 2014-03-05 at 14:48 +0000, Paul Durrant wrote:
> > > > > > diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> > > > > > index 2e52470..4176440 100644
> > > > > > --- a/tools/libxl/libxl_pci.c
> > > > > > +++ b/tools/libxl/libxl_pci.c
> > > > > > @@ -867,6 +867,13 @@ static int do_pci_add(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev, i
> > > > > >      }
> > > > > >      if ( rc )
> > > > > >          return ERROR_FAIL;
> > > > > > +
> > > > > > +    rc = xc_hvm_pci_hotplug_enable(ctx->xch, domid, pcidev->dev);
> > > > > > +    if (rc < 0) {
> > > > > > +        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Error: xc_hvm_pci_hotplug_enable failed");
> > > > > > +        return ERROR_FAIL;
> > > > > > +    }
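
For context, the xc_hvm_pci_hotplug_enable() call added above is how the
toolstack tells the new in-Xen hotplug controller that a device has appeared.
A minimal sketch of how such a libxc wrapper is typically structured follows;
the HVMOP_pci_hotplug constant and struct xen_hvm_pci_hotplug are assumed
names for illustration, not the exact definitions from this series.

/*
 * Sketch only: HVMOP_pci_hotplug and struct xen_hvm_pci_hotplug are
 * assumed names; the real series may define them differently.
 */
int xc_hvm_pci_hotplug_enable(xc_interface *xch, domid_t domid, uint32_t slot)
{
    DECLARE_HYPERCALL;
    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_pci_hotplug, arg);
    int rc;

    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
    if ( arg == NULL )
        return -1;

    arg->domid  = domid;
    arg->slot   = slot;
    arg->enable = 1;                      /* plug, as opposed to unplug */

    hypercall.op     = __HYPERVISOR_hvm_op;
    hypercall.arg[0] = HVMOP_pci_hotplug;
    hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);

    rc = do_xen_hypercall(xch, &hypercall);

    xc_hypercall_buffer_free(xch, arg);
    return rc;
}
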
> > > > >
> > > > > Perhaps I'm misreading this but does this imply that you cannot
> > > > > hotplug PCI devices into an HVM guest which wasn't started with a
> > > > > PCI device? That doesn't sound right/desirable.
> > > > >
> > > >
> > > > I don't think that is the case. The extra code here is because we're
> > > > intercepting the hotplug controller IO space in Xen so QEMU may well
> > > > play with its hotplug controller device model, but the guest will
> > > > never see it.
> > >
> > > That wasn't what I meant.
> > >
> > > Unless the guest has a PCI device enabled the above code will never be
> > > called, so we will never setup the hotplug controller within Xen.
> > >
> >
> > I don't follow. The hotplug controller is set up by the call to
> > gpe_init() in hvm_domain_initialize(). The above code is there to tell
> > the hotplug controller a new device has appeared. Am I missing
> > something?
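
To make that split concrete, here is a rough hypervisor-side sketch: the
controller's I/O range is registered once at domain creation (the gpe_init()
mentioned above is shown only schematically) and the per-device hypercall then
latches the slot and raises the ACPI GPE/SCI. The struct hotplug_ctrl, the
hvm_domain.hotplug field and the hvm_assert_sci()/hvm_deassert_sci() helpers
are hypothetical, for illustration only; register_portio_handler() is the
usual Xen mechanism for claiming a port range.

#define HOTPLUG_SLOT_STATUS  0xae00    /* overlays upstream QEMU's controller */

static int hotplug_io_handler(int dir, uint32_t port, uint32_t bytes,
                              uint32_t *val)
{
    struct domain *d = current->domain;
    struct hotplug_ctrl *ctrl = &d->arch.hvm_domain.hotplug; /* hypothetical */

    if ( dir == IOREQ_READ )
        *val = ctrl->slot_up;             /* guest AML reads slot status */
    else
    {
        ctrl->slot_up &= ~*val;           /* ACPI-style write-to-clear */
        if ( ctrl->slot_up == 0 )
            hvm_deassert_sci(d);          /* hypothetical helper */
    }

    return X86EMUL_OKAY;
}

void gpe_init(struct domain *d)
{
    /* Claim the port range so the guest never reaches QEMU's model. */
    register_portio_handler(d, HOTPLUG_SLOT_STATUS, 2, hotplug_io_handler);
}

int hotplug_enable(struct domain *d, unsigned int slot)
{
    d->arch.hvm_domain.hotplug.slot_up |= 1u << slot;
    hvm_assert_sci(d);                    /* hypothetical helper: raise GPE/SCI */
    return 0;
}
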
>
> No, I was, didn't realise this was per-device setup.
>
> I assume this is ok to call for both cold- and hotplug
>
I believe so. I've certainly seen no fallout in testing my new VGA device
(which is cold plugged to a paused domain).
> > > > > Is there no problem with the availability of the i/o space for the
> > > > > different versions of qemu (i.e. they are both the same today?) The
> > > > > AML looked like it poked a different thing in the trad case -- so is
> > > > > 0xae00 unused there?
> > > > >
> > > >
> > > > QEMU will still emulate a PCI hotplug controller but the guest will no
> > > > longer see it. In the case of upstream that io range is now handled by
> > > > xen, so it really really can't get to it. If trad is used then the
> > > > hotplug controller would still be visible if the guest talks to the
> > > > old IO ranges, but since they are not specified in the ACPI table any
> > > > more it shouldn't have anything to do with them. If you think that's a
> > > > problem then I could hook those IO ranges in Xen too and stop the IO
> > > > getting through.
> > >
> > > What I meant was what if there was something else at 0xae00 on trad?
> >
> > I don't believe so.
> >
> > > (since trad seems to have its hotplug controller somewhere else this is
> > > possible). That something will now be shadowed by the hotplug controller
> > > in Xen. If that something was important for some other reason this is a
> > > problem. IOW is there a hole in the io port address map at this location
> > > on both qemus?
> > >
> >
> > The new implementation in Xen directly overlays the upstream QEMU
> > controller.
>
> I got this part.
>
> > I believe those IO ports are unimplemented by trad.
>
> That's the important thing (although turning "believe" into "have
> confirmed" would make me sleep easier).
Sure, I can check.
>
> > > BTW, what happens on migrate from current Xen to something with this
> > > patch in? The guest will be using the old AML and poke the old
> > > addresses. Maybe that just works?
> > >
> >
> > If you started with upstream, we now overlay them with IO ports having
> > the same semantics, so that should be fine.
>
> Do we need to arrange to restore any state saved by qemu into Xen or is
> it stateless across migration?
>
It should be stateless as we cannot migrate with a pass-through device plugged
in AFAIK.
> > If you started with trad then there will be a problem - the IO ports
> > will still function, but the new API call will cause the SCI to be
> > asserted and unless something talks to the new IO port it won't be
> > de-asserted.
> >
> > So, I guess we need something to explicitly init the new hotplug
> > controller in domain build rather than always creating it.
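
One way to read "explicitly init in domain build": make the in-Xen controller
opt-in at creation time, so a domain incoming from a trad-based host (still
using the old AML) never gets it and the stuck-SCI problem above cannot arise.
A sketch of what that could look like in libxl; xc_hvm_pci_hotplug_init() and
the restore-path check are assumptions for illustration, not part of the
posted series.

/* libxl domain-build sketch; xc_hvm_pci_hotplug_init() is an assumed name. */
static int hvm_setup_hotplug_controller(libxl__gc *gc, uint32_t domid,
                                        int restore)
{
    libxl_ctx *ctx = libxl__gc_owner(gc);

    /* Only fresh domains get the in-Xen hotplug controller; a domain
     * restored with old (trad) AML keeps talking to QEMU's controller. */
    if ( restore )
        return 0;

    if ( xc_hvm_pci_hotplug_init(ctx->xch, domid) < 0 ) {
        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR,
                         "xc_hvm_pci_hotplug_init failed");
        return ERROR_FAIL;
    }

    return 0;
}
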
>
> You also need to keep the distinction in the AML then, which is
> unfortunate.
>
I don't think we do. The old AML should migrate across with the guest. Every
new domain will get the new hotplug controller and hence need the new AML.
> We could tie this feature (multireq) to qemu version and simply not
> support it for qemu-trad; that might simplify some of this sort of thing
> a bit.
>
If we did that then we *would* need to select the right AML, but - providing I
can verify that trad doesn't have any IO ports covered up by the new
implementation - I don't think we need to differentiate.
Paul
> Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel