
Re: [PATCH] xen-pcifront: Handle missed Connected state



On Fri, Sep 2, 2022 at 12:59 PM Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
>
> The conventional style for subject (from "git log --oneline") is:
>
>   xen/pcifront: Handle ...
>
> On Mon, Aug 29, 2022 at 11:15:36AM -0400, Jason Andryuk wrote:
> > An HVM guest with linux stubdom and 2 PCI devices failed to start as
>
> "stubdom" might be handy shorthand in the Xen world, but I think
> it would be nice to consistently spell out "stubdomain" since you use
> both forms randomly in this commit log and newbies like me have to
> wonder whether they're the same or different.
>
> > libxl timed out waiting for the PCI devices to be added.  It happens
> > intermittently but with some regularity.  libxl wrote the two xenstore
> > entries for the devices, but then timed out waiting for backend state 4
> > (Connected) - the state stayed at 7 (Reconfiguring).  (PCI passthrough
> > to an HVM with stubdomain is PV passthrough to the stubdomain and then
> > HVM passthrough with the QEMU inside the stubdomain.)
> >
> > The stubdom kernel never printed "pcifront pci-0: Installing PCI
> > frontend", so it seems to have missed state 4 which would have
> > called pcifront_try_connect -> pcifront_connect_and_init_dma
>
> Add "()" after function names for clarity.
>
> > Have pcifront_detach_devices special-case state Initialised and call
> > pcifront_connect_and_init_dma.  Don't use pcifront_try_connect because
> > that sets the xenbus state which may throw off the backend.  After
> > connecting, skip the remainder of detach_devices since none have been
> > initialized yet.  When the backend switches to Reconfigured,
> > pcifront_attach_devices will pick them up again.

Thanks for taking a look, Bjorn.  That all sounds good.  I'll wait a
little longer to see if there is any more feedback before sending a
v2.

Regards,
Jason
