Re: [Xen-devel] [PATCH v4 7/9] vpci/msi: add MSI handlers



On Wed, Aug 02, 2017 at 07:34:28AM -0600, Jan Beulich wrote:
> >>> Roger Pau Monne <roger.pau@xxxxxxxxxx> 06/30/17 5:01 PM >>>
> >+int vpci_msi_arch_enable(struct vpci_arch_msi *arch, struct pci_dev *pdev,
> >+                         uint64_t address, uint32_t data, unsigned int vectors)
> >+{
> >+    struct msi_info msi_info = {
> >+        .seg = pdev->seg,
> >+        .bus = pdev->bus,
> >+        .devfn = pdev->devfn,
> >+        .entry_nr = vectors,
> >+    };
> >+    unsigned int i;
> >+    int rc;
> >+
> >+    ASSERT(arch->pirq == -1);
> 
> Please introduce a #define for the -1 here, to allow easily matching up
> producer and consumer side(s).

I've added a define for INVALID_PIRQ to xen/irq.h.
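For reference (sketch; the exact placement inside xen/irq.h may differ):

    #define INVALID_PIRQ (-1)

so the assertion above becomes ASSERT(arch->pirq == INVALID_PIRQ) and the
consumer side(s) can test against the same constant.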

> >+    /* Get a PIRQ. */
> >+    rc = allocate_and_map_msi_pirq(pdev->domain, -1, &arch->pirq,
> >+                                   MAP_PIRQ_TYPE_MULTI_MSI, &msi_info);
> >+    if ( rc )
> >+    {
> >+        dprintk(XENLOG_ERR, "%04x:%02x:%02x.%u: failed to map PIRQ: %d\n",
> >+                pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> >+                PCI_FUNC(pdev->devfn), rc);
> >+        return rc;
> >+    }
> >+
> >+    for ( i = 0; i < vectors; i++ )
> >+    {
> >+        xen_domctl_bind_pt_irq_t bind = {
> >+            .machine_irq = arch->pirq + i,
> >+            .irq_type = PT_IRQ_TYPE_MSI,
> >+            .u.msi.gvec = msi_vector(data) + i,
> >+            .u.msi.gflags = msi_flags(data, address),
> >+        };
> >+
> >+        pcidevs_lock();
> >+        rc = pt_irq_create_bind(pdev->domain, &bind);
> >+        if ( rc )
> >+        {
> >+            dprintk(XENLOG_ERR,
> >+                    "%04x:%02x:%02x.%u: failed to bind PIRQ %u: %d\n",
> >+                    pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> >+                    PCI_FUNC(pdev->devfn), arch->pirq + i, rc);
> >+            spin_lock(&pdev->domain->event_lock);
> >+            unmap_domain_pirq(pdev->domain, arch->pirq);
> 
> Don't you also need to undo the pt_irq_create_bind() calls here for all prior
> successful iterations?

Yes. unmap_domain_pirq ends up calling pirq_guest_force_unbind, but it's
better not to resort to that, so the error path should explicitly undo
the bindings done by the prior successful iterations.
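Something along these lines, placed before the unmap_domain_pirq call
(sketch only, assuming pcidevs_lock is still held at that point, as in
the quoted error path):

    while ( i-- )
    {
        xen_domctl_bind_pt_irq_t bind = {
            .machine_irq = arch->pirq + i,
            .irq_type = PT_IRQ_TYPE_MSI,
        };

        /* Undo the binds performed by the previous loop iterations. */
        pt_irq_destroy_bind(pdev->domain, &bind);
    }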

> >+int vpci_msi_arch_disable(struct vpci_arch_msi *arch, struct pci_dev *pdev,
> >+                          unsigned int vectors)
> >+{
> >+    unsigned int i;
> >+
> >+    ASSERT(arch->pirq != -1);
> >+
> >+    for ( i = 0; i < vectors; i++ )
> >+    {
> >+        xen_domctl_bind_pt_irq_t bind = {
> >+            .machine_irq = arch->pirq + i,
> >+            .irq_type = PT_IRQ_TYPE_MSI,
> >+        };
> >+
> >+        pcidevs_lock();
> >+        pt_irq_destroy_bind(pdev->domain, &bind);
> 
> While I agree that the loop should continue if this fails, I'm not convinced
> you should entirely ignore the return value here.

I've added a printk in order to aid debugging.
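Along the lines of (sketch; exact message wording aside):

    rc = pt_irq_destroy_bind(pdev->domain, &bind);
    if ( rc )
        printk(XENLOG_G_WARNING
               "%04x:%02x:%02x.%u: failed to unbind PIRQ %u: %d\n",
               pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
               PCI_FUNC(pdev->devfn), arch->pirq + i, rc);

so the loop keeps going, but the failure is at least logged.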

> >+/* Handlers for the MSI control field (PCI_MSI_FLAGS). */
> >+static void vpci_msi_control_read(struct pci_dev *pdev, unsigned int reg,
> >+                                  union vpci_val *val, void *data)
> >+{
> >+    const struct vpci_msi *msi = data;
> >+
> >+    /* Set multiple message capable. */
> >+    val->u16 = MASK_INSR(fls(msi->max_vectors) - 1, PCI_MSI_FLAGS_QMASK);
> 
> The comment is somewhat misleading - whether the device is multi-message
> capable depends on msi->max_vectors.

Better "Set the number of supported messages"?
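(For reference, the field encodes a log2 count: a device supporting 8
vectors stores fls(8) - 1 = 3 in PCI_MSI_FLAGS_QMASK, i.e. 2^3 = 8
messages.)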

> >+    if ( msi->enabled ) {
> 
> Style.
> 
> >+        val->u16 |= PCI_MSI_FLAGS_ENABLE;
> >+        val->u16 |= MASK_INSR(fls(msi->vectors) - 1, PCI_MSI_FLAGS_QSIZE);
> 
> Why is reading back the proper value here dependent upon MSI being
> enabled?

Right, I've now changed this slightly to always store the number of
enabled vectors, regardless of whether the MSI enable bit is set.
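Roughly (sketch of the reworked read handler):

    /* Set the number of supported messages. */
    val->u16 = MASK_INSR(fls(msi->max_vectors) - 1, PCI_MSI_FLAGS_QMASK);
    /* Always report the configured vector count, even with MSI disabled. */
    val->u16 |= MASK_INSR(fls(msi->vectors) - 1, PCI_MSI_FLAGS_QSIZE);
    if ( msi->enabled )
        val->u16 |= PCI_MSI_FLAGS_ENABLE;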

> >...
> >+ error:
> >+    ASSERT(ret);
> >+    xfree(msi);
> >+    return ret;
> >+}
> 
> Don't you also need to unregister address handlers you've registered?

vpci_add_handlers already takes care of cleaning up the register
handlers on failure.

> >+void vpci_dump_msi(void)
> >+{
> >+    struct domain *d;
> >+
> >+    for_each_domain ( d )
> >+    {
> >+        const struct pci_dev *pdev;
> >+
> >+        if ( !has_vpci(d) )
> >+            continue;
> >+
> >+        printk("vPCI MSI information for guest %u\n", d->domain_id);
> 
> "... for Dom%d" or "... for d%d" please.
> 
> >...
> >+            if ( msi->masking )
> >+                printk("mask=%#032x\n", msi->mask);
> 
> Why 30 hex digits? And generally # should be used only when not blank or
> zero padding the value (as field width includes the 0x prefix).

Ouch, that should be 8, not 32.
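I.e. either plain "mask=%#x", or "mask=0x%08x" if a fixed width is
wanted, given that the field width in e.g. "%#010x" has to account for
the "0x" prefix as well.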

Thanks, Roger.
