Re: [Xen-devel] [PATCH v2] xen_pt: fix failure of attaching & detaching a PCI device to VM repeatedly



On Wed, 9 Dec 2015, Jianzhong,Chang wrote:
> Add pci = [ '$VF_BDF1', '$VF_BDF2', '$VF_BDF3'] to the
> HVM guest configuration file. After the guest boots up,
> detach the VFs in sequence by
>  "xl pci-detach $DOMID $VF_BDF1"
>  "xl pci-detach $DOMID $VF_BDF2"
>  "xl pci-detach $DOMID $VF_BDF3"
> and reattach the VFs in sequence by
>  "xl pci-attach $DOMID $VF_BDF1"
>  "xl pci-attach $DOMID $VF_BDF2"
>  "xl pci-attach $DOMID $VF_BDF3"
> An error message like this is then reported:
> "libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received
> an error message from QMP server: Duplicate ID 'pci-pt-07_10.1' for device"
> 
> When xen_pt_region_add/del() is called, the MemoryRegion
> may not belong to the XenPCIPassthroughState.
> xen_pt_region_update() checks for this, but memory_region_ref/unref()
> does not. This unbalances the object's reference count and prevents
> the related objects from being released.
> Fix this by moving memory_region_ref/unref() from
> xen_pt_region_add/del() into xen_pt_region_update().
> 
> Signed-off-by: Jianzhong,Chang <jianzhongx.chang@xxxxxxxxx>

This is much better, thanks! I have just one minor ask, see below.


>  hw/xen/xen_pt.c |   13 ++++++++-----
>  1 files changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index aa96288..b963208 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -590,11 +590,15 @@ static void xen_pt_region_update(XenPCIPassthroughState *s,
>      if (bar == -1 && (!s->msix || &s->msix->mmio != mr)) {
>          return;
>      }
> -
> +    if (adding) {
> +        memory_region_ref(mr);
> +    }
>      if (s->msix && &s->msix->mmio == mr) {
>          if (adding) {
>              s->msix->mmio_base_addr = sec->offset_within_address_space;
>              rc = xen_pt_msix_update_remap(s, s->msix->bar_index);
> +        } else {
> +            memory_region_unref(mr);
>          }

Instead of this, could you please add an "out" label below, at the point
where you call memory_region_unref, and here just goto out?
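
Something along these lines (just an untested sketch to show the shape I
have in mind, with the unchanged code elided):

static void xen_pt_region_update(XenPCIPassthroughState *s,
                                 MemoryRegionSection *sec, bool adding)
{
    /* ... existing locals and bar lookup, unchanged ... */

    if (bar == -1 && (!s->msix || &s->msix->mmio != mr)) {
        return;
    }

    if (adding) {
        memory_region_ref(mr);
    }

    if (s->msix && &s->msix->mmio == mr) {
        if (adding) {
            s->msix->mmio_base_addr = sec->offset_within_address_space;
            rc = xen_pt_msix_update_remap(s, s->msix->bar_index);
        }
        /* on the removal path this falls through to the unref below */
        goto out;
    }

    /* ... existing BAR mapping/unmapping code, unchanged ... */

out:
    if (!adding) {
        memory_region_unref(mr);
    }
}

That way there is a single unref site balancing the single ref site at
the top of the function.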


>          return;
>      }
> @@ -635,6 +639,9 @@ static void xen_pt_region_update(XenPCIPassthroughState *s,
>                         adding ? "create new" : "remove old", errno);
>          }
>      }
> +    if (!adding) {
> +        memory_region_unref(mr);
> +    }
>  }
>  
>  static void xen_pt_region_add(MemoryListener *l, MemoryRegionSection *sec)
> @@ -642,7 +649,6 @@ static void xen_pt_region_add(MemoryListener *l, MemoryRegionSection *sec)
>      XenPCIPassthroughState *s = container_of(l, XenPCIPassthroughState,
>                                               memory_listener);
>  
> -    memory_region_ref(sec->mr);
>      xen_pt_region_update(s, sec, true);
>  }
>  
> @@ -652,7 +658,6 @@ static void xen_pt_region_del(MemoryListener *l, MemoryRegionSection *sec)
>                                               memory_listener);
>  
>      xen_pt_region_update(s, sec, false);
> -    memory_region_unref(sec->mr);
>  }
>  
>  static void xen_pt_io_region_add(MemoryListener *l, MemoryRegionSection *sec)
> @@ -660,7 +665,6 @@ static void xen_pt_io_region_add(MemoryListener *l, MemoryRegionSection *sec)
>      XenPCIPassthroughState *s = container_of(l, XenPCIPassthroughState,
>                                               io_listener);
>  
> -    memory_region_ref(sec->mr);
>      xen_pt_region_update(s, sec, true);
>  }
>  
> @@ -670,7 +674,6 @@ static void xen_pt_io_region_del(MemoryListener *l, MemoryRegionSection *sec)
>                                               io_listener);
>  
>      xen_pt_region_update(s, sec, false);
> -    memory_region_unref(sec->mr);
>  }
>  
>  static const MemoryListener xen_pt_memory_listener = {
> -- 
> 1.7.1
> 
