
Re: [PATCH v9 02/16] vpci: use per-domain PCI lock to protect vpci structure


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 20 Sep 2023 10:12:09 +0200
  • Cc: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stewart Hildebrand <stewart.hildebrand@xxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Wed, 20 Sep 2023 08:12:26 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Sep 19, 2023 at 05:55:42PM +0200, Jan Beulich wrote:
> On 19.09.2023 17:39, Roger Pau Monné wrote:
> > On Tue, Aug 29, 2023 at 11:19:42PM +0000, Volodymyr Babchuk wrote:
> >> @@ -2908,7 +2908,13 @@ int allocate_and_map_msi_pirq(struct domain *d, int 
> >> index, int *pirq_p,
> >>  
> >>      msi->irq = irq;
> >>  
> >> -    pcidevs_lock();
> >> +    /*
> >> +     * If we are called via vPCI->vMSI path, we already are holding
> >> +     * d->pci_lock so there is no need to take pcidevs_lock, as it
> >> +     * will cause lock inversion.
> >> +     */
> >> +    if ( !rw_is_locked(&d->pci_lock) )
> >> +        pcidevs_lock();
> > 
> > This is not a safe expression to use, rw_is_locked() just returns
> > whether the lock is taken, but not if it's taken by the current CPU.
> > This is fine to use in assertions and debug code, but not in order to
> > take lock ordering decisions I'm afraid.
> > 
> > You will likely need to move the locking to the callers of the
> > function.
> 
> Along the lines of a later comment, I think it would be rw_is_write_locked()
> here anyway. Noting that xen/rwlock.h already has an internal
> _is_write_locked_by_me(), it would in principle be possible to construct
> something along the lines of what the comment says. But it would certainly
> be better if that could be avoided.
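
FWIW, the construct being discussed would end up looking something like
this (purely illustrative; rw_is_write_locked_by_me() is a made up name
for whatever wrapper around the internal _is_write_locked_by_me() would
have to be exposed):

    if ( !rw_is_write_locked_by_me(&d->pci_lock) )
        pcidevs_lock();
    ...
    if ( !rw_is_write_locked_by_me(&d->pci_lock) )
        pcidevs_unlock();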

I personally don't like constructs like the above; they are fragile and
should be avoided.

If possible, it might be better to introduce some wrappers around
allocate_and_map_msi_pirq() for the different locking contexts of the
callers, something along the lines of the sketch below.
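
Very roughly (the wrapper names are made up and the parameter lists are
elided, this is just to show the shape of the split, not actual code):

    /* Callers that don't hold any relevant lock, e.g. the physdev op path. */
    int allocate_and_map_msi_pirq(struct domain *d, ...)
    {
        int rc;

        pcidevs_lock();
        rc = allocate_and_map_msi_pirq_common(d, ...);
        pcidevs_unlock();

        return rc;
    }

    /* vPCI->vMSI path, where the caller already holds d->pci_lock. */
    int allocate_and_map_msi_pirq_locked(struct domain *d, ...)
    {
        ASSERT(rw_is_write_locked(&d->pci_lock));
        return allocate_and_map_msi_pirq_common(d, ...);
    }

with allocate_and_map_msi_pirq_common() being the current function body
minus the pcidevs_lock() handling.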

Regards, Roger.



 

