
Re: [PATCH v5 03/14] vpci: move lock outside of struct vpci


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 31 Jan 2022 09:56:51 +0100
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "andrew.cooper3@xxxxxxxxxx" <andrew.cooper3@xxxxxxxxxx>, "george.dunlap@xxxxxxxxxx" <george.dunlap@xxxxxxxxxx>, "paul@xxxxxxx" <paul@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 31 Jan 2022 08:57:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Jan 28, 2022 at 02:15:08PM +0000, Oleksandr Andrushchenko wrote:
> Hi, Roger, Jan!
> 
> On 13.01.22 10:58, Roger Pau Monné wrote:
> > On Wed, Jan 12, 2022 at 04:52:51PM +0100, Jan Beulich wrote:
> >> On 12.01.2022 16:42, Roger Pau Monné wrote:
> >>> On Wed, Jan 12, 2022 at 03:57:36PM +0100, Jan Beulich wrote:
> >>>> On 25.11.2021 12:02, Oleksandr Andrushchenko wrote:
> >>>>> @@ -379,7 +396,6 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
> >>>>>   
> >>>>>           data = merge_result(data, tmp_data, size - data_offset, data_offset);
> >>>>>       }
> >>>>> -    spin_unlock(&pdev->vpci->lock);
> >>>>>   
> >>>>>       return data & (0xffffffff >> (32 - 8 * size));
> >>>>>   }
> >>>> Here and ...
> >>>>
> >>>>> @@ -475,13 +498,12 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
> >>>>>               break;
> >>>>>           ASSERT(data_offset < size);
> >>>>>       }
> >>>>> +    spin_unlock(&pdev->vpci_lock);
> >>>>>   
> >>>>>       if ( data_offset < size )
> >>>>>           /* Tailing gap, write the remaining. */
> >>>>>           vpci_write_hw(sbdf, reg + data_offset, size - data_offset,
> >>>>>                         data >> (data_offset * 8));
> >>>>> -
> >>>>> -    spin_unlock(&pdev->vpci->lock);
> >>>>>   }
> >>>> ... even more so here I'm not sure about the correctness of this
> >>>> movement: while pdev->vpci indeed doesn't get accessed, I wonder
> >>>> whether there wasn't an intention to avoid racing calls to
> >>>> vpci_{read,write}_hw() this way. In any event I think such movement
> >>>> would need justification in the description.
> >>> I agree about the need for justification in the commit message, or,
> >>> even better, for this to be split into a pre-patch, as it's not
> >>> related to the lock switching done here.
> >>>
> >>> I do think this is fine however, as racing calls to
> >>> vpci_{read,write}_hw() shouldn't be a problem. Those are just wrappers
> >>> around the pci_conf_{read,write} functions, and the required locking (in
> >>> case of using the IO ports) is already taken care of in
> >>> pci_conf_{read,write}.
> >> IOW you consider it acceptable for a guest (really: Dom0) read racing
> >> a write to read back only part of what was written (so far)? I would
> >> think individual multi-byte reads and writes should appear atomic to
> >> the guest.
> > We split 64bit writes into two 32bit ones without taking the lock for
> > the whole duration of the access, so it's already possible to see a
> > partially updated state as a result of a 64bit write.
> >
> > I'm going over the PCI(e) spec but I don't seem to find anything about
> > whether the ECAM is allowed to split memory transactions into multiple
> > Configuration Requests, and whether those could then interleave with
> > requests from a different CPU.
> So, with the above, is it still fine for you to have the change as is,
> or do you want this optimization to go into a dedicated patch before this one?

The change seems slightly controversial, so I think it would be best
if it was split into a separate patch with proper reasoning in the
commit message.
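
For illustration, here is a minimal standalone sketch (plain C with
pthreads, not Xen code; every name in it is a hypothetical stand-in for
the helpers discussed above) of the tearing scenario: a 64-bit access
performed as two internally-locked 32-bit hardware accesses can be
observed half-done by a racing reader, because no lock spans both
halves.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t hw_reg[2];             /* emulated 32-bit config registers */
static pthread_mutex_t hw_lock = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of a single 32-bit hardware access, locked internally only. */
static void write_hw(unsigned int idx, uint32_t val)
{
    pthread_mutex_lock(&hw_lock);
    hw_reg[idx] = val;
    pthread_mutex_unlock(&hw_lock);
}

static uint32_t read_hw(unsigned int idx)
{
    pthread_mutex_lock(&hw_lock);
    uint32_t val = hw_reg[idx];
    pthread_mutex_unlock(&hw_lock);

    return val;
}

/* A 64-bit write is split into two 32-bit ones; no lock spans both. */
static void *writer(void *arg)
{
    for ( unsigned int i = 0; i < 100000; i++ )
    {
        uint64_t val = (i & 1) ? UINT64_MAX : 0;

        write_hw(0, (uint32_t)val);
        write_hw(1, (uint32_t)(val >> 32));
    }

    return NULL;
}

int main(void)
{
    pthread_t t;
    unsigned long torn = 0;

    pthread_create(&t, NULL, writer, NULL);

    for ( unsigned int i = 0; i < 100000; i++ )
    {
        /* A 64-bit read is likewise split into two 32-bit accesses. */
        uint64_t val = read_hw(0) | ((uint64_t)read_hw(1) << 32);

        if ( val != 0 && val != UINT64_MAX )
            torn++;                    /* halves came from different writes */
    }

    pthread_join(t, NULL);
    printf("torn reads observed: %lu\n", torn);

    return 0;
}

Built with "cc -pthread", this usually reports a non-zero torn count,
which matches the partially updated state that a split 64-bit write can
already expose today.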

Thanks, Roger.
