
Re: [PATCH] x86/IRQ: bump max number of guests for a shared IRQ to 31


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
  • Date: Wed, 2 Dec 2020 14:53:06 +0000
  • Cc: <andrew.cooper3@xxxxxxxxxx>, <roger.pau@xxxxxxxxxx>, <wl@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 02 Dec 2020 14:53:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 02/12/2020 09:25, Jan Beulich wrote:
> On 01.12.2020 00:59, Igor Druzhinin wrote:
>> The current limit of 7 is too restrictive for modern systems, where one GSI
>> can be shared by potentially many PCI INTx sources, each corresponding to a
>> device passed through to its own guest. Some systems do not apply due
>> diligence in swizzling INTx links when e.g. INTA is declared as the
>> interrupt pin for the majority of PCI devices behind a single router,
>> resulting in overuse of a GSI.
>>
>> Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
>> ---
>>
>> If people think that would make sense - I can rework the array to a list of
>> domain pointers to avoid the limit.
> 
> Not sure about this. What is the biggest number you've found on any
> system? (I assume the chosen value of 31 has some headroom.)

The value 31 was taken as a practical maximum for one specific HP system
when all of the PCI slots in all of its riser cards are occupied by GPUs.
The value was obtained by reverse engineering its ACPI tables. Currently
we're only concerned with the number 8 (our graphics vendors do not recommend
installing more cards than that), but it's only a matter of time before it
grows. I'm also not sure why this routing scheme was chosen in the first
place - whether it's dictated by hardware restrictions or by firmware
engineers being lazy - we're working with HP to determine that.

> Instead I'm wondering whether this wouldn't better be a Kconfig
> setting (or even command line controllable). There don't look to be
> any restrictions on the precise value chosen (i.e. 2**n-1 like is
> the case for old and new values here, for whatever reason), so a
> simple permitted range of like 4...64 would seem fine to specify.
> Whether the default then would want to be 8 (close to the current
> 7) or higher (around the actually observed maximum) is a different
> question.
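For reference, a build-time knob along the lines you suggest might look like
the sketch below (the symbol name, range, and help text are purely
illustrative, not taken from any actual patch):

```kconfig
config IRQ_MAX_GUESTS
	int "Maximum number of guests that may share one IRQ"
	range 4 64
	default 8
	help
	  Upper bound on the number of guests a single (typically
	  level-triggered GSI) interrupt may be shared by. Raise this on
	  systems where firmware routes many PCI INTx sources to one GSI.
```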

I'm in favor of a command line argument here - it would be much less trouble
if a higher limit suddenly became necessary in the field. The default IMO
should definitely be higher than 8 - I'd go with 32, which should cover our
real-world scenarios and leave some headroom for the future.

Igor
