
Re: [PATCH 2/2] x86/pvh: also print hardware domain pIRQ limit for PVH



On Thu, Nov 21, 2024 at 11:54:49AM +0100, Jan Beulich wrote:
> On 20.11.2024 12:35, Roger Pau Monne wrote:
> > Do not return early in the PVH/HVM case, so that the number of pIRQs is also
> > printed.
> 
> What you're printing ...
> 
> > Fixes: 17f6d398f765 ('cmdline: document and enforce "extra_guest_irqs" upper bounds')
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > ---
> >  xen/arch/x86/io_apic.c | 12 +++++++-----
> >  1 file changed, 7 insertions(+), 5 deletions(-)
> > 
> > diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
> > index bd5ad61c85e4..d9db2efc4f58 100644
> > --- a/xen/arch/x86/io_apic.c
> > +++ b/xen/arch/x86/io_apic.c
> > @@ -2754,11 +2754,13 @@ unsigned int __hwdom_init arch_hwdom_irqs(const struct domain *d)
> >  
> >      /* PVH (generally: HVM) can't use PHYSDEVOP_pirq_eoi_gmfn_v{1,2}. */
> >      if ( is_hvm_domain(d) )
> > -        return nr_irqs;
> > -
> > -    if ( !d->domain_id )
> > -        n = min(n, dom0_max_vcpus());
> > -    n = min(nr_irqs_gsi + n * NR_DYNAMIC_VECTORS, max_irqs);
> > +        n = nr_irqs;
> 
> ... is rather the number of IRQs we picked for the system. That may happen to
> end up being the upper bound for PVH Dom0, yet the reason this wasn't logged at
> all was the limited use pIRQ-s have there. Granted, at the time I was still
> under the impression they have no use there at all, so this isn't really an
> objection to the change. It would have been nice though if the description had
> mentioned what significance pIRQ-s actually have in PVH Dom0.

Sure, what about adding to the commit message:

"While PVH dom0 doesn't have access to the hypercalls to manage pIRQs
itself, neither the knowledge to do so, pIRQs are still used by Xen to
map and bind interrupts to a PVH dom0 behind its back.  Hence the
pIRQ limit is still relevant for a PVH dom0."
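
For illustration, the resulting flow in arch_hwdom_irqs() would be roughly the
below (a paraphrase based on the hunk above, not the literal tree contents; the
printk format string is from memory and may differ):

    /* PVH (generally: HVM) can't use PHYSDEVOP_pirq_eoi_gmfn_v{1,2}. */
    if ( is_hvm_domain(d) )
        n = nr_irqs;       /* was: return nr_irqs; which skipped the logging */
    else
    {
        if ( !d->domain_id )
            n = min(n, dom0_max_vcpus());
        n = min(nr_irqs_gsi + n * NR_DYNAMIC_VECTORS, max_irqs);
    }

    /* Now reached for a PVH/HVM hardware domain as well. */
    printk("%pd has maximum %u PIRQs\n", d, n);

    return n;

i.e. the only functional change for PVH is that the limit now flows into the
existing log message instead of being returned without being printed.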

Thanks, Roger.
