
Re: [Xen-devel] [PATCH v2 3/7] x86/hvm: convert gsi_assert_count into a variable size array



On Tue, Mar 28, 2017 at 01:25:44AM -0600, Jan Beulich wrote:
> >>> On 27.03.17 at 19:28, <roger.pau@xxxxxxxxxx> wrote:
> > On Mon, Mar 27, 2017 at 09:59:42AM -0600, Jan Beulich wrote:
> >> >>> On 27.03.17 at 12:18, <roger.pau@xxxxxxxxxx> wrote:
> >> > @@ -122,7 +124,7 @@ void hvm_isa_irq_assert(
> >> >      struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> >> >      unsigned int gsi = hvm_isa_irq_to_gsi(isa_irq);
> >> >  
> >> > -    ASSERT(isa_irq <= 15);
> >> > +    ASSERT(isa_irq <= 15 && isa_irq < hvm_irq->nr_gsis);
> >> >  
> >> >      spin_lock(&d->arch.hvm_domain.irq_lock);
> >> >  
> >> > @@ -139,7 +141,7 @@ void hvm_isa_irq_deassert(
> >> >      struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> >> >      unsigned int gsi = hvm_isa_irq_to_gsi(isa_irq);
> >> >  
> >> > -    ASSERT(isa_irq <= 15);
> >> > +    ASSERT(isa_irq <= 15 && isa_irq < hvm_irq->nr_gsis);
> >> 
> >> Not sure about these two - perhaps they'd better be
> >> BUILD_BUG_ON()s for the DomU side, and Dom0 should never
> >> be allocated less than 16?
> > 
> > Hm, I could add:
> > 
> > BUILD_BUG_ON(VIOAPIC_NUM_PINS < 16);
> > 
> > But this is going to make it harder to remove VIOAPIC_NUM_PINS later on.
> 
> As said elsewhere, those remaining uses could/should become
> ARRAY_SIZE().

Hm, but then I will have to use:

ARRAY_SIZE(((struct hvm_hw_vioapic *)0)->redirtbl)

Which is kind of cumbersome. I could wrap that in a macro, I guess.
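
Something along these lines, I mean (just a sketch; the macro name is
illustrative, not settled):

    /* Hypothetical helper: number of pins in the static redirtbl array. */
    #define VIOAPIC_REDIRTBL_SIZE \
        ARRAY_SIZE(((struct hvm_hw_vioapic *)0)->redirtbl)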

> >> > @@ -495,7 +497,7 @@ static void irq_dump(struct domain *d)
> >> >             (uint32_t) hvm_irq->isa_irq.pad[0], 
> >> >             hvm_irq->pci_link.route[0], hvm_irq->pci_link.route[1],
> >> >             hvm_irq->pci_link.route[2], hvm_irq->pci_link.route[3]);
> >> > -    for ( i = 0 ; i < VIOAPIC_NUM_PINS; i += 8 )
> >> > +    for ( i = 0; i < hvm_irq->nr_gsis && i + 8 <= hvm_irq->nr_gsis; i += 8 )
> >> >          printk("GSI [%x - %x] %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8
> >> >                 " %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8"\n",
> >> >                 i, i+7,
> >> > @@ -507,6 +509,13 @@ static void irq_dump(struct domain *d)
> >> >                 hvm_irq->gsi_assert_count[i+5],
> >> >                 hvm_irq->gsi_assert_count[i+6],
> >> >                 hvm_irq->gsi_assert_count[i+7]);
> >> > +    if ( i != hvm_irq->nr_gsis )
> >> > +    {
> >> > +        printk("GSI [%x - %x]", i, hvm_irq->nr_gsis - 1);
> >> > +        for ( ; i < hvm_irq->nr_gsis; i++)
> >> > +            printk(" %2.2"PRIu8, hvm_irq->gsi_assert_count[i]);
> >> > +        printk("\n");
> >> > +    }
> >> 
> >> I realize you've just copied this from existing code, but what
> >> advantage does %2.2 have over just %2 here?
> > 
> > Shouldn't it be just %3 anyway? Would you be fine with me fixing this
> > here, or would you rather prefer a separate patch?
> 
> Well, if you also want to change the existing ones, then a separate
> patch would be preferred. As to %2 vs %3 - how would the latter
> be any better if we want/need to change the type from u8 anyway?

ATM a uint8_t value can be up to 255, so %3 seems better (although AFAICT the
count is limited to 4, because there are 4 INT# lines wired to each GSI, and
pending GSIs are not accumulated). I'm not sure I follow why we want/need to
change the type.
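
For reference, assuming PRIu8 expands to plain "u" (so standard printf
semantics apply), the specifiers would behave like:

    printk("%2.2"PRIu8, 5u);   /* prints "05"  - precision forces 2 digits  */
    printk("%3"PRIu8, 5u);     /* prints "  5" - space-padded to width 3    */
    printk("%2.2"PRIu8, 255u); /* prints "255" - width/precision are minima */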

> >> > @@ -545,6 +554,9 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
> >> >      unsigned int asserted, pdev, pintx;
> >> >      int rc;
> >> >  
> >> > +    if ( hvm_irq->nr_gsis != VIOAPIC_NUM_PINS )
> >> > +        return -EOPNOTSUPP;
> >> > +
> >> >      spin_lock(&d->arch.hvm_domain.irq_lock);
> >> >  
> >> >      pdev  = hvm_irq->callback_via.pci.dev;
> >> > @@ -592,6 +604,9 @@ static int irq_load_pci(struct domain *d, hvm_domain_context_t *h)
> >> >      struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> >> >      int link, dev, intx, gsi;
> >> >  
> >> > +    if ( hvm_irq->nr_gsis != VIOAPIC_NUM_PINS )
> >> > +        return -EOPNOTSUPP;
> >> 
> >> Why do you add these checks here?
> > 
> > Because there's a:
> > 
> >     for ( gsi = 0; gsi < VIOAPIC_NUM_PINS; gsi++ )
> >         hvm_irq->gsi_assert_count[gsi] = 0;
> > 
> > A little bit below.
> 
> True for the load side, but I can't see anything like that for save.
> 
> > I can change that to use nr_gsis instead and remove those
> > checks (both here and in irq_save_pci).
> 
> Please do, there's no harm using the dynamic upper bound in
> that case.

Now done.
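
I.e. the load side now uses the dynamic bound, along the lines of (a sketch of
the change for v3, not the exact hunk):

    for ( gsi = 0; gsi < hvm_irq->nr_gsis; gsi++ )
        hvm_irq->gsi_assert_count[gsi] = 0;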

> >> > @@ -89,13 +77,27 @@ struct hvm_irq {
> >> >      u8 round_robin_prev_vcpu;
> >> >  
> >> >      struct hvm_irq_dpci *dpci;
> >> > +
> >> > +    /*
> >> > +     * Number of wires asserting each GSI.
> >> > +     *
> >> > +     * GSIs 0-15 are the ISA IRQs. ISA devices map directly into this space
> >> > +     * except ISA IRQ 0, which is connected to GSI 2.
> >> > +     * PCI links map into this space via the PCI-ISA bridge.
> >> > +     *
> >> > +     * GSIs 16+ are used only by PCI devices. The mapping from PCI device to
> >> > +     * GSI is as follows: ((device*4 + device/8 + INTx#) & 31) + 16
> >> > +     */
> >> > +    unsigned int nr_gsis;
> >> > +    u8 gsi_assert_count[];
> >> >  };
> >> 
> >> I'm afraid at the same time (or in a later patch) the type of this array
> >> (and maybe also the type of pci_link_assert_count[]) will need to be
> >> widened.
> > 
> > This one seems to be fine for PVH Dom0 (you will have to look at the next
> > series), but I specifically avoid touching pci_link_assert_count. Again,
> > it's better to comment on this in the next patch series, which actually
> > implements the interrupt binding. This is needed because code in vioapic
> > makes use of gsi_assert_count.
> 
> Well, we can certainly discuss this there (whenever I get to look at
> it, as that's not 4.9 stuff now anymore), but for now I don't see the
> connection between my comment and your reply.

Hm, I'm not sure I understand why you would like to change the type of this
array. AFAICT the type of the array is not related to the number of vIO APIC
pins, or to the number of vIO APICs.
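
FWIW, with the trailing variable size array the allocation side ends up
looking roughly like this (a sketch assuming an xzalloc_bytes()-style
allocation, not necessarily the exact patch code):

    struct hvm_irq *hvm_irq;

    /* One trailing u8 counter per GSI, zeroed on allocation. */
    hvm_irq = xzalloc_bytes(sizeof(*hvm_irq) +
                            nr_gsis * sizeof(hvm_irq->gsi_assert_count[0]));
    if ( !hvm_irq )
        return -ENOMEM;
    hvm_irq->nr_gsis = nr_gsis;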

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

