
Re: [Xen-devel] [Patch v5 2/5] x86/hpet: Use single apic vector rather than irq_descs for HPET interrupts



On 28/11/13 14:33, Jan Beulich wrote:
>>>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 11/27/13 11:37 PM >>>
>> On 27/11/2013 08:35, Jan Beulich wrote:
>>>>>> On 26.11.13 at 19:32, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> There is no safe scenario (given Xen's handling of line level interrupts)
>>>> for timer interrupts to be lower priority than the highest possible line
>>>> level priority.
>>> That's true for the "new ack" model, but not for the "old" one (which
>>> in particular is also used with directed EOI). That may in turn be an
>>> argument for making the vector selection (low or high priority) depend
>>> on the IO-APIC ack model.
>> How would you go about allocating vectors then?  All IRQs are allocated
>> randomly between vectors 0x21 and 0xdf.  You can certainly allocate lower
>> vectors preferentially to line level interrupts, but the only way to
>> guarantee that the HPET vector has higher priority than every line level
>> interrupt is to place it above 0xdf.
> I'm not proposing any change to vector allocation. Once again, with the
> new ack model I agree that the HPET one must be high priority. With the
> old ack model, however, it could be low priority (and we might consider
> putting it at 0x20, preventing it from being used in dynamic allocation, or
> even at 0x1f - that ought to work as long as there's no CPU exception at
> that vector, since only the range 0x00...0x0f is reserved in the APIC
> architecture).

Because of XSA-3, the TPR setting blocks all external vectors below
0x20, to protect against external devices trying to trigger exceptions
(Alignment Check in particular, which expects an error code on the
stack).
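
As a concrete illustration of the priority rule involved (a sketch, not
Xen's actual code; apic_would_deliver and APIC_TPR_XSA3 are names made
up for this example): the local APIC only delivers an external interrupt
when its priority class (vector >> 4) is strictly greater than the class
held in the TPR, so a TPR of 0x10 gates everything below 0x20:

#include <stdbool.h>
#include <stdint.h>

#define APIC_TPR_XSA3 0x10   /* class 1: gates vectors 0x00 - 0x1f */

static bool apic_would_deliver(uint8_t vector, uint8_t tpr)
{
    /* Delivery requires the vector's class to exceed the TPR's class. */
    return (vector >> 4) > (tpr >> 4);
}

/* apic_would_deliver(0x1f, APIC_TPR_XSA3) == false (exception range) */
/* apic_would_deliver(0x21, APIC_TPR_XSA3) == true  (dynamic range)   */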

0x20 is currently the dynamic irq cleanup vector, but as it is only ever
raised with a software "int 0x20", it could be moved into the reserved
region.  (Not that I suggest we actually use the reserved region.)
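
That works because a software "int n" is dispatched straight through the
IDT and never consults the APIC's TPR, so the gate described above does
not apply to it.  Roughly (a sketch; raise_cleanup_vector is a made-up
name, not Xen's):

static inline void raise_cleanup_vector(void)
{
    /* INT n bypasses the APIC priority logic entirely; only the IDT
     * entry for vector 0x20 matters here. */
    asm volatile ( "int $0x20" );
}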

>
>> I do think that allocating line level vectors lower is a good idea,
>> given how long they remain outstanding.  I also think that a useful
>> performance tweak would be for device driver domains to be able to
>> request a preferentially higher vector for their interrupts.
> I'm not sure about this one.
>
>> From a pragmatic point of view, there are plenty of spare high priority
>> vectors for use, whereas we at XenServer already have usecases where we
>> are running out of free vectors in the range 0x21 -> 0xdf due to sheer
>> quantities of SR-IOV.  I already have half a mind to see whether I can
>> optimise the current allocation of vectors to make the dynamic range larger.
> Growing that range will only be possible by a small amount anyway, so this
> won't buy you much (you'll run out of vectors very quickly again). But then
> again - with the vector ranges being available once per CPU, are there
> really setups where the requirements exceed the vectors * CPUs value?
>
> Jan

NetScaler MPX systems have (off the top of my head):
    24x 10Gb network cards with 64 functions and 3 interrupts per function
    4x SSL offload cards with 64 functions and 2 interrupts per function

On a dual Sandy Bridge server with 32 cores in total.

That comes to 160 of the 189 available entries in the dynamic region
used, before considering any other dom0 interrupts, or the fact that
the irq migration logic needs to reserve a vector in the target CPU's
IDT before starting the move.
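
Spelling the arithmetic out (a back-of-the-envelope check of the figures
above, per CPU since each CPU has its own vector space):

#include <stdio.h>

int main(void)
{
    unsigned int nic  = 24 * 64 * 3;  /* NICs: cards * functions * irqs */
    unsigned int ssl  =  4 * 64 * 2;  /* SSL offload cards              */
    unsigned int cpus = 32;

    /* 4608 + 512 = 5120 interrupts; 5120 / 32 = 160 vectors per CPU. */
    printf("%u vectors per CPU of 189\n", (nic + ssl) / cpus);
    return 0;
}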

Even a fractional increase in available space will have a large impact
on the contention.

~Andrew



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

