
Re: [PATCH v3 05/11] x86/vioapic: switch to use the EOI callback mechanism



On 08.04.2021 10:59, Roger Pau Monné wrote:
> On Thu, Apr 08, 2021 at 08:27:10AM +0200, Jan Beulich wrote:
>> On 07.04.2021 18:46, Roger Pau Monné wrote:
>>> On Wed, Apr 07, 2021 at 05:19:06PM +0200, Jan Beulich wrote:
>>>> On 31.03.2021 12:32, Roger Pau Monne wrote:
>>>>> --- a/xen/arch/x86/hvm/vioapic.c
>>>>> +++ b/xen/arch/x86/hvm/vioapic.c
>>>>> @@ -621,7 +624,43 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
>>>>>           d->arch.hvm.nr_vioapics != 1 )
>>>>>          return -EOPNOTSUPP;
>>>>>  
>>>>> -    return hvm_load_entry(IOAPIC, h, &s->domU);
>>>>> +    rc = hvm_load_entry(IOAPIC, h, &s->domU);
>>>>> +    if ( rc )
>>>>> +        return rc;
>>>>> +
>>>>> +    for ( i = 0; i < ARRAY_SIZE(s->domU.redirtbl); i++ )
>>>>> +    {
>>>>> +        const union vioapic_redir_entry *ent = &s->domU.redirtbl[i];
>>>>> +        unsigned int vector = ent->fields.vector;
>>>>> +        unsigned int delivery_mode = ent->fields.delivery_mode;
>>>>> +        struct vcpu *v;
>>>>> +
>>>>> +        /*
>>>>> +         * Add a callback for each possible vector injected by a redirection
>>>>> +         * entry.
>>>>> +         */
>>>>> +        if ( vector < 16 || !ent->fields.remote_irr ||
>>>>> +             (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
>>>>> +            continue;
>>>>> +
>>>>> +        for_each_vcpu ( d, v )
>>>>> +        {
>>>>> +            struct vlapic *vlapic = vcpu_vlapic(v);
>>>>> +
>>>>> +            /*
>>>>> +             * NB: if the vlapic registers were restored before the vio-apic
>>>>> +             * ones we could test whether the vector is set in the vlapic IRR
>>>>> +             * or ISR registers before unconditionally setting the callback.
>>>>> +             * This is harmless as eoi_callback is capable of dealing with
>>>>> +             * spurious callbacks.
>>>>> +             */
>>>>> +            if ( vlapic_match_dest(vlapic, NULL, 0, ent->fields.dest_id,
>>>>> +                                   ent->fields.dest_mode) )
>>>>> +                vlapic_set_callback(vlapic, vector, eoi_callback, NULL);
>>>>
>>>> eoi_callback()'s behavior is only one of the aspects to consider here.
>>>> The other is vlapic_set_callback()'s complaining if it finds a
>>>> callback already set. What guarantees that a mistakenly set callback
>>>> here won't get in conflict with some future use of the same vector by
>>>> the guest?
>>>
>>> Such a conflict would only manifest as a warning message and wouldn't
>>> cause any malfunction, as the later callback would override the
>>> current one.
>>>
>>> The model I'm proposing doesn't support lapic vector sharing between
>>> different devices that require EOI callbacks; I think we already
>>> discussed this on a previous series and agreed it was fine.
>>
>> The problem with such false-positive warning messages is that they'll
>> cause cautious people to investigate, i.e. perhaps spend a sizable
>> amount of time understanding what was actually a non-issue. I view
>> this as a problem even if the code functions fine the way it is. I'm
>> not even sure explicitly mentioning the situation in the comment is
>> going to help, as one may not stumble across that comment while
>> investigating.
> 
> What about making the warning message for an override in
> vlapic_set_callback() conditional on there being a vector pending in
> the IRR or ISR?
> 
> Without such a vector pending the callback is just useless, as it's
> not going to be executed, so overriding it in that situation is
> perfectly fine. That should prevent the restore here from triggering
> the message unless there's indeed a troublesome sharing of a vector.

Ah yes, since the callbacks are self-clearing, this gating looks quite
reasonable to me.
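
Something along these lines in vlapic_set_callback(), I suppose (a rough
sketch only - the "callbacks" array layout, "idx", and the message text
are illustrative placeholders on my side, while vlapic_test_vector() is
the existing helper):

    /*
     * Warn about an override only if the old callback could still fire,
     * i.e. the vector is pending in the IRR or in service in the ISR.
     */
    if ( vlapic->callbacks[idx].callback &&
         vlapic->callbacks[idx].callback != callback &&
         (vlapic_test_vector(vec, &vlapic->regs->data[APIC_IRR]) ||
          vlapic_test_vector(vec, &vlapic->regs->data[APIC_ISR])) )
        printk(XENLOG_G_WARNING
               "%pv: override of EOI callback for vector %#x\n",
               vlapic_vcpu(vlapic), vec);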

Jan