
Re: [Xen-devel] [v3 12/15] vmx: posted-interrupt handling when vCPU is blocked

On 02/07/15 09:30, Dario Faggioli wrote:
> On Thu, 2015-07-02 at 04:27 +0000, Wu, Feng wrote:
>>>>> +    list_for_each_entry(vmx, &per_cpu(pi_blocked_vcpu, cpu),
>>>>> +                        pi_blocked_vcpu_list)
>>>>> +        if ( vmx->pi_desc.on )
>>>>> +            tasklet_schedule(&vmx->pi_vcpu_wakeup_tasklet);
>>>> There is a logical bug here.  If we have two NV's delivered to this
>>>> pcpu, we will kick the first vcpu twice.
>>>> On finding desc.on, a kick should be scheduled, then the vcpu removed
>>>> from this list.  With desc.on set, we know for certain that another NV
>>>> will not arrive for it until it has been scheduled again and the
>>>> interrupt posted.
>>> Yes, that seems a possible issue (and one that should indeed be
>>> avoided).
>>> I'm still unsure about the one that I raised myself but, if it is
>>> possible to have more than one vcpu in a pcpu list, with desc.on==true,
>>> then it looks to me that we kick all of them, for each notification.
>>> Added to what Andrew spotted: if there are a bunch of vcpus queued with
>>> desc.on==true, and a bunch of notifications arrive before the tasklet
>>> gets executed, we'll be kicking the whole bunch of them a bunch of
>>> times! :-/
>> As Andrew mentioned, removing the vCPUs with desc.on = true from the
>> list can avoid kicking vCPUs multiple times.
> It avoids kicking vcpus multiple times if more than one notification
> arrives, yes.
> It does not, however, ensure that a single notification kicks only the
> vcpu it was intended for.
> This is the third time that I ask:
>  (1) whether it is possible to have more than one vcpu queued on one
>      pcpu's PI blocked list with desc.on set (I really believe it is);
>  (2) if yes, whether it is TheRightThing(TM) to kick all of them as
>      soon as any notification arrives, instead of putting together a
>      mechanism for kicking only the specific one.

We will receive one NV for every time the hardware managed to
successfully set desc.on.

If multiple stack up and we proactively drain the list, we will
subsequently search the list to completion for all remaining NV's, due
to finding no appropriate entries.

I can't currently decide whether this will be quicker or slower overall,
or whether (most likely) it will even out to be roughly equal in the
general case.


Xen-devel mailing list