
Re: [Xen-devel] [PATCH v8 15/17] vmx: VT-d posted-interrupt core logic handling



On Tue, 2015-10-27 at 05:19 +0000, Wu, Feng wrote:
> > -----Original Message-----
> > From: Dario Faggioli [mailto:dario.faggioli@xxxxxxxxxx]
> > 

> This is something similar to patch v7 and before, doing vcpu blocking
> during context switch, and it seems that, during the discussion, you
> guys preferred doing the vcpu blocking things outside context switch.
> 
I know, that's why I'm not 100% sure of the path to take (I think I
made that clear).

On one hand, I'm close to convincing myself that it's "just" a rollback
of the blocking, which is something we do already, when we clear the
flags. On the other hand, it's two hooks, which is worse than one, IMO,
especially if one is a 'cancel' hook. :-(

> > 
> > At the time, I "voted against" this design, because it seemed we
> > could
> > manage to handle interrupt ('regular' and posted) happening during
> > blocking in one and unified way, and with _only_ arch_vcpu_block().
> > If
> > that is no longer the case (and it's not, as we're adding more
> > hooks,
> > and the need to call the second is a special case being introduced
> > by
> > PI), it may be worth reconsidering things...
> > 
> > So, all in all, I don't know. As said, I don't like this
> > cancellation
> > hook because it's one more hook and because --while I see why it's
> > useful in this specific case-- I don't like having it in generic
> > code
> > (in schedule.c), and even less having it called in two places
> > (vcpu_block() and do_poll()). However, if others (Jan and George, I
> > guess) are not equally concerned about it, I can live with it.
> > 
> If I understand it correctly, this block cancel method was suggested
> by George, please refer to the attached email. George, what is your
> opinion about it? It is better to discuss a clear solution before I
> continue to post another version. Thanks a lot!
> 
Sure.

Thanks for mentioning and attaching the email.

So, bear with me a bit: do you mind explaining (possibly again, in
which case, sorry) why we need, for instance in vcpu_block(), to call
the hook as early as you're calling it and not later?

I mean, what's the problem with something like this:

void vcpu_block(void)
{
    struct vcpu *v = current;

    set_bit(_VPF_blocked, &v->pause_flags);

    /* Check for events /after/ blocking: avoids wakeup waiting race. */
    if ( local_events_need_delivery() )
    {
        clear_bit(_VPF_blocked, &v->pause_flags);
    }
    else
    {
 -->    arch_vcpu_block(v);
        TRACE_2D(TRC_SCHED_BLOCK, v->domain->domain_id, v->vcpu_id);
        raise_softirq(SCHEDULE_SOFTIRQ);
    }
}

?

In fact, George said this in the mail you mention:
"We shouldn't need to actually clear SN [in the arch_block hook]; SN
should already be clear because the vcpu should be currently running.
And if it's just been running, then NDST should also already be the
correct pcpu."

And that seems correct to me. So, the difference seems to me to be
"only" the NV, and whether or not the vcpu will already be in a blocked
list. The latter seems something we can easily compensate for (and
you're doing it already, AFAICT); the former, I'm not sure whether it
could be an issue or not.
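For concreteness, here is a minimal, self-contained sketch (plain C; the
struct layout, field names and vector values are my assumptions for
illustration, not Xen's actual definitions) of what the hook would be
left to do if called at the point marked '-->' above: with SN already
clear and NDST already naming the current pcpu, only NV needs switching
to the wakeup vector.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of the VT-d posted-interrupt
 * descriptor fields relevant to this discussion. */
#define POSTED_INTR_VECTOR  0xf2  /* notification vector while running (assumed) */
#define PI_WAKEUP_VECTOR    0xf1  /* notification vector while blocked (assumed) */

struct pi_desc {
    unsigned int sn;   /* suppress notification bit */
    uint8_t nv;        /* notification vector */
    uint32_t ndst;     /* notification destination (pcpu APIC id) */
};

/* Sketch of an arch_vcpu_block() called right before blocking: the
 * vcpu has just been running, so SN is already clear and NDST is
 * already the correct pcpu; only NV is updated. */
static void arch_vcpu_block_sketch(struct pi_desc *pi)
{
    assert(pi->sn == 0);        /* clear, because the vcpu was running */
    pi->nv = PI_WAKEUP_VECTOR;  /* redirect notifications to the wakeup vector */
    /* the vcpu would also be put on the per-pcpu blocked list here */
}
```

(In real code the NV/SN update would of course need to be an atomic
cmpxchg on the live descriptor; this only illustrates which fields
change on the block path.)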

What am I missing?

Note that this is "just" to understand and form an opinion. Sorry again
if what I'm asking has been analyzed already, but I don't remember
anything like that, and I'm not super-familiar with these interrupt
things. :-/

Also, in that email there is something about the possibility of having
to disable interrupts. I guess that didn't end up being necessary?

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

