
RE: [Xen-devel] [Patch] continue_hypercall_on_cpu rework using tasklets



>-----Original Message-----
>From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>Sent: Saturday, April 17, 2010 1:58 AM
>To: Jiang, Yunhong; Juergen Gross
>Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Yu, Ke
>Subject: Re: [Xen-devel] [Patch] continue_hypercall_on_cpu rework using 
>tasklets
>
>On 16/04/2010 09:16, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>
>> Oops, yes, this should be there already, without c_h_o_c.
>> So, if another vcpu is trying to vcpu_pause() this vcpu, it will deadlock
>> because this vcpu can't be paused, right? (I assume the same reasoning
>> applies to hvmop_flush_tlb_all.)
>
>Yes.
>
>> If this is true, then how about checking !vcpu_runnable(current) in
>> stop_machine_run()'s stopmachine_set_state()? If this check fails, it means
>> someone is trying to change our state and a potential deadlock may follow,
>> so we can cancel this stop_machine_run(). This is at least a bit cleaner
>> than the timeout mechanism.
>
>Interesting... That could work. But...

I agree that the idle vcpu method is cleaner in the long run.
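
In the meantime, to make the check concrete, here is a rough sketch against
xen/common/stop_machine.c (the -EBUSY return and how stop_machine_run() would
unwind on cancellation are my assumptions, not real code):

static int stopmachine_set_state(enum stopmachine_state state)
{
    atomic_set(&stopmachine_data.done, 0);
    smp_wmb();
    stopmachine_data.state = state;

    while ( atomic_read(&stopmachine_data.done) != stopmachine_data.nr_cpus )
    {
        /*
         * If we are no longer runnable, some other vcpu is spinning in
         * vcpu_pause() -> vcpu_sleep_sync() waiting for us to be
         * descheduled.  We can never satisfy it from this busy loop, so
         * bail out and let the caller cancel stop_machine_run().
         */
        if ( !vcpu_runnable(current) )
            return -EBUSY;
        cpu_relax();
    }

    return 0;
}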

>
>>>> Maybe the cleaner way is your previous suggestion, that is, to put
>>>> stop_machine_run() in the idle vcpu; another way is to turn back to the
>>>> original method, i.e. do it in schedule_tail.
>>>
>>> Argh. That would be annoying!
>>
>> It seems doing it in schedule_tail will not help with this deadlock issue.
>
>Yes, but now that I think about it, scheduling c_h_o_c() and s_m_r() in
>idle-vcpu
>context instead of softirq context might not be *that* hard to do, and it
>avoids all these subtle deadlock possibilities. I think I'm warming to it
>compared with the alternatives.
>
>> Yes, c_h_o_c does not introduce any more deadlocks; my concern is that its
>> name may be confusing if one tries to use it for other hypercalls. After
>> all, if a hypercall, or a function used by the hypercall, depends on
>> current, it should not use this c_h_o_c.
>
>Well, I think there are two issues: (1) we run the continuation as a
>softirq; (2) we don't run the continuation in the context of the original
>vcpu. I think (1) is a bigger problem than (2) as it introduces the
>possibility of all these nasty subtle deadlocks. (2) is obviously not
>desirable, *but* I don't think any callers much care about the context of
>the original caller, *and* if they do the resulting bug is generally going
>to be pretty obvious. That is, the hypercall just won't work, ever -- it's
>much less likely than (1) to result in very rare, very subtle bugs.
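
To restate the two issues in code before replying: with the tasklet rework,
the continuation ends up running along these lines (a simplified sketch using
Xen's tasklet API, not the actual patch; migrate_info and its fields are
illustrative):

struct migrate_info {
    long (*func)(void *data);  /* the hypercall continuation */
    void *data;
    long result;
    struct vcpu *vcpu;         /* original caller, kept paused meanwhile */
    struct tasklet tasklet;
};

static void continue_hypercall_tasklet_handler(unsigned long _info)
{
    struct migrate_info *info = (struct migrate_info *)_info;

    /*
     * Issue (1): this runs in softirq context, so it must not block, and
     * anyone doing vcpu_pause() on the vcpu we interrupted will spin
     * against us -- the subtle deadlocks discussed above.
     * Issue (2): 'current' here is whatever vcpu happened to be running
     * on this cpu, not info->vcpu, so the continuation must not rely on
     * the original caller's context.
     */
    info->result = info->func(info->data);

    /* Plumbing the result back and freeing info are elided here. */
    vcpu_unpause(info->vcpu);
}

static long sketch_continue_hypercall_on_cpu(unsigned int cpu,
                                             long (*func)(void *), void *data)
{
    struct migrate_info *info = xmalloc(struct migrate_info);

    if ( info == NULL )
        return -ENOMEM;

    info->func = func;
    info->data = data;
    info->vcpu = current;

    vcpu_pause_nosync(current);  /* keep the caller out of the way */
    tasklet_init(&info->tasklet, continue_hypercall_tasklet_handler,
                 (unsigned long)info);
    tasklet_schedule_on_cpu(&info->tasklet, cpu);
    return 0;
}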

Coming back to issue (2): in fact this c_h_o_c is something like a Xen action,
i.e. a utility provided by the Xen hypervisor that can be used by the guest,
so maybe we can change the name of c_h_o_c to something like call_xen_XXX?

As for your idle-vcpu context work, I think it is a bit like an HVM domain
waiting for an I/O assist from qemu (i.e., using
prepare_wait_on_xen_event_channel()), is that right?
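
If so, the idle-vcpu variant would have roughly this shape (my guess at the
proposal, not real code; all names except vcpu_pause_nosync() are made up):

static long continue_hypercall_in_idle(unsigned int cpu,
                                       struct migrate_info *info)
{
    /*
     * The caller pauses itself; the continuation then runs from idle-vcpu
     * context on the target cpu.  Unlike a softirq, the idle vcpu can be
     * descheduled, so a concurrent vcpu_pause() no longer spins against a
     * non-preemptible context and the subtle deadlocks above go away.
     */
    vcpu_pause_nosync(current);
    queue_work_on_idle_vcpu(cpu, info);  /* hypothetical helper */
    return 0;  /* info->func() runs later and unpauses the caller */
}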

--jyh

>
> -- Keir
>

