Re: Event delivery and "domain blocking" on PVHv2
On 22.06.2020 17:31, Martin Lucina wrote:
> On 2020-06-22 15:58, Roger Pau Monné wrote:
>> On Mon, Jun 22, 2020 at 12:58:37PM +0200, Martin Lucina wrote:
>>> How about this arrangement, which appears to work for me; no hangs I
>>> can see so far and domU survives ping -f fine with no packet loss:
>>>
>>> CAMLprim value
>>> mirage_xen_evtchn_block_domain(value v_deadline)
>>> {
>>>     struct vcpu_info *vi = VCPU0_INFO();
>>>     solo5_time_t deadline = Int64_val(v_deadline);
>>>
>>>     if (solo5_clock_monotonic() < deadline) {
>>>         __asm__ __volatile__ ("cli" : : : "memory");
>>>         if (vi->evtchn_upcall_pending) {
>>>             __asm__ __volatile__ ("sti");
>>>         }
>>>         else {
>>>             hypercall_set_timer_op(deadline);
>>
>> What if you set a deadline so close that evtchn_upcall_pending gets
>> set by Xen here and the interrupt is injected? You would execute the
>> noop handler and just hlt, and could likely end up in the same blocked
>> situation as before?
>
> Why would an interrupt be injected here? Doesn't the immediately
> preceding "cli" disable that?
>
> Or perhaps I need to do a PV/HVM hybrid and set vi->evtchn_upcall_mask
> just before the cli, and clear it after the sti?
evtchn_upcall_mask is a strictly PV-only thing. See e.g. the code
comment in hvm_set_callback_irq_level().
Jan
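
The quoted function above is cut off after hypercall_set_timer_op(), so
the instructions that follow it are not shown. Assuming the continuation
is a plain "sti" followed by a separate "hlt" (an assumption for
illustration, not quoted code), the window Roger describes looks like
this:

    hypercall_set_timer_op(deadline);
    /* Interrupts are disabled by the earlier cli, so Xen cannot inject
     * the upcall here, but it can still set vi->evtchn_upcall_pending. */
    __asm__ __volatile__ ("sti");
    /* <-- the pending upcall is injected here; the no-op handler runs
     * and returns ... */
    __asm__ __volatile__ ("hlt");
    /* ... and hlt then sleeps with the event already consumed, waking
     * only when the one-shot timer (or some later event) fires. */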
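
One way to close that window is to rely on the x86 guarantee that STI
delays interrupt delivery until after the following instruction has
begun, so a back-to-back "sti; hlt" cannot be split by the upcall. A
minimal sketch of the blocking function using that idiom, reusing the
helpers from the snippet above (this illustrates the technique, not the
patch that eventually resulted from this thread):

    CAMLprim value
    mirage_xen_evtchn_block_domain(value v_deadline)
    {
        struct vcpu_info *vi = VCPU0_INFO();
        solo5_time_t deadline = Int64_val(v_deadline);

        if (solo5_clock_monotonic() < deadline) {
            __asm__ __volatile__ ("cli" : : : "memory");
            if (vi->evtchn_upcall_pending) {
                /* An event is already pending: re-enable interrupts so
                 * it is delivered, and return without halting. */
                __asm__ __volatile__ ("sti");
            }
            else {
                hypercall_set_timer_op(deadline);
                /* The STI interrupt shadow means the upcall can only be
                 * delivered once hlt is executing, so it wakes the halt
                 * instead of firing uselessly in front of it. */
                __asm__ __volatile__ ("sti; hlt" : : : "memory");
            }
        }
        return Val_unit;
    }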