Re: [Xen-devel] [PATCH] mem_event: use wait queue when ring is full
On Fri, Dec 09, Andres Lagar-Cavilla wrote:
> Olaf,
> Tim pointed out we need both solutions to ring management in the
> hypervisor. With our patch ("Improve ring management for memory events. Do
> not lose guest events."), we can handle the common case quickly, without
> preempting VMs. With your patch, we can handle extreme situations of ring
> congestion with the big hammer called wait queue.
With my patch, requests are processed as they come in; foreign and target
requests are handled equally. There is no special accounting.
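To make the behaviour concrete, here is a rough userspace analogue of the
wait-queue approach (a pthread mutex/condvar stands in for the hypervisor
wait queue; all names are made up for illustration and this is not the
actual Xen code): when the ring is full the producer sleeps until the
consumer frees a slot, so no request is ever dropped.

#include <pthread.h>

#define RING_SIZE 8

struct ring {
    unsigned long req[RING_SIZE];
    unsigned int prod, cons;          /* producer/consumer counters */
    pthread_mutex_t lock;
    pthread_cond_t space_available;   /* stands in for the Xen wait queue */
};

static void put_request(struct ring *r, unsigned long req)
{
    pthread_mutex_lock(&r->lock);

    /* Ring full: sleep instead of dropping the request. */
    while ( r->prod - r->cons == RING_SIZE )
        pthread_cond_wait(&r->space_available, &r->lock);

    r->req[r->prod++ % RING_SIZE] = req;
    pthread_mutex_unlock(&r->lock);
}

static unsigned long get_request(struct ring *r)
{
    unsigned long req;

    /* Callers are assumed to check that the ring is non-empty. */
    pthread_mutex_lock(&r->lock);
    req = r->req[r->cons++ % RING_SIZE];
    pthread_cond_signal(&r->space_available); /* wake a waiting producer */
    pthread_mutex_unlock(&r->lock);
    return req;
}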
A few questions about your requirements:
- Is the goal that each guest vcpu can always place at least one request
  in the ring?
- How many requests should foreign vcpus place in the ring if the guest
  has more vcpus than available slots in the ring? Just a single one, so
  that foreigners can also make some progress? (A rough sketch of such a
  scheme follows below.)
- Should access and paging have the same rules for accounting?
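To make sure I understand the kind of accounting you have in mind, here is
a rough sketch of one possible rule (purely illustrative; the names and the
policy itself are my guesses, not what your patch does): reserve one slot
per guest vcpu so every vcpu can always post at least one request, and let
at most one foreign request in at a time.

#include <stdbool.h>

struct ring_state {
    unsigned int ring_size;        /* total slots in the ring            */
    unsigned int free_slots;       /* slots currently unused             */
    unsigned int guest_vcpus;      /* vcpus that may still need a slot   */
    unsigned int foreign_inflight; /* foreign requests already in flight */
};

/* May a foreign vcpu claim a slot right now? */
static bool foreign_may_claim(const struct ring_state *r)
{
    /* Keep enough slots back for the guest vcpus, but never reserve the
     * whole ring, so a foreigner can still make some progress. */
    unsigned int reserved = r->guest_vcpus < r->ring_size
                          ? r->guest_vcpus : r->ring_size - 1;

    /* Allow at most one foreign request in flight at a time. */
    if ( r->foreign_inflight >= 1 )
        return false;

    return r->free_slots > reserved;
}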
Olaf
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel