
Re: [Xen-devel] [PATCH 2 of 5] Improve ring management for memory events. Do not lose guest events



On Thu, Dec 01, Andres Lagar-Cavilla wrote:

> MAX_HVM_VCPUS sits at 128 right now. Haven't compile checked, but that
> probably means we would need a two page ring. And then, when 1024-cpu
> hosts arrive and we grow MAX_HVM_VCPUS, we grow the ring size again.

The ring has 64 entries.
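(For reference, a rough sketch of where that 64 comes from. The header and
entry sizes below are assumptions picked to reproduce a 64-entry ring on a
4 KiB page, not numbers taken from the mem_event headers; the shared-ring
macros round the usable count down to a power of two.)

    /*
     * Illustrative only: how a one-page shared ring's entry count falls out
     * of the page size, the ring header, and the per-entry size.
     */
    #include <stdio.h>

    #define PAGE_SIZE    4096u
    #define HEADER_SIZE    64u   /* assumed producer/consumer indices + padding */
    #define ENTRY_SIZE     56u   /* assumed size of the request/response union  */

    /* Largest power of two not exceeding x (the ring macros round down likewise). */
    static unsigned int round_down_pow2(unsigned int x)
    {
        unsigned int p = 1;
        while (p * 2 <= x)
            p *= 2;
        return p;
    }

    int main(void)
    {
        unsigned int raw = (PAGE_SIZE - HEADER_SIZE) / ENTRY_SIZE; /* 72 here */
        printf("ring entries: %u\n", round_down_pow2(raw));        /* -> 64   */
        return 0;
    }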

> Or, we could limit the constraint to the number of online vcpus, which
> would get somewhat tricky for vcpu hot-plugging.
> 
> I can fix that separately, once there is a decision on which way to go re
> ring management.


I just sent "[PATCH] mem_event: use wait queue when ring is full" to the
list. This version ought to work: it takes requests from both the target
and foreign vcpus into account and leaves at least one slot for the
target.
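
To make the accounting concrete, here is a hypothetical sketch (not the code
from the patch itself) of the kind of check that description implies: a
foreign producer only claims a slot if enough free slots remain for the
target domain's own vcpus, and otherwise it would go onto the wait queue.
The struct and function names are made up for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    #define RING_SIZE 64

    struct ring_state {
        unsigned int used;            /* slots currently occupied              */
        unsigned int target_pending;  /* slots already claimed by target vcpus */
        unsigned int target_vcpus;    /* online vcpus of the target domain     */
    };

    /* Can this producer claim a slot now, or must it wait? */
    static bool can_claim_slot(const struct ring_state *r, bool foreign)
    {
        unsigned int free = RING_SIZE - r->used;

        if (!foreign)
            return free > 0;

        /* Foreign producers must leave room for the target's remaining vcpus. */
        unsigned int reserved = r->target_vcpus - r->target_pending;
        return free > reserved;
    }

    int main(void)
    {
        struct ring_state r = { .used = 62, .target_pending = 0, .target_vcpus = 2 };

        printf("target may claim:  %d\n", can_claim_slot(&r, false)); /* 1 */
        printf("foreign may claim: %d\n", can_claim_slot(&r, true));  /* 0 */
        return 0;
    }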


Olaf



 

