
Re: [Xen-devel] [PATCH 2 of 5] Improve ring management for memory events. Do not lose guest events



I just pushed all that I had on mem event. I split it into two series. One
that adds features that are independent of how we choose to manage the
ring.

Hopefully we'll agree that these features are useful (such as kicking Xen
to consume batched responses directly with an event channel, no domctl
needed), and that they should go in the tree. I've been using them for a
month already.
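
Roughly, the helper side of that looks like the sketch below. This is only
an illustration; the xc_evtchn_* calls and the ring/mem_event type names are
taken from the public headers as I remember them, not lifted from the
patches, so treat them as assumptions.

#include <string.h>
#include <xenctrl.h>
#include <xen/mem_event.h>
#include <xen/io/ring.h>

/* Queue one response on the shared mem_event ring (the helper is the
 * back end; Xen produces requests and consumes responses). */
static void put_response(mem_event_back_ring_t *back_ring,
                         const mem_event_response_t *rsp)
{
    RING_IDX idx = back_ring->rsp_prod_pvt;

    memcpy(RING_GET_RESPONSE(back_ring, idx), rsp, sizeof(*rsp));
    back_ring->rsp_prod_pvt = idx + 1;
    RING_PUSH_RESPONSES(back_ring);
}

/* One event-channel kick makes Xen consume the whole batch of pushed
 * responses; no domctl per response. */
static int flush_batch(xc_evtchn *xce, evtchn_port_t local_port)
{
    return xc_evtchn_notify(xce, local_port);
}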

The second series is a refreshed post of our version of ring management,
sans wait queues. With both up-to-date versions in hand, we should be able
to reach a consensus on which to use.

Thanks
Andres

> On Thu, Dec 01, Andres Lagar-Cavilla wrote:
>
>> MAX_HVM_VCPUS sits at 128 right now. I haven't compile-checked, but that
>> probably means we would need a two-page ring. And then, when 1024-cpu
>> hosts arrive and we grow MAX_HVM_VCPUS, we grow the ring size again.
>
> The ring has 64 entries.
>
>> Or, we could limit the constraint to the number of online vcpus, which
>> would get somewhat tricky for vcpu hot-plugging.
>>
>> I can fix that separately, once there is a decision on which way to go
>> regarding ring management.
>
>
> I just sent "[PATCH] mem_event: use wait queue when ring is full" to the
> list. This version ought to work: it takes requests from both target
> and foreign vcpus into account and leaves at least one slot for the
> target.
>
>
> Olaf
>
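
For reference on the capacity question above: the shared ring is a single
page, and the generic ring macros carve the entries out of whatever space
remains after the sring header, rounded down to a power of two. A rough
sketch of that arithmetic (the 48-byte entry size and 64-byte header are
assumptions, not the exact sizeof() values from the tree):

#include <stdio.h>

#define PAGE_SIZE   4096u
#define RING_HDR      64u   /* assumed size of the sring header */
#define ENTRY_SIZE    48u   /* assumed sizeof(mem_event_request_t) */

/* The ring macros round the usable entry count down to a power of two. */
static unsigned int rounddown_pow2(unsigned int x)
{
    unsigned int p = 1;

    while ( p * 2 <= x )
        p *= 2;
    return p;
}

int main(void)
{
    unsigned int one_page  = rounddown_pow2((PAGE_SIZE - RING_HDR) / ENTRY_SIZE);
    unsigned int two_pages = rounddown_pow2((2 * PAGE_SIZE - RING_HDR) / ENTRY_SIZE);

    /* Prints 64 and 128: a single page cannot give every one of
     * MAX_HVM_VCPUS (128) vcpus its own slot. */
    printf("slots: one page %u, two pages %u\n", one_page, two_pages);
    return 0;
}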
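
And a minimal sketch of the kind of slot accounting Olaf describes, where a
foreign producer may only claim ring space if the target vcpu is still left
at least one free slot. The names here are made up for illustration, and the
real patch parks the would-be producer on a wait queue instead of returning
an error:

#define RING_SLOTS 64   /* one shared page, per the figure above */

struct mem_event_accounting {
    unsigned int target_producers;   /* in-flight requests from the target's vcpus */
    unsigned int foreign_producers;  /* in-flight requests from foreign (tool) vcpus */
};

/* Returns 0 and claims a slot on success, -1 if the caller must wait. */
static int claim_slot(struct mem_event_accounting *acc, int foreign)
{
    unsigned int used = acc->target_producers + acc->foreign_producers;
    unsigned int free_slots = RING_SLOTS - used;

    /* A foreign producer must always leave at least one slot for the target. */
    if ( foreign && free_slots <= 1 )
        return -1;
    if ( !foreign && free_slots == 0 )
        return -1;

    if ( foreign )
        acc->foreign_producers++;
    else
        acc->target_producers++;

    return 0;
}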



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

