
Re: [Xen-devel] [PATCH 2 of 5] Improve ring management for memory events. Do not lose guest events



On Wed, Nov 30, Andres Lagar-Cavilla wrote:

> - This isn't a problem that needs wait queues to solve. Just
> careful logic.
> 
> - I am not sure what your concerns about the mix are. get_gfn* would call
> populate on a paged out gfn, and then go to sleep if it's a guest vcpu.
> With our patch, the guest vcpu event is guaranteed to go in the ring. vcpu
> pausing will stack (and unwind) properly.

Today I sent my current version of wait queues for mem_event and paging.
Unfortunately the earlier versions of mem_event had bugs which also
caused trouble for the paging change.
Please have a look at whether my changes work for you.

I see you have a change to mem_event_get_response() which pulls all
requests instead of just one. That's currently a no-op for paging, but
it will be used once paging can get rid of the domctls.
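
For reference, here is a minimal sketch of how I read that change on the
consuming side. The boolean-returning mem_event_get_response() and the
drain helper are my assumptions for illustration, not your actual patch:

/* Hypothetical sketch: drain every pending response from the ring in
 * one pass instead of pulling a single entry per call. */
static void mem_event_drain_responses(struct domain *d,
                                      struct mem_event_domain *med)
{
    mem_event_response_t rsp;

    /* Keep consuming until the ring is empty. */
    while ( mem_event_get_response(d, med, &rsp) )
    {
        /* For paging this is currently a no-op; once the domctls are
         * gone, each response would be handled here. */
    }
}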



I added p2m_mem_paging_wait(), which calls p2m_mem_paging_populate() and
then goes to sleep until p2m_mem_paging_get_entry() indicates the gfn is
back. If I understand your change correctly, a guest vcpu can always
place a request. If p2m_mem_paging_populate() happens to fail to put a
request, what is supposed to happen in p2m_mem_paging_wait()? Should it
skip the wait_event() and return to its caller?
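
To make the flow concrete, here is roughly what my p2m_mem_paging_wait()
does; the per-domain wait queue and the p2m_mem_paging_get_entry()
signature are simplified for illustration and not the exact patch:

/* Rough sketch only: d->wq and the p2m_mem_paging_get_entry() return
 * value are simplified stand-ins. */
static void p2m_mem_paging_wait(struct domain *d, unsigned long gfn)
{
    /* Ask the pager to bring the gfn back in; with wait queues this
     * call itself sleeps if the ring is currently full. */
    p2m_mem_paging_populate(d, gfn);

    /* Sleep until the pager has restored the page, i.e. the gfn's p2m
     * type is no longer one of the paging states. */
    wait_event(d->wq, !p2m_is_paging(p2m_mem_paging_get_entry(d, gfn)));
}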

With my implementation, both p2m_mem_paging_wait() and
p2m_mem_paging_populate() will stop execution until either the gfn is
back or there is room in the ring. There is no need for return code
handling.
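
On the populate side, the blocking I mean looks roughly like this;
mem_event_ring_free() and med->wq are placeholder names for the
free-slot check and the ring's wait queue:

/* Illustrative only: a guest vcpu sleeps until a request slot is free,
 * while a foreign vcpu must not block here. */
static void mem_event_put_request_blocking(struct domain *d,
                                           struct mem_event_domain *med,
                                           mem_event_request_t *req)
{
    if ( current->domain == d )
        wait_event(med->wq, mem_event_ring_free(med) > 0);

    mem_event_put_request(d, med, req);
}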

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

