[Xen-changelog] [xen-unstable] xenpaging: correct dropping of pages to avoid full ring buffer
# HG changeset patch
# User Olaf Hering <olaf@xxxxxxxxx>
# Date 1307695634 -7200
# Node ID b4d18ac00a4642e78f7b4d1a993a32923177850e
# Parent e30cff57b146b2bc72b1db9df0c1ddadea6230d4
xenpaging: correct dropping of pages to avoid full ring buffer

A one-way channel from Xen to xenpaging is not possible with the
current ring buffer implementation.  xenpaging uses the mem_event ring
buffer, which expects request/response pairs to make progress.  The
previous patch, which tried to establish one-way communication from
Xen to xenpaging, stalled the guest once the buffer filled up with
requests.

Correct the dropping of pages by taking the slow path and letting
p2m_mem_paging_resume() consume the response from xenpaging.  This
makes room for another request/response pair and avoids hanging
guests.

Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
Committed-by: Ian Jackson <ian.jackson.citrix.com>
---

diff -r e30cff57b146 -r b4d18ac00a46 tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c	Fri Jun 10 10:47:12 2011 +0200
+++ b/tools/xenpaging/xenpaging.c	Fri Jun 10 10:47:14 2011 +0200
@@ -690,19 +690,19 @@
                         ERROR("Error populating page");
                         goto out;
                     }
+                }
 
-                    /* Prepare the response */
-                    rsp.gfn = req.gfn;
-                    rsp.p2mt = req.p2mt;
-                    rsp.vcpu_id = req.vcpu_id;
-                    rsp.flags = req.flags;
+                /* Prepare the response */
+                rsp.gfn = req.gfn;
+                rsp.p2mt = req.p2mt;
+                rsp.vcpu_id = req.vcpu_id;
+                rsp.flags = req.flags;
 
-                    rc = xenpaging_resume_page(paging, &rsp, 1);
-                    if ( rc != 0 )
-                    {
-                        ERROR("Error resuming page");
-                        goto out;
-                    }
+                rc = xenpaging_resume_page(paging, &rsp, 1);
+                if ( rc != 0 )
+                {
+                    ERROR("Error resuming page");
+                    goto out;
                 }
 
                 /* Evict a new page to replace the one we just paged in */
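For readers who want the shape of the fix rather than the raw diff: the mem_event ring only frees a slot when the consumer posts a response, so xenpaging must answer every request it takes off the ring, even for pages it merely drops. The following is a minimal sketch of that corrected pattern, not the actual xenpaging main loop; the req/rsp fields and xenpaging_resume_page() come from the diff above, while the include, the helper names get_request() and xenpaging_populate_page(), their signatures, and the MEM_EVENT_FLAG_DROP_PAGE check are assumptions based on the surrounding xenpaging code of that era.

/* Minimal sketch, not the real xenpaging main loop.  Assumes the xenpaging
 * tree's own header for xenpaging_t, mem_event_request_t/response_t and the
 * helpers used below; names not visible in the diff are assumptions. */
#include "xenpaging.h"

static int handle_one_request(xenpaging_t *paging, int fd, int slot)
{
    mem_event_request_t  req;
    mem_event_response_t rsp;
    int rc;

    /* Take one request off the mem_event ring (assumed helper). */
    get_request(&paging->mem_event, &req);

    if ( req.flags & MEM_EVENT_FLAG_DROP_PAGE )
    {
        /* The guest no longer needs the page content, so there is
         * nothing to read back from the paging file. */
    }
    else
    {
        /* Read the page back from the paging file into the guest
         * (assumed helper, as in the surrounding xenpaging code). */
        rc = xenpaging_populate_page(paging, &req.gfn, fd, slot);
        if ( rc != 0 )
            return rc;
    }

    /* Always answer the request, even on the drop path: Xen's
     * p2m_mem_paging_resume() consumes the response and frees the ring
     * slot, so the ring can no longer fill up with unanswered requests
     * and stall the guest. */
    rsp.gfn     = req.gfn;
    rsp.p2mt    = req.p2mt;
    rsp.vcpu_id = req.vcpu_id;
    rsp.flags   = req.flags;

    return xenpaging_resume_page(paging, &rsp, 1);
}

The essential change relative to the previous patch is that the response is sent unconditionally, whether the page was populated or dropped, which is exactly what the de-indentation of the response block in the diff achieves.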