[Xen-devel] [PATCH 10 of 22] xenpaging: correct dropping of pages to avoid full ring buffer
# HG changeset patch
# User Olaf Hering <olaf@xxxxxxxxx>
# Date 1307695634 -7200
# Node ID 1de8de108d152fe915fc7f78044c406fed872bca
# Parent  9c42376aac05dc21a617d1d9fd62037cb8a9700d
xenpaging: correct dropping of pages to avoid full ring buffer

A one-way channel from Xen to xenpaging is not possible with the current
ring buffer implementation.  xenpaging uses the mem_event ring buffer,
which expects request/response pairs to make progress.  The previous
patch, which tried to establish one-way communication from Xen to
xenpaging, stalled the guest once the buffer was filled up with requests.

Correct page-dropping by taking the slow path and letting
p2m_mem_paging_resume() consume the response from xenpaging.  This makes
room for yet another request/response pair and avoids hanging guests.

Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>

diff -r 9c42376aac05 -r 1de8de108d15 tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c	Fri Jun 10 10:47:12 2011 +0200
+++ b/tools/xenpaging/xenpaging.c	Fri Jun 10 10:47:14 2011 +0200
@@ -690,19 +690,19 @@ int main(int argc, char *argv[])
                     ERROR("Error populating page");
                     goto out;
                 }
+            }
 
-                /* Prepare the response */
-                rsp.gfn = req.gfn;
-                rsp.p2mt = req.p2mt;
-                rsp.vcpu_id = req.vcpu_id;
-                rsp.flags = req.flags;
+            /* Prepare the response */
+            rsp.gfn = req.gfn;
+            rsp.p2mt = req.p2mt;
+            rsp.vcpu_id = req.vcpu_id;
+            rsp.flags = req.flags;
 
-                rc = xenpaging_resume_page(paging, &rsp, 1);
-                if ( rc != 0 )
-                {
-                    ERROR("Error resuming page");
-                    goto out;
-                }
+            rc = xenpaging_resume_page(paging, &rsp, 1);
+            if ( rc != 0 )
+            {
+                ERROR("Error resuming page");
+                goto out;
             }
 
             /* Evict a new page to replace the one we just paged in */
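[Editor's note] For readers unfamiliar with the mem_event ring that the commit
message refers to, here is a minimal, self-contained sketch of why a one-way
channel stalls the guest: a request only fits into a free slot, and a slot is
only recycled once the consumer writes a response back.  This is NOT the
actual Xen/xenpaging code; the demo_* types and the ring_put_request /
ring_put_response helpers are invented purely for illustration.  In the real
patch, xenpaging_resume_page() plays the role of posting the response, and
p2m_mem_paging_resume() consumes it on the Xen side.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical fixed-size ring, loosely modelled on the mem_event ring:
 * each slot holds one outstanding request until a response is written back. */
#define RING_SIZE 64

struct demo_req { uint64_t gfn; unsigned int flags; };
struct demo_rsp { uint64_t gfn; unsigned int flags; };

struct demo_ring {
    struct demo_req req[RING_SIZE];
    unsigned int req_prod;    /* requests produced (Xen side)          */
    unsigned int rsp_cons;    /* responses consumed back (Xen side)    */
};

/* Producer side (Xen): a new request only fits while some earlier
 * request/response pair has completed, i.e. the ring is not full of
 * unanswered requests. */
bool ring_put_request(struct demo_ring *r, const struct demo_req *q)
{
    if ( r->req_prod - r->rsp_cons >= RING_SIZE )
        return false;                     /* ring full: the guest vcpu stalls */
    r->req[r->req_prod++ % RING_SIZE] = *q;
    return true;
}

/* Consumer side (xenpaging): posting a response is what recycles a slot.
 * Dropping a page without ever responding (the one-way scheme) keeps
 * req_prod - rsp_cons at its maximum and eventually hangs the guest. */
void ring_put_response(struct demo_ring *r, const struct demo_rsp *p)
{
    (void)p;            /* a real ring would copy the response out for Xen */
    r->rsp_cons++;      /* one slot freed: room for one more request       */
}

int main(void)
{
    struct demo_ring ring = { .req_prod = 0, .rsp_cons = 0 };
    struct demo_req q = { .gfn = 0x1000, .flags = 0 };
    struct demo_rsp p = { .gfn = 0x1000, .flags = 0 };
    unsigned int posted = 0;

    /* One-way scheme: requests are posted but never answered. */
    while ( ring_put_request(&ring, &q) )
        posted++;
    printf("ring stalls after %u unanswered requests\n", posted);

    /* Request/response pairing: each response makes room again. */
    ring_put_response(&ring, &p);
    printf("after one response another request fits: %s\n",
           ring_put_request(&ring, &q) ? "yes" : "no");
    return 0;
}

Built with a C99 compiler, the demo posts RING_SIZE requests, reports that the
next one no longer fits, and shows that a single response makes room again,
which is the request/response pairing the patch restores.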