Re: [Xen-devel] [PATCH v3 07/10] xen/arm: Add handling write fault for dirty-page tracing
On Mon, 5 Aug 2013, Jaeyong Yoo wrote:
> > -----Original Message-----
> > From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
> > Sent: Monday, August 05, 2013 1:28 AM
> > To: Jaeyong Yoo
> > Cc: xen-devel@xxxxxxxxxxxxx
> > Subject: Re: [Xen-devel] [PATCH v3 07/10] xen/arm: Add handling write
> > fault for dirty-page tracing
> >
> > On Thu, 1 Aug 2013, Jaeyong Yoo wrote:
> > > Add handling of write faults in do_trap_data_abort_guest for
> > > dirty-page tracing.
> > > Rather than maintaining a bitmap for dirty pages, we use the avail
> > > bit in the p2m entry.
> > > For locating the write-fault PTE in the guest p2m, we use a
> > > virtual-linear page table that slots the guest p2m into Xen's
> > > virtual memory.
> > >
> > > Signed-off-by: Jaeyong Yoo <jaeyong.yoo@xxxxxxxxxxx>
> >
> > Looks good to me.
> > I would appreciate some more comments in the code to explain the
> > inner workings of the vlp2m.
>
> I got it.
>
> One question: if you look at patch #6, it implements the allocation and
> freeing of vlp2m memory (xen/arch/arm/vlpt.c), which is almost the same
> as the vmap allocation (xen/arch/arm/vmap.c). To be honest, I copied
> vmap.c and changed the virtual address start/end points and the name.
> While I was doing that, I thought it would be better if we made a
> common interface, something like a virtual address allocator. That is,
> if we create a virtual address allocator giving the VA range from A to
> B, the allocator allocates VAs between A and B. And we initialize the
> virtual allocator instance at boot stage.

Good question. I think it might be best to improve the current vmap
(it's actually xen/common/vmap.c) so that we can have multiple vmap
instances for different virtual address ranges at the same time.
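For illustration only, here is a minimal sketch of what a range-parameterised
allocator along these lines could look like. The names (struct va_allocator,
va_alloc_init, va_alloc_pages) are hypothetical and not part of the existing
vmap interface, and the simple bump pointer stands in for vmap's real
bitmap-based bookkeeping:

/*
 * Hypothetical sketch: a vmap-like allocator parameterised by its virtual
 * address range, so that several instances (e.g. one for the regular vmap
 * area and one for the vlp2m area) can coexist.
 */
#include <xen/types.h>
#include <xen/spinlock.h>
#include <asm/page.h>

struct va_allocator {
    unsigned long va_start;  /* first usable virtual address */
    unsigned long va_end;    /* one past the last usable virtual address */
    unsigned long next;      /* next free address (bump pointer) */
    spinlock_t lock;
};

/* Initialise an allocator instance covering [start, end) at boot time. */
static void va_alloc_init(struct va_allocator *a,
                          unsigned long start, unsigned long end)
{
    a->va_start = start;
    a->va_end = end;
    a->next = start;
    spin_lock_init(&a->lock);
}

/* Reserve @nr_pages pages of virtual space from this instance, or return 0. */
static unsigned long va_alloc_pages(struct va_allocator *a,
                                    unsigned int nr_pages)
{
    unsigned long size = (unsigned long)nr_pages << PAGE_SHIFT;
    unsigned long va = 0;

    spin_lock(&a->lock);
    /* Check that the request fits in the range and does not wrap. */
    if ( a->next + size <= a->va_end && a->next + size > a->next )
    {
        va = a->next;
        a->next += size;
    }
    spin_unlock(&a->lock);

    return va;
}

With an interface of this shape, boot code could keep one instance for the
existing vmap range and a second one for the vlp2m range, rather than
duplicating vmap.c with different start/end constants.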