Re: [Xen-devel] Question about guest page table update
Hi,

At 22:21 -0400 on 19 Oct (1319062904), Steven wrote:
> Hi,
> Recently I have been trying to understand how a guest OS's page table
> updates work, and I have run into some trouble.  Assuming the starting
> point is the hypercall do_mmu_update(req, count, done, ...), req has
> the structure {uint64_t ptr; uint64_t val}.

This hypercall interface is documented in xen/include/public/xen.h,
with quite a lot of comments explaining the arguments.  That should
answer most of your questions.

> My first question: since ptr points to the address of the page table
> entry to be updated, it is a guest address.  Why can the code
> "gmfn = req.ptr >> PAGE_SHIFT;" be used to obtain the guest machine
> frame number?

See public/xen.h.

> Second, what is va in these three lines of code?
>     mfn = mfn_x(gfn_to_mfn(pt_owner, gmfn, &p2mt));
>     va = map_domain_page_with_cache(mfn, &mapcache);
>     va = (void *)((unsigned long)va +
>                   (unsigned long)(req.ptr & ~PAGE_MASK));

It's a mapping of the machine address described by req.ptr.

> Third, is the input argument val a real machine address or a
> pseudo-physical address of the guest?

Neither (see public/xen.h).

> Fourth, I see a lot of code in xen/arch/x86/mm.c calling functions
> like p2m.  What is the relationship between the p2m functions and the
> shadow page tables?

For guests which don't manage their own phys-to-machine mapping
(i.e. HVM guests), Xen manages it for them, in the p2m table.  All
users of those guests' GFNs (including the shadow pagetables) need to
look them up in the p2m before using them.

Guests which use do_mmu_update() to update their pagetables (i.e. PV
guests) manage their own p2m tables and use MFNs to talk to the
hypervisor, so the p2m lookup functions (like gfn_to_mfn()) are
no-ops for them.

Tim.
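[Editor's note: a minimal sketch of the req.ptr encoding that public/xen.h
describes and that the reply points to.  The command names and the struct
layout are taken from that header; the helper functions, field masks, and
test values below are illustrative assumptions, not Xen code.]

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  (~(((uint64_t)1 << PAGE_SHIFT) - 1))

/* Commands encoded in ptr[1:0], per the comments in public/xen.h. */
#define MMU_NORMAL_PT_UPDATE      0  /* checked '*ptr = val'              */
#define MMU_MACHPHYS_UPDATE       1  /* update the machine->phys table    */
#define MMU_PT_UPDATE_PRESERVE_AD 2  /* '*ptr = val', preserving A/D bits */

/* The hypercall request: ptr carries a machine address (not a guest
 * virtual address) with the command packed into its low two bits, which
 * is why 'gmfn = req.ptr >> PAGE_SHIFT' yields a frame number directly. */
typedef struct mmu_update {
    uint64_t ptr;  /* machine address of the PTE, command in bits 1:0 */
    uint64_t val;  /* new contents for the PTE                        */
} mmu_update_t;

/* Hypothetical decoding helpers for illustration only. */
static inline unsigned int mmu_update_cmd(const mmu_update_t *req)
{
    return (unsigned int)(req->ptr & 3);
}

static inline uint64_t mmu_update_mfn(const mmu_update_t *req)
{
    /* Frame containing the PTE to modify. */
    return req->ptr >> PAGE_SHIFT;
}

static inline uint64_t mmu_update_pte_offset(const mmu_update_t *req)
{
    /* Byte offset of the PTE within that frame; this is the quantity
     * added to the mapping returned by map_domain_page_with_cache() to
     * form 'va'.  The low command bits are masked off here for clarity
     * (PTEs are 8-byte aligned, so they never overlap the offset). */
    return req->ptr & ~PAGE_MASK & ~(uint64_t)3;
}
```

So 'va' ends up being a hypervisor virtual address pointing directly at
the PTE inside the machine frame named by ptr[:2], which the hypercall
then validates and writes with val.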