
RE: [Xen-devel] [patch] more correct pfn_valid()



> So how does that sparse style get implemented? Could you say 
> more or show a link to the place in source tree? :)

On x86, for fully virtualized guests the pfn->mfn table is virtually
mapped and hence you can have holes in the 'physical' memory and
arbitrary page granularity mappings to machine memory. See
phys_to_machine_mapping().

For paravirtualized guests we provide a model whereby 'physical' memory
starts at 0 and is contiguous, but maps to arbitrary machine pages.
Since for paravirtualized guests you can hack the kernel, I don't see
any need to support anything else. [Note that I/O addresses do not have
pages in this map, whereas they do in the fully virtualized case.]

Ian
 
> Take the following sequence in xc_linux_build.c as an example:
> 1. setup_guest() calls xc_get_pfn_list(xc_handle, dom, 
> page_array, nr_pages), where page_array is acquired by 
> walking domain->page_list in the HV. So page_array is actually 
> the mapping [index in page_list -> machine pfn], not [guest 
> pfn -> machine pfn].
> 
> 2. loadelfimage() then uses that page_array to load the domU kernel,
> like:
>     pa = (phdr->p_paddr + done) - dsi->v_start;
>     va = xc_map_foreign_range(xch, dom, PAGE_SIZE, PROT_WRITE,
>                               parray[pa >> PAGE_SHIFT]);
> Here parray[pa>>PAGE_SHIFT] is used, which tempts one to treat 
> the index of page_array as a guest pfn; however, per the 
> explanation in point 1, it is not.
> 
> Yes, it should work in the above example, since the kernel is 
> usually loaded at a low address, far from the I/O hole, and in 
> that lower range "index in page_list" == "guest pfn" does hold. 
> However, this is not a correct model in general. 
> In particular the device model, which needs to map all the 
> machine pages of domU, follows the same wrong model of 
> xc_get_pfn_list + xc_map_foreign.
> 
> Maybe the sparse memory map is already managed inside the 
> HV as you said, but we also need to propagate the same sparse 
> info to CP and DM, especially for GB memory. That's why we're 
> considering adding a new hypercall. 
> 
> Correct me if I misunderstand something there. :)
> 
> Thanks,
> Kevin
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

