
[Xen-ia64-devel] RE: Xen/ia64 - global or per VP VHPT


  • To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "Yang, Fred" <fred.yang@xxxxxxxxx>
  • From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
  • Date: Sun, 1 May 2005 19:21:28 -0700
  • Cc: ipf-xen <ipf-xen@xxxxxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 02 May 2005 02:21:00 +0000
  • List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
  • Thread-index: AcVKfesR741jQGkzQvWmdNAbNvskDgAAPPdQAAnH97AAJJJm8AAGJlmgACr730AABzFgcAAYTImwABTis7AAALccgAAAPutgABLQl4AAN60BYAARFB9AABDYKGAAClWscAADqFKg
  • Thread-topic: Xen/ia64 - global or per VP VHPT

Perhaps I will understand better after I see your code.

However, I do still think that a non-physically-contiguous per-domain
VHPT is nearly useless and a physically-contiguous one will
result in memory allocation problems when dealing with dynamic domain
creation/ballooning.

I left the 3-level page tables in for guest-physical-to-machine-physical
translation because I expected physical memory on ia64 to generally be
much larger than on x86, and the existing Linux method for handling VA
spaces seemed suitable.  Is there Xen-common code for this
that is better?  (I think this code is archdep right now, though
I suppose it could be moved back from archdep to common if it
is truly architecture-independent, e.g. if ppc could use it too.)
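The multi-level lookup described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Xen/ia64 code: the page size, level widths, and function names (`pmt_lookup`, `pmt_map`) are invented for the example.

```python
# Hypothetical sketch of a 3-level page-table walk translating a
# guest-physical address to a machine-physical one.  The 16KB page
# size and 11-bit index fields are illustrative values only.

PAGE_SHIFT = 14          # 16KB pages (one common ia64 page size)
INDEX_BITS = 11          # 2**11 entries per table level

def pmt_lookup(root, gpaddr):
    """Walk a 3-level table; return a machine address, or None on a hole."""
    pfn = gpaddr >> PAGE_SHIFT
    offset = gpaddr & ((1 << PAGE_SHIFT) - 1)
    mask = (1 << INDEX_BITS) - 1
    node = root
    for shift in (2 * INDEX_BITS, INDEX_BITS, 0):
        entry = node.get((pfn >> shift) & mask)
        if entry is None:
            return None          # unmapped: would fault to the hypervisor
        node = entry
    # the leaf entry is the machine frame number
    return (node << PAGE_SHIFT) | offset

def pmt_map(root, gpfn, mfn):
    """Install a guest-pfn -> machine-pfn mapping, allocating levels lazily."""
    mask = (1 << INDEX_BITS) - 1
    node = root
    for shift in (2 * INDEX_BITS, INDEX_BITS):
        node = node.setdefault((gpfn >> shift) & mask, {})
    node[gpfn & mask] = mfn
```

The point of the 3-level shape is that a sparse, very large guest-physical space costs memory only for the regions actually mapped, which fits the "ia64 memory is much larger than x86" expectation above.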

> -----Original Message-----
> From: Dong, Eddie [mailto:eddie.dong@xxxxxxxxx] 
> Sent: Sunday, May 01, 2005 6:45 PM
> To: Magenheimer, Dan (HP Labs Fort Collins); Yang, Fred
> Cc: ipf-xen; xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: Xen/ia64 - global or per VP VHPT
> 
> Magenheimer, Dan (HP Labs Fort Collins) wrote:
> >> How can you guarantee that the mapping for the hypercall will not
> >> be purged without an extra data structure, i.e. a vTLB?
> > 
> > I haven't seen your hypercall implementation yet, but I was
> > assuming that (on non-VT) the parameters would be passed in
> > memory in the "shared page" which is always mapped by a TR.
> That may be true for XenoLinux, but what about an unmodified guest?
> Both the hypercall shared page and the ForeignMap need to be pinned
> by TRs in this solution. Furthermore, the ForeignMap is usually
> used to map the guest's entire memory space, which may be 16GB or
> even 64GB in your example. Since a TR can only cover 256MB, the HV
> would need 64 TRs for 16GB alone, which the current IPF architecture
> cannot provide. So my point is that supporting an unmodified guest
> with a global VHPT is difficult: it needs an extra data structure
> that will eventually end up the same as our implementation, unless
> you want to exclude unmodified guests.
> > 
> > Actually the current implementation does include a vTLB
> > implementation, but it is a one-entry vITLB and a one-entry
> > vDTLB, used so far only to ensure forward progress in
> > privop emulation.
> Eventually you will need to track many vTLB entries, such as the
> ForeignMap entries (on a per-domain basis). Will you still use
> scattered variables? I am using a hash structure, the same as the VHPT.
> > 
> > This is insufficient if your hypercall proposal passes parameters
> > by address where the address can be anywhere in guest memory,
> > but I didn't think that was your proposal.
> A hypercall parameter pointing to an arbitrary address in guest
> memory is a problem for both solutions. That is why we suggest
> pointing into the hypercall shared page. Can this be done with a
> global VHPT?
> > 
> >>    This is not a big deal. If the domain gets more
> >> memory (exceeding some threshold), it is fine to increase the
> >> VHPT size dynamically.
> > 
> > If the per-domain VHPT must be contiguous in physical memory,
> > this IS a big cake.
> There is no machine-contiguous requirement. The design uses TC
> mappings for the VHPT, though the stage 1 code uses a contiguous
> allocation.
> 
> >> this. The VHPT miss will handle this, so don't worry about
> >> it. Digging into the details of the vMMU, an HV TLB/VHPT is
> >> a must to support the PMT (guest-physical to machine-physical).
> >> (You may argue that IA64 doesn't need a PMT, but if that is the
> >> case, it deviates from x86 even more.)
> > 
> > The current implementation does support guest physical to
> > machine physical translation.  But the translations are
> > put in the TLB and NOT into the VHPT.  If there is a TLB
> > miss on a guest physical address, resolving it is slow
> > (requiring a multi-level page table lookup),
> > but since physical addressing is used relatively infrequently
> > (and of course never used by applications), I suspect locality
> > is so low that putting guest physical addresses into the
> > VHPT won't help much.
> > 
> > Dan
>       I know you are using the current IPF Linux code (multi-level
> page tables), but again, that deviates further from Xen/x86.
> Is there a strong reason to deviate?
>       In general, I suggest we keep the same as Xen/x86 except
> where there is an architectural difference.
> Eddie
> 
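
As an aside, the TR-coverage arithmetic in the exchange above is easy to check. This is a throwaway sketch; the 256MB TR size and the 16GB/64GB guest sizes are the figures quoted in the thread, and `trs_needed` is an invented helper name.

```python
# How many 256MB translation registers (TRs) are needed to pin a
# guest's entire memory?  Figures taken from the thread above.

TR_COVERAGE = 256 * 2**20        # 256MB covered per TR

def trs_needed(guest_mem_bytes):
    """Number of 256MB TRs required to cover guest_mem_bytes."""
    return -(-guest_mem_bytes // TR_COVERAGE)   # ceiling division

print(trs_needed(16 * 2**30))    # 16GB guest -> 64 TRs
print(trs_needed(64 * 2**30))    # 64GB guest -> 256 TRs
```

This is the basis of the objection that pinning a large guest's ForeignMap with TRs alone is infeasible on IPF, which has far fewer than 64 TRs.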

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

