[Xen-ia64-devel] RE: Implementing both (was: Xen/ia64 - global or per VP VHPT)
Thanks for the explanation.  Will the foreignmap only be needed for
Domain0 then?  How frequently will the foreignmap be used (and will it
be used with high temporal locality)?

The reason I am asking these questions is that what I had planned for
domain0 to access domU guest-physical addresses is as follows:

- Domain0 is currently direct-mapped, meaning it can access any
  machine-physical address simply by adding a constant
  (0xf000000000000000) to the machine-physical address.  (This is in
  the current implementation.)

- Domain0 is trusted.  If domain0 accesses any virtual address between
  0xf000000000000000 and 0xf100000000000000, the miss handler
  direct-maps it to the corresponding machine-physical address with no
  restrictions.  (This is in the current implementation.)

- Given the domid and the guest-physical address for any domU, domain0
  can ask for the machine-physical address corresponding to
  [domid, guest-physical] with a simple dom0 hypercall, add
  0xf000000000000000 to it, and access it directly.  (The hypercall
  doesn't exist yet, but the lookup mechanism is in the current
  implementation.  See the first sketch below.)

As for putting large pages (e.g. 256MB) in the VHPT: yes, they may take
up many entries (a 256MB page breaks down into 16K separate 16KB
entries).  Insertion is done "on demand", meaning each 16KB page is put
in the VHPT when it is accessed, rather than putting all 16,384
individual mappings in the VHPT at once.  But this is necessary anyway
(at least in the current implementation), because a domU guest may be
entirely fragmented: every 16KB of guest-physical memory may reside in
a different, non-contiguous 16KB of machine-physical memory.  And, even
worse, this mapping may change dynamically because of ballooning (of
course requiring a TLB/VHPT flush if it changes).  The second sketch
below illustrates this.
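For concreteness, here is a rough sketch of what the dom0 side of that
lookup could look like.  Treat it as a sketch only: the hypercall (and
therefore the dom0_translate_gpaddr name and calling convention below)
is hypothetical, since the hypercall doesn't exist yet; only the
0xf000000000000000 direct map is in the current implementation.

    /* Hypothetical sketch: the hypercall and its name/arguments are
     * made up; only the direct-map base below exists today. */
    #define DOM0_DIRECTMAP_BASE  0xf000000000000000UL

    /* Hypothetical hypercall: translate domain 'domid's guest-physical
     * address 'gpaddr' to a machine-physical address.  Returns 0 on
     * success. */
    extern long dom0_translate_gpaddr(unsigned short domid,
                                      unsigned long gpaddr,
                                      unsigned long *mpaddr);

    /* Read one byte of another domain's memory from dom0. */
    static int dom0_read_guest_byte(unsigned short domid,
                                    unsigned long gpaddr,
                                    unsigned char *out)
    {
            unsigned long mpaddr;

            if (dom0_translate_gpaddr(domid, gpaddr, &mpaddr))
                    return -1;      /* no such guest-physical page */

            /* Any dom0 access in the range [0xf000000000000000,
             * 0xf100000000000000) is direct-mapped by the miss
             * handler, so a plain load works. */
            *out = *(volatile unsigned char *)
                        (DOM0_DIRECTMAP_BASE + mpaddr);
            return 0;
    }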
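And a similarly rough sketch of the on-demand insertion, mostly to show
the arithmetic.  All of the helper names here (guest_vaddr_to_gpaddr,
gpaddr_to_mpaddr, vhpt_insert_entry) are placeholders rather than the
real routines; the point is only that a single large guest mapping is
expanded one 16KB VHPT entry at a time, as each page actually faults.

    struct domain;                       /* opaque for this sketch */

    /* Placeholder lookups: guest-virtual -> guest-physical, and
     * guest-physical -> machine-physical (per 16KB page, since the
     * machine pages may be scattered). */
    extern unsigned long guest_vaddr_to_gpaddr(struct domain *d,
                                               unsigned long va);
    extern unsigned long gpaddr_to_mpaddr(struct domain *d,
                                          unsigned long gpa);
    extern void vhpt_insert_entry(unsigned long va, unsigned long mpa,
                                  unsigned long page_shift);

    #define GUEST_PAGE_SHIFT  14UL                      /* 16KB pages */
    #define GUEST_PAGE_SIZE   (1UL << GUEST_PAGE_SHIFT)

    /* A 256MB guest mapping expanded eagerly would need
     * (256 << 20) >> 14 = 16384 such entries; we never do that. */

    void vhpt_miss_in_large_guest_page(struct domain *d,
                                       unsigned long fault_va)
    {
            unsigned long va  = fault_va & ~(GUEST_PAGE_SIZE - 1);
            unsigned long gpa = guest_vaddr_to_gpaddr(d, va);
            unsigned long mpa = gpaddr_to_mpaddr(d, gpa);

            /* Insert only the one 16KB translation that faulted; the
             * rest of the 256MB mapping is filled in the same way if
             * and when it is touched. */
            vhpt_insert_entry(va, mpa, GUEST_PAGE_SHIFT);
    }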
Dan

> -----Original Message-----
> From: Dong, Eddie [mailto:eddie.dong@xxxxxxxxx]
> Sent: Wednesday, May 04, 2005 8:26 AM
> To: Magenheimer, Dan (HP Labs Fort Collins)
> Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: Implementing both (was: Xen/ia64 - global or per VP VHPT)
>
> Hi, dan:
>
> > Hi Eddie --
> >
> > Could you explain again what the foreignmap is?
> > Is it mappings for other domains (e.g. for domainA to access memory
> > in domainB)?  Or is it for the hypervisor to access domainA space?
> > Or something else?
>
> It is the mapping that lets domain 0 (the device model application)
> access another domain's physical memory.  For example, when an
> unmodified domain N issues an IDE DMA read/write command, the service
> domain can read or write the data directly at the address provided by
> domain N, avoiding a data copy, thanks to the foreignmap.
>
> Used this way, the map can be huge (e.g. 64GB to cover domain N), and
> there is one map for each domain (except the service domain).  My
> current implementation introduces another attribute for the vTLB, so
> a vTLB entry can be TR, TC, or FM (FM covers the foreignmap and the
> shared page).  When collision-chain memory is exhausted, all vTC
> entries are recycled, but vTR and FM entries remain.  That way I can
> guarantee that the hypercall shared page (for which you are
> temporarily using a TR now) is never purged; the same holds for the
> foreignmap.
>
> Another important point is that an entry in the VHPT may differ from
> the corresponding entry in the vTLB because machine memory may be
> discontiguous.  Suppose a guest has a huge 256MB TLB mapping that
> uses one vTC entry; in the VHPT it may be impossible to find a
> machine-contiguous region for it, so the HV needs to convert this
> single mapping into a bunch of VHPT entries, i.e. 256M/16K = 16K
> entries.  I suspect this will be a headache without full tracking of
> the guest vTLB.
>
> If you prefer to always use a 16KB or 4KB page size for VHPT entries,
> the problem disappears, but I am afraid it would consume so many VHPT
> entries that it would hurt the performance of the other entries.
>
> > Since the current implementation doesn't have this, I just want to
> > ensure I understand it.  (And it would probably be useful for
> > others on the mailing list too!)
> >
> > Thanks,
> > Dan

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel