[Xen-devel] More questions about Xen memory layout/usage, access to guest memory
Hi all,

I have some follow-up questions about Xen's usage and layout of memory, building on the ones I asked here a few weeks ago (which were quite helpfully answered; see https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01513.html for reference). For context on why I'm asking: I'm using Xen as a research platform for enforcing novel memory protection schemes on hypervisors and guests.

1. Xen itself lives in the host-virtual region (on x86-64) 0xffff 8000 0000 0000 - 0xffff 87ff ffff ffff, regardless of whether the guests are PV or HVM/PVH. Clearly, in PV mode a separate set of page tables (i.e. CR3 root pointer) must be used for each guest. Is that also true of the host (non-extended, i.e. CR3 in VMX root mode) page tables when an HVM/PVH guest is running? Or is the dom0 page table left in place while an HVM/PVH guest runs (assuming the dom0 is PV), since extended paging now provides the guest's view of memory? Does this change if the dom0 is PVH? To ask this from another angle: is there ever anything *but* Xen living in the host-virtual address space while an HVM/PVH guest is active? And does the answer differ depending on whether the HVM/PVH guest is a domU vs. a PVH dom0?

2. Do the mappings in Xen's slice of the host-virtual address space differ at all between the host page tables corresponding to different guests? If the mappings are in fact the same, does Xen therefore share the lower-level page-table pages between the page tables of different guests? Is any of this different for PV vs. HVM/PVH?

3. Under what circumstances, and for what purposes, does Xen use its ability to access guest memory through its direct map of host-physical memory? Similarly, to what extent does dom0 (or another privileged domain) use "foreign memory maps" to reach into another guest's memory? (A sketch of the kind of access I mean follows at the end of this mail.) I understand this is necessary when creating a guest, for live migration, and for QEMU to emulate devices for HVM guests; but for PVH, is it ever necessary for Xen or dom0 to "forcibly" access a guest's memory? (I ask because the research project I'm working on seeks to protect guests from a compromised hypervisor and dom0, so I need to limit outside access to a guest's memory to explicitly shared pages that the guest will treat as untrusted - not storing any secrets there, vetting input as necessary, etc.)

4. What facilities does Xen provide for PV(H) guests to explicitly/voluntarily share memory pages with Xen and with other domains (dom0, etc.)? From what I can gather from the documentation, grant tables are involved here - is that how a PV-aware guest is expected to set up shared memory regions (ring buffers, etc.) for communication with other domains? (A second sketch below shows my current understanding.) Does a PV(H) guest need to voluntarily establish all external access to its pages, or is it ever the other way around - where Xen itself establishes/defines a region as shared and the guest is responsible for treating it accordingly? Again, this mostly boils down to: under what circumstances, if ever, does Xen "force" access to any part of a guest's memory? (I'm asking particularly about PV(H); clearly this must happen for HVM since, by definition, the guest is unaware that a hypervisor is controlling its world and emulating hardware behavior, and so is in no position to cooperatively/voluntarily give the hypervisor and dom0 access to its memory.)

Thanks again in advance for any help anyone can offer!
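To make question 3 concrete, here is the sort of dom0-side foreign mapping I have in mind - a minimal sketch using libxenforeignmemory, with a made-up domid and guest frame number. I may well be misusing the API, so please treat it only as an illustration of the access pattern I'm asking about:

/* Map one page of a running guest into dom0's address space, without any
 * cooperation from the guest. Build with something like:
 *   gcc foreign_peek.c -lxenforeignmemory
 * The domid and gfn below are placeholders. */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <xenforeignmemory.h>

int main(void)
{
    uint32_t domid = 5;      /* placeholder: some running guest */
    xen_pfn_t gfn = 0x1000;  /* placeholder: a guest frame number */
    int err = 0;

    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    if (!fmem)
        return 1;

    /* Map one guest page read/write into this (dom0) process. */
    void *page = xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                                      1, &gfn, &err);
    if (page && !err) {
        /* At this point dom0 can read and write the guest's page directly. */
        printf("first byte of guest frame: 0x%02x\n", *(unsigned char *)page);
        xenforeignmemory_unmap(fmem, page, 1);
    }

    xenforeignmemory_close(fmem);
    return (page && !err) ? 0 : 1;
}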
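And for question 4, here is my current (possibly wrong) understanding of what "voluntary" sharing via grant tables looks like from inside a Linux-based PV(H) guest, using the in-kernel grant-table API as I read it circa Linux 4.x/5.x. The backend domid (0) is assumed, and the usual step of advertising the grant reference through XenStore is left out:

/* Toy guest-side kernel module: allocate one page and grant dom0 read/write
 * access to it via the grant table. In a real frontend driver the returned
 * grant reference would be written to XenStore so the backend can map it. */
#include <linux/gfp.h>
#include <linux/module.h>
#include <xen/grant_table.h>
#include <xen/page.h>

static grant_ref_t ref;
static unsigned long shared_page;

static int __init grant_demo_init(void)
{
    int rc;

    shared_page = get_zeroed_page(GFP_KERNEL);
    if (!shared_page)
        return -ENOMEM;

    /* Grant domid 0 (dom0) read/write access to this one page. */
    rc = gnttab_grant_foreign_access(0, virt_to_gfn((void *)shared_page), 0);
    if (rc < 0) {
        free_page(shared_page);
        return rc;
    }
    ref = rc;

    pr_info("grant_demo: page shared with dom0, grant ref %u\n", ref);
    return 0;
}

static void __exit grant_demo_exit(void)
{
    /* Revoke the grant; this also releases the page. */
    gnttab_end_foreign_access(ref, 0, shared_page);
}

module_init(grant_demo_init);
module_exit(grant_demo_exit);
MODULE_LICENSE("GPL");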
Sincerely,
Ethan Johnson

--
Ethan J. Johnson
Computer Science PhD student, Systems group, University of Rochester
ejohns48@xxxxxxxxxxxxxxxx
ethanjohnson@xxxxxxx
PGP public key available from public directory or on request

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel