Re: [Xen-devel] [PATCH 00/14] XSA-277 followup
On 21/11/2018 22:42, Tamas K Lengyel wrote:
> On Wed, Nov 21, 2018 at 2:22 PM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 21/11/2018 17:19, Tamas K Lengyel wrote:
>>> On Wed, Nov 21, 2018 at 6:21 AM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> This covers various fixes related to XSA-277 which weren't in
>>>> security supported areas, and associated cleanup.
>>>>
>>>> The biggest issue noticed here is that altp2m's use of hardware #VE
>>>> support will cause general memory corruption if the guest ever
>>>> balloons out the VEINFO page.  The only safe way I can think of doing
>>>> this is for Xen to allocate anonymous domheap pages for the VEINFO,
>>>> and for the guest to map them in a similar way to the shared info and
>>>> grant table frames.
>>> Since ballooning presents all sorts of problems when used with altp2m,
>>> I would suggest just making the two explicitly incompatible during
>>> domain creation.  Besides the info page possibly being ballooned out,
>>> the other problem is when ballooning causes altp2m views to be reset
>>> completely, removing mem_access permissions and remapped entries.
>> If only it were that simple.
>>
>> For reasons of history and/or poor terminology, "ballooning" means two
>> things:
>>
>> 1) The act of the toolstack interacting with the balloon driver inside
>> a VM, to change the current amount of RAM used by the guest.
>>
>> 2) XENMEM_{increase,decrease}_reservation, which are the underlying
>> hypercalls used by guest kernels.
>>
>> For the toolstack interaction side of things, this is a mess.  There is
>> a single xenstore key, and a blind assumption that all guests know what
>> changes to memory/target mean.  There is no negotiation of whether a
>> balloon driver is running in the guest, and if one is running, there is
>> no ability for the balloon driver to nack a request it can't fulfil.
>> The sole feedback mechanism which exists is the toolstack looking to
>> see whether the domain has changed the amount of RAM it is using.
>>
>> PV guests are fairly "special" by any reasonable judgement.  They are
>> fully aware of their memory layout, and of changes to it across
>> migrate.  "Ballooning" was implemented at a time when most computers
>> had MB of RAM rather than GB, and the knowledge a PV guest had was
>> "I've got a random set of MFNs which aren't currently used by anything
>> important, and can be handed back to Xen on request."  Xen guests also
>> have shared memory constructs such as the shared_info page, and grant
>> tables.  A PV guest gets access to these by programming the frame
>> straight into the pagetables, and Xen's permission model DTRT.
>>
>> Then HVM guests came along.  For reasons of trying to get things
>> working, they inherited a lot of the same interfaces as PV guests,
>> despite the fundamental differences in the way they work.  One of the
>> biggest differences was the fact that HVM guests have their gfn=>mfn
>> space managed by Xen rather than by themselves, and in particular, you
>> can no longer map shared memory structures in the PV way.
>>
>> For a shared memory structure to be usable, a mapping has to be put
>> into the guest's P2M, so the guest can create a regular pagetable entry
>> pointing at it.
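(To make that concrete: the guest-side dance looks roughly like the
untested sketch below - punch a hole, then map the shared structure into
it; more on the hole-punching below.  It assumes Linux-style in-kernel
plumbing for HYPERVISOR_memory_op() and the Xen interface headers, and
the remap_shared_info() name is made up for illustration.)

    /* Untested sketch: how an HVM guest gets a usable shared_info
     * mapping.  gpfn is a frame the guest believes it can sacrifice;
     * error handling is elided. */
    #include <xen/interface/xen.h>
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>

    static int remap_shared_info(xen_pfn_t gpfn)
    {
        struct xen_memory_reservation rsv = {
            .nr_extents   = 1,
            .extent_order = 0,
            .domid        = DOMID_SELF,
        };
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_shared_info,
            .idx   = 0,
            .gpfn  = gpfn,
        };

        /* Punch a hole: hand the frame currently at gpfn back to Xen.
         * The return value is the number of extents which succeeded. */
        set_xen_guest_handle(rsv.extent_start, &gpfn);
        if ( HYPERVISOR_memory_op(XENMEM_decrease_reservation, &rsv) != 1 )
            return -1;

        /* Point the new hole at the shared_info frame. */
        return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
    }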
>> For reasons which are beyond me, Xen doesn't have any knowledge of the
>> guest's physical layout, and guests have arbitrary mutative
>> capabilities on their GFN space, with a hypercall set that has
>> properties such as a return value of "how many items of this batch
>> succeeded", and replacement properties rather than error properties
>> when trying to modify a GFN which already has something in it.
>>
>> Whatever the reasons, it is commonplace for guests to
>> decrease_reservation out some RAM to create holes for the shared memory
>> mappings, because it is the only safe way to avoid irreparably
>> clobbering something else (especially if you're HVMLoader and in charge
>> of trying to construct the E820/ACPI tables).
>>
>> tl;dr If you actually prohibit XENMEM_decrease_reservation, HVM guests
>> don't boot, and that's long before a balloon driver gets up and
>> running.
> Thanks for the detailed write-up.  This explains why I could never get
> altp2m working from domain start, no matter where in the startup logic
> of the toolstack I placed the altp2m activation (I had to resort to
> activating altp2m settings only after I detect the guest OS is fully
> booted and things have settled down).

So, in theory it should all work, even from the start.  In practice, the
implementation quality of altp2m leaves a lot to be desired, and it was
designed around an "all logic inside the guest" model, which in practice
means that it only ever started once the guest had come up sufficiently.

Do you recall more specifically where you tried inserting the startup
logic?  It sounds like something which wants fixing, irrespective of the
other concerns here.

>> Now, all of that said, there are a number of very good reasons why a
>> host administrator might want to prohibit the guest from having
>> arbitrary mutative capabilities, chief among them being to prevent the
>> guest from shattering host superpages, but also due to
>> incompatibilities with some of our more interesting features.
>>
>> The only way I see of fixing this is to teach Xen about the guest's gfn
>> layout (as chosen by the domain builder), and include within that
>> "space which definitely doesn't have anything in it, and is safe to put
>> shared mappings into".
> Yes, that would be great - especially if this was something we could
> query from the toolstack too.  Right now we resort to parsing the E820
> map as it shows up in the domain creation logs, plus whatever
> xc_domain_maximum_gpfn returns, to get some idea of what the memory
> layout looks like in the guest and where the holes are, but there is
> still a lot of guessing involved.

Eww :(

So, we've got a number of issues which need addressing.  For a start,
there isn't a clear understanding of how much RAM a guest has, and
previous attempts to resolve this have only succeeded in demonstrating
that the core maintainers can't even agree on what it means, let alone
how to calculate it.  Things get especially complicated with VRAM and
ROMs, and the overall answer is some mix of information in Xen, xenstore,
qemu and the guest.  In reality, whoever actually does the legwork to
resolve the problems will get to define the terms, and how they get
calculated.

Ultimately, it is the domain builder which knows all the pertinent
details, and is in a position to operate on them - it is already
responsible for doing the initial memory layout calculations, and
stashing an E820 table in the hypervisor (see XENMEM_{,set_}memory_map).
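(For reference, the toolstack side of that looks roughly as follows - an
untested sketch; the addresses, sizes and the stash_guest_e820() name are
made up for illustration:)

    /* Untested sketch: the domain builder stashing a guest E820 in the
     * hypervisor via xc_domain_set_memory_map(), which is backed by
     * XENMEM_set_memory_map.  The layout below is illustrative only. */
    #include <xenctrl.h>

    static int stash_guest_e820(xc_interface *xch, uint32_t domid)
    {
        struct e820entry map[] = {
            { .addr = 0x0,      .size = 0xa0000,    .type = E820_RAM      },
            { .addr = 0xf0000,  .size = 0x10000,    .type = E820_RESERVED },
            { .addr = 0x100000, .size = 0x3ff00000, .type = E820_RAM      },
        };

        return xc_domain_set_memory_map(xch, domid, map,
                                        sizeof(map) / sizeof(map[0]));
    }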
The main problem we have is that we need more types than exist in the
E820 spec, so my plan is to have the domain builder construct an
E820-like table with Xen-defined types and pass that to the hypervisor.
It shall be the single and authoritative source of guest physmap
information, and will most likely be immutable once the guest has
started.  From this, we can trivially derive a real E820, but we can also
fix other problems, such as Xen not actually knowing where the MMIO holes
are.  It would be lovely if we could reject emulation attempts which
occur in unexpected locations, as an attack surface reduction action.
Also, we'd at least be able to restrict a guest's ballooning operations
to be within the prescribed RAM regions.
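As a strawman, something like the following - every name here is
hypothetical; nothing like it exists today:

    #include <stdint.h>

    /* Hypothetical strawman only - none of these names exist yet. */
    enum xen_physmap_type {
        XEN_PHYSMAP_RAM,          /* normal guest RAM */
        XEN_PHYSMAP_RESERVED,     /* ROMs, ACPI tables, etc. */
        XEN_PHYSMAP_MMIO_HOLE,    /* emulation permitted here, nowhere else */
        XEN_PHYSMAP_SHARED_AREA,  /* safe for shared_info/grant/VEINFO maps */
    };

    struct xen_physmap_entry {
        uint64_t addr;
        uint64_t size;
        uint32_t type;            /* enum xen_physmap_type */
    };

Deriving the real E820 from that is a straight type-by-type translation,
and regions of the final type would give guests somewhere
guaranteed-safe to put shared mappings without ballooning out holes
first.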
>> Beyond that, we'll need some administrator-level knowledge of which
>> guests are safe to have XENMEM_decrease_reservation prohibited, or some
>> interlocks inside Xen to disable unsafe features as soon as we spot a
>> guest which isn't playing by the new rules.
>>
>> This probably needs some more thought, but fundamentally, we have to
>> undo more than a decade's worth of "doing it wrong" which has
>> percolated through the Xen ecosystem.
>>
>> I'm half tempted to put together a big hammer bit in the domain
>> creation path which turns off everything like this (and other areas
>> where we know Xen is lacking, such as default readability/write-ignore
>> of all MSRs), after which we'll have rather a more concrete baseline to
>> discuss what the guests are actually doing, and how to get them back
>> into a working state while maintaining architectural correctness.
> +1, bringing some sanity to this (and documentation) would be of great
> value!  I would be very interested in this line of work and happy to
> help however I can.

I need to find some copious free time :)

~Andrew