
Re: [Xen-devel] [PATCH 00/14] XSA-277 followup


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 22 Nov 2018 00:08:24 +0000
  • Cc: JGross@xxxxxxxx, Kevin Tian <kevin.tian@xxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, Paul Durrant <paul.durrant@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Brian Woods <brian.woods@xxxxxxx>, Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 22 Nov 2018 00:08:37 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 21/11/2018 22:42, Tamas K Lengyel wrote:
> On Wed, Nov 21, 2018 at 2:22 PM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 
> wrote:
>> On 21/11/2018 17:19, Tamas K Lengyel wrote:
>>> On Wed, Nov 21, 2018 at 6:21 AM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 
>>> wrote:
>>>> This covers various fixes related to XSA-277 which weren't in security
>>>> supported areas, and associated cleanup.
>>>>
>>>> The biggest issue noticed here is that altp2m's use of hardware #VE
>>>> support will cause general memory corruption if the guest ever balloons
>>>> out the VEINFO page.  The only safe way I can think of doing this is for
>>>> Xen to allocate anonymous domheap pages for the VEINFO, and for the guest
>>>> to map them in a similar way to the shared info and grant table frames.
>>> Since ballooning presents all sorts of problems when used with altp2m,
>>> I would suggest just making the two explicitly incompatible during
>>> domain creation. Besides the info page possibly being ballooned out,
>>> the other problem is when ballooning causes altp2m views to be reset
>>> completely, removing mem_access permissions and remapped entries.
>> If only it were that simple.
>>
>> For reasons of history and/or poor terminology, "ballooning" means two
>> things.
>>
>> 1) The act of the Toolstack interacting with the balloon driver inside a
>> VM, to change the current amount of RAM used by the guest.
>>
>> 2) XENMEM_{increase,decrease}_reservation which are the underlying
>> hypercalls used by guest kernels.
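
To make that second interface concrete: a guest kernel issues
XENMEM_decrease_reservation directly, and the toolstack can reach the same
hypercall through libxenctrl.  A minimal sketch (assuming libxenctrl's
xc_domain_decrease_reservation_exact(); error handling elided):

    #include <xenctrl.h>

    /* Hand nr_pages 4k frames at the given GFNs back to Xen, on behalf
     * of domid.  The "exact" variant fails unless the whole batch
     * succeeds, hiding the "how many succeeded" return semantics of the
     * raw hypercall. */
    static int balloon_out(uint32_t domid, xen_pfn_t *gfns,
                           unsigned long nr_pages)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        int rc;

        if ( !xch )
            return -1;

        rc = xc_domain_decrease_reservation_exact(xch, domid, nr_pages,
                                                  0 /* extent_order */,
                                                  gfns);
        xc_interface_close(xch);
        return rc;
    }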
>>
>> For the toolstack interaction side of things, this is a mess.  There is
>> a single xenstore key, and a blind assumption that all guests know what
>> changes to memory/target mean.  There is no negotiation of whether a
>> balloon driver is running in the guest, and if one is running, there is
>> no ability for the balloon driver to nack a request it can't fulfil.
>> The sole feedback mechanism which exists is the toolstack looking to see
>> whether the domain has changed the amount of RAM it is using.
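
Roughly, the toolstack's entire half of the protocol is a single xenstore
write (a sketch assuming libxenstore; the target value is in KiB):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <xenstore.h>

    /* Ask whatever balloon driver may be running in domid to aim for
     * target_kib.  There is no negotiation and no nack; the only
     * feedback is watching how much memory the domain actually uses. */
    static int set_balloon_target(int domid, unsigned long target_kib)
    {
        struct xs_handle *xsh = xs_open(0);
        char path[64], val[32];
        bool ok;

        if ( !xsh )
            return -1;

        snprintf(path, sizeof(path),
                 "/local/domain/%d/memory/target", domid);
        snprintf(val, sizeof(val), "%lu", target_kib);

        ok = xs_write(xsh, XBT_NULL, path, val, strlen(val));
        xs_close(xsh);
        return ok ? 0 : -1;
    }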
>>
>> PV guests are fairly "special" by any reasonable judgement.  They are
>> fully aware of their memory layout, and of changes to it across
>> migrate.  "Ballooning" was implemented at a time when most computers had
>> MB of RAM rather than GB, and the knowledge a PV guest had was "I've got
>> a random set of MFNs which aren't currently used by anything important,
>> and can be handed back to Xen on request".  Xen guests also have shared
>> memory constructs such as the shared_info page, and grant tables.  A PV
>> guest gets access to these by programming the frame straight into its
>> pagetables, and Xen's permission model DTRT.
>>
>> Then HVM guests came along.  For reasons of trying to get things
>> working, they inherited a lot of the same interfaces as PV guests, despite
>> the fundamental differences in the way they work.  One of the biggest
>> differences was the fact that HVM guests have their gfn=>mfn space
>> managed by Xen rather than themselves, and in particular, you can no
>> longer map shared memory structures in the PV way.
>>
>> For a shared memory structure to be usable, a mapping has to be put into
>> the guest's P2M, so the guest can create a regular pagetable entry
>> pointing at it.  For reasons which are beyond me, Xen doesn't have any
>> knowledge of the guest's physical layout, yet guests have arbitrary
>> mutative capabilities over their GFN space, via a hypercall set that has
>> properties such as a return value of "how many items of this batch
>> succeeded", and replacement semantics rather than error semantics when
>> trying to modify a GFN which already has something in it.
>>
>> Whatever the reasons, it is commonplace for guests to
>> decrease_reservation out some RAM to create holes for the shared memory
>> mappings, because it is the only safe way to avoid irreparably
>> clobbering something else (especially if you're HVMLoader and in charge
>> of trying to construct the E820/ACPI tables).
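
As a guest-side sketch of that hole-punching dance, using the public
headers (HYPERVISOR_memory_op() standing in for whichever hypercall
wrapper the guest environment provides):

    #include <xen/xen.h>
    #include <xen/memory.h>

    /* Evict the RAM frame currently at gpfn, then ask Xen to place the
     * shared info page into the resulting hole. */
    static int map_shared_info(xen_pfn_t gpfn)
    {
        struct xen_memory_reservation res = {
            .nr_extents   = 1,
            .extent_order = 0,
            .domid        = DOMID_SELF,
        };
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_shared_info,
            .idx   = 0,
            .gpfn  = gpfn,
        };

        set_xen_guest_handle(res.extent_start, &gpfn);

        /* Returns the number of extents processed: 1 on success. */
        if ( HYPERVISOR_memory_op(XENMEM_decrease_reservation, &res) != 1 )
            return -1;

        return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
    }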
>>
>> tl;dr If you actually prohibit XENMEM_decrease_reservation, HVM guests
>> don't boot, and that's long before a balloon driver gets up and running.
> Thanks for the detailed write-up. This explains why I could never get
> altp2m working from domain start, no matter where in the startup logic
> of the toolstack I placed the altp2m activation (I had to resort to
> activating the altp2m settings only after detecting that the guest OS
> had fully booted and things had settled down).

So, in theory it should all work, even from the start.

In practice, the implementation quality of altp2m leaves a lot to be
desired, and it was designed around an "all logic inside the guest"
model, which means that it only ever got started once the guest had
come up sufficiently.

Do you recall more specifically where you tried inserting startup
logic?  It sounds like something which wants fixing, irrespective of the
other concerns here.
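
For context, the toolstack-side activation being moved around is roughly
the following; a sketch assuming libxenctrl's xc_hvm_param_set() and
xc_altp2m_set_domain_state(), and the external-only mode:

    #include <xenctrl.h>
    #include <xen/hvm/params.h>

    /* Allow altp2m on the domain, then flip it on.  With
     * XEN_ALTP2M_external, further control comes from the toolstack
     * rather than from inside the guest. */
    static int enable_altp2m(xc_interface *xch, uint32_t domid)
    {
        int rc = xc_hvm_param_set(xch, domid, HVM_PARAM_ALTP2M,
                                  XEN_ALTP2M_external);

        if ( rc )
            return rc;

        return xc_altp2m_set_domain_state(xch, domid, true);
    }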

>
>> Now, all of that said, there are a number of very good reasons why a
>> host administrator might want to prohibit the guest from having
>> arbitrary mutative capabilities, chief among them being to prevent the
>> guest from shattering host superpages, but also due to
>> incompatibilities with some of our more interesting features.
>>
>> The only way I see of fixing this is to teach Xen about the guest's gfn
>> layout (as chosen by the domainbuilder), and include within that "space
>> which definitely doesn't have anything in, and is safe to put shared
>> mappings into".
> Yes, that would be great - especially if this was something we could
> query from the toolstack too. Right now we have resorted to parsing the
> E820 map as it shows up in the domain creation logs, plus whatever
> xc_domain_maximum_gpfn returns, to get some idea of what the memory
> layout looks like in the guest and where the holes are, but there is
> still a lot of guessing involved.

Eww :(
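
For reference, the guesswork described above amounts to something like
this on the toolstack side (a sketch assuming the three-argument
xc_domain_maximum_gpfn()):

    #include <stdio.h>
    #include <xenctrl.h>

    /* One bound on the guest physmap: the highest GFN Xen currently
     * knows about.  It says nothing about where the holes are. */
    static void print_max_gpfn(uint32_t domid)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        xen_pfn_t gpfn;

        if ( !xch )
            return;

        if ( xc_domain_maximum_gpfn(xch, domid, &gpfn) == 0 )
            printf("dom%u: max gpfn %#lx\n",
                   domid, (unsigned long)gpfn);

        xc_interface_close(xch);
    }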

So, we've got a number of issues which need addressing.  For a start,
there isn't a clear understanding of how much RAM a guest has, and
previous attempts to resolve this have only succeeded in demonstrating
that the core maintainers can't even agree on what it means, let alone
how to calculate it.  Things get especially complicated with VRAM and
ROMs, and the overall answer is some mix of information in Xen,
xenstore, qemu and the guest.

In reality, whoever actually does the legwork to resolve the problems
will get to define the terms, and how they get calculated.

Ultimately, it is the domain builder which knows all the pertinent
details, and is in a position to operate on them - it is already
responsible for doing the initial memory layout calculations, and
stashing an E820 table in the hypervisor (see XENMEM_{,set_}memory_map).
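
For reference, a sketch of that stashing as it can be driven from
libxenctrl today (assuming xc_domain_set_memory_map() and the usual
{ addr, size, type } e820entry layout; type values per the E820 spec):

    #include <xenctrl.h>

    #ifndef E820_RAM
    #define E820_RAM      1
    #define E820_RESERVED 2
    #endif

    /* Stash a trivial two-entry map for domid: RAM up to ram_end, then
     * a reserved hole up to 4GB.  A real domain builder derives this
     * from its actual layout calculations. */
    static int stash_e820(xc_interface *xch, uint32_t domid,
                          uint64_t ram_end)
    {
        struct e820entry map[2] = {
            { .addr = 0, .size = ram_end, .type = E820_RAM },
            { .addr = ram_end, .size = (1ull << 32) - ram_end,
              .type = E820_RESERVED },
        };

        /* XENMEM_set_memory_map under the hood. */
        return xc_domain_set_memory_map(xch, domid, map, 2);
    }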

The main problem we have is that we need more types than exist in the
E820 spec, so my plan is to have the domain builder construct an
E820-like table with Xen-defined types and pass that to the
hypervisor.  It shall be the single, authoritative source of guest
physmap information, and will most likely be immutable once the guest
has started.
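
Purely as an illustration of the shape (nothing like this exists yet;
every name below is invented):

    #include <stdint.h>

    /* Hypothetical E820-like table with Xen-defined types, produced by
     * the domain builder and treated by Xen as the authoritative
     * physmap. */
    #define XEN_PHYSMAP_RAM          1  /* normal guest RAM */
    #define XEN_PHYSMAP_MMIO_HOLE    2  /* emulation may happen here */
    #define XEN_PHYSMAP_SHARED_AREA  3  /* safe for shared mappings:
                                         * shared_info, grant frames,
                                         * VEINFO, ... */

    struct xen_physmap_entry {
        uint64_t start;
        uint64_t size;
        uint32_t type;      /* XEN_PHYSMAP_* */
    };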

From this, we can trivially derive a real E820, but we can also fix
other problems, such as Xen not actually knowing where the MMIO holes
are.  It would be lovely if we could reject emulation attempts which
occur in unexpected locations, as an attack surface reduction measure.
Also, we'd at least be able to restrict a guest's ballooning operations
to within the prescribed RAM regions.

>
>> Beyond that, we'll need some administrator-level
>> knowledge of which guests can safely have XENMEM_decrease_reservation
>> prohibited, or some interlocks inside Xen to disable unsafe features as
>> soon as we spot a guest which isn't playing by the new rules.
>>
>> This probably needs some more thought, but fundamentally, we have
>> to undo more than a decade's worth of "doing it wrong" which has
>> percolated through the Xen ecosystem.
>>
>> I'm half tempted to put together a big-hammer bit in the domain creation
>> path which turns off everything like this (and other areas where we know
>> Xen is lacking, such as default readability/write-ignore of all MSRs),
>> after which we'll have a rather more concrete baseline to discuss what
>> the guests are actually doing, and how to get them back into a working
>> state while maintaining architecturally correct behaviour.
>>
> +1, bringing some sanity to this (and documentation) would be of great
> value! I would be very interested in this line of work and happy to
> help however I can.

I need to find some copious free time :)

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
