
Re: [Xen-devel] [PATCH 0/2] MMIO emulation fixes



>>> On 04.09.18 at 18:24, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 04/09/18 17:11, Juergen Gross wrote:
>> On 16/08/18 13:27, Jan Beulich wrote:
>>>>>> On 16.08.18 at 12:56, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> On 16/08/18 11:29, Jan Beulich wrote:
>>>>> Following some further discussion with Andrew, he looks to be
>>>>> convinced that the issue is to be fixed in the balloon driver,
>>>>> which so far (intentionally afaict) does not remove the direct
>>>>> map entries for ballooned out pages in the HVM case. I'm not
>>>>> convinced of this, but I'd nevertheless like to inquire whether
>>>>> such a change (resulting in shattered super page mappings)
>>>>> would be acceptable in the first place.
>>>> We don't tolerate anything else in the directmap pointing to
>>>> invalid/unimplemented frames.  Why should ballooning be any different?
>>> Because ballooning is something virtualization specific, which
>>> doesn't have any equivalent on bare hardware (memory hot
>>> unplug doesn't come quite close enough imo, not the least
>>> because that doesn't work on a page granular basis). Hence
>>> we're to define the exact behavior here, and hence such a
>>> definition could as well include special behavior of accesses
>>> to the involved guest-physical addresses.
>> After discussing the issue with some KVM guys I still think it would be
>> better to leave the ballooned pages mapped in the direct map. KVM does
>> it the same way. They return "something" in case the guest tries to
>> read from such a page (might be the real data, 0's or all 1's).
>>
>> So we should either map an all 0's or 1's page via EPT, or we should
>> return 0's or 1's via emulation of the read instruction.
>>
>> Performance shouldn't be a major issue, as such reads should be really
>> rare.
> 
> Such reads should be non-existent.  One way or another, there's still a
> bug to fix in the kernel, because it isn't keeping suitable track of the
> pfns.

So you put yourself in opposition to what both KVM and Xen do in
their Linux implementations. I can only re-iterate: we're talking
about a PV extension here, whose behavior is entirely defined by
us. Hence it is not a given that "such reads should be non-existent".

> As for how Xen could do things better...
> 
> We could map a page of all-ones (all zeroes would definitely be wrong),
> but you've still got the problem of what happens if a write occurs.  We
> absolutely can't sacrifice enough RAM to fill in the ballooned-out
> frames with read/write frames.

Of course, or else the ballooning effect would be nullified. However,
besides a page full of 0s or 1s, a simple "sink" page could also be
used, where reads return undefined data (i.e. whatever was last
written to it through one of its perhaps very many aliases).

Another possibility for the sink page would be a (hardware) MMIO
one we know has no actual device backing it, thus allowing writes
to be terminated (discarded) by hardware, and reads to return all
ones (again due to hardware behavior). The question is how we
would universally find such a page (accesses to which must
obviously not have any other side effects).

> I'd prefer not to see any emulation here, but that is more for an attack
> surface limitation point of view.  x86 still offers us the option to not
> tolerate misaligned accesses and terminate early write-discard when
> hitting one of these pages.

Well - for now we have the series that will hopefully fix the emulation
misbehavior here (and elsewhere at the same time). But I certainly
appreciate your desire for there not to be any emulation here in the
first place, which I think leaves the sink page described above as
the only option.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

