
Re: [Xen-devel] [v7][RFC][PATCH 06/13] hvmloader/ram: check if guest memory is out of reserved device memory maps

On 2014/11/3 17:51, Jan Beulich wrote:
On 03.11.14 at 10:40, <tiejun.chen@xxxxxxxxx> wrote:
On 2014/11/3 16:56, Jan Beulich wrote:
On 03.11.14 at 06:49, <tiejun.chen@xxxxxxxxx> wrote:
On 2014/10/31 16:20, Jan Beulich wrote:
On 31.10.14 at 07:21, <kevin.tian@xxxxxxxxx> wrote:
    From: Chen, Tiejun
Sent: Friday, October 31, 2014 1:41 PM
On 2014/10/30 17:20, Jan Beulich wrote:
Thinking about this some more, this odd universal hole punching in
the E820 is very likely to end up causing problems. Hence I think
this really should be optional behavior, with pass-through of devices
associated with RMRRs failing when the option is not enabled. (This
ought to include punching holes for _just_ the devices passed through
to a guest upon creation when the option is not enabled.)
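
For illustration, the per-device check hvmloader would need is just a
range-overlap test of each RMRR against the guest's RAM regions; a
minimal sketch (not the series' actual code) might look like:

#include <stdint.h>

#define PAGE_SHIFT 12

/* Does the RMRR [start_pfn, start_pfn + nr_pages) overlap the guest
 * RAM region [ram_start, ram_end)? Only overlapping ranges would need
 * a hole punched in the E820. */
static int rmrr_overlaps_ram(uint64_t rmrr_start_pfn, uint64_t nr_pages,
                             uint64_t ram_start, uint64_t ram_end)
{
    uint64_t rmrr_start = rmrr_start_pfn << PAGE_SHIFT;
    uint64_t rmrr_end = rmrr_start + (nr_pages << PAGE_SHIFT);

    return rmrr_start < ram_end && ram_start < rmrr_end;
}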

Yeah, we had a similar discussion internally about adding a parameter
to force reserving RMRRs. In that case we can't create a VM if these
ranges conflict with anything. So what about this idea?

Adding a new parameter (e.g. 'check-passthrough') looks like the right
approach. When the parameter is on, the RMRR check/hole punching is
activated at VM creation. Otherwise we just keep the existing behavior.

If the user configures device pass-through at creation time, this
parameter will be set by default. If the user wants the VM to be
capable of device hotplug, an explicit parameter can be added in the
config file to enforce the RMRR check at creation time.
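
In xl.cfg terms that might look something like the following
(hypothetical syntax; neither the option name nor its default is
settled):

# Hypothetical guest config: force RMRR checking / hole punching at
# creation time even when no device is assigned yet (e.g. for hotplug).
check-passthrough = 1
pci = [ '0000:00:02.0' ]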

Not exactly; I specifically described it slightly differently above. When
devices get passed through and the option is absent, holes should be
punched only for the RMRRs associated with those devices (i.e.
ideally none). Of course this means we'll need a way to associate
RMRRs with devices in the tool stack and hvmloader, i.e. the current
XENMEM_reserved_device_memory_map alone won't suffice.
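
For reference, the memop interface in the series under discussion is
roughly the following, which indeed carries no device information:

/* Roughly the interface from the RFC series: ranges only, no BDFs. */
struct xen_reserved_device_memory {
    xen_pfn_t start_pfn;
    xen_ulong_t nr_pages;
};
typedef struct xen_reserved_device_memory xen_reserved_device_memory_t;
DEFINE_XEN_GUEST_HANDLE(xen_reserved_device_memory_t);

struct xen_reserved_device_memory_map {
    /* IN/OUT: buffer size in entries on input, entries written on output */
    unsigned int nr_entries;
    /* OUT: array of reserved ranges */
    XEN_GUEST_HANDLE(xen_reserved_device_memory_t) buffer;
};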

Yeah, the current hypercall just provides the RMRR entries without the
associated BDFs. And in particular, in some cases one range may be
shared by multiple devices...

Before we decide who's going to do an eventual change we need to
determine what behavior we want, and whether this hypercall is
really the right one. Quite possibly we'd need a per-domain view
along with the global view, and hence rather than modifying this one
we may need to introduce e.g. a new domctl.
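
A per-domain variant might look like the sketch below (purely
illustrative; no such domctl exists in the series, and all names here
are made up):

/* Hypothetical domctl sketch: report only the RMRRs belonging to
 * devices assigned (or to be assigned) to the given domain. */
struct xen_domctl_reserved_device_memory_map {
    /* IN/OUT: buffer size in entries on input, entries written on output */
    uint32_t nr_entries;
    /* OUT: ranges relevant to this domain's devices */
    XEN_GUEST_HANDLE_64(xen_reserved_device_memory_t) buffer;
};
/* The target domain would come from the usual domctl->domain field. */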

If we really need to work through a hypercall, maybe we can extend it
a little so that it returns multiple entries. For instance, if RMRR
entry0 has three devices and entry1 has two devices:

[start0, nr_pages0, bdf0],
[start0, nr_pages0, bdf1],
[start0, nr_pages0, bdf2],
[start1, nr_pages1, bdf3],
[start1, nr_pages1, bdf4],

Although this costs more buffer space, as you know such sharing is
actually quite rare, so this approach may be feasible. Then we wouldn't
need an additional hypercall or xenstore.
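
Concretely, each returned entry would grow a device field, with a
shared range repeated once per owning device (illustrative layout
only; field names are not from the series):

/* Sketch of the duplicated-entry idea: one entry per (range, device)
 * pair, so a range shared by N devices is reported N times. */
struct xen_reserved_device_memory {
    xen_pfn_t start_pfn;
    xen_ulong_t nr_pages;
    uint32_t sbdf;        /* segment/bus/dev/fn of one owning device */
};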

Conceptually, as a MEMOP, it has no business reporting BDFs. And
then rather than returning the same address range more than once,
having the caller supply a handle to an array and storing all of the
SBDFs (or perhaps a single segment would suffice along with all the
BDFs) there would seem to be an approach more consistent with
what we do elsewhere.
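
That alternative might look like this (again only a sketch, not a
committed interface): each range is reported once, with the caller
supplying an array to receive all owning BDFs:

/* Sketch of the handle-to-array alternative: report each range once
 * and store all owning BDFs in a caller-supplied array. */
struct xen_reserved_device_memory {
    xen_pfn_t start_pfn;
    xen_ulong_t nr_pages;
    uint16_t segment;                  /* a single segment may suffice */
    uint32_t nr_bdfs;                  /* IN: array size; OUT: used */
    XEN_GUEST_HANDLE_64(uint32) bdfs;  /* all devices sharing the range */
};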

Here I'm wondering whether we really need to expose BDFs to the tools
at all. The tools just want to know the ranges, no matter which devices
own them; we could do the association in Xen instead.

When we try to assign a device for pass-through, Xen already knows its
BDF, so Xen can pre-check everything inside that hypercall and return
one of two results:

#1 If the device has an RMRR, we return that RMRR buffer. This is
similar to our current implementation.
#2 If not, we return 'nr_entries' as '0' to notify hvmloader that it
has nothing to do.
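
On the hvmloader side, the handling under this scheme would reduce to
something like the sketch below (the helpers are hypothetical, not
functions from the series):

/* Hypothetical helpers: fetch Xen's pre-filtered ranges, and check one
 * range against the guest memory layout. */
unsigned int get_reserved_device_memory(struct xen_reserved_device_memory *e,
                                        unsigned int max_entries);
void check_rmrr_conflict(xen_pfn_t start_pfn, xen_ulong_t nr_pages);

static void check_reserved_device_memory(void)
{
    struct xen_reserved_device_memory entries[16]; /* arbitrary cap */
    unsigned int i, nr = get_reserved_device_memory(entries, 16);

    if ( nr == 0 )
        return;                 /* case #2: nothing for hvmloader to do */

    for ( i = 0; i < nr; i++ )  /* case #1: check each reported range */
        check_rmrr_conflict(entries[i].start_pfn, entries[i].nr_pages);
}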

