
Re: [Xen-devel] [PATCH 0/5] xen: better grant v2 support



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Jan
> Beulich
> Sent: 23 August 2017 09:36
> To: Juergen Gross <jgross@xxxxxxxx>
> Cc: Tim (Xen.org) <tim@xxxxxxx>; sstabellini@xxxxxxxxxx; Wei Liu
> <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> <Ian.Jackson@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH 0/5] xen: better grant v2 support
> 
> >>> On 23.08.17 at 09:49, <jgross@xxxxxxxx> wrote:
> > On 22/08/17 14:48, Jan Beulich wrote:
> >>>>> On 21.08.17 at 20:05, <jgross@xxxxxxxx> wrote:
> >>> Currently Linux has no support for grant v2, as this would reduce
> >>> the maximum number of active grants by a factor of 2 compared to
> >>> v1: the number of possible grants is limited by the allowed number
> >>> of grant frames, and grant entries of v2 need twice as many bytes
> >>> as those of v1.
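
(To make the factor of 2 concrete: assuming the usual 4kB grant frames
and the entry sizes from the public grant_table.h, i.e. 8 bytes for
grant_entry_v1_t and 16 bytes for grant_entry_v2_t:

    4096 / sizeof(grant_entry_v1_t) = 4096 / 8  = 512 entries/frame
    4096 / sizeof(grant_entry_v2_t) = 4096 / 16 = 256 entries/frame

so the same number of grant frames holds half as many v2 entries.)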
> >>>
> >>> Unfortunately grant v2 is the only way to support either guests
> >>> with more than 16TB of memory or PV guests with memory above the
> >>> 16TB boundary, as grant v1 limits frame numbers to 32 bits.
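
(The 16TB figure follows from the 32-bit frame field, assuming 4kB
frames:

    2^32 frames * 4kB/frame = 2^44 bytes = 16TB

so any frame at or above that boundary can't be expressed in a v1
entry.)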
> >>>
> >>> In order to remove this disadvantage of grant v2, this patch
> >>> series enables configuring different maximum grant frame numbers
> >>> for v1 and v2.
> >>
> >> But that does imply a higher memory footprint for such a guest in
> >> Xen, doesn't it?
> >
> > With current defaults this would need up to 128kB more for a guest using
> > v2 grants.
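
(If the current default limit is 32 grant frames, doubling that for a
guest using v2 would match the figure:

    32 extra frames * 4kB/frame = 128kB

-- treat the "32" as my assumption about the current default rather
than something stated above.)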
> 
> At least in an auto-ballooned setup this may make the difference
> between a guest being able or failing to start.
> 
> >> The limit, after all, is there to bound resource use of
> >> DomU-s.  I wonder whether we shouldn't make any such increase
> >> dependent on first putting in place proper accounting of the memory
> >> used for individual domains.
> >
> > So you would want to have a way to count pages (or bytes?) allocated for
> > hypervisor internal needs on a per-domain basis, right?
> >
> > Would that be additional to struct domain -> xenheap_pages or would you
> > want to merge the new counter into it? I guess a new field would be
> > required in order to avoid counting some data twice.
> >
> > Do you have an idea what to do with that value? Do you want to expose it
> > to the user (dom0 admin), or should it be used just inside the
> > hypervisor and e.g. printed by a debug key handler?
> >
> > Do you want an additional set of allocation functions that do the
> > accounting, or should the existing functions be used with an additional
> > domain pointer, or should the caller be responsible for doing the
> > additional accounting?
> >
> > Do you want an all-or-nothing approach or a gradual move to add the new
> > accounting step by step?
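
To make the first of those options concrete, a rough sketch (purely
illustrative, not existing Xen code) of an accounting wrapper around
the existing allocator; the d->hv_pages counter is a hypothetical new
field, not the current xenheap_pages:

    /* Sketch only: charge xenheap allocations to a per-domain counter. */
    void *alloc_xenheap_pages_accounted(struct domain *d,
                                        unsigned int order)
    {
        void *p = alloc_xenheap_pages(order, 0);

        if ( p )
            atomic_add(1 << order, &d->hv_pages); /* hypothetical field */

        return p;
    }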
> 
> We've been vaguely discussing this in the past on a few occasions.
> My personal thinking is that the "memory=" setting in a guest config
> really ought to express all the memory associated with a guest. But
> of course there'll be problems if we start doing so, beyond just
> people observing less memory in their guests. Switching to
> such a full accounting model will require some careful thought (and
> discussion up front). Hence I've only said "I wonder whether", i.e.
> I don't mean to make this a strict prerequisite to the proposed
> changes here. I'd be particularly interested to hear the opinions of
> a few other people.
> 

Making the number of grant frames a per-VM-configurable quantity would seem 
like a reasonable first step. I'm not convinced of the need for separate v1 
and v2 limits if that were the case.
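
Something along the lines of (strawman syntax only -- no such xl option
exists today):

    # per-guest cap on grant frames, covering v1 and v2 alike
    max_grant_frames = 64

with the global limit merely serving as the default for guests that
don't specify it.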

  Paul