
[Xen-devel] XENMEM_maximum_gpfn return value


  • To: "xen-devel" <xen-devel@xxxxxxxxxxxxx>
  • From: "Jan Beulich" <JBeulich@xxxxxxxx>
  • Date: Mon, 25 Feb 2013 14:33:32 +0000
  • Delivery-date: Mon, 25 Feb 2013 14:34:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

Since this value comes from a shared info field, it has to be assumed
that it may hold any value. In particular, our kernels don't set up this
field at all if running as Dom0 and kexec isn't enabled (and likewise
don't set up the P2M frame lists, as that's simply wasted memory in
that case).

So, with this being a guest-provided value, we apparently have two
problems: do_memory_op()'s "rc" variable is only of type "int", thus
potentially truncating the result of domain_get_maximum_gpfn()
(considering that we now support 16Tb, an 8Tb+ guest ought to
be possible, and such a guest would have a max GPFN with 32
significant bits). And zero or a very large number returned by the
latter will generally be mistaken for an error code by the caller of
the hypercall.
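
Just to illustrate the truncation aspect (a standalone sketch, not the
hypervisor code; the 8Tb figure is merely the example from above):

#include <stdio.h>

int main(void)
{
    /* A guest just above 8Tb with 4k pages has a max GPFN of at
     * least 2^31, i.e. one with 32 significant bits. */
    unsigned long max_gpfn = 1UL << 31;

    int rc = max_gpfn;   /* what happens today: truncated; on LP64 this
                          * typically ends up negative, looking just
                          * like an -E error code to the caller */
    long lrc = max_gpfn; /* with "rc" promoted to long: preserved */

    printf("int: %d, long: %ld\n", rc, lrc);
    return 0;
}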

So, along with promoting "rc" to long, I'm considering enforcing a
sane lower bound on domain_get_maximum_gpfn()'s return value,
using d->tot_pages (under the assumption that each page has a
representation in the physical map, and hence the physical map
can't reasonably be smaller than this). That would also neutralize
the subtraction of 1 done there for PV guests, which otherwise
converts 0 into -1 (i.e. -EPERM). Similarly, a sane upper bound
ought to be enforced (to avoid collisions with the -E range).
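
Roughly, the clamping I have in mind would look something like this
(just a sketch against assumed internals, not a patch; ERRNO_BOUND is
a made-up placeholder for whatever constant we'd use to stay clear of
the -E range):

static unsigned long bounded_max_gpfn(struct domain *d)
{
    unsigned long gpfn = domain_get_maximum_gpfn(d);

    /* Lower bound: every page the domain owns needs a slot in the
     * physical map, so the map can't reasonably cover fewer frames
     * than d->tot_pages. This also keeps the PV "- 1" from turning
     * 0 into -1 (i.e. -EPERM). */
    if ( gpfn < d->tot_pages )
        gpfn = d->tot_pages;

    /* Upper bound: stay below the -E range, so a huge guest-provided
     * value can't be mistaken for an error code by the caller. */
    if ( gpfn > ERRNO_BOUND )
        gpfn = ERRNO_BOUND;

    return gpfn;
}

do_memory_op() would then store this in a long "rc" rather than the
current int.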

Other thoughts? Is such a behavioral change acceptable for an
existing interface?

Jan


