
Re: [Xen-devel] [PATCH for-4.13 2/2] Rationalize max_grant_frames and max_maptrack_frames handling



On 27.11.19 13:07, George Dunlap wrote:
On 11/27/19 4:34 AM, Jürgen Groß wrote:
On 26.11.19 18:30, George Dunlap wrote:
On 11/26/19 5:17 PM, George Dunlap wrote:
- xl will use the libxl default for maptrack, but does its own default
    calculation for grant frames: either 32 or 64, based on the max
    possible mfn.

[snip]

@@ -199,13 +198,6 @@ static void parse_global_config(const char *configfile,
         if (!xlu_cfg_get_long (config, "max_grant_frames", &l, 0))
           max_grant_frames = l;
-    else {
-        libxl_physinfo_init(&physinfo);
-        max_grant_frames = (libxl_get_physinfo(ctx, &physinfo) != 0 ||
-                            !(physinfo.max_possible_mfn >> 32))
-                           ? 32 : 64;
-        libxl_physinfo_dispose(&physinfo);
-    }

Sorry, I meant to add a patch adding this functionality back into the
hypervisor -- i.e., so that opt_max_grant_frames would be 32 on systems
with 32-bit mfns.

But this seems like a fairly strange calculation anyway; it's not clear
to me where it would have come from.
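
For illustration, a minimal sketch of what such a hypervisor-side default
could look like, given the v1/v2 doubling explained below; the helper name
is made up here, and max_page merely stands in for the maximum possible mfn:

static unsigned int __init default_max_grant_frames(void)
{
    /*
     * If any mfn needs more than 32 bits, guests have to use grant v2,
     * whose entries are twice as large, so double the default.
     */
    return (max_page >> 32) ? 64 : 32;
}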
Mfns above the 32-bit limit require the use of grant v2. This in turn
doubles the number of grant frames needed for the same number of grants.
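
A quick back-of-the-envelope check of that doubling (a sketch only; the
entry sizes are those of the public grant table ABI, 8 bytes for a v1
entry and 16 bytes for a v2 entry):

#include <stdio.h>

int main(void)
{
    const unsigned int frame_size = 4096;   /* one grant frame */
    const unsigned int v1_entry = 8;        /* struct grant_entry_v1 */
    const unsigned int v2_entry = 16;       /* union grant_entry_v2 */
    const unsigned int nr_grants = 32 * (frame_size / v1_entry);

    /* 32 frames suffice for these grants with v1 ... */
    printf("v1 frames: %u\n", nr_grants / (frame_size / v1_entry));
    /* ... but 64 frames are needed with v2. */
    printf("v2 frames: %u\n", nr_grants / (frame_size / v2_entry));
    return 0;
}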

But are large mfns the *only* reason to use grant v2?  Aren't modern
guests going to use grant v2 regardless of the max mfn size?

Large mfns leave the guest no choice. Grant v2 support had been removed
from the Linux kernel, and I reintroduced it in order to be able to
support large mfns in guests.

The current Linux kernel will use V1 if the max mfn fits in 32 bits, and
V2 only if there can be memory above that boundary.
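
As a rough sketch of that selection logic (not the actual Linux
grant-table code; the helper name and the way the maximum possible mfn is
obtained are assumed here):

#include <stdint.h>

static unsigned int pick_grant_version(uint64_t max_possible_mfn)
{
    /*
     * A v1 entry stores the frame in a 32-bit field, so v1 only works
     * if every mfn fits in 32 bits; otherwise v2 is required.
     */
    return (max_possible_mfn >> 32) ? 2 : 1;
}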


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
