
Re: [PATCH for-4.19 v2 2/3] xen/x86: enable altp2m at create domain domctl



On Wed, May 08, 2024 at 08:38:07PM +0100, Andrew Cooper wrote:
> On 08/05/2024 12:23 pm, Roger Pau Monne wrote:
> > Enabling it using an HVM param is fragile, and complicates the logic when
> > deciding whether options that interact with altp2m can also be enabled.
> >
> > Leave the HVM param value for consumption by the guest, but prevent it from
> > being set.  Enabling is now done using the misc_flags field in
> > xen_arch_domainconfig.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > ---
> > Changes since v1:
> >  - New in this version.
> 
> Ha.  So this is actually work that Petr has been wanting to do.
> 
> Petr has a series hoping to make it into 4.19 (x86: Make MAX_ALTP2M
> configurable), which just missed out on this side of things.
> 
> altp2m is not architecture specific at all, and there's even support for
> ARM out on the mailing list.  Therefore, the altp2m mode wants to be
> common, just like the new MAX_ALTP2M setting already is.

Initially I had it as a set of XEN_DOMCTL_CDF_* flags, but it wasn't
clear to me whether the modes could be shared between arches.

> Both fields can reasonably share uint32_t, but could you work with Petr
> to make both halves of this land cleanly.

I'm happy for Petr to pick up this patch as part of his series if he
feels like it.

I assume the plan would be to add an XEN_DOMCTL_CDF_altp2m flag, and
then a new field to signal the mode.

> 
> As to the HVMPARAM, I'd really quite like to delete it.  It was always a
> bodge, and there's a full set of HVMOP_altp2m_* for a guest to use.

I've assumed we must keep HVM_PARAM_ALTP2M for backwards
compatibility.

> > diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> > index 20e83cf38bbd..dff790060605 100644
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -708,13 +711,33 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
> >          }
> >      }
> >  
> > -    if ( config->arch.misc_flags & ~XEN_X86_MSR_RELAXED )
> > +    if ( config->arch.misc_flags & ~XEN_X86_MISC_FLAGS_ALL )
> >      {
> >          dprintk(XENLOG_INFO, "Invalid arch misc flags %#x\n",
> >                  config->arch.misc_flags);
> >          return -EINVAL;
> >      }
> >  
> > +    if ( altp2m && (altp2m & (altp2m - 1)) )
> > +    {
> > +        dprintk(XENLOG_INFO, "Multiple altp2m options selected in flags: %#x\n",
> > +                config->flags);
> > +        return -EINVAL;
> 
> I think this would be clearer to follow by having a 2 bit field called
> altp2m_mode and check for <= 2.

Don't we need 3 bits, for mixed, external and limited modes?

We could do with 2 bits if we signal altp2m enabled in a different
field, and then introduce a field to just contain the mode.

FWIW, the check should be `if ( altp2m & (altp2m - 1) )`.  I had
updated this locally, but it seems I forgot to re-generate the
patches.

> > +    }
> > +
> > +    if ( altp2m && nested_virt )
> > +    {
> > +        dprintk(XENLOG_INFO,
> > +                "Nested virt and altp2m are mutually incompatible\n");
> 
> There's nothing inherently incompatible.  I think it's more that noone
> had any interest in trying to make it work in combination with nested p2ms.
> 
> I'd phrase it as "not supported", rather than incompatible.

"Nested virt and altp2m are not supported together\n"

> > +        return -EINVAL;
> > +    }
> > +
> > +    if ( altp2m && !hap )
> > +    {
> > +        dprintk(XENLOG_INFO, "altp2m requires HAP\n");
> > +        return -EINVAL;
> > +    }
> 
> altp2m ought to work fine with shadow.  It's only if you want VMFUNC/#VE
> acceleration that you depend on EPT.
> 
> Again, I'd phrase this as "not supported".

"altp2m is only supported with HAP\n"

To avoid the double negative of the "not supported without HAP"
wording.

Thanks, Roger.



 

