
Re: [PATCH 03/21] libs/guest: introduce xc_cpu_policy_t


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 1 Apr 2021 10:48:00 +0200
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Thu, 01 Apr 2021 08:48:17 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Mar 31, 2021 at 09:10:13PM +0100, Andrew Cooper wrote:
> On 23/03/2021 09:58, Roger Pau Monne wrote:
> > Introduce an opaque type that is used to store the CPUID and MSR
> > policies of a domain. The type uses the existing cpu_policy structure
> > to store the data, but doesn't expose that structure to users of the
> > xenguest library.
> >
> > Introduce allocation (init) and freeing (destroy) functions to
> > manage the type.
> >
> > Note the type is not yet used anywhere.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > ---
> >  tools/include/xenctrl.h         |  6 ++++++
> >  tools/libs/guest/xg_cpuid_x86.c | 28 ++++++++++++++++++++++++++++
> >  2 files changed, 34 insertions(+)
> >
> > diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> > index e91ff92b9b1..ffb3024bfeb 100644
> > --- a/tools/include/xenctrl.h
> > +++ b/tools/include/xenctrl.h
> > @@ -2590,6 +2590,12 @@ int xc_psr_get_domain_data(xc_interface *xch, uint32_t domid,
> >  int xc_psr_get_hw_info(xc_interface *xch, uint32_t socket,
> >                         xc_psr_feat_type type, xc_psr_hw_info *hw_info);
> >  
> > +typedef struct cpu_policy *xc_cpu_policy_t;
> > +
> > +/* Create and free a xc_cpu_policy object. */
> > +xc_cpu_policy_t xc_cpu_policy_init(void);
> > +void xc_cpu_policy_destroy(xc_cpu_policy_t policy);
> > +
> >  int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
> >  int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
> >                            uint32_t *nr_features, uint32_t *featureset);
> > diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> > index 9846f81e1f1..ade5281c178 100644
> > --- a/tools/libs/guest/xg_cpuid_x86.c
> > +++ b/tools/libs/guest/xg_cpuid_x86.c
> > @@ -659,3 +659,31 @@ out:
> >  
> >      return rc;
> >  }
> > +
> > +xc_cpu_policy_t xc_cpu_policy_init(void)
> > +{
> > +    xc_cpu_policy_t policy = calloc(1, sizeof(*policy));
> > +
> > +    if ( !policy )
> > +        return NULL;
> > +
> > +    policy->cpuid = calloc(1, sizeof(*policy->cpuid));
> > +    policy->msr = calloc(1, sizeof(*policy->msr));
> > +    if ( !policy->cpuid || !policy->msr )
> > +    {
> > +        xc_cpu_policy_destroy(policy);
> > +        return NULL;
> > +    }
> > +
> > +    return policy;
> > +}
> > +
> > +void xc_cpu_policy_destroy(xc_cpu_policy_t policy)
> > +{
> > +    if ( !policy )
> > +        return;
> > +
> > +    free(policy->cpuid);
> > +    free(policy->msr);
> > +    free(policy);
> > +}
> 
> Looking at the series as a whole, we have a fair quantity of complexity
> from short-lived dynamic allocations.
> 
> I suspect that the code would be rather better if we had
> 
> struct xc_cpu_policy {
>     struct cpuid_policy cpuid;
>     struct msr_policy msr;
>     xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
>     xen_msr_entry_t msrs[MSR_MAX_SERIALISED_ENTRIES];
>     /* Names perhaps subject to improvement */
> };
> 
> and just made one memory allocation.
> 
> This is userspace after all, and we're talking about <4k at the moment.
> 
> All operations with Xen need to bounce through the leaves/msrs encoding
> (so we're using the space a minimum of twice for any logical operation
> at the higher level), and several userspace-only operations use them too.

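As an aside, the intended usage is just the pairing below (a
hypothetical caller for illustration; as the commit message notes,
nothing consumes the type yet at this point in the series):

    /* Hypothetical caller, for illustration only: the type is opaque,
     * so all a user can do at this point is pair init with destroy. */
    static int example_caller(void)
    {
        xc_cpu_policy_t policy = xc_cpu_policy_init();

        if ( !policy )
            return -1;

        /* ... later patches add the calls that operate on the policy ... */

        /* destroy is NULL-safe and frees the cpuid/msr sub-allocations. */
        xc_cpu_policy_destroy(policy);

        return 0;
    }
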
As to the single allocation: we would still need some allocations for
the system policies, but yes, it would avoid some of the short-lived
ones. I didn't worry much about those because it's all user space, but
removing them will likely make the code simpler.

Thanks, Roger.