
Re: [PATCH] xen/mem_sharing: support forks with active vPMU state


  • To: Tamas K Lengyel <tamas.lengyel@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Tue, 19 Jul 2022 18:22:33 +0000
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxx>
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 19/07/2022 18:18, Tamas K Lengyel wrote:
> diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
> index d2c03a1104..2b5d64a60d 100644
> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -529,6 +529,16 @@ void vpmu_initialise(struct vcpu *v)
>          put_vpmu(v);
>  }
>  
> +void vpmu_allocate_context(struct vcpu *v)
> +{
> +    struct vpmu_struct *vpmu = vcpu_vpmu(v);
> +
> +    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
> +        return;
> +
> +    alternative_call(vpmu_ops.allocate_context, v);

You need to fill in an AMD pointer, or make this conditional.

All alternatives have NULL pointers turned into UDs.

Should be a two-liner on the AMD side.
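Untested sketch of what I have in mind on the AMD side (amd_allocate_context
is a made-up name, and whether it can be a plain no-op because the context is
already allocated on the initialise path is for you to confirm):

    /* xen/arch/x86/cpu/vpmu_amd.c */
    static int cf_check amd_allocate_context(struct vcpu *v)
    {
        /* Nothing extra to do if the initialise path already allocated it. */
        return 1;  /* match the Intel hook's bool-as-int convention for now */
    }

    static const struct arch_vpmu_ops __initconst_cf_clobber amd_vpmu_ops = {
        /* ... existing hooks ... */
        .allocate_context = amd_allocate_context,
    };

Or, if you'd rather make it conditional, guard the call site in vpmu.c:

    if ( vpmu_ops.allocate_context )
        alternative_call(vpmu_ops.allocate_context, v);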

> diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
> index 8612f46973..31dc0ee14b 100644
> --- a/xen/arch/x86/cpu/vpmu_intel.c
> +++ b/xen/arch/x86/cpu/vpmu_intel.c
>  static int core2_vpmu_verify(struct vcpu *v)
> @@ -474,7 +485,11 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
>                                      sizeof(uint64_t) * fixed_pmc_cnt;
>  
>      vpmu->context = core2_vpmu_cxt;
> +    vpmu->context_size = sizeof(struct xen_pmu_intel_ctxt) +
> +                         fixed_pmc_cnt * sizeof(uint64_t) +
> +                         arch_pmc_cnt * sizeof(struct xen_pmu_cntr_pair);

This wants deduplicating with the earlier calculation, surely?
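i.e. something like (untested, local name picked at random):

    size_t size = sizeof(struct xen_pmu_intel_ctxt) +
                  fixed_pmc_cnt * sizeof(uint64_t) +
                  arch_pmc_cnt * sizeof(struct xen_pmu_cntr_pair);

computed once up front, then used both for the existing arithmetic and for
vpmu->context_size, rather than spelling the formula out twice.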

>      vpmu->priv_context = p;
> +    vpmu->priv_context_size = sizeof(uint64_t);
>  
>      if ( !has_vlapic(v->domain) )
>      {
> @@ -882,6 +897,7 @@ static int cf_check vmx_vpmu_initialise(struct vcpu *v)
>  
>  static const struct arch_vpmu_ops __initconst_cf_clobber core2_vpmu_ops = {
>      .initialise = vmx_vpmu_initialise,
> +    .allocate_context = core2_vpmu_alloc_resource,

core2_vpmu_alloc_resource() needs to gain a cf_check to not explode on
TGL/SPR.
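i.e. simply:

    -static int core2_vpmu_alloc_resource(struct vcpu *v)
    +static int cf_check core2_vpmu_alloc_resource(struct vcpu *v)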

>      .do_wrmsr = core2_vpmu_do_wrmsr,
>      .do_rdmsr = core2_vpmu_do_rdmsr,
>      .do_interrupt = core2_vpmu_do_interrupt,
> diff --git a/xen/arch/x86/include/asm/vpmu.h b/xen/arch/x86/include/asm/vpmu.h
> index e5709bd44a..14d0939247 100644
> --- a/xen/arch/x86/include/asm/vpmu.h
> +++ b/xen/arch/x86/include/asm/vpmu.h
> @@ -106,8 +109,10 @@ void vpmu_lvtpc_update(uint32_t val);
>  int vpmu_do_msr(unsigned int msr, uint64_t *msr_content, bool is_write);
>  void vpmu_do_interrupt(struct cpu_user_regs *regs);
>  void vpmu_initialise(struct vcpu *v);
> +void vpmu_allocate_context(struct vcpu *v);
>  void vpmu_destroy(struct vcpu *v);
>  void vpmu_save(struct vcpu *v);
> +void vpmu_save_force(void *arg);

Needs the cf_check to compile.
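i.e. the prototype wants to match the cf_check on the definition:

    void cf_check vpmu_save_force(void *arg);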

>  int vpmu_load(struct vcpu *v, bool_t from_guest);
>  void vpmu_dump(struct vcpu *v);
>  
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 8f9d9ed9a9..39cd03abf7 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1653,6 +1653,50 @@ static void copy_vcpu_nonreg_state(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>  }
>  
> +static int copy_vpmu(struct vcpu *d_vcpu, struct vcpu *cd_vcpu)
> +{
> +    struct vpmu_struct *d_vpmu = vcpu_vpmu(d_vcpu);
> +    struct vpmu_struct *cd_vpmu = vcpu_vpmu(cd_vcpu);
> +
> +    if ( !vpmu_are_all_set(d_vpmu, VPMU_INITIALIZED | VPMU_CONTEXT_ALLOCATED) )
> +        return 0;
> +    if ( !vpmu_is_set(cd_vpmu, VPMU_CONTEXT_ALLOCATED) )
> +    {
> +        vpmu_allocate_context(cd_vcpu);
> +        if ( !vpmu_is_set(cd_vpmu, VPMU_CONTEXT_ALLOCATED) )
> +            return -ENOMEM;

vpmu_allocate_context() already checks VPMU_CONTEXT_ALLOCATED, so isn't the
guard before the call here redundant?

The subsequent check looks like you want to pass the hook's return value
up through vpmu_allocate_context().

(And if you feel like turning it from bool-as-int to something more
sane, say -errno, that would also be great.)

Otherwise, looks plausible.

~Andrew

 

