
Re: [PATCH v2 11/14] shr_pages field is MEM_SHARING-only


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 23 Feb 2022 17:07:39 +0100
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 23 Feb 2022 16:07:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23.02.2022 17:04, Jan Beulich wrote:
> Conditionalize it and its uses accordingly. The main goal though is to
> demonstrate that x86's p2m_teardown() is now empty when !HVM, which in
> particular means the last remaining use of p2m_lock() in this case goes
> away.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> Reviewed-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
> Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>

Forgot to add here:

---
v2: Re-base (drop clearing of field in getdomaininfo()).

Jan

> ---
> I was on the edge of introducing a helper for atomic_read(&d->shr_pages)
> but decided against because of dump_domains() not being able to use it
> sensibly (I really want to omit the output field altogether there when
> !MEM_SHARING).
> 
> --- a/xen/arch/x86/mm/p2m-basic.c
> +++ b/xen/arch/x86/mm/p2m-basic.c
> @@ -159,7 +159,6 @@ void p2m_teardown(struct p2m_domain *p2m
>  {
>  #ifdef CONFIG_HVM
>      struct page_info *pg;
> -#endif
>      struct domain *d;
>  
>      if ( !p2m )
> @@ -169,16 +168,17 @@ void p2m_teardown(struct p2m_domain *p2m
>  
>      p2m_lock(p2m);
>  
> +#ifdef CONFIG_MEM_SHARING
>      ASSERT(atomic_read(&d->shr_pages) == 0);
> +#endif
>  
> -#ifdef CONFIG_HVM
>      p2m->phys_table = pagetable_null();
>  
>      while ( (pg = page_list_remove_head(&p2m->pages)) )
>          d->arch.paging.free_page(d, pg);
> -#endif
>  
>      p2m_unlock(p2m);
> +#endif
>  }
>  
>  void p2m_final_teardown(struct domain *d)
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -109,7 +109,9 @@ void getdomaininfo(struct domain *d, str
>      info->tot_pages         = domain_tot_pages(d);
>      info->max_pages         = d->max_pages;
>      info->outstanding_pages = d->outstanding_pages;
> +#ifdef CONFIG_MEM_SHARING
>      info->shr_pages         = atomic_read(&d->shr_pages);
> +#endif
>      info->paged_pages       = atomic_read(&d->paged_pages);
>      info->shared_info_frame =
>          gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -274,9 +274,16 @@ static void dump_domains(unsigned char k
>          printk("    refcnt=%d dying=%d pause_count=%d\n",
>                 atomic_read(&d->refcnt), d->is_dying,
>                 atomic_read(&d->pause_count));
> -        printk("    nr_pages=%d xenheap_pages=%d shared_pages=%u paged_pages=%u "
> -               "dirty_cpus={%*pbl} max_pages=%u\n",
> -               domain_tot_pages(d), d->xenheap_pages, atomic_read(&d->shr_pages),
> +        printk("    nr_pages=%u xenheap_pages=%u"
> +#ifdef CONFIG_MEM_SHARING
> +               " shared_pages=%u"
> +#endif
> +               " paged_pages=%u"
> +               " dirty_cpus={%*pbl} max_pages=%u\n",
> +               domain_tot_pages(d), d->xenheap_pages,
> +#ifdef CONFIG_MEM_SHARING
> +               atomic_read(&d->shr_pages),
> +#endif
>                 atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
>                 d->max_pages);
>          printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -385,7 +385,11 @@ struct domain
>      unsigned int     outstanding_pages; /* pages claimed but not possessed */
>      unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
>      unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
> +
> +#ifdef CONFIG_MEM_SHARING
>      atomic_t         shr_pages;         /* shared pages */
> +#endif
> +
>      atomic_t         paged_pages;       /* paged-out pages */
>  
>      /* Scheduling. */
> 
> 
