[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v2 1/5] x86/paging: fold most HAP and shadow final teardown


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 16 Mar 2023 14:28:25 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Thu, 16 Mar 2023 13:28:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Mar 16, 2023 at 01:57:45PM +0100, Jan Beulich wrote:
> On 16.03.2023 13:24, Roger Pau Monné wrote:
> > On Mon, Jan 09, 2023 at 02:39:19PM +0100, Jan Beulich wrote:
> >> HAP does a few things beyond what's common, which are left there at
> >> least for now. Common operations, however, are moved to
> >> paging_final_teardown(), allowing shadow_final_teardown() to go away.
> >>
> >> While moving (and hence generalizing) the respective SHADOW_PRINTK()
> >> drop the logging of total_pages from the 2nd instance - the value is
> >> necessarily zero after {hap,shadow}_set_allocation() - and shorten the
> >> messages, in part accounting for PAGING_PRINTK() logging __func__
> >> already.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> The remaining parts of hap_final_teardown() could be moved as well, at
> >> the price of a CONFIG_HVM conditional. I wasn't sure whether that was
> >> deemed reasonable.
> >> ---
> >> v2: Shorten PAGING_PRINTK() messages. Adjust comments while being
> >>     moved.
> >>
> >> --- a/xen/arch/x86/include/asm/shadow.h
> >> +++ b/xen/arch/x86/include/asm/shadow.h
> >> @@ -78,9 +78,6 @@ int shadow_domctl(struct domain *d,
> >>  void shadow_vcpu_teardown(struct vcpu *v);
> >>  void shadow_teardown(struct domain *d, bool *preempted);
> >>  
> >> -/* Call once all of the references to the domain have gone away */
> >> -void shadow_final_teardown(struct domain *d);
> >> -
> >>  void sh_remove_shadows(struct domain *d, mfn_t gmfn, int fast, int all);
> >>  
> >>  /* Adjust shadows ready for a guest page to change its type. */
> >> --- a/xen/arch/x86/mm/hap/hap.c
> >> +++ b/xen/arch/x86/mm/hap/hap.c
> >> @@ -268,8 +268,8 @@ static void hap_free(struct domain *d, m
> >>  
> >>      /*
> >>       * For dying domains, actually free the memory here. This way less work is
> >> -     * left to hap_final_teardown(), which cannot easily have preemption checks
> >> -     * added.
> >> +     * left to paging_final_teardown(), which cannot easily have preemption
> >> +     * checks added.
> >>       */
> >>      if ( unlikely(d->is_dying) )
> >>      {
> >> @@ -552,18 +552,6 @@ void hap_final_teardown(struct domain *d
> >>      for (i = 0; i < MAX_NESTEDP2M; i++) {
> >>          p2m_teardown(d->arch.nested_p2m[i], true, NULL);
> >>      }
> >> -
> >> -    if ( d->arch.paging.total_pages != 0 )
> >> -        hap_teardown(d, NULL);
> >> -
> >> -    p2m_teardown(p2m_get_hostp2m(d), true, NULL);
> >> -    /* Free any memory that the p2m teardown released */
> >> -    paging_lock(d);
> >> -    hap_set_allocation(d, 0, NULL);
> >> -    ASSERT(d->arch.paging.p2m_pages == 0);
> >> -    ASSERT(d->arch.paging.free_pages == 0);
> >> -    ASSERT(d->arch.paging.total_pages == 0);
> >> -    paging_unlock(d);
> >>  }
> >>  
> >>  void hap_vcpu_teardown(struct vcpu *v)
> >> --- a/xen/arch/x86/mm/paging.c
> >> +++ b/xen/arch/x86/mm/paging.c
> >> @@ -842,10 +842,45 @@ int paging_teardown(struct domain *d)
> >>  /* Call once all of the references to the domain have gone away */
> >>  void paging_final_teardown(struct domain *d)
> >>  {
> >> -    if ( hap_enabled(d) )
> >> +    bool hap = hap_enabled(d);
> >> +
> >> +    PAGING_PRINTK("%pd start: total = %u, free = %u, p2m = %u\n",
> >> +                  d, d->arch.paging.total_pages,
> >> +                  d->arch.paging.free_pages, d->arch.paging.p2m_pages);
> >> +
> >> +    if ( hap )
> >>          hap_final_teardown(d);
> >> +
> >> +    /*
> >> +     * Remove remaining paging memory.  This can be nonzero on certain error
> >> +     * paths.
> >> +     */
> >> +    if ( d->arch.paging.total_pages )
> >> +    {
> >> +        if ( hap )
> >> +            hap_teardown(d, NULL);
> >> +        else
> >> +            shadow_teardown(d, NULL);
> > 
> > For a logical PoV, shouldn't hap_teardown() be called before
> > hap_final_teardown()?
> 
> Yes and no: The meaning of "final" has changed - previously it meant "the
> final parts of tearing down" while now it means "the parts of tearing
> down which must be done during final cleanup". I can't think of a better
> name, so I left "hap_final_teardown" as it was.
> 
> > Also hap_final_teardown() already contains a call to hap_teardown() if
> > total_pages != 0, so this is just redundant in the HAP case?
> 
> Well, like in shadow_final_teardown() there was such a call prior to this
> change, but there's none left now.
> 
> > Maybe we want to pull that hap_teardown() out of hap_final_teardown()
> 
> That's what I'm doing here.

Oh, sorry, I've missed that chunk.  Then:

Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks, Roger.
