
Re: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Thu, 18 Jul 2019 10:55:15 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 18 Jul 2019 10:56:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHVPNaxkDvYm35RQ027yAvXjVGEzqbQNNsA
  • Thread-topic: [Xen-devel] [PATCH v6 5/5] x86/mem_sharing: style cleanup

On 17.07.2019 21:33, Tamas K Lengyel wrote:
> @@ -136,8 +137,8 @@ static inline bool _page_lock(struct page_info *page)
>               cpu_relax();
>           nx = x + (1 | PGT_locked);
>           if ( !(x & PGT_validated) ||
> -             !(x & PGT_count_mask) ||
> -             !(nx & PGT_count_mask) )
> +                !(x & PGT_count_mask) ||
> +                !(nx & PGT_count_mask) )
>               return false;
>       } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );

Aren't you screwing up indentation here? It looks wrong both in my
mail client's view and in the list archives. Furthermore, this is code
you introduced earlier in the series, so it should be fixed there, not
here.
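
(For reference, and just as a sketch of the intended alignment: the
continuation lines would presumably want to stay aligned with the
first operand inside the if(), as they were before this hunk, i.e.

        if ( !(x & PGT_validated) ||
             !(x & PGT_count_mask) ||
             !(nx & PGT_count_mask) )
            return false;

rather than being pushed further right.)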

> @@ -225,7 +225,7 @@ rmap_init(struct page_info *page)
>   #define HASH(domain, gfn)       \
>       (((gfn) + (domain)) % RMAP_HASHTAB_SIZE)
>   
> -/* Conversions. Tuned by the thresholds. Should only happen twice
> +/* Conversions. Tuned by the thresholds. Should only happen twice
>    * (once each) during the lifetime of a shared page */

Please fix the comment style as a whole, not just the stray trailing
blank.
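
That is, presumably something along the lines of the usual multi-line
comment style (a sketch only):

    /*
     * Conversions. Tuned by the thresholds. Should only happen twice
     * (once each) during the lifetime of a shared page.
     */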

> @@ -288,13 +288,13 @@ rmap_count(struct page_info *pg)
>   }
>   
>   /* The page type count is always decreased after removing from the rmap.
> - * Use a convert flag to avoid mutating the rmap if in the middle of an
> + * Use a convert flag to avoid mutating the rmap if in the middle of an
>    * iterator, or if the page will be soon destroyed anyways. */

Same here.

>   static inline void
>   rmap_del(gfn_info_t *gfn_info, struct page_info *page, int convert)
>   {
>       if ( RMAP_USES_HASHTAB(page) && convert &&
> -         (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )
> +            (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )

Here, again, you seem to be screwing up the indentation. There are
more such instances, so I guess I'll leave it to you to go over the
whole patch once more.
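
(Again just a sketch, presumably

    if ( RMAP_USES_HASHTAB(page) && convert &&
         (rmap_count(page) <= RMAP_LIGHT_SHARED_PAGE) )

i.e. the continuation aligned under the first operand, as in the code
being modified.)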

Jan